I am dealing with threading and related topics such as processes and context switching. I understand that on a system with one multicore processor, truly simultaneous work by more than one process isn't real: we just have an illusion of such work because of process context switching.
But what about threads within one process that run on a multicore processor? Do they really work simultaneously, or is that also just an illusion? Can a processor with 2 hardware cores work on two threads at a time? If not, what is the point of multicore processors?
Multiple cores do actually perform work in parallel (at least on all mainstream modern CPU architectures). Processes have one or more threads. The OS scheduler schedules active tasks, which are generally threads, onto the available cores. When there are more active tasks than available cores, the OS uses preemption to execute the tasks concurrently on each core.
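You can see this for yourself with a minimal C++ sketch (the iteration count below is arbitrary, just enough to keep a core busy for a noticeable time): on a machine with at least 2 free cores, two busy loops running as two threads take roughly the same wall-clock time as one busy loop, not twice as long.

```cpp
#include <chrono>
#include <iostream>
#include <thread>

// Busy work: spin on a volatile counter so the compiler cannot optimize the loop away.
void burn() {
    volatile unsigned long long x = 0;
    for (unsigned long long i = 0; i < 1'000'000'000ULL; ++i) {
        x += i;
    }
}

int main() {
    using clock = std::chrono::steady_clock;

    // Time a single busy loop.
    auto t0 = clock::now();
    burn();
    double one = std::chrono::duration<double>(clock::now() - t0).count();

    // Time two busy loops running as two threads.
    t0 = clock::now();
    std::thread a(burn), b(burn);
    a.join();
    b.join();
    double two = std::chrono::duration<double>(clock::now() - t0).count();

    // With at least 2 free cores, "two" is close to "one" rather than
    // double, which is evidence of real parallel execution.
    std::cout << "1 thread:  " << one << " s\n";
    std::cout << "2 threads: " << two << " s\n";
}
```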
In practice, software applications can perform synchronization that may cause some cores to be inactive for a given period of time. Hardware operations can also cause this (e.g. waiting for memory data to be retrieved, or doing an atomic operation).
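Here is a minimal sketch of the software side, assuming a plain mutex: while one thread holds the lock, the other thread blocks, and the core it was running on does no useful work for this program during that time.

```cpp
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;

void worker(int id) {
    // Only one thread can be inside this critical section at a time.
    // The other thread blocks on the lock, leaving its core idle
    // (as far as this program is concerned).
    std::lock_guard<std::mutex> lock(m);
    std::cout << "thread " << id << " entered the critical section\n";
    std::this_thread::sleep_for(std::chrono::milliseconds(500));
}

int main() {
    std::thread a(worker, 1), b(worker, 2);
    a.join();
    b.join();
}
```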
Moreover, on modern processors, physical cores are often split into multiple hardware threads that can each execute a different task. This is called SMT (aka Hyper-Threading). On fairly recent x86 processors, the 2 hardware threads of the same core can execute 2 tasks simultaneously, in parallel. The tasks share parts of the physical core, such as the execution units, so using 2 hardware threads can be faster than 1 for some tasks (typically the ones not fully using the resources of the core).
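One way to observe the hardware threads from software: the standard C++ call below reports the number of logical CPUs the OS exposes, which on an SMT machine is typically twice the number of physical cores (the exact ratio depends on the CPU).

```cpp
#include <iostream>
#include <thread>

int main() {
    // Number of hardware threads (logical CPUs) the system exposes.
    // On a 4-core CPU with 2 hardware threads per core this is typically 8.
    // Note: the standard allows this to return 0 if the value is not computable.
    std::cout << std::thread::hardware_concurrency()
              << " hardware threads available\n";
}
```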
Having 2 hardware threads that cannot truly run in parallel but run concurrently at a fine granularity can still be beneficial for performance. In fact, this was the case for a long time (during the last decade). For example, when a task is latency bound (e.g. waiting for data to be retrieved from RAM), another task can be scheduled to do some work in the meantime, improving the overall efficiency. This was the initial goal of SMT. The same is true for preempted tasks on the same core (though the granularity needs to be much larger): one process can start a networking operation and be preempted so another process can do some work, before being preempted in turn once data arrives from the network.
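A rough sketch of that idea, assuming one thread that merely waits (standing in for a thread blocked on the network) and one that computes: even if both threads are restricted to a single core (e.g. with `taskset -c 0` on Linux), the total time stays close to the longer of the two tasks, because the waiting thread releases the core back to the scheduler.

```cpp
#include <chrono>
#include <iostream>
#include <thread>

// Simulates a latency-bound task: the thread just blocks, as if waiting
// for the network, using no execution resources while it sleeps.
void wait_for_data() {
    std::this_thread::sleep_for(std::chrono::seconds(1));
}

// Simulates a compute-bound task (iteration count is arbitrary).
void compute() {
    volatile unsigned long long x = 0;
    for (unsigned long long i = 0; i < 500'000'000ULL; ++i) x += i;
}

int main() {
    using clock = std::chrono::steady_clock;
    auto t0 = clock::now();

    // While one task is blocked waiting, the core is free to run the other,
    // so the total time is roughly the longer of the two, not their sum.
    std::thread waiter(wait_for_data), worker(compute);
    waiter.join();
    worker.join();

    std::cout << std::chrono::duration<double>(clock::now() - t0).count()
              << " s total\n";
}
```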