Most processes use only a single core. Multithreading in most languages and environments requires extra work and thought to be put into the software architecture and DEFINITELY is not something that happens automatically.
I do not know how Unity and Unreal handle things, but I would be quite shocked if most modern games used more than a few threads (which may or may not actually be mapped to separate cores) TOPS, with 99% of indie games using only a single thread.
A game engine is traditionally not something that benefits immensely from multithreading, as no game in its right mind is blocking on IO consistently... Also, the most highly-parallelizable problem sets within gaming tend to lend themselves better to data-level parallelism than to task-level parallelism. That's where GPGPU comes into play for things like hardware-accelerating a physics engine... It's parallelized on the GPU, not the CPU.
Then you also have to keep in mind that a multithreaded environment comes with its own inherent overhead as well... Obviously every thread duplicates the runtime stack and local data, but more importantly, a large amount of programming complexity is introduced for synchronization and data protection. Quite often this is enough to destroy any benefit you would get from parallelization.
The few scenarios where multithreading MAY make sense in game development are:
- Streaming Assets - whether it be textures, audio, or video, it's useful to defer file accesses to separate threads, so the entire engine doesn't have to block on a disk read. Elysian Shadows gives each simultaneously playing OGG track its own thread, so that the main engine isn't blocking while the OS tries to load OGG data from the hard drive... but honestly, I'm mostly doing it this way for mobile devices, consoles, and because I'm a perfectionist. A modern engine can TOTALLY just load the entire fucking OGG into RAM and not need to bother with streaming.
- Networking - this is one of the main ones. It's very useful to defer networking logic to another thread so that the main logic is not blocking while the OS transfers packets... But actually even then, I just did the networking code for EVMU with all nonblocking OS calls being polled every frame... I have no doubt this is actually superior to a multithreaded implementation, as it's not like I will ever need the updated data mid-frame, I'm already not waiting on incoming data, and I would have to introduce additional complexity to synchronize the threads and data.
- Rendering - Despite what people may think/say, this is actually not usually beneficial in game development... It's ONLY beneficial when you have a shitty driver or are transferring a massive amount of data to or from the GPU. In that scenario, it allows the game logic to continue executing without having to wait for the transfer to the GPU to succeed... But in actuality, MOST OpenGL drivers are multithreaded to begin with, and are usually queueing GPU commands to be flushed asynchronously later, so this is already done under the hood for you... Also, modern GPU usage paradigms are moving more and more data onto the GPU up front, so there is less and less need for these kinds of frame-by-frame transactions anymore. But even in the olden days of yore and on older consoles with a fixed-function pipeline, where each frame required a massive amount of data transfer, a separate thread for GPU transfers STILL didn't make much sense. Don't forget that it's not like you're calling memcpy() or using CPU code to transfer that kind of data. That's where asynchronous DMA transfers come into the picture... That's why the DMA controller exists: it's an entire piece of hardware dedicated to offloading data transfers asynchronously. That's how most CPU/GPU transactions are handled on the DC, and I'm sure that's still how just about every console, and even the PC (multithreaded implementations included), handles it.
There are probably a few other good examples out there too, but video games do not tend to make heavy use of threads.