AMD disagrees with you. Wait, are we talking about AMD Ryzen or some other Ryzen?
Consider the example of the AMD Ryzen™ 7 1700 processor. It has a base clock of 3.0 GHz.
Just checked here. 25 peak concurrent players lol :p
No one plays this game, literally.
It isn't bad if everything else is equal. But everything else isn't: the 1700 runs at 3.0 GHz and the 7700K at 4.2 GHz, so that's 20% better per-clock performance on top of the clock advantage, which is 40% in this particular case.
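The clock-advantage figure is quick arithmetic on the base clocks quoted above:

```python
# Base clocks quoted above (GHz)
ryzen_1700 = 3.0
i7_7700k = 4.2

clock_advantage = i7_7700k / ryzen_1700 - 1
print(f"clock advantage: {clock_advantage:.0%}")  # clock advantage: 40%
```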
Overclocking won't change the situation either. You have twice the chance of getting a Kaby Lake to 5 GHz as you do of getting a Ryzen to 4 GHz.
As of 3/13/17, the top 29% of 1700Xs were able to hit 4.0GHz or greater.
And remember it was the 1700X, which is a lot more expensive than the 7700K.
As of 2/22/17, the top 59% of tested 7700Ks were able to hit 5.0GHz or greater.
If you noticed here
It's noticeable if you have an 85 Hz monitor or above. Not to mention games will definitely get more demanding in the future. I give it a year until that number drops to 60, which is the refresh rate used by the vast majority of computer users.
The only demanding thing in the scenario you mentioned is streaming, which can easily be offloaded to the GPU; it is much more efficient at encoding anyway. That is why AMD themselves released the ReLive driver a couple of months ago, a blatant copy of Nvidia's ShadowPlay, which has been around for years.
The rest can be done even on a $99 smartphone CPU.
Sorry, I didn't word it properly. I meant that while additional cores have an advantage, per-core performance is still the most important thing to consider. As @iKirin mentioned, single-core performance is not going away anytime soon. You can only offload so much to the other cores without spending significantly more effort: programming is hard, and programming for multithreading is a LOT harder. Anyway, workloads that do spread across many cores, like video encoding or 3D rendering, are most of the time better done on the GPU anyway, as it can do them an order of magnitude faster, making the CPU irrelevant in this case, as I mentioned in the streaming example above.
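The "you can only offload so much" point is basically Amdahl's law. A minimal Python sketch (the 80% parallel fraction is just an illustrative assumption, not a measured figure):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Overall speedup when only a fraction of the work can run in parallel."""
    return 1 / ((1 - parallel_fraction) + parallel_fraction / cores)

# Assume (illustratively) that 80% of the workload parallelizes:
print(round(amdahl_speedup(0.8, 4), 2))  # 2.5  on a quad-core
print(round(amdahl_speedup(0.8, 8), 2))  # 3.33 on an eight-core
```

Doubling the cores from 4 to 8 gains you well under 2x, which is why the serial (single-core) part keeps mattering.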
In addition to that, major applications may seem to scale well across cores, but not all of their internal components necessarily behave that way.
Here's a good example:
I use Autodesk Maya for animation, and after a ton of research, I found that many of the specific processes I work with (which are character rig deformations and viewport playback speed) are single threaded by default.
The proportion is even more staggering once you step into the real world. Many still use older versions of the software for cost or compatibility reasons, which, you guessed it, are still single-threaded.
I think comparing two companies with different cultures, budgets, strategies, etc. is a bit like comparing apples to oranges. It's like saying "the fourth-gen Eve V will have a small battery" just because the Surface was done that way.
I personally view Ryzen much like the first iteration of the FX, which started its life under the codename Bulldozer. We all know how that ended up.
Well, to be fair, the i3 has two fewer cores than an i5 or i7 and therefore produces far less heat. That is why Intel ships the i3 with smaller coolers than its bigger brothers.
It's not just AMD.
It's a common strategy when you don't have as much R&D budget as your main competitor. You've probably seen the MediaTek SoCs that advertise "true 8 cores" as opposed to Qualcomm's quad-core or big.LITTLE 8-core configurations. Making a core run faster is far more difficult, and costly, than just adding more cores.
The benefit is, well, you score higher in benchmarks.
But we all know who provides the better overall experience at the end of the day.
(edited to reflect clarifications from https://eve.community/t/amds-ryzen-is-here/5497/90)