seems to be the real deal http://wccftech.com/asus-rog-strix-notebook-amd-ryzen-7-8-core-cpu/
Nah, 65W is like Intel’s desktop CPUs. There are laptops with overclockable desktop-grade 95W CPUs, but those are not really what we usually mean when we say “laptop”… This one is similar, so I wouldn’t call it a mobile CPU.
AMD’s Raven Ridge got announced or whatever. I need to go sleep; here’s the news:
Seems to perform incredibly well, we will see.
It would be very interesting to get benchmarks about a 9W Ryzen with fanless cooling.
The AnandTech article is (as usual) a bit more in-depth than the others; worth a read in my mind.
And Notebookcheck has a short article - not a preview/review - about one of the first AMD Raven Ridge notebooks.
Straight from AnandTech
So what is interesting here is that in each of the benchmarks, the Core i5 scored higher than the Core i7. The Core i7 was in an Acer Spin 5, whereas the Core i5 was in an Acer Swift 3. It is likely that the Swift 3 is a bulkier system capable of dissipating more energy, however it does mean that the Core i5 outperforms the Core i7 in all benchmarks, and it also outperforms the Ryzen 5 in all the benchmarks except TrueCrypt.
So essentially they took a gimped i7, the worst possible i7 example that performs even worse than its i5 counterparts, just to make the Ryzen look good. Deceptive marketing at best.
Nothing borderline about it. In the US it had to settle with AMD for $1.3 billion and in the EU it has been fined roughly the same amount. The question is - has it learned its lesson yet?
HP has some really interesting options right now, but their quality control is awful enough to keep me away.
if the system has a lot of cores available and a background process performs some very light work (such as checking for updates), rather than dropping 500-800 MHz because more cores are loaded, the system will keep at the high frequency. AMD is stating that this has a big effect on real-world workloads, typically those that have variable thread workloads such as gaming.
Sounds like pretty bad power management on paper… Of course, we don’t have real-world examples yet, but it makes a hell of a lot less sense than Intel’s approach. So with Ryzen Mobile the CPU will stay at high clock rates, generating heat and wasting battery life without any purpose, while Intel’s CPU will clock down to conserve power and keep temperatures low.
I’m surprised that Anandtech described this positively.
[The new turbo model] will provide the best turbo frequency it can, regardless of if one thread is being used or all threads are being used.
That’s another example of bad power management, because even if you run one single-threaded application that is CPU heavy, it will push the whole CPU to the limit, generating unnecessary heat and shortening battery life, instead of boosting that one core that is needed.
In addition to what @Patrick_Hermawan noted, this is quite incomprehensible to me.
I really have no idea why they would do this, but maybe they just failed to make a mobile processor that could compete with Intel and decided to make their marketing the opposite of all Intel’s advantages hoping to win some customers this way.
I think you misunderstood the article. With Intel CPU, when a second core is loaded, the first core has to drop by 500-800 MHz (1-core turbo vs all-core turbo). For example, for the 8700K, when 2 or more cores are loaded, none of them can exceed 4.3 GHz (all-core turbo) no matter how much load is given to each core. When, and only when one core is active, it could boost all the way to 4.7 GHz (1-core turbo). Several reviewers have noted this issue, as Windows 10 often fires up some background task at random times, and that affects the gaming benchmark by quite a noticeable amount.
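That per-core-count cap can be sketched as a simple lookup table. Only the 1-core (4.7 GHz) and all-core (4.3 GHz) values for the 8700K come from the discussion above; the intermediate bins here are made up for illustration:

```python
# Illustrative sketch of an Intel-style per-core-count turbo table.
# Only the 1-core (4.7 GHz) and all-core (4.3 GHz) 8700K values are from
# the post above; the 2-5 core bins are hypothetical placeholders.
TURBO_TABLE_GHZ = {1: 4.7, 2: 4.6, 3: 4.5, 4: 4.4, 5: 4.4, 6: 4.3}

def max_turbo(active_cores: int) -> float:
    """Max boost any core may reach, given how many cores are active."""
    return TURBO_TABLE_GHZ[min(max(active_cores, 1), 6)]

# A background task waking a second core caps the game's main thread:
print(max_turbo(1))  # 4.7
print(max_turbo(2))  # 4.6
```

The point is that the cap depends only on how many cores are active, not on how heavy the load on each of them is, which is why a trivial background task costs the main thread a bin.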
I mean, if what you said were true, I would also be surprised to see AnandTech describe this positively.
I understood it very well. In mobile chips, it’s crucial to save energy and limit heat. You can’t afford to have all the cores running at top boost frequency; it would generate too much heat. Either that, or the boost frequency has to be very low.
I think in short it means that the second core has no impact on the boost clock of the first core, unlike on Intel’s platform, where any kind of load on the second core brings the first core down by a few hundred megahertz. The clock speed of the second core itself is not being discussed there.
I don’t think you should compare Intel’s Turbo Boost to what AMD claims with having only an all-core boost, and then base a claim on a hypothetical use case. The benefit on Intel’s side is a higher single-core score compared to what would be achieved at the lower all-core boost clock. Does it matter? It kind of does and doesn’t. For gaming, chances are high that games will use the 4 available cores. Only really old games are optimized for single/dual-thread loads, for which the Turbo Boost would kinda help. I say kinda, since I don’t know if the extra clock speed would really matter for those old games. Once we go to 6 or 8 cores, the higher boost when not all cores are used will be beneficial, since even today many games are released that are poorly optimized for the extra cores, and since they are so recent, the clock speed can really matter for the game experience.
So the only big difference is that Intel lets the clock speed rise when fewer cores are used, since there is some headroom left in the TDP, while AMD decided not to do that. For multitasking and gaming and such, you are more likely to go the multithreaded route, and then Intel’s higher turbo clock at fewer active cores matters less. I don’t know of single-threaded applications that split their threads once they fill up a core, so it is not like AMD would ‘waste battery’ by activating an extra core once the first one is filled. Instead, you hit AMD’s performance limit at the all-core turbo, not at a special single-core turbo. You can always show me programs where that would matter, and how it would drain your battery faster; I don’t know every possible use case.
For me, both would be used mostly at all-core turbo (gaming has mostly moved to 4-core usage; video editing and rendering use all available cores). Looking at that use case, the 707 Cinebench score of the R7 2700U is very impressive, beating my 47W 4720HQ at 694. If it does that, I hope it comes to Wacom-compatible convertibles. I am also wondering how it would compare to a Lenovo Yoga 720 (7700HQ and GTX 1050), and how video rendering would go with it, since the iGPU would then be used to accelerate the process instead of a dGPU (at least, that is the point of those APUs).
I also can’t wait for their desktop APUs, since that would save me buying a GPU until the next gen is released, and with FreeSync it could probably game like a system with a GTX 1050, which would be very nice.
I think it’s more like this:
Let’s say we have a dual-core CPU with a 2 GHz base clock, 2.5 GHz all-core turbo, and 3 GHz single-core turbo, sitting below an overkill cooler (so temperature and TDP are not an issue).
Scenario 1: Core #1 is fully loaded, core #2 has no load.
Intel: Core #1: 3 GHz, Core #2: 0.8 GHz
AMD: Core #1: 3 GHz, Core #2: 0.8 GHz
Scenario 2: Core #1 is fully loaded, core #2 is also fully loaded.
Intel: Core #1: 2.5 GHz, Core #2: 2.5 GHz
AMD: Core #1: 2.5 GHz, Core #2: 2.5 GHz
Scenario 3: Core #1 is fully loaded, core #2 is lightly loaded
Intel: Core #1: 2.5 GHz, Core #2: 1.2 GHz
AMD: Core #1: 3.0 GHz, Core #2: 1.2 GHz
Yeah, like the 4-5 games that are finally optimized for multi-threading… Seriously, single-core performance matters a lot for games.
Funny that most current games show a big improvement when switching from an i3 to an i7 (with the exception of the 8th gen), so I guess there are more than just 4-5 games optimized for multithreading. For real multithreading, where the 1800X would be an improvement over the 1600X, there are indeed not that many games available.
I can imagine that the higher clocks when fewer cores are used are useful if you are gaming with a CPU with more than 4 cores. For 4-core CPUs, it would only matter for outdated game engines or retro gaming, and then I wonder if it would even matter: back when game engines only stressed 1-2 cores, CPUs were clocked lower than the current generation.
Which doesn’t really mean a different power consumption.
Which also doesn’t mean a different power consumption.
It is the other way around, with Intel boosting higher. But I don’t know if it would boost a core if the other one is partially loaded. So it should be more like this then:
Case 1: max single-threaded load
Intel: core1 3.5 GHz, core2 0.8 GHz
AMD: core1 3.0 GHz, core2 0.8 GHz
Case 2: max multithreaded load
Intel: core1 3.0 GHz, core2 3.0 GHz
AMD: core1 3.0 GHz, core2 3.0 GHz
Case 3: a strong single-threaded load and a light background load
Intel: core1 3.0 GHz, core2 1.2 GHz
AMD: core1 3.0 GHz, core2 1.2 GHz
Why do I think it works this way? Because Intel’s Turbo Boost implies that the boost applies to the stressed cores, so I understand it as: once another core is loaded with a thread, the turbo boost is lowered to whatever the boost for the total number of loaded cores is.
A dual core is a bad example, since there will always be background programs running on the otherwise idle core when the first core is fully loaded.
Another possible way Turbo Boost could work is that it would still boost the fully loaded core even if another core is slightly loaded, but not all the way (since another core is running and producing heat, the conditions are suboptimal for reaching the full single-core turbo). So then case 3 would become:
Intel: core1 3.3 GHz, core2 1.2 GHz
AMD: core1 3.0 GHz, core2 1.2 GHz
But I still can’t see a power saving in that higher single-core boost. On one hand, AMD runs at a lower clock speed in case 1 (very hypothetical) and case 2 (more realistic); a lower clock speed requires less power and produces less heat, but it also delivers lower performance, so it needs power for a longer period of time. So on the power side, I can’t give a clear answer. The performance side is easier: since AMD’s IPC is close to Intel’s this time, the single-core turbo would increase single-core performance. That has its use cases, like old unoptimized games, but for recent games it depends on the core count the game is optimized for (GTA V dual-core, more recent games quad-core, and there are already games benefiting from more than 4 cores, but those remain exceptions for now). It also depends on how Turbo Boost works: dropping a whole bin once an extra core activates, or only lowering a bit as long as it fits within the TDP. With the latter interpretation, games that have one big primary thread and several very small ones would get slightly better performance, because the primary thread gets boosted as far as the situation allows. Whether that is noticeable is another question.
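To show why the power question has no clear answer, here is a back-of-envelope “race to idle” sketch. The power numbers and the P ∝ f³ scaling (dynamic power roughly goes with f·V², and voltage rises with frequency near the top of the curve) are assumptions for illustration, not measured values:

```python
# Back-of-envelope "race to idle" arithmetic for the power question above.
# Assumptions: dynamic power scales roughly as f^3 near the top of the
# V/f curve, and a CPU-bound task's runtime scales as 1/f.
# The 15 W figure at 3 GHz is made up for illustration.
def energy_joules(freq_ghz: float, work_gcycles: float,
                  p_at_3ghz_w: float = 15.0) -> float:
    power_w = p_at_3ghz_w * (freq_ghz / 3.0) ** 3  # P grows with f^3
    seconds = work_gcycles / freq_ghz              # time shrinks with 1/f
    return power_w * seconds

# Same fixed amount of work (9 G-cycles) at the two boost clocks:
print(energy_joules(3.0, 9.0))  # 45.0 J
print(energy_joules(3.3, 9.0))  # finishes ~10% sooner, but costs more joules
```

Under these assumptions, energy ends up scaling with f², so the higher boost finishes sooner but burns more total energy; with a milder power curve, the conclusion could flip, which is exactly why it’s hard to call a winner on the power side.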
I think the example by @Patrick_Hermawan was just to showcase how AMD’s boost works compared to the one Intel uses - and he used the same numbers to make it easier to read.
Also with your example I’m not sure where you’re pulling those numbers from, as it depends highly on the CPU how high the ‘all-core boost’ from Intel is.
Finally, since you were mentioning power consumption: how much power a CPU draws at a certain frequency is completely dependent on the architecture (and the node).
i7s usually have higher clock speeds too…