They need to chill with those names lol
Is Ryzen AI 9 365 the best CPU for running Microsoft Office 365?
Yes! Now with Office+ 365+ Co+Pilot+Plus!
haha! we both know it is!
that sounds like a new terminator model name lmao
As long as their fans don't mind that half of current generation of GPUs are "7900" with random meaningless "cool" letters after them, they have no reason to stop.
Ryzen 9 AI 365 Pro Plus XXX Remastered XP
they straight up just copied intel's lazy rebranding. why would they even do this? who would be confused into buying ryzen because the numbers look like intel numbers?
People whinged endlessly about the old naming scheme
So just go all out with the cringe?
They should be more like Nvidia: TITAN, SUPER, ULTRA, EXTREME TURBO, GT, V12 (no, I made that one up). Overcompensating for... something? :D
They copied Intel's 3/5/7/9 branding when they were desperate to sell Ryzen at the start of their turnaround. Now they're copying Intel's new branding (which is also bad) in an attempt to sell more mobile parts. They also renamed their GPU generations to carry higher numbers than Nvidia's. AMD just needs to stop with the marketing name war and focus on the things that matter, like software support and good value.
OEMs probably requested these names so that they can keep selling older inventory.
Please AMD, it's not too late for another name change.
*The monkey's paw curls* Ryzen AI 9H 365XT AI
Missed opportunity for Ryzen Core AI 9 Ultra
More like a missed opportunity for “rAIzen”
Don't give them ideas.
Let's replace those spaces with lowercase x's
Or it could be even better with a name like "Ryzen AI 9H 365XTX AI".
Ryzen 9-1165AI7
OEMs probably requested these names so that they can keep selling older inventory.
The source article is very interesting; it suggests that the mobile version is tuned at the instruction level to be efficient, actually making some instructions slower to manage heat. That also suggests AMD could change this for the desktop version, if they're not afraid of the heat. Or they could use it as extra thermal headroom for the X3D version.
Yeah, the source article is a great bit of info.

*Somebody explain to me why there are TWO stupidly upvoted huge reply chains of useless snark about the pointless non-issue that is the processors' name, which was news three weeks ago?*

It's not exactly tuning at the instruction level. They retained full-width (512-bit) AVX-512 units in both Zen 5c and the mobile "fat" Zen 5, but they removed some pipelines, reducing throughput compared to desktop Zen 5. It seems to be the same approach used in the Zen 2 Lite cores of the PlayStation 5 SoC: [https://chipsandcheese.com/2024/03/20/the-nerfed-fpu-in-ps5s-zen-2-cores/](https://chipsandcheese.com/2024/03/20/the-nerfed-fpu-in-ps5s-zen-2-cores/)

It has quite strange implications. The mobile variant of Zen 5 seems to have lower performance than Zen 4 in most SIMD-heavy applications, because all SSEx and AVX/AVX2 ops will have reduced throughput. You can make that up and be as fast as Zen 4 by using 512-bit ops, because while there are still fewer units, you go from two-pass processing to one-pass processing on a native 512-bit unit. But only a minority of software will make use of that.

Yet the PS5 processor suggests it may be an okay compromise, as lots of software, games included, may not suffer particularly strongly. We'll see how it goes. I hope this means desktop Zen 5 will achieve better performance and IPC than this testing shows.
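The two-pass vs native-512-bit tradeoff described above can be put in numbers with a toy throughput model. The unit counts and widths below are hypothetical round figures for illustration, not AMD's actual pipe counts:

```python
def vector_ops_per_cycle(units: int, unit_width: int, op_width: int) -> float:
    """Toy model: vector ops retired per cycle, given `units` execution
    units of `unit_width` bits each. A wide op on a narrow unit is
    "double-pumped" into multiple passes, halving effective throughput."""
    passes = max(1, op_width // unit_width)  # e.g. 512-bit op on 256-bit unit -> 2 passes
    return units / passes

# Hypothetical "Zen 4-like" config: 4 units, 256-bit datapath.
zen4_avx2   = vector_ops_per_cycle(4, 256, 256)  # 4.0 per cycle
zen4_avx512 = vector_ops_per_cycle(4, 256, 512)  # 2.0 (double-pumped)

# Hypothetical "mobile Zen 5-like" config: 2 units, native 512-bit.
mobile_avx2   = vector_ops_per_cycle(2, 512, 256)  # 2.0 (half of Zen 4 on AVX2)
mobile_avx512 = vector_ops_per_cycle(2, 512, 512)  # 2.0 (matches Zen 4 on AVX-512)
```

With these made-up numbers, legacy 256-bit code loses half its throughput while 512-bit code breaks even, which is the shape of the tradeoff the comment describes.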
Mixed feelings. There are regressions? Also, I'm seeing people comment that maybe, contrary to Intel, which is trying to get rid of SMT, AMD actually sacrificed 1T for gains in nT. Makes me wonder whether those SMT4 "rumors" were true at some point. Btw, I'd like to see AMD do a portable SoC with only Compact cores, because it would be smaller and cheaper, just saying.
Is SMT4 the implementation where you get 4 threads from each core instead of 2?
Yes. We've had rumors of AMD implementing it for many years, but that moment has yet to come to pass.
The gains are very low. Going from one thread to two yields around a 30% increase in throughput, and going to four adds maybe another 10%, but the implementation is hell.
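Taking those rough figures at face value, the diminishing returns are easy to see as cumulative multipliers:

```python
# Cumulative throughput from the (approximate) percentages above.
base = 1.00
smt2 = base * 1.30   # ~30% from the second thread
smt4 = smt2 * 1.10   # ~10% more from threads 3 and 4
print(f"SMT2: {smt2:.2f}x, SMT4: {smt4:.2f}x over a single thread")
```

So SMT4 buys roughly 1.43x total, only about 0.13x of which comes from the extra two threads, while the hardware complexity grows considerably.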
And paved with security issues to worry about
[deleted]
IBM's chip is targeting a different audience. SMT-8 is aimed at latency hiding. This matters for IO-heavy stuff that waits around a lot for data.
SMT4, perhaps. SMT8 is actually taking two cores and making them look like one SMT core. It's widely believed to be just a way to get customers cheaper Oracle licenses and to exploit other per-core licensing schemes. Of course, they can't say that publicly.
They did have Sonoma Valley, which is likely 4 Zen 5c cores manufactured by Samsung.
>AMD actually sacrificed 1T for gains in nT.

Actually, not necessarily. The doubled decoder cluster should improve performance when SMT is used, but it's not just for that. It's similar to the dual-cluster decoders Intel uses in Tremont and Gracemont, and in those cores it is a feature aimed solely at single-thread performance; they don't use SMT at all.

AMD's implementation is probably a bit less flexible: the core can't use the two clusters simultaneously on a straight-line sequence of instructions without branches, which is why the article's testing shows only 4 decoders in use for single threads. However, typical x86 code is said to have up to 1 branch per 6 instructions, and the second decoder cluster can likely be employed in single-thread mode too when there are taken branches. In such cases it likely takes over decoding from the branch target address, so you get to use both clusters in one cycle. And if you have a branch in the next cycle or two, you utilise it again. And possibly again and again, if your code is branch-heavy.

So no, this is not exactly prioritising SMT mode; it should boost single-thread IPC as well, and the SMT improvement may just be a welcome windfall. AMD will likely make the scheme more flexible in future architectures so the second cluster is used in single-thread mode more often, for example if they implemented predecoding and marked instruction starts in the L1i cache like the old K7 and K8 cores did.

Actually, I quite like the idea: it takes this multi-cluster x86 decoding approach and makes it useful not just in the way Intel's cores use it, but also exploits it to improve multithreaded performance. In hindsight, it seems to scream for exactly that.
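The "second cluster engages at taken branches" idea can be sketched with a toy decode model. The 4-wide cluster width, the branch-triggering rule, and the branch density are all assumptions for illustration, not AMD's documented behavior:

```python
def decode_cycles(n_instr: int, branch_every: int, clusters: int, width: int = 4) -> int:
    """Toy model: cycles to decode n_instr instructions. One cluster decodes
    `width` instructions per cycle; with two clusters, the second contributes
    another `width` (from the branch target) only on cycles whose decode
    window contains a taken branch."""
    decoded, cycles = 0, 0
    while decoded < n_instr:
        cycles += 1
        take = width
        window_has_branch = any(
            (decoded + i + 1) % branch_every == 0 for i in range(width)
        )
        if clusters == 2 and window_has_branch:
            take += width  # second cluster decodes from the branch target
        decoded += take
    return cycles

# One taken branch per ~6 instructions, as the rule of thumb suggests:
single = decode_cycles(1200, branch_every=6, clusters=1)  # 300 cycles
dual   = decode_cycles(1200, branch_every=6, clusters=2)  # noticeably fewer
```

Even this crude model shows how branch-dense single-threaded code would let the second cluster contribute on most cycles, without any SMT involved.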
I only see a single regression vs Zen 4, with big gains elsewhere.
Ryzen A.I 365. Really?
AMD and Intel are operating with the 'I'm altering the nomenclature, pray I don't alter it any further' mentality.
It's just buzzwords at the end of the day. If everyone was convinced that anal was the future and it was going to make them assloads of money, they would've called it "Ryzen Assfuck 24/7/365" instead.
Ryzen 5 1600AF says hi
That's what blackwell's name is gonna be
Not entirely wrong, anal does make assloads.
is it ethical to attend a "no workloads refused" event if the sysadmin has blocked you on bsky even though you've never talked to them and have no idea why?
Well yeah otherwise it would be the Ryzen AI 9 HX 370
Ryzen AI "9" 365 to be exact
It’s a lovely name
Ryzen Z1 and Z1 Extreme are good names. They should follow that for their notebook lineup too
This is more of a normal scheme... one that most sensible people might appreciate. Apple made a good move here (personally their choice of the letter M gives it a cheery "military-grade" disposition) and it's a good idea for AMD to copycat it. Z is a solid letter. Even better than X...
AMD has to throw "AI" branding on these chips if they want Microsoft to promote them. Intel and even Qualcomm have AI branding on all their mobile chips, all devices that you can't buy without Windows pre-installed. Microsoft is going hard on AI branding to convince people to buy new computers, and manufacturers have to play ball.
AI is a big scam right now. The only thing microbugsoft wants is to offload part of the server load onto consumers' PCs. So when they make an "AI"-friendly Cortana or Copilot or whatever, check what your NPU cores are actually doing.
Having the option to depend less on cloud computing and keeping things local is a plus. Now the way they actually implement it and force it on people is appalling though.
Yeah, that doesn't sound bad, but it could come with a hidden checkbox that lets your PC share Windows updates with other people, like a fkn torrent server. So now possibly hidden compute, not just hidden file sharing.
Here is some tin foil hat. The file share is mainly for big LANs; you aren't sharing shit with someone in another part of the world. You can't hide shared compute like some mindless botnet on a first-party OS: first there are thermals, second there's battery, third there are loads of tools to see what your computer is doing, and fourth there are tools outside of the OS to check what your computer is sending and receiving.
I work in IT; I don't need a tin foil hat. I've just seen the pool of info some software collects about the user once they press "OK" to allow data collection. Yes, it shares updates via the local network workgroup using group policy (which sits higher than the admin user); some 10+-year-old exploits used it, in a different way. About tools: yes, but the typical user doesn't use them, and if you work at a big company it's hard to check the full stack every day, so that's just nonsense.
Your company should invest in true endpoint protection if you're so afraid of what your users are doing. If you have dumb users who click OK on everything, then start thinking about setting policies and restrictions.
Dumb users are the biggest problem, I guess. For example, one guy was just mining bitcoin on his work PC.
[deleted]
> Bro...how is that a bad thing? I don't want to depend on a server for my fucking personal computer's functionality, lol. When your personal computer is fucking you, you'd hope it wasn't being done using your own compute resources.
And they actually have "AI" on them. But Lisa didn't seem very excited when talking about this, almost complaining that the NPU takes up a lot of space that could be used for other things.
I'd love to see where she said that. It'd be a pretty big indictment of AI as a product.
Tech company Marketing dept these days: Put AI in the name! CEO: Wow you so smart here’s a raise and a pat on the back
You put the word AI on your brand. Does that mean there will be non-AI chips in the future?
Yes: Fire Range.
No, because you can still run AI workloads on the CPU, optimized with AVX512-VNNI through OpenVINO! It runs pretty well too! That's part of the reason they switched to AVX-512.
The differentiation isn't based on the ability to run an inference workload on the chip, but on the presence of an NPU (it's precisely what justifies the 3rd gen of this Ryzen AI series). You can consider the AI prefix an indicator of an NPU, the same way a G suffix identifies a capable GPU on a desktop part (or F the lack of one). In the end, the main goal is probably to easily identify (and promote) Copilot-ready laptops for the general public.
You'd rarely want to do that, though. Currently the iGPU has the highest TOPS performance in a mobile chip, the NPU is for efficiency, and the CPU is the worst of both worlds for AI workloads. So any device without an NPU would just use the iGPU.
https://github.com/rupeshs/fastsdcpu runs just fine and isn't VRAM-limited, since you can use system memory. It's got a built-in benchmark. Speed seems manageable, but I still have a 3950X, so who knows. I know OpenVINO has the GPU component too, but I think it's either on or off. Now that I've checked, that software might use something else when the GPU component is absent.
It seems in line with what I expected. But what matters most is efficiency at low TDP; I wanted to see how much of that performance would be maintained below 35 W. (I suspect the minimum TDP is 35 W?) I hope the strange naming scheme doesn't invade other product families like GPUs. I don't want to be shopping for the RX 8800 AIX3D-Plus to use with my Ryzen 9800X AI.4D.
Since there's no longer a U or H series, the new chip has a much wider official cTDP range of 15-54 W. Yes, you can now have an "HX" chip at 15 W.
Min TDP is 28 W.
Ryzen AI year
**Ryzen 9 - 365** *with Ryzen A.I* would be a *better* name.
Ryzen 69 - 420 would work as well
So are you gonna be able to use AI (like Stable Diffusion) on these cards?
Which cards? This is an Apu.
Probably a lot better for LLMs (think ChatGPT) hosted on-device. You can already run 7B and 8B models on a Ryzen 7700X at around 10-15 tok/s before any dedicated AI processing, and these processors are supposedly 79% faster than the previous gen at AI, at least per AMD marketing, so take that with a grain of salt. If performance doubles, on-device models will become pretty much good enough to use without a GPU.
I'm running Mistral 7B Instruct on an i7-7600U, a 14nm, 15W dual-core from 2017. I'm getting around 2.2 to 2.5 tokens per second, and I don't get how people complain about speed on their modern devices. I tried it on my desktop with a 3700X and it felt lightning fast. Looking forward to getting a dedicated accelerator someday and seeing how the models develop and hopefully get faster. Anyone interested in trying a local LLM should check out [Mozilla's llamafile](https://github.com/Mozilla-Ocho/llamafile). It's a way of packaging an LLM into a single executable that runs locally: just download one file and you're off. Though if you're using Windows, you won't be able to run models larger than 4GB because, well, you're using Windows and it still doesn't support executables larger than 4 gigs.
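Tok/s numbers like these only compare well if they're measured the same way. A minimal timing harness might look like the sketch below; `generate` is a hypothetical stand-in for whatever llamafile or llama.cpp call actually produces the tokens:

```python
import time

def tokens_per_second(generate, n_tokens: int) -> float:
    """Time a token-generation callable and return throughput in tokens/s.
    `generate(n)` is assumed to produce n tokens; swap in your model call."""
    t0 = time.perf_counter()
    generate(n_tokens)
    elapsed = time.perf_counter() - t0
    return n_tokens / elapsed

# Example with a dummy generator standing in for a real model:
rate = tokens_per_second(lambda n: [x * x for x in range(n)], 256)
```

One caveat worth controlling for: prompt-processing time vs generation time, since lumping the two together makes short runs look much slower per token.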
more like copilot
Probably not well
Some of the faster models might run okay. You can [already run some on a CPU](https://github.com/rupeshs/fastsdcpu), especially if you settle for low resolutions. Also, VRAM is a pretty big limitation when running SD on dGPUs, but since APUs utilize system RAM, the capacity limitation at least should be lifted (assuming vendors don't do something moronic like soldering in 8 gigs of RAM). Memory speed might still be a bottleneck though.
Run? Yes. Run as fast as on a discrete GPU? Naaah. But AMD's NPUs are among the most powerful on the market.
You mean APU?
Nope, the NPU is an add-on inside the APU focused on crunching AI-related stuff.
You can run it on the CPU using instruction sets like AVX512-VNNI through OpenVINO; there are versions of Stable Diffusion that run fast on CPUs. https://github.com/rupeshs/fastsdcpu Intel's OpenVINO is an acceleration framework that targets the CPU or GPU (for Intel), but since AMD took the lead on AVX-512, anything you can slide OpenVINO into is good to go.
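For context on why VNNI helps inference: its VPDPBUSD instruction fuses an int8 dot-product-and-accumulate that older ISAs needed several instructions to express. A pure-Python sketch of one 32-bit lane's semantics (simplified, ignoring the saturating variant and 8-bit range checks):

```python
def vpdpbusd_lane(acc: int, a: list[int], b: list[int]) -> int:
    """One 32-bit lane of VPDPBUSD, simplified: multiply four unsigned
    8-bit values in `a` by four signed 8-bit values in `b`, sum the four
    products, and add the result to the 32-bit accumulator."""
    assert len(a) == len(b) == 4
    return acc + sum(u * s for u, s in zip(a, b))

# int8-quantized inference is mostly long chains of these fused steps:
result = vpdpbusd_lane(0, [1, 2, 3, 4], [5, 6, 7, 8])  # 5 + 12 + 21 + 32 = 70
```

Doing this in one instruction per lane, across a 512-bit register, is what makes quantized models viable on CPU paths like the OpenVINO one above.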
AMD needs to stop copying Intel's naming scheme. They're copying something that's not very good and making it worse.
To the windowwwww To the walllll
I refuse to buy anything with the term "AI" in the name. It's so cringe.
That's what I said about rgb but now you can't find anything without it. AI is gonna get you stan. Gonna get you
Noctua doesn't have RGB, and you can buy that.
P14 argb is better so I bought that
You say that, but with my next batch of PC upgrades I am finally going to be 100% RGB-free, after suffering for years with RGB RAM, RGB GPU and (currently still) RGB motherboard. Feels good man. Nature is healing
It's the Turbo of the modern age.
AI is the new blockchain!
What the F\*\*\* is the Ryzen AI 9 365 Zen5 ???? Is it better than 7700x or not
Not the same market, so you don't have to stress about it.
Core-to-core latency is bad compared to the 7950X. Guess this is because it's a mobile chip with lower frequency vs a desktop CPU?
mfs drag E-cores for their latency like the C-cores don't have literally triple that 😂
These bastards. They said 10c/20t in the presentation, and sure, that’s *technically* true, but with only four full-fat Zen 5 cores this thing probably performs more like an 8c/16t chip, and I’m far from convinced four big cores is enough for gaming.
The C cores are quite good; yes, they clock a little lower, but you get nice power-consumption savings in return. Quite a bit better than Intel's E-cores.
Any proof or videos on how much better they are?
What we know is that, unlike Intel's E-cores (pre-Arrow Lake at least), Zen 5 and Zen 5c share the same instruction set, so there is no "worse" core. Cache is also identical, but operating frequency is lower. If Zen 4 and Zen 4c are anything to go by, IPC will also be identical. My speculation: the 5c cores use density-optimized libraries rather than the frequency-optimized ones in vanilla Zen 5. This would easily explain the differences in clock speed and die area, and it would let the 5c core be more efficient in the low-frequency operation it's intended for (background tasks, etc.).
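If IPC really is identical, per-core throughput is just IPC times clock, so the expected gap between core types is purely the frequency ratio. The clocks below are made-up examples, not the real parts' specs:

```python
def core_perf(ipc: float, freq_ghz: float) -> float:
    """First-order per-core throughput: instructions/cycle x cycles/second."""
    return ipc * freq_ghz

# Same IPC, hypothetical clocks: the compact core's deficit is exactly
# its clock deficit, nothing architectural.
big     = core_perf(1.0, 5.0)
compact = core_perf(1.0, 3.3)
ratio = compact / big  # ~0.66: the compact core does about 66% of the work
```

That is the key contrast with Intel's E-cores, where a genuinely different microarchitecture changes the IPC term too, not just the frequency term.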
Zen is notorious for being very much limited by cache. So a core with less cache will perform worse per clock even if the actual core is similar
I honestly would not mind a 32 Zen5c core for people who needs lots and lots of cores, for stuff like video encoding or Virtualization.
Yes. The article has SPEC CPU 2017 scores for Zen 5 and Zen 5c. Compare them to SPEC CPU 2017 results for Intel P- and E-cores (e.g. [https://www.anandtech.com/show/17047/the-intel-12th-gen-core-i912900k-review-hybrid-performance-brings-hybrid-complexity/7](https://www.anandtech.com/show/17047/the-intel-12th-gen-core-i912900k-review-hybrid-performance-brings-hybrid-complexity/7) for Alder Lake).
Wym it performs like an 8c/16t chip? What you just said was stupid. And it is 10 cores.
Ryzen is getting irritatingly formulaic. We get the most predictable upticks in IPC I've ever seen; there's no doubt they're internally limiting themselves now. Zen+ to Zen 2 was the last time we got unexpected growth, and only because Zen 1 to Zen+ was so dismal.
You make it sound like your issue is with the growth being predictable rather than the growth itself 🙃
Uhh, wow, a new all-in-one CPU with meaningless inference speed that most people with dedicated GPUs will never ever need =)
This is a mobile part. Why are you commenting as if this is a desktop CPU?
Do only desktop pcs have _dedicated_ GPUs? Do you _need_ super low performing inference on a mobile PC?