> But inference is unique because its performance scales with high memory throughput, and you can’t assemble that by wiring together off the shelf parts in a consumer form factor.
Nvidia significantly outperforms Macs on diffusion inference and many other workloads. It’s not as simple as saying the current Mac chips are entirely better for this.
You don’t need it if you use llama.cpp on Windows, or if you compile it on Linux with CUDA 13 and the correct kernel HMM support, and you’re only using MoE models (which, tbh, you should be doing anyway).
What does MoE have to do with it? Aside from Flash-MoE, which supports exactly one model and only on macOS, you still need to load the entire model into memory. You also don't know which experts are going to be activated, so it's not like you can predict which ones need to be loaded.
It’s been going on for a while. Search YouTube or the web for "48GB 4090" (this is one of the most popular modded Nvidia cards); Nvidia of course never officially made a 4090 with this much memory.
There are some on sale via eBay right now. The memory controllers on some Nvidia GPUs support well beyond the 16–24GB they shipped with as standard, and enterprising folks in China desolder the original memory chips and fit higher-capacity ones.
Given that most of my computers, and probably yours, and probably most of the world's, are in fact made in China one way or another (some to a higher degree than others), I'm guessing most of us trust our hardware enough to continue using it.
I wouldn't say that's true or even likely. It's completely possible to be in a pit of vipers where every single snake is venomous, and that is pretty much what we are seeing: With technological advances, there is a certain subset of people that will use them primarily to solidify their power and control over others. There is no utopian society right now whose government doesn't look to spy through technology, which of course is best set up at time of manufacture.
Agreed. Unless you have full control over the production chain to fully produce a device, you are subject to the whims and desires of those who preside over such technological feats that we take for granted in our daily lives.
To the original point, it's safe to say that highlighting a nationality with regard to trust is baseless and without merit, as it would be for any other topic (men/women from x are y, z food is better here, etc.). Real life is much more complicated and nuanced than nationalities. Some might call it FUD (fear, uncertainty and doubt), but there's always a deeper rationale at the individual level as well.
Rather than people being wary of Chinese in general, it's more that there is a high degree of government control exercised in China and they are known to be very strategic with long-term planning in regards to technology control both for spying and actual remote control of devices. We are all just looking for the least bad option. It's not like devices from other countries are immune, but they are often less organized so there is a better chance of avoiding the Chinese level of planned access.
It does seem like pretty low risk in this specific case, so I agree OP's comment was a bit over the top, but I would have no way to make anything resembling even an educated guess as to how far their programs go.
Sadly, memory bandwidth is abysmal compared to Apple chips: 273 GB/s vs 614 GB/s on the M5 Max for a similar price. Even though fp4 compute is faster, it doesn't help with all the decode-heavy agentic workflows.
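For context on why decode is bandwidth-bound: each generated token has to stream the active weights from memory, so peak bandwidth gives an upper bound on tokens per second. A rough sketch (the 70B model size and 4-bit quantization here are illustrative assumptions, not figures from the thread):

```python
# Back-of-envelope upper bound for decode speed on a memory-bandwidth-bound
# LLM: every generated token streams all active weights once, so
# tokens/s <= bandwidth / bytes_per_token. Model size and quantization
# below are illustrative assumptions.

def decode_tokens_per_sec(bandwidth_gb_s: float, active_params_b: float,
                          bytes_per_param: float) -> float:
    """Rough ceiling on decode tokens/s from memory bandwidth alone."""
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / bytes_per_token

# A 70B dense model at ~4-bit weights (0.5 bytes/param):
print(round(decode_tokens_per_sec(273, 70, 0.5), 1))  # → 7.8
print(round(decode_tokens_per_sec(614, 70, 0.5), 1))  # → 17.5
```

The ~2x bandwidth gap translates almost directly into a ~2x decode-speed ceiling, regardless of how fast the fp4 compute is.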
You can still buy used 3090 cards on eBay. Five of them will give you 120GB of memory and will blow away any Mac in terms of performance on LLM workloads. They have gone up in price lately and are now about $1100 each, but at one point they were $700-800 each.
FWIW I have never used NVLink, and I’m not sure why people are bringing up “daisy chaining” because as far as I’m aware that is not a thing with modern GPUs at all.
> The mac will just work for models as large as 100B, can go higher with quantized models. And power draw will be 1/5th as much as the 3090 setup.
This setup will work for 100B models as well. And yes, the Mac will draw less power, but the Nvidia machine will be many times faster. So depending on your specific Mac and your specific Nvidia setup, the performance per watt will be in the same ballpark. And higher absolute performance is certainly a nice perk.
> You can certainly daisy chain several 3090's together but it doesn't work seamlessly.
Citation needed; there's no "daisy chaining" in the setup I describe, and low-level libraries like PyTorch as well as higher-level tools like Ollama all seamlessly support multiple GPUs.
> I think it's bad form to say "citation needed" when your original claim didn't include citations.
I apologize, but using multiple GPUs for inference (without any sort of “daisy chaining”) is something that’s been supported in most LLM tooling for a long time.
> Regardless - there's a difference between training and inference.
No one brought up training vs. inference to my knowledge, besides you — I was assuming the machine was for inference, because my experience building a machine like the one I described was in order to do inference. If you want to train models, I know less about that, but I’m pretty sure the tooling does easily support multiple GPUs.
> And pytorch doesn't magically make 5 gpus behave like 1 gpu.
I never said it was magic, I just said it was supported, which it is.
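For what it's worth, the way inference runners split a model across GPUs without any interconnect tricks can be sketched as a simple layer-placement plan. The device names and the 80-layer count below are made up for illustration:

```python
# Toy illustration of how multi-GPU inference runners place a model across
# devices: contiguous blocks of transformer layers go to each GPU, and
# activations simply move from one device to the next during the forward
# pass. No special interconnect ("daisy chaining") is required.

def shard_layers(n_layers: int, devices: list[str]) -> dict[int, str]:
    """Assign layers to devices in contiguous blocks (ceiling-divided)."""
    per_dev = -(-n_layers // len(devices))  # ceiling division
    return {i: devices[i // per_dev] for i in range(n_layers)}

plan = shard_layers(80, [f"cuda:{i}" for i in range(5)])
print(plan[0], plan[79])  # → cuda:0 cuda:4
```

This is essentially what llama.cpp's tensor/layer split options and PyTorch device placement do under the hood; each GPU only ever talks to the next over PCIe.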
Where are you gonna find Apple hardware with 128GB of memory at enthusiast-compatible price?
The cheapest Apple desktop with 128GB of memory shows up as costing $3499 for me, which isn't very "enthusiast-compatible"; it's about 3x the minimum salary in my country!
Seems I misunderstood what an "enthusiast" is. I thought it was about someone "excited about something", but it seems the typical definition includes them having a lot of money too, my bad.
I'm an immigrant to Canada, and yes, English has both literal meanings and colloquial meanings.
In the most literal meaning, absolutely, "Enthusiast" just means a person who likes something, is excited about something.
When it comes to markets and products though, typically you'll see the word "Enthusiast" as mid-tier, something like: Consumer --> Enthusiast --> Professional (you may see words like "Prosumer" in there as well, etc. :)
In that context, which is typically the one people will use when discussing product pricing and placement, "Enthusiast" is somebody who yes enjoys something, but does it sufficiently to be discerning and capable of purchasing mid-tier or above hardware.
So while a consumer photographer may use their phone or a compact or all-in-one camera, an enthusiast photographer will probably spend $3000 - $5000 on camera gear. Equivalently, there are myriad gamers out there (on phones, consoles, GeForce Now, whatever :), but an enthusiast gamer is assumed to have a dedicated gaming computer, probably a tower, with a dedicated video card (likely a 5070 Ti or above), probably 32GB+ RAM, a couple of SSDs that are not entry level, etc.
Again, this is not to say a person with limited budget is "not a real enthusiast", no gatekeeping is intended here; simply, if it may help, what the word means when it comes to market segmentation and product pricing :)
Additionally, "enthusiasts"/"hobbyists" tend to be willing to spend beyond practical utility, while professionals are more interested in pragmatism, especially in photography from what I can tell.
If you're an actual pro, you need your stuff to work properly, efficiently, reliably, when it's called for. When you're a hobbyist, it's sometimes almost the goal to waste money and time on stuff that really doesn't matter beyond your interest in it; working on the thing is the point, not the value it generates. Pros should spend money on good tools and research and knowledge, but it usually needs to be an investment, sometimes crossing over with hobbyist opinions.
A friend of mine who's a computer hobbyist and retail IT tech, making far far less than I do, spends comically more than me on hardware to play basically one game. He keeps up to date with the latest processors and all that stuff, he knows hardware in terms of gaming. I meanwhile—despite having more money available—have a fairly budget gaming PC that I did build myself, but contains entirely old/used components, some of which he just needed to get rid of and gave me for free, and I upgrade my main mac every 5 years or something. I only upgrade when hardware is really getting in my way.
>> So while a consumer photographer, may use their phone or compact or all-in-one camera, enthusiast photographer will probably spend $3000 - $5000 in camera gear.
It's interesting that you chose photographers as the example here. In many cases that I've seen, enthusiast photographers spend much more than professional photographers on their gear, because the professionals make their money with their gear and therefore need to justify it, while the enthusiasts are often tech people, successful doctors, etc., who spend lots and lots of money on their hobbies...
In any case, your point stands, that "enthusiast" computer users would easily spend $3-4K or more on gear to play games, train models, etc.
$3.5k is a lot of money, but not a ton by American hobby standards. It's easy to spend multiples, even orders of magnitude more than that on hobbies like fishing, wine, sports tickets, concerts, scuba, travel, being a foodie, golf, marathons, collectibles, etc.
It's out of reach for lots of people, even in developed countries. But it's easily within reach for loads of people that care more about computing than other stuff.
In June 1977, the base Apple II model with 4 KB of RAM was $1,298 (equivalent to about $6,900 in 2025), and with the maximum 48 KB of RAM it was $2,638 (equivalent to about $14,000 in 2025).
Wow, 48K for $14,000. Now you can get a MBP with nearly three million times more memory for $3500 or so. Whereas that CPU was clocked at 1 MHz, so CPUs are only several thousand times faster per core, maybe something like 30,000 times faster if you can make use of multi-core.
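The arithmetic checks out roughly, taking the figures above as given (48 KB then at ~$14,000 inflation-adjusted, vs a 128 GB MacBook Pro now at ~$3,500):

```python
# Sanity check of the comparison above, using the figures as given:
# 48 KB Apple II (~$14,000 inflation-adjusted) vs a 128 GB MacBook Pro (~$3,500).
apple_ii_bytes = 48 * 1024
mbp_bytes = 128 * 1024**3

memory_ratio = mbp_bytes / apple_ii_bytes
price_ratio = 14_000 / 3_500

print(f"{memory_ratio:,.0f}x the memory")  # → 2,796,203x the memory
print(f"{price_ratio:.0f}x cheaper")       # → 4x cheaper
```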
I'd argue that some of those are more consumption and activity than hobby depending on how they're engaged with, and that people use the word "hobby" too loosely, but I would agree that Americans in particular consume at obscene rates.
Golf equipment, mountaineering equipment, skiing and snowboarding lift tickets and gear, a single excessive graphics card that's only used for increasing frame rates marginally, or basically a single extra feature on a car, are all things that add up quite quickly. Some are clearly more superfluous than others and cater to whales, while some are just expensive by nature and aren't attempting to be anything else.
Those are the prices for just buying equipment, which at least retains some kind of value. 3 million+ American kids are enrolled in competitive soccer with annual club dues between $1K and $5K, and that money is just gone at the end of the year. Basically none of those kids are going to have a career in soccer, so it's clearly a hobby, and everyone knows it. And soccer isn't even the most popular sport!
I live in America, I am very well compensated. Have been for 15 years now. $3500 is a lot of money. A lot. There is a tiny bubble of us tech folks who think it is accessible to most people. It is not. It is also the same reason Macs are still a niche. Don't take your circles to be the standard, it is very very far from it, especially if you think $3500 is not a lot of money.
It is easy to confirm this: just look at the sales numbers of these $3500 devices. It is definitely not an enthusiast price point, even in the US.
It's not nothing for most people... it's more than a month of rent/mortgage for a significant number of Americans even. But if it's your primary hobby, it's not completely out of reach, and it's not something you necessarily spend every year. A lot of people will upgrade to a new computer every 3-5 years and maybe upgrade something in between those complete system upgrades.
I know plenty of people who don't make a lot of money (say top 25% or so) who will have a boat or RV that costs more than a $3500 computer, and balk at the thought of spending that much on a computer. It just depends on where your interests are.
The first words I said: "$3.5k is a lot of money..."
There are tens of millions of top 10% income adults in America. So something can be both unaffordable to most people, and also easily accessible to very many people.
It’s a midrange to upper expense in the US if it’s your hobby. Most people don’t have a serious computer hobby but they golf, trade ATVs, travel, drink, etc.
$3500 would have been 3–4 months' discretionary spending as a PhD student in Finland 15 years ago. A sum you might choose to spend once a year on something you find genuinely interesting.
Some people succumb to lifestyle creep or choose it deliberately. Others choose to live below their means when their income grows. The latter have a lot more money to spend on extras, or to save if that's what they prefer.
An enthusiast in a hobby space is by definition someone willing to pour much more money into it than someone less enthusiastic about whichever hobby we are talking about.
Well, and also someone who has a bunch of money, not just the willingness. I guess locally we don't really make that distinction, going by two other commenters here; that's why I had to update my local understanding of "enthusiast". Usually we use it for how engaged/interested a person is, regardless of how much money they can or are willing to spend.
Learned something new today at least, so that's cool :)
Yes, when tech gear is sold as 'enthusiast' gear, it is almost invariably the most expensive non-professional tier of equipment. That is roughly the common understanding: Expensive and focused on features more than security required for public use; while remaining within reach of at least some individuals, not only corporations.
For an individual making median income in the US, it would cost 2% of your income to get a machine like this every 4-5 years. That's a matter of enthusiasm, not a matter of having a lot of money. Sorry that income is less where you are, but the people talking about the product tier are using American standards.
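A quick back-of-envelope check of that 2% figure (the ~$40,000 median US individual income here is my assumption, not a number from the thread):

```python
# Rough check of the "2% of income" figure. The ~$40,000 median US individual
# income is an illustrative assumption, not a figure from the thread.
cost = 3_500
income = 40_000
for years in (4, 5):
    share = cost / (income * years)
    print(f"every {years} years: {share:.1%} of annual income")
```

Amortized over a 4-5 year replacement cycle, the purchase works out to roughly 2% of annual income per year under that assumption.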
I spent around that on my current personal desktop: 9950X, 2x48GB DDR5-6000, RX 9070 XT, 4TB Gen 5 NVMe + 4TB Gen 4 NVMe. I could have cut the CPU to a 9800X3D and the RAM to 32GB with a different GPU if my needs/usage were different. I'm running Linux and don't game too much.
That said, a higher-end gaming setup is going to cost that much and is absolutely in the enthusiast realm. "Enthusiast" doesn't mean compatible with "minimum wage".
Enthusiast compute hardware doesn't cater to the people on the minimum salary in any country, let alone developing nations. When Ferrari makes a car they don't ask themselves if people on minimum salary will be able to afford them.
I'm in one of the two poorest EU member states, and Apple and Microsoft's Xbox don't even bother to have a direct-to-customer store presence here; you buy them from third-party retailers.
Why? Probably because their metrics show people here are too poor to afford their products en masse, so it's not worth operating a dedicated sales entity. Even though plenty of people do own top-of-the-line MacBooks here, it's just the wealthy enthusiast niche, and that's still a niche at the volumes they (wish to) operate at. Why do you think Apple launched the Mac Neo?
Right, I think maybe we're then talking about "upper class enthusiasts" or something in reality then? I understood it to be just about the person, not what economic class they were in; maybe I misunderstood.
Enthusiast in this context more or less means you are excited enough about something to get a level above what normal people would get, just below professional pricing. An enthusiast camera body can be 2000 euros.
I would say an enthusiast computer is 2-4k.
It really depends what you mean by minimum salary (yearly?), because paying three months' salary for a computer like that isn't far-fetched. You're not using this to generate recipes for cookies. An enthusiast-level car is expensive as well.
>Right, I think maybe we're then talking about "upper class enthusiasts" or something in reality then?
Why? Enthusiasts are by definition people for whom value for money is not the main driver, but rather top performance and cutting-edge novelty at any cost. Affording enthusiast computer hardware is not a human right, just as affording a Lamborghini or a McMansion isn't.
But you don't need to buy a Lamborghini to do your grocery shopping or drive your kids to school, just as you don't need an Nvidia 5090 or MacBook Pro Max to do your taxes or your school work.
So the definition is fine as it is. It's hardware for people with very deep pockets, often called whales.
Untouchable my ass. You get a PC with an SSD glued to the motherboard, so if you run write-intensive workloads and that thing wears out, replacing it will have significant cost. Then there's no PCIe slot to add any decent network card if you want to run more than one of them in unison; you're stuck with that stupid Thunderbolt 5 while InfiniBand gives ~10x the network speed. As for memory bandwidth, it's fast compared to CPUs, but any enterprise GPU dwarfs it significantly. The unified RAM is the only interesting angle.
Apple could have taken a chunk of the enterprise market in this AI craze if they had made an upgradable and expandable server edition based on their silicon. But no, everything has to be bolted down and restricted.
> Nvidia's recent GPUs are more power-efficient than Apple Silicon in raster, training and inference workloads.
I think you can do better than the proverbial Apples and Oranges comparison.
In terms of total system, "box on desk", Apple is likely to remain the performance per watt leader compared to random PC workstations with whatever GPUs you put inside.
This has changed since Sam Altman started buying up all the chip supply, raising prices on memory, storage, and GPUs for everyone, but it used to be the case that you could build a PC that was both cheaper and faster than a Mac for LLM inference, with roughly equal performance per watt.
You would use multiple *90-series GPUs, throttled down in terms of power. Depending on the GPU, the sweet spot is between 225-350W, where for LLM workloads you only lose 5-10% of performance for a ~50% drop in power consumption.
Combined with a workstation (Xeon/Epyc) CPU with lots of PCIe, you can support 6-7 such GPUs (or more, depending on available power). This will blow away the fastest Mac studio, at a comparable performance per watt.
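The power-capping tradeoff described above can be sketched numerically. All numbers here are illustrative assumptions, not measurements from the thread: a 450 W stock draw, a 300 W cap, and a ~7% throughput loss when capped.

```python
# Sketch of the power-capping tradeoff: losing a small slice of throughput
# for a large drop in power draw yields a big perf-per-watt gain.
# 450 W stock, 300 W cap, and ~7% slowdown are illustrative assumptions.
stock_power, capped_power = 450, 300   # watts
stock_tps = 100.0                      # normalized tokens/s at stock power
capped_tps = stock_tps * 0.93          # assumed ~7% slowdown when capped

stock_eff = stock_tps / stock_power    # tokens/s per watt
capped_eff = capped_tps / capped_power
print(f"perf/W gain from capping: {capped_eff / stock_eff:.2f}x")
```

Under these assumptions, capping each GPU improves efficiency by roughly 40%, which is why a multi-GPU build can land in the same perf-per-watt ballpark as a Mac.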
Again, a lot of this has changed, since GPUs and memory are so much more expensive now.
Macs are great for a simpler all in one box with high memory bandwidth and middling-to-decent GPU performance, but they are (or were) absolutely not "untouchable."
I think OP’s point was that it would do more than 2-3x the workload, thus them stating “blow it out of the water” and specifying “performance-per-watt”.
A 128GB 2TB Dell Pro Max with Nvidia GB10 is about $4200, a Mac Studio with 128GB RAM and 2TB storage is $4100. So pretty comparable. I think Dell's pricing has been rocked more by the RAM shortage too.
Unfortunately the GB10 is incredibly bandwidth-starved. You get 128GB of RAM, but only 270GB/s of bandwidth. The M3 Ultra Mac Studio gets you 820GB/s (the M4 Max is at 410GB/s). I'm not aware of any workload that gets the GB10 to its theoretical peak FLOPS.
From the spec sheets I’m looking at, it is not. I’m seeing models of the Dell Pro Max with 128 GB of DDR5-6400 as CAMM2, then a separate memory of up to 24 GB on the GPU. CAMM2 does not make the memory unified.
You're not looking at the right thing. Dell's naming is horrible. Dell Pro Max with GB10 (https://www.dell.com/en-us/shop/cty/pdp/spd/dell-pro-max-fcm...). It's a very different computer than what you're looking at and has 128GB LPDDR5X unified memory.
AFAIK, for the unified bandwidth it depends mostly on the CPU: the M4 Max (I think it's the default today?) does ~550 GB/s, while the GB10 does ~270 GB/s, so about a 2x difference between the two. For comparison, the RTX Pro 6000 does 1.8 TB/s, pretty much the same as a 5090, which is probably the fastest/best GPU a prosumer could reasonably get.
No, that's why Apple uses performance per watt rather than the actual performance ceiling as the metric. In the actual workloads where you'd need this power, absolute performance is what matters, not PPW.
Probably comparable, but only against business-grade products; that's why Apple's current silicon is so remarkable on the market at the consumer level.
It has an HDMI port and its USB-C ports also support display out. But I believe most who buy it intend to use it headless. The machine runs Ubuntu 24.04 and has a slightly customised GNOME (green accents and an Nvidia logo in GDM) as its desktop.
I don't quite get what you mean? EPP is technically in power (whatever that means in the European Parliament). But also why would that matter? Or they wanted to force a vote just so they could vote against it (which is not necessarily a stupid strategy in cases like this)?
No, that's not what it means. Actually, it doesn't _really_ mean anything, here, as it's not correct. The EPP has 188 seats out of 720. It is the largest single party, but, ah, to some extent, so what.
(Also it is a European Parliament party, not a _real_ political party. It's not a cohesive unit and has no leadership; it's pretty much just a grab-bag of member state parties.)
It does matter. Even if it eventually passes, the later and more gutted it is, the better.
Saying that it doesn’t matter is just defeatist (and unfortunately always parroted on HN) and plainly wrong. Defeatists have been proven wrong time and again.
Also making sure this is as painful and costly as possible to pass will discourage future attempts. If we just rolled over and let it happen that would signal that it's easy to pass legislation like this and we would get a lot more like it
A system where this can happen is healthy. The alternative is a system where once legislation fails to pass you are forbidden to modify it and try again. _That_ would be a broken system, where compromise is impossible, and attempting to make any change is a very risky move because you might fail, forever. There would be a chilling effect, legislation would take longer to change, and laws would become frozen in the past.
What we are seeing here is checks and balances, working as intended.
It's not that. The demo was impressive, but when it became widely available the reality never lived up to what was demoed, and it later came out that some of the shorts they did with directors involved a lot of editing anyway.
Crazy how far the hype dropped for this product. When only the paid influencers had access, we were told "it's like a reality simulator", but when it became widely distributed it didn't deliver anywhere near that hype. Look at its front page today and it's identical to the Grok video-gen front page: very underwhelming.
>I think consumers are slightly smarter now that they don't want to be drawn into this kind of addictive toxic content.
They're not; they just already have the habit formed with the place they go to do that. Ultimately anything worth seeing on Sora will be reposted to TikTok.
The creator of 2channel and current owner of 4chan, Hiroyuki Nishimura, explains it:
>people can only truly discuss something when they don’t know each other.
>If there is a user ID attached to a user, a discussion tends to become a criticizing game. On the other hand, under the anonymous system, even though your opinion/information is criticized, you don’t know with whom to be upset. Also with a user ID, those who participate in the site for a long time tend to have authority, and it becomes difficult for a user to disagree with them.
>Under a perfectly anonymous system, you can say, “it’s boring,” if it is actually boring. All information is treated equally; only an accurate argument will work
yes, you are repeating the "what about" part. my comment has literally nothing to do with other social networks.
if it helps, feel free to apply the original quote to facebook or whatever when they do something good. but this article and comment chain is about 4chan. so i am talking about 4chan.
lol, what are you talking about? i said i was reminded of a quote, that is it. no one disagreed, they just said “other people do it too” and put words in my mouth so they could argue about something.
like, what “stance” do you think i am even trying to take?