
There are still industrial uses of asbestos today, because for some applications, there isn't a ready drop-in replacement. Uses where safer drop-in replacements were possible (shingles, automotive friction materials, flooring, insulation) have gone away. That might be a good comparison against "portable assembly", also known as C.


I think the problem NVidia has is that it has a massive low-level high-performance computing infrastructure written in C. Things like WebGL are a thin veneer over it, which is terrifying.

Their use combines the proper use of C (low-level driver code) with the unsafe use (large-scale systems). They should absolutely stop using C. On the other hand, I think it's fine for my network driver.


I was talking with someone in charge of coordinating the WebGL rollout with various IHV driver teams, and they said that while AMD and NVIDIA were definitely annoyed at the work it was going to take, Intel was the one whose team reacted with sheer terror.

Maybe that speaks to some of the driver problems they're having now, if their house just wasn't in order beforehand.


The cynic in me says that Intel is the only vendor of the three who has thought about security before.

I am willing to bet that NVidia's and AMD's GPU units have floodgates of vulnerabilities along the lines of Spectre, which they simply never had to deal with before (not to mention likely hundreds of more basic ones).


> I am willing to bet that NVidia's and AMD's GPU units have floodgates of vulnerabilities along the lines of Spectre,

100%. I'm not anybody important, but I've been saying this for years. As soon as I saw Spectre/Meltdown announced, my reaction was "lol yeah, I bet GPUs are even worse; everyone is just racing for performance and not timing-correctness, so they probably have zero hardening against that." I hadn't thought of the connection between Meltdown and that person's comment, but yeah, it's entirely possible they saw the potential for that or other security shenanigans.

Multi-tenant or multi-privilege-level GPU is likely a shitshow, so in a way it's actually a blessing that heavy GPGPU compute never really took off. I bet you could totally do things like leak the desktop or your browser windows to malicious WebGL, or to another tenant on a shared vGPU. We live in a world where clients mostly run one GPGPU application at a time, plus the desktop, and that means there's nothing there to leak.

(Although of course, it's always a little useless to speculate about massively counterfactual scenarios and assume everything would have gone the same... if multi-tenant/multi-app GPGPU compute had taken off, more attention probably would have been paid to multi-user security/hardening.)

Of course the real game-over is if you can get the GPU to leak CPU memory (or get the GPU to get the driver stack to leak kernel memory/etc via the CPU). That's bad even without multi-tenant.
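The timing-channel idea in these comments can be illustrated with a toy simulation. This is a sketch, not real GPU or CPU code, and every name in it is hypothetical: a "victim" touches one cache line indexed by a secret value, and an "attacker" recovers that value without ever reading it directly, purely by observing which probe is cheap. Set membership stands in for a timed memory access.

```python
# Toy model of a cache-timing side channel (a miniature Flush+Reload-style
# attack). This is a simulation with hypothetical names, not an exploit:
# "cache" hits are modeled as cheap lookups, misses as expensive ones.

SECRET = 42          # value the victim touches but never outputs
CACHE_LINES = 256

def victim(cache):
    """Victim accesses one cache line indexed by its secret."""
    cache.add(SECRET)

def attacker(cache):
    """Attacker probes every line and picks the one that was 'hot'.
    In a real attack the probe would be a timed memory access; here a
    set-membership test stands in for the timing measurement."""
    timings = []
    for line in range(CACHE_LINES):
        cost = 1 if line in cache else 100   # hit = fast, miss = slow
        timings.append(cost)
    return min(range(CACHE_LINES), key=lambda i: timings[i])

cache = set()        # empty cache = attacker has "flushed" it
victim(cache)
recovered = attacker(cache)
print(recovered)     # the attacker recovers the secret: 42
```

The point is that the secret never crosses any explicit interface; it leaks entirely through a timing difference, which is exactly the class of behavior GPU stacks racing for performance have little hardening against.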

Intel in particular may also be more vulnerable to that sort of thing, since they have uniquely tight ties between the iGPU and the CPU. They come from a world where dGPUs didn't exist and the iGPU was only ever a ring bus away from memory or the CPU. It's apparently been a huge problem with the Xe/Arc drivers (there have been a couple of patches that produced 100x speedups by fixing operations that allocated memory in the wrong place) now that the GPU is suddenly no longer super close. It would not surprise me at all if AMD were more secure here, because they're not working with an iGPU that's so tightly tied to the CPU.

https://www.pcworld.com/article/819397/intels-graphics-drive...
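The 100x-speedup anecdote above comes down to a simple cost model: every access to a hot structure pays the placement penalty, so putting it in the wrong memory pool multiplies total cost by a large constant. A sketch with made-up illustrative latencies (not Intel's actual numbers):

```python
# Toy cost model for the allocation-placement problem described above.
# The latencies are hypothetical illustrative numbers, not measurements:
# the point is only that a hot structure placed in the wrong pool pays
# the penalty on every single access.

LOCAL_ACCESS_NS = 1      # hypothetical: memory close to the GPU
REMOTE_ACCESS_NS = 100   # hypothetical: memory across the PCIe bus

def total_cost(accesses, placed_remotely):
    per_access = REMOTE_ACCESS_NS if placed_remotely else LOCAL_ACCESS_NS
    return accesses * per_access

slow = total_cost(1_000_000, placed_remotely=True)
fast = total_cost(1_000_000, placed_remotely=False)
print(slow // fast)   # → 100: the whole speedup is placement, not code
```

With these assumed numbers, fixing nothing but where the allocation lands yields the kind of 100x improvement the driver patches reportedly delivered.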

That's super funny you bring that up, thanks for tickling that particular neuron. Great comment and again, just a rando who tech-watches for fun, but, I agree 100%.


Out of curiosity what are the modern uses for asbestos?




