Power is taken but also given. It's a dynamic, and I agree it's gotten way, way out of hand. It may eventually suppress progress and become a real parasitic presence, but we've not reached that point yet (in net terms). Google has been relatively responsible with the power, but cracks have been starting to show. It will get a whole lot worse before it gets better. That is why I embrace vertical integration despite the tremendous cost. Call it the cockroach approach; it lets you stay partially decoupled from outside fluctuations.
Addition: People underestimate Google's influence. It's easy to forget they de facto control Firefox, leaving only Apple and Google in control of the Web. Scary, but looking away won't help either. The Americans have been consistently competent with technology since the advent of the transistor right after WW2. They're still reaping the benefits of that to this day. I say that as a European.
If the plugins were bought for six figures, then it must be incredibly lucrative. How on earth could they be making it back? Is injecting spam into Google results THAT lucrative?
Cool use case. I gave it the PDFs for my electronic load and lab PSU, and it was able to write drivers in C++. Quite powerful. I did battery discharge tests this way.
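For flavour, the kind of control loop this ends up producing, sketched here in Python rather than C++. Everything is an assumption: the `ElectronicLoad` class is hypothetical, and the SCPI command strings (`CURR`, `INP ON`, `MEAS:VOLT?`) are typical but vary per instrument, so check your manual.

```python
import time

class ElectronicLoad:
    # hypothetical minimal SCPI-style driver; 'transport' is anything
    # with write(str) and query(str) -> str (a pyvisa resource, a
    # serial wrapper, a test fake, ...)
    def __init__(self, transport):
        self.t = transport

    def set_cc(self, amps):
        # constant-current mode; command names differ per instrument
        self.t.write(f"CURR {amps}")
        self.t.write("INP ON")

    def voltage(self):
        return float(self.t.query("MEAS:VOLT?"))

    def off(self):
        self.t.write("INP OFF")

def discharge_test(load, amps, cutoff_v, poll_s=1.0,
                   clock=time.monotonic, sleep=time.sleep):
    # sink a constant current until the cell hits its cutoff voltage;
    # returns elapsed seconds, so capacity ~= amps * elapsed / 3600 Ah
    load.set_cc(amps)
    t0 = clock()
    try:
        while load.voltage() > cutoff_v:
            sleep(poll_s)
    finally:
        load.off()  # always switch the load off, even on error
    return clock() - t0
```

Injecting `clock` and `sleep` keeps the loop testable without real hardware.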
For large georeferenced textured meshes, 3D Tiles is generally used; it uses a hierarchy of oriented bounding boxes to support various LODs. If you split your model into chunks as well as different LOD levels, the viewer can then request chunks based on what is in the viewport as well as the zoom level. The Cesium implementation leaves a lot to be desired; it's pretty tricky to get right. This will become commonplace since 3D scans are getting more common.
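The core selection logic is roughly this, as a minimal sketch: the `Tile` class and the fixed screen parameters are my assumptions, and a real 3D Tiles viewer also does frustum culling, async loading and caching on top of it.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Tile:
    # hypothetical minimal tile: center of its bounding volume,
    # geometric error in meters, children holding finer LODs
    center: tuple
    geometric_error: float
    children: list = field(default_factory=list)

def screen_space_error(tile, camera_pos,
                       screen_height_px=1080, fov_y=math.radians(60)):
    # the standard 3D Tiles heuristic: project the tile's geometric
    # error onto the screen given its distance from the camera
    d = math.dist(tile.center, camera_pos)
    if d == 0:
        return float("inf")
    return (tile.geometric_error * screen_height_px) / (2 * d * math.tan(fov_y / 2))

def select_tiles(tile, camera_pos, max_sse=16.0):
    # refine into children while the projected error is too large;
    # otherwise this LOD is good enough, so render this tile's chunk
    if tile.children and screen_space_error(tile, camera_pos) > max_sse:
        out = []
        for child in tile.children:
            out.extend(select_tiles(child, camera_pos, max_sse))
        return out
    return [tile]
```

Zoom out and the coarse root tile passes the error threshold; zoom in and the traversal descends into the finer chunks.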
Interestingly, there are zero powerful non-US laptops.
The closest option is the Moore Threads MTT AI Book (12-core, 2.65 GHz, 32 GB DDR5, 1 TB SSD, 14-inch). It cannot match a modern Ryzen in performance, though.
It's fascinating that only the US can make good computers. I'm not from/in the US so I'm not saying that from a patriotic point of view. How hard can it be to pop a good ARM chip in a laptop and compete with HP, Apple and the likes?
> It's fascinating that only the US can make good computers.
It seems the US can design good computers, but it cannot make them itself. This should make it easier for others to do the same: design the computer in country X but actually make it somewhere else, just like the US. Yet we're not seeing this at all.
Which powerful computers are made in the USA? Design and assembly don't count, as these are the least robust to replication attempts. Apart from that, the manufacturing is all in East Asia; Intel is the exception, not the norm!
I think using the vision decoder baked into modern LLMs is the way to go. Have the LLM iterate; make sure it can assert placement qualities and understands the hard requirements. I think it can be done.
Don't know about LLMs, but AI in general isn't as stupid an idea as one might think, and the Chinese are particularly well positioned to take advantage.
Take, for example, something like XinZhiZao (XZZ), ZXW, Wuxinji, or diyfixtool. They have huge databases with pictures, diagrams and boardviews of pretty much every phone, laptop and graphics card. With all this data you could build an AI system ripping of^^^^^ "suggesting" routing for your design based on similarity to stole^^^ training data. That way you start with a layout that worked in devices shipped by the millions.
This could be built in stages, starting with a much weaker system trained on just PCB pictures + layer count. That should be enough to suggest ~optimal initial chip placement for a classical auto-router.
author here: I think synthetic data, generated by ~brute-force iteration with LLMs, with every DRC analysis imaginable and more, will yield a more consistent/usable/larger dataset than any existing dataset. It's a mistake to put too much weight on anyone's existing data. This is why we work hard to make algorithms that LLMs can use, because they have emergent spatial capabilities that excel when coupled with detailed analysis.
They do have a vision decoder like many other LLMs, so in theory it should be able to write the positions textually, then call a render command, then look at the rendered bitmap. It's all very opaque though; I'd love a visualisation of the latent-space data it converts the image to. I found that very long vertical images throw Opus off completely, for example. It's very interesting to experiment with this. Let it play with placement and let it call a render command. Then let it describe in detail what it sees. I'll be looking into this a lot this year. Maybe there will be niche models that are smaller but have better vision capabilities than Opus. A world where one model rules would be incredibly depressing (kinda like what we saw with some software companies since the 90s).
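The place → render → critique loop can be sketched like this. Everything here is a stand-in: `critique` plays the role of the model's textual assessment of the rendered bitmap, and the "fix" is a naive slide rather than an LLM-proposed move, so treat it as the loop's shape, not a placer.

```python
# hypothetical part table: name -> (x, y, w, h) on a coarse grid
def overlaps(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def render_ascii(parts, width=20, height=8):
    # the "render command": a bitmap the model could look at,
    # here just ASCII with each part drawn as its initial
    grid = [["." for _ in range(width)] for _ in range(height)]
    for name, (x, y, w, h) in parts.items():
        for dy in range(h):
            for dx in range(w):
                if 0 <= y + dy < height and 0 <= x + dx < width:
                    grid[y + dy][x + dx] = name[0]
    return "\n".join("".join(row) for row in grid)

def critique(parts):
    # stand-in for the model describing what it sees:
    # a list of violated hard constraints (overlapping pairs)
    names = list(parts)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if overlaps(parts[a], parts[b])]

def iterate_placement(parts, max_rounds=20):
    # place -> render -> critique -> adjust, until clean or give up
    for _ in range(max_rounds):
        issues = critique(parts)
        if not issues:
            return parts
        a, b = issues[0]
        x, y, w, h = parts[b]
        parts[b] = (x + w, y, w, h)  # naive fix: slide b rightward
    return parts
```

In the real setup the adjust step is the model reading `render_ascii`'s output (or a proper bitmap) and emitting new positions textually, with `critique` asserting the hard requirements each round.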
I build custom harnesses (like many of us) and I genuinely think Anthropic will eventually sue its customers if it detects they are selling competing harnesses (ones that compete with its vertically integrated offerings).
I feel Alibaba and DeepSeek see themselves more as infra. No urge to control the stack and litigate competition out of existence.
What are the reasons for switching? Personally, I've got into the habit of doing a bit of a round robin with Codex/Claude (CLI), then DeepSeek and Qwen web chat, and Claude in web chat. I like to switch just to learn the differences; otherwise I'd never know what the other models can do. But I still feel attached to Opus, though that could be familiarity. If I only had Qwen, maybe it would be effectively identical at the end of the day. Hard to say.