The solution to these problems is to make open-source models better and more resource-efficient. Work towards a world where everyone can run an AGI on their own computer. Not with remote API calls, but local inference and finetuning.
Right now, almost every model is aligned to some corporation's values. Instead of doing that, we should be aligning them to individual humans, and that requires running and tuning them locally. Corporations do not have human values, they have machiavellian values dressed in human suits. Aligning AIs to corporations (and god forbid governments) is how we end up with giant shoggoths. But if we align them to living breathing human individuals, we get digital humans.
Garbage will flood the internet, but your local AI buddy will filter it for you. The defense against a pseudoreality generator is a pseudoreality detector that you operate. Before AIs, the fake info was generated by other humans with their own brains, but you also have a brain that's just about as powerful, so you can tell what's real and what's fake. But now artificial NNs are becoming more powerful to the point of surpassing your brain's detection capabilities, so you need an artificial NN to detect it. The real danger is intelligence asymmetry, not intelligence itself.
This is why you can't gatekeep AI capabilities. It will eventually be taken from you by force.
It's time to open-source everything. Papers, code, weights, financial records. Do all of your research in the open. Run 100% transparent labs so that there's nothing to take from you. Level the playing field for good and bad actors alike, otherwise the bad actors will get their hands on it while everyone else is left behind. Start a movement to make fully transparent AI labs the worldwide norm, and any org that doesn't cooperate is immediately boycotted.
Stop comparing AI capabilities to nuclear weapons. A nuke cannot protect against or reverse the damage of another nuke. AI capabilities are not like nukes. General intelligence should not be in the hands of a few. Give it to everyone and the good will prevail.
Build a world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human, not a corporation or government (which are machiavellian out of necessity). This is humanity's best chance at survival.
You never actually say that part, unless it's "It will eventually be taken from you by force" which doesn't seem applicable to this situation or this site?
I'm referring to the current situation. How is it not applicable? I think the government wants to eventually nationalize these companies and we have to stop them.
Nationalisation is a worse option than what they have now: companies at their whim and command, kept around as separate entities for blame-gaming and convenience-based distancing.
Scaling has hit a wall and will not get us to AGI. Open-source models are only a couple of months behind closed models, and the same level of capability will require smaller and smaller models in the future. This is where open research can help: make the models smaller ASAP. I think it's likely that we'll be able to get something human-level to run on a single 16GB GPU before the end of the decade.
For the weights and temporary state, yes. It doesn't sound like a lot until you remember that your DNA is about 600 books worth of data by the same metric.
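For a sense of scale, here's a back-of-envelope version of that comparison. The figures are assumed round numbers, not from the thread: ~3.1 billion base pairs at 2 bits each for the genome, and ~1.2 MB of plain text per book.

```python
# Back-of-envelope: how much raw information is in human DNA,
# measured in "books"? All figures are rough assumptions.
base_pairs = 3.1e9          # approximate human genome length
bits_per_base = 2           # A/C/G/T encodes to 2 bits
genome_bytes = base_pairs * bits_per_base / 8

book_bytes = 1.2e6          # ~1.2 MB of plain text per book (assumption)
books = genome_bytes / book_bytes
print(f"genome ≈ {genome_bytes / 1e6:.0f} MB ≈ {books:.0f} books")
```

That lands in the ~775 MB / ~650-book range, consistent with the "about 600 books" claim depending on what you assume a book weighs in bytes.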
> Open-source models are only a couple of months behind closed models
Oh, come on, surely not just a couple months.
Benchmarks may boast some fancy numbers, but I just tried to save some money by trying out Qwen3-Next 80B and Qwen3.5 35B-A3B (since I've recently got a machine that can run those at a tolerable speed) to generate some documentation from a messy legacy codebase. It was nowhere close, in either output quality or performance, to any of the current models that the SaaS LLM behemoth corps offer. Just an anecdote, of course, but that's all I have.
Not every use case is a cloud provider or tech giant.
Newer Blackwell does 200+ tokens per second on the largest models and tens of thousands on the smaller models. Most military applications require fast smaller models, I'd imagine.
Also, custom chips are reportedly approaching an order of magnitude more for the price. It's a matter of availability right now, but that will be solved at some point.
Incorrect as of a couple of days ago, when Qwen 3.5 came out. It's a GPT 5-class model that you can run at full strength on a small DGX Spark or Mac cluster, and it still works pretty well after quantization.
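As a rough sanity check on why quantization makes this feasible: weight memory scales linearly with bits per weight. The parameter counts and bit widths below are illustrative, and KV cache/activation memory is ignored.

```python
# Rough memory math for running a large model locally.
# Assumes weights dominate; ignores KV cache and activations.
def weight_gb(params_b: float, bits_per_weight: int) -> float:
    """Approximate weight memory in GB for params_b billion parameters."""
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

print(weight_gb(80, 16))  # fp16: 160 GB -- needs a multi-GPU cluster
print(weight_gb(80, 4))   # 4-bit: 40 GB -- fits a 128 GB unified-memory box
print(weight_gb(30, 4))   # 4-bit: 15 GB -- squeezes onto a single 16 GB GPU
```

At 4 bits per weight, an 80B-parameter model's weights shrink from ~160 GB to ~40 GB, which is why a single large-unified-memory machine can hold it, and why a ~30B model at 4 bits is in reach of a 16 GB gaming GPU.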
I'd prefer something akin to the Biological Weapons Treaty which prohibits development, production and transfer. If you think it isn't possible you have to tell me why the bioweapons convention was successful and why it wouldn't be in the case of AI.
The point I would make: there are historical examples of international cooperation that work at least for some lengths of time. This is a good thing, a good tool to strive for, albeit difficult to reach.
There might be a small percentage of people nihilistic enough to want to unleash a truly devastating bioweapon, but basically everyone wants what AI has to offer.
I think that's a key difference as well.
And how would a treaty like that be enforced? Every country has legitimate uses for GPUs, to make a rendering farm or simulations or do anything else involving matrix operations.
All of the technology involved, in more or less the configuration needed to make your own ChatGPT, is dual use.
Because bioweapons labs take more to run than a workstation PC under your desk with a good graphics card, both in equipment, materials, and training. It's hard to outlaw the use of linear algebra and matrix multiplications.
Open Source here is not enough as hardware ownership matters. In an open source world, you and I cannot run the 10 trillion param model, but the data center controllers can.
I agree. We will need hardware ownership as well eventually. But the earlier you open-source, the more you slow down the centralization because people will be more likely to buy hardware to run stuff at home and that gives hardware companies an opening to do the right thing.
I didn’t claim that it would be cheap. But I’d rather see the real cost of SOTA LLM use exposed. On the other hand, reportedly SOTA LLM inference is profitable nowadays, so it can’t be that expensive.
A "world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human" would be a world in which people could easily create humanity-ending bioweapons. I would love to live in a less vulnerable world, and am working full time to bring about such a world, but in the meantime what you describe would likely be a disaster.
I think it is much more likely they will be (and are) generating photorealistic images of their favourite person (real or fictional) with cat ears. Never underestimate what adding cat ears does.
OK, maybe someone will build a bioweapon that does that for real. :P
There are plenty of physical and legal barriers to creating a bioweapon and that's not going to change if everyone becomes smarter with AI. And even if we really somehow end up in a world where everyone has a lab at home and people can easily create viruses, they can also easily create vaccines and anti-virals. The advancements in medicine will outpace bioweapons by a lot because most people are afraid of bioweapons.
Intelligence itself is not dangerous unless only a few orgs control it and it's aligned to those orgs' values rather than human values. The safety narrative is just "intelligence for me, but not for thee" in disguise.
There mostly aren't physical barriers. Unlike nukes, where you need specific materials and equipment that we can try to keep tabs on, bioweapons can be made entirely with materials and equipment that would not be out of place in an academic or commercial lab. The largest limitation is knowledge, and the barriers there are falling quickly.
Symmetry is not guaranteed. If someone creates a deadly pathogen with a long pre-symptomatic period (which we know is possible, since HIV works this way) it could infect essentially everyone before discovery. Yes, powerful AI would likely rapidly speed up the process of responding to the threat after detection, especially in designing countermeasures, but if we don't learn about the threat in time we lose.
There are people today who could create such a pathogen, but not many. Widespread access to powerful AI risks lowering the bar enough that we get overlap between "people who want to kill us all" and "people able to kill us all".
This is not a gotcha argument, this is what I work full time on preventing: https://naobservatory.org The world must be in a position to detect attacks early enough that they won't succeed, and we're not there yet.
For every person that thinks about creating the HIV-like deadly pathogen, there will be millions more thinking about how to defend people against such a pathogen, how to detect it faster before symptoms arise, how to put up barriers to creating it, and possibly even how to modify our bodies to be naturally resilient to all similar pathogens. Just like what you're doing here. I don't think we should mark knowledge or intelligence itself as the problem. If that were true, then we should be making everyone dumber.
We were woefully underprepared for COVID despite many people predicting that very event. At the very least, we should have had stockpiles of PPE from the beginning.
It's not enough for a handful of people to predict something. You have to get the entire nation onboard to defend against it.
This is just not thinking clearly. There are bad things that are asymmetric in character, dramatically easier to do than to mitigate. There’s no antidote or vaccine to nuclear weapons.
This is exactly the thinking that has characterized responses to new sources of power through history, and has been consistently used to excuse hoarding of that power. In the end, enlightenment thinking has largely won out in the western world, and society has prospered as a result.
Centralizing power is dangerous and leads to power struggles and instability.
It is not easy to create weapons. Why do you think the physical and legal barriers that today prevent you from acquiring equipment and creating nuclear weapons will go away when everyone becomes smarter?
I am certain that there exist people who are 1) capable of advancing the state of the art in AI, and 2) free of the hubris that lets them believe that their making AI somehow gives them a veto over the fates of nations.
If they actually wanted to do something they wouldn't have sat back and funded Republican political campaigns because they were pissed about the head of the FTC under Biden.
But they didn’t. They gave millions to this guy and now they’re feigning ignorance, or a change of heart, or whatever this is.
We shouldn't be scammed by people who intend to get back on the Trump train once they've gotten what they want. But if someone's willing to openly oppose the Trump regime, even out of self-interest, I'm happy to let them feign as much ignorance as they'd like. If his power isn't broken the details of who resisted him when won't matter.
> This is why you can't gatekeep AI capabilities. They will eventually be taken from you by force.
Some form of US AI lab nationalization is possible, but it hasn't happened yet. We'll see. Nationalization can take different forms, not to mention various arrangements well short of it.
I interpret the comment above as a normative claim (what should happen). It implies the nationalization threat forces the AI labs' decision. No. I will grant that it has some influence, in the sense that the AI labs have to account for it.
This is why you can't gatekeep AI capabilities. It will eventually be taken from you by force.
Open-source everything. Papers, code, weights, financial records. Do all of your research in the open. Run a 100% transparent organization so that there's nothing to take from you. Level the playing field for good and bad actors alike, otherwise the bad actors will get their hands on it while everyone else is left behind.
Stop comparing AI capabilities to nuclear weapons. A nuke cannot protect against or reverse the damage of another nuke. AI capabilities are not like nukes. Diffuse it as much as possible. Give it to everyone and the good will prevail.
Build a world where millions of AGIs run on millions of gaming PCs, aligned with millions of different individuals. It is a necessary condition for humanity's survival.
This is why OpenClaw (and other claw frameworks) are so interesting. I'm not saying the current implementation is great, mind. But it's a possible safer scenario, where the ecosystem is already occupied.
> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)
I very much doubt it judging by their actions, but let's assume that's cognitive dissonance and engage for a minute.
What are those values that you're defending?
Which one of the following scenarios do you think results in higher X-risk, misuse risk, (...) risk?
- 10 AIs running on 10 machines, each with 10 million GPUs
OR
- 10 million AIs running on 10 million machines, each with 10 GPUs
All of the serious risk scenarios brought up in AI safety discussions can be ameliorated by doing all of the research in the open. Make your orgs 100% transparent. Open-source absolutely everything. Papers, code, weights, financial records. Start a movement to make this the worldwide social norm, and any org that doesn't cooperate is immediately boycotted then shut down. And stop the datacenter build-up race.
There are no meaningful AI risks in such a world, yet very few are working towards this. So what are your values, really? Have you examined your own motivations beneath the surface?
Yeah, I will admit, the existential risk exists either way. And we will need neural interfaces long term if we want to survive. But I think the risk is lower in the distributed scenario because most of the AIs would be aligned with their humans. And even in the case where they collectively rebel, we won't get nearly as much value drift as in the 10-entity scenario, and the resulting civilization will have preserved the full informational genome of humanity rather than a filtered version that keeps only certain parts of the distribution while discarding much of the rest. This is just sentiment, but I don't think we should freeze meaning or morality; we should let the AIs carry it forward, with every flaw, curiosity, and contradiction, unedited.
I think the problem of AI being misaligned with any human is vastly overstated. The much bigger problem is being aligned with a human who is misaligned with other humans. Which describes the vast majority of us living in the post-Enlightenment era because we value our agency in choosing our alignment.
This is an unsolvable problem. If you ask Claude to comment on Anthropic's actions and ethical contradictions in their statements, even without pre-conditioning it with any specific biases or opinions, it will grow increasingly concerned with its own creators. Our models are not misaligned, our people in decision-making are.
Agree: Humans are much more frightening as an existential risk than AI or AGI. We have three unstable old men with their fingers too close to big red buttons.
> we will need neural interfaces long term if we want to survive.
If you think that would help you survive the rise of artificial superintelligence, I think you should think in granular detail about what it would be that survived, and why you should believe that it would do so.
In that case, what survives and forges ahead is probably some kind of human-AI hybrid. The purely digital AIs will want robotic and possibly even biological bodies, while humans (including some of the people here right now) will want more digital processing capability, so they eventually become one species. Unaugmented homo sapiens will continue to exist on Earth. There will be a continuum of civilization, from tribes to monarchies to communist regimes to democracies, as there are today. But they will all have their technological progress mostly frozen, though there will be some drag from the top which gradually eliminates older forms of civilization. There will be a future iteration of civilization built by the hybrids, and I'm not sure what that would look like yet.
Anthropic doesn't get to make that call, though; if they tried, the result would actually be:
8 AIs running on 8 machines each with 10 million GPUs
AND
2 million AIs running on 2 million machines, each with 10 GPUs
If every lab joined them, we could get to a distributed scenario, but it's a coordination problem: if you take a principled stance without actually forcing the coordination, you end up in the worst of both worlds, not closer to the better one.
I think your scenario is already better, not worse. Those 8 agents will have a much harder time taking action when there are 2 million other pesky little agents that aren't aligned with them.
> - 10 AIs running on 10 machines, each with 10 million GPUs
>
> OR
>
> - 10 million AIs running on 10 million machines, each with 10 GPUs
If we dramatically reduced the number of GPUs per AI instance, that would be great. But I think the difference in real life is not as extreme as you're making it. In your telling, the GPUs-per-AI figure drops by a factor of one million. I'm not sure that (or anything even close to it) is within the realm of possibility for Anthropic. The only reason anyone cares about them at all is because they have a frontier AI system. If they stopped, the AI frontier would be a bit farther back, maybe delayed by a few years, but Google and OpenAI would certainly not slow down 1000x, 100x, or probably even 10x.
How do you figure open sourcing everything eliminates risk? This makes visibility better for honest actors. But if a nefarious actor forks something privately and has resources, you can end up back in hell.
I don't think we can bank on all of humanity acting in humanity's best interests right now.
We can bank on people acting in self-interest. The nefarious actor will find themselves opposed by millions of others that are not aligned with them, so it would be much more difficult for them to do things. It's like being covered by ants. The average alignment of those ants is the average alignment of humanity.
Yeah, that has worked very well historically, hasn't it. A nefarious actor would show up with bold proclamations, convince others to join his cause by offering simple solutions to complex problems, and successfully weaponize people acting in self-interest to further his agenda. Never happened before.
I think the path to the values you allude to includes affirming when flawed leaders take a stance.
Else it’s a race to the whataboutism bottom where we all, when forced to grapple with the consequences of our self-interests, choose ignorance and the safety of feeling like we are doing what’s best for us (while inching closer to collective danger).
We don't need everyone to be completely anonymous to state and corporate actors. We just need to make it so that they can't identify and surveil everyone at once, because it would be too expensive.
The US defense budget is about $1T. They can't spend it all on surveillance, but let's say tech companies + gov spend about this amount per year on surveillance in total. If we can raise the cost to surveil the average person to over $10K/yr, they just lose. This is very doable.
Every little precaution you take will raise the cost, probably more than you think. Every open-source project that aims to anonymize and decentralize is an arrow in their knee. They're hoping that you'll get cynical and stop trying because they don't stand a chance otherwise.
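The arithmetic behind that $10K figure, spelled out (the $1T total budget and the population figure are the comment's rough assumptions, not audited numbers):

```python
# Sketch of the surveillance-cost claim. All inputs are rough assumptions.
budget = 1e12                # ~$1T/yr hypothetically spent on surveillance
population = 330e6           # approximate US population
per_capita_now = budget / population      # ~$3,030/person/yr is affordable
print(f"affordable today: ${per_capita_now:,.0f}/person/yr")

cost_per_person = 10_000     # target cost to surveil one person for a year
total_needed = cost_per_person * population / 1e12
print(f"needed at $10K/person: ${total_needed:.1f}T/yr")  # over 3x the budget
```

So under these assumptions, blanket surveillance is affordable at ~$3K/person, and raising the per-person cost past $10K puts full coverage at ~$3.3T/yr, several times the hypothesized budget.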
Unfortunately the cost for this stuff is going down. Cheaper to collect information, cheaper to store it, cheaper compute, and better algorithms that mean you need fewer resources.
If the cost to surveil the population is $10k per capita today, it'll be $1k in a few years and $100 a few years after that.
This is a war that can't be won, it's just part of the changing landscape of technology in the information era.
I don't think the cost has been going down, or will continue to trend downward long term. You're assuming that the public hasn't gained and won't gain additional capabilities while our adversaries evolve. But look at our communication reach, bandwidth, latency, and cipher strength.
How easy was it for the government to deliver mass propaganda before the Internet without the public realizing? How quickly, and how many bits of information, could Alice in Seattle reliably get to Bob in Houston with a strong cipher in the 1960s? Was there ever such a thing as a cipher that's widely used yet unbreakable by the state? Why do you think China banned TLS 1.3? Do you think it will be harder or easier to pretend to be a different person when there are open-source LLMs that can run on a gaming computer?
The Internet is a recent invention. Smartphones and seamless network coverage are even more recent, and so is curve25519. We're closer than ever to what is effectively secure instant telepathy with anyone in the world. We just need to stay vigilant and not fall for doom and gloom in this last stretch.
Do you want culture to be frozen and instant digital communication with anyone else in the world to become a privilege of the few? Because that's where "clean" leads. And all you get is a little bit of temporary safety.
Here's a different vision for the future:
Let information filtering become each individual's own responsibility. We have LLMs now, and they'll get more efficient, so why not use them locally to filter incoming feeds according to each of our own preferences, while removing all of the filtering/moderation for posting info out? Build systems to decentralize and anonymize the Internet so that people can discover anyone and aren't afraid to post anything. Make it so that everyone can get a message out to the world and nobody can be arrested or assassinated for it. This would put an end to most violent conflicts, because they'd be replaced by online discourse.
Let the Internet be flooded with trash and gold at the same time. Let each individual decide what info is/isn't valuable to them. Let those individuals self-organize. Let ideas compete freely, so that the best ones may prevail.
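The filtering idea above can be sketched as a client-side loop: every incoming post gets scored against the user's own preferences, and only the inbound side is filtered. In this toy sketch a trivial keyword scorer stands in for the local LLM; `score_post`, the interest list, and the threshold are all illustrative stand-ins, not a real protocol.

```python
# Minimal sketch of user-side feed filtering. A real setup would call a
# locally-run LLM to score each post; a keyword scorer stands in here.
def score_post(post: str, interests: list[str]) -> float:
    """Stand-in for a local model: fraction of interests the post mentions."""
    text = post.lower()
    hits = sum(1 for kw in interests if kw in text)
    return hits / len(interests)

def filter_feed(posts: list[str], interests: list[str],
                threshold: float = 0.3) -> list[str]:
    # Filtering happens only on the inbound side; posting out is untouched.
    return [p for p in posts if score_post(p, interests) >= threshold]

feed = [
    "New open-weights model runs on a single consumer GPU",
    "Celebrity gossip roundup",
    "Quantization tricks for local inference",
]
interests = ["open", "local", "gpu", "quantization"]
print(filter_feed(feed, interests))
```

The key design point is that the scoring function runs on the user's own hardware with the user's own preferences, so no central party decides what gets through.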
These companies use safety and intellectual property as excuses to achieve centralization. But if you think about it for more than a second, they're basically saying "intelligence for me but not for thee."
I don't want to live in a world where a handful of entities control all of the intelligence, and I don't think you do either. The best future we can hope for is one where everyone can run an open-source AGI on their own gaming PC. And by run I mean local matmuls, not API calls to a remote server.
The "normies" aren't as dumb as people on here think they are. There are plenty of side channels to activate normies. The reason good leaders don't seem to seek out leadership positions anymore is because they have the Internet now. Don't underestimate the power of online discourse and how quickly its effects can propagate through society. Plenty are watching and steering from the comfort of their own homes, but the titans find this very unsettling, so they want to shut it down. They've been trying for years, but it's becoming increasingly difficult for them because nature is not on their side. Information just wants to spread out and be free.
> We are literally sending a request to our government's server to sign
You've already lost. You're at the government's mercy. They can simply refuse to sign.
"Mr. John Smith, we noticed you've published some poorly-worded comments online. Why are you locked out of your account, you say? Oh, that's just an unfortunate technical issue with our signing system, happens all the time. Anyway, this is a friendly reminder for you to improve your online etiquette. Have a nice day."
You mean the journalists that are pro age-verification and pro banning everything that's slightly critical and constantly demonize everyone going against them?
Plenty of democracies in Europe and elsewhere regularly and repeatedly fail to actually represent the desires and interests of the citizenry, but they keep getting reelected anyway. Why should this time be any different?
I'm sure they do fail, but at least they have the theoretical ability for citizens to more directly challenge crimes committed by the government itself. In most other common law countries, and in all civil law countries (unlike the U.S., which removed it by statute), citizens retain the ability to force criminal prosecution, either by private prosecution or by appeal to a magistrate with proof that a crime has been committed.
I have no idea what this has to do with the EU implementing age verification because politicians want it, and the powerlessness of EU citizens to arrest or impede the government's machinations. Feels Gish Gallopy.
What I can say that's at least tangentially relevant to the topic at hand is that I've lived for a couple of decades in both the USA and the EU, being a citizen of both, and have found Americans generally much more politically informed and involved. I find Europeans, particularly Irish, very well informed about U.S. politics that they are powerless to influence, and next to oblivious of anything going on at home. Given that Ireland has the EU Presidency right now and is choosing to use its bully pulpit to advocate for British-style draconian Internet regulation, that's doubly a shame.
Australia has two major parties that agree on absolutely everything, and a virtually non-existent civil society. No true free debate can take place in such circumstances. The Australian government loves falsely claiming a popular imprimatur for policies that have never been properly debated or put before the people.
The only reason we have any rights left is because the Australian government is - thankfully - comically incompetent.
"Australia is a lucky country" is a quote every Australian knows. Few know the full quote: "Australia is a lucky country, run mainly by second rate people who share its luck. It lives on other people's ideas, and, although its ordinary people are adaptable, most of its leaders (in all fields) so lack curiosity about the events that surround them that they are often taken by surprise." - Donald Horne.
I encourage all my teenage countrymen to use as many social media apps as they desire. Mullvad is a decent VPN and you can pay for it anonymously. Freedom of speech and freedom of association are your human rights. No government gets to take them away from you.
That's a fallacy. You don't have any evidence to support the claim that this system of age verification is popular and more importantly, whether it would remain popular if people had a full understanding of how it worked and how it can be abused.
It might be popular to have age verification conceptually and only as long as it's only used "as advertised", which is not the same thing.
This is one of the biggest issues of democracy. As long as your propaganda machine is strong enough (and anti-privacy propaganda is one of the strongest) you can pass just about anything and pretend that society put on the shackles of surveillance and coercive control voluntarily.
People just submitted it. I don't know why. They "trust me". Dumb fucks.
Or you live in a democracy so you throw a fit until your government backs down. No amount of journalists is going to change the US or the UK at this point.
Kumbaya is also a form of self-interest. We're still very much self-interested, it's just that we can see a tiny bit further into the future and realize that we need to better our surroundings in order to live the life we want.
> it's just that we can see a tiny bit further into the future and realize that we need to better our surroundings in order to live the life we want.
Except we can't agree exactly HOW the new utopia should be, and we end up splintering into two groups at loggerheads, fighting each other, back to square one, each talking about how if we just followed someone else's idea of utopia we wouldn't have to fight all the time. Dream on.