I do find it hilarious that after all the machine learning optimizations done on people's feeds over the years, all the promos handed out for a 1% improvement on this metric, every E7 and E8 who can claim x% of this or that, after all of that work, we might genuinely, and not even as a joke, be in the situation of needing to throw _other_ AI agents at this selfsame feed in order to extract any real value from it. What a world we've built.
I am afraid that we are heading toward a world in which we simply give up on correct code as an aspiration to strive for. Of course code has always been bad, and of course good code has never been a goal in the whole startup ecosystem (for perfectly legitimate reasons!). But the idea that real production code, for services that millions or even billions of people rely on, should be reliable, that if it breaks that's a problem: this is the whole _engineering_ part of software engineering. And we can say: if we give that up, we're going to have a whole lot more outages, security issues, all the things we are meant to minimize as a profession. And the answer is going to be: so what? We save money overall. And people will get used to software being unreliable; which is to say, people will not have a choice but to get used to it.
I disagree. An analytics tool that's correct 99.9% of the time is not 0.1% less valuable than a tool that is always correct. It's 100% less valuable.
An outage is the easy failure mode. I can work around a service that's up 80% of the time but 100% correct. A service that's up 100% of the time but only 80% correct is useless.
Some years later, I interviewed at Knight Capital, just a couple of weeks before their blowup. (Dreadful interview at which I did dreadfully, being asked to write C _over the phone_ by a supremely uninterested engineer. Quite a red flag in retrospect.)
A lack of imagination on my part perhaps, but I can't think of anything I'd use it for which isn't either: 1) cheating myself and others out of leisure (e.g. I suppose I could use it to fake-keep up with friends...? Or plan a holiday and book a load of stuff for me. But I like doing that!), 2) not feasible (loading and unloading the dishwasher, which is already the robot I use to wash dishes), or 3) utterly insane (it's tax time in the US).
Think about how openclaw can automatically send email to book guests on your podcast that no one watches.
It is a game changer. It can even automatically send email to other people who have podcasts and YouTube channels no one watches, inviting them to come on your podcast no one watches, and you can both discuss what a game changer openclaw is.
It can even then watch the podcasts for both of you so now you both have viewers!
I'm reminded of Douglas Adams's take on video recorders as machines to watch television so you don't have to. He should have been around to see Marvin built for real!
I do genuinely wonder about the endgame here. Why would the objective winners of the _current_ system, our billionaire class, want to disrupt that system? Do they really believe that they will necessarily be winners in the new world too? Are they that arrogant?
They already understand that the current system and status quo are going away. They understand, on some level, the consequences of the technocapitalist system they've built and perpetuated.
I think that assuming human agency (building technocapitalism, correcting course) or the possibility of escaping capitalism and its consequences (in bunkers) underestimates what capitalism is.
What I find interesting (and this reflects my ignorance of how these things are used) is that if you look at, say, FAANG companies, Office isn't used. I've worked for two FAANGs over the past couple of years, and everything is done via Google Docs. Replacing a giant suite like Office looks hard; replacing something lighter like Google Docs looks much easier, and surely that should suffice?
I haven't watched the whole interview. In the clip, a couple of things jump out:
1. He was speaking to a receptive audience. Note the head nods when he starts to draw the comparison between the energy needed to bring a human up to speed and the energy needed to train an AI.
2. He is trying to rebut a _specific_ argument against his product, that it takes even more energy to do a task than a human does, once its training is priced in. He thinks that this is a fair comparison. The _fact_ that he thinks that this is a fair comparison is why I think it is too generous to say that this is just an offhand comment. Putting an LLM on an equal footing with a human, as if an LLM should have the same rights to the Earth as we do, is anti-human.
It also contains a rather glaring logical flaw that I would hope someone as intelligent as Altman would see. The human will be here anyway.
> 2. He is trying to rebut a _specific_ argument against his product, that it takes even more energy to do a task than a human does, once its training is priced in. He thinks that this is a fair comparison. The _fact_ that he thinks that this is a fair comparison is why I think it is too generous to say that this is just an offhand comment. Putting an LLM on an equal footing with a human, as if an LLM should have the same rights to the Earth as we do, is anti-human.
> It also contains a rather glaring logical flaw that I would hope someone as intelligent as Altman would see. The human will be here anyway.
Exactly. Perhaps in Altman's world, a human exists specifically to do tasks for him. But in reality, that human was always going to exist and was going to use those 20 years of energy anyway; they only happened to be employed by his rich ass when he wanted them to do a task. It's not equivalent to burning energy on training an LLM to do that task.
> as if an LLM should have the same rights to the Earth as we do,
I don't see him calling for an LLM to have rights. I don't think this is part of how OpenAI considers its work at all. Anthropic is open-minded about the possibility, but OpenAI is basically "this is a thing, not a person, do not mistake it for a person".
> It also contains a rather glaring logical flaw that I would hope someone as intelligent as Altman would see. The human will be here anyway.
His point is flawed in other ways: the limited competence of the AI, and the fact that even an adult human eating food for 20 years has an energy cost at the low end of the estimated cost of training a very small, very rubbish LLM, and nowhere near the cost of training one anyone would actually care about. And even the fancy models are only OK, not great, and lots of models get trained rather than this being a one-time thing. Or, in the other direction, each human needs to be trained separately and there are 8 billion of us. And what he says in the video doesn't help much either; it's vibes rather than analysis.
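For scale, here's a quick back-of-envelope sketch of that energy comparison (all figures are rough public estimates I'm supplying myself, not numbers from this thread):

```python
# Rough comparison: 20 years of human food energy vs. one LLM training run.
# All constants below are coarse, assumed estimates, not figures from the thread.

KCAL_PER_DAY = 2000          # typical adult intake
JOULES_PER_KCAL = 4184
DAYS_IN_20_YEARS = 20 * 365
JOULES_PER_MWH = 3.6e9       # 1 MWh = 3.6 billion joules

food_mwh = KCAL_PER_DAY * JOULES_PER_KCAL * DAYS_IN_20_YEARS / JOULES_PER_MWH

# Commonly cited estimate for training GPT-3 (Patterson et al., 2021).
GPT3_TRAINING_MWH = 1287

print(f"20 years of food: ~{food_mwh:.0f} MWh")                   # ~17 MWh
print(f"GPT-3 training:   ~{GPT3_TRAINING_MWH} MWh")
print(f"ratio:            ~{GPT3_TRAINING_MWH / food_mwh:.0f}x")  # ~76x
```

Feeding an adult for two decades comes to roughly 17 MWh, which is indeed closer to the training cost of a small model than to that of anything frontier-scale.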
But your point here is the wrong thing to call a flaw.
The human is here anyway? First, no: *some* humans are here anyway, but various governments are currently increasing pension ages due to the insufficient number of new humans available to economically support people who are claiming pensions.
Second: even if the answer were yes, so what? That argument didn't stop us substituting combustion engines and hydraulics for human muscle.