Courts pretty much always rule in favor of rich corps that steal from individuals, and increasingly so. AI companies have money. Artists don't. That makes AI thievery fine, doubly so since AI corps have financially contributed to the government.
Look, you've made a closed argument. Now if I mention small labs or FLOSS projects that were litigated against, I'd first need to 'stop beating my wife'.
No one is stealing anything. It's not theft. There has been no crime. None of this is anywhere near criminal law.
I could make a more nuanced argument about copyright infringement. But to make that steelman, I'd need to accept too large an Overton window shift, so I'll decline to do so here.
It can output books verbatim. It often "mistakenly" embeds watermarks from famous artists into generated pictures. Arguing that it's not theft just because a bought-and-owned legal system, which worked at a glacial pace even before it was completely bought off, hasn't written a law against it yet is silly. It's analogous to saying that dumping uranium and setting off nukes everywhere in the 1940s and 1950s was great and non-polluting because there was no law against it and nobody was provably hurt (cancer takes a while to develop, so nobody could prove their cancer came from the nukes nearby). People argued back and forth about it at the time. Now we realize that waiting for laws was dumb, because it was pretty obviously bad not just in retrospect but in the moment.

AI makes shoddy copies of good work. It pollutes the internet in ways that will outlive us, just like a nuke does to the world. And it's pure cancer.
You missed Kim’s point entirely. The point was that the term “stealing” is simply the wrong term. I agree with the rest of your argument, but we really really need to stop calling it “stealing”. That really doesn't help anyone.
Nope. Besides not stealing, it's also not nuclear proliferation, cancer, or pollution either. Nor are courts ever likely to call it that. Not even if the defendant is a poor European student. Especially not if the court is actually clean.
The problem is that you're putting it in the wrong legal framing, and it just won't fly. Willing to engage, but not on these terms.
You should realize that this is happening not only in the space of images (where conglomerates aren't a thing), but also in music.
Music conglomerates have money, and their lawsuits will probably settle the issue (unless they settle out of court). That outcome will then apply to all copyrighted works, regardless of medium.
I believe the risk of going up against those big guys is why the big AI labs don't yet have music-generation models.
How about this: ask your LLM to review your post. "Does it follow HN rules?", "How would others read it?", "If I were the other person, how would I feel about this reply?", "Is it convincing to you?" That sort of question. That'll help, and it'll still be your voice.
And beware of what's already in context. Sometimes ideas that seem obvious given antecedents are not so obvious when taken in isolation.
I was doing some modelling over Christmas, and was digging into the papers. It turns out that bioneurons are not very much like perceptrons at all. Depending on type, they are more like a small microcontroller of some sort.
Getting Claude to build mathematical models for me and run simulations really got me back into doing sciency things too. It's the model that's important, not the boilerplate each time!
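For contrast, here's roughly what a perceptron is. The whole "neuron" of most ANN diagrams fits in a few lines, which is the point of the comparison above (a sketch; the weights here are made up for illustration):

```python
import math

def perceptron(inputs, weights, bias):
    """Classic perceptron: a weighted sum pushed through one nonlinearity.

    That's the entire unit. A biological neuron, by contrast, has
    dendritic computation, spike timing, and internal state that this
    model simply doesn't represent.
    """
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

out = perceptron([1.0, 0.5], [0.8, -0.4], 0.1)  # a single scalar in (0, 1)
```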
Tantaman's work is a very interesting body of research on why one group buys into the demographic transition more than some others. I think it's an interesting angle.
On interpreting data, seems like they're coming at LessWrong from a different angle? Bayes? Scientific Method? [1]
A bit more detail:
Demographic transition has been an explicit policy goal for decades. I imagine most moderate+ people have bought into the family planning concept. Yes, the logistic equation predicts it could happen automatically too [2]. And no, collectively we've decided we don't want to find out for sure.
Some conservative confessionals just haven't bought into it. Because, fair or not, they might not buy into anything without a century of thought first.
This pretty much covers a big chunk of Tantaman's data from a different angle I think.
[1] All methods that study how priors shape what you explore.
[2] For instance: house, food, and fuel prices are signals for this kind of thing. I can imagine lots of conversations going "Can we really afford one more? We're up short as is!"
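The logistic equation mentioned above can be sketched with a simple Euler integration (parameters are illustrative, not fitted to any real population):

```python
def logistic_step(n, r, k, dt):
    """One Euler step of dN/dt = r * N * (1 - N/K).

    Growth is near-exponential while N << K (the carrying capacity),
    then flattens on its own as resources get scarce -- the 'automatic'
    transition alluded to above. Price signals are one way scarcity
    feeds back into the effective K.
    """
    return n + r * n * (1 - n / k) * dt

# Illustrative run: the population approaches K and growth stalls by itself.
n, r, k, dt = 1.0, 0.5, 100.0, 0.1
for _ in range(2000):
    n = logistic_step(n, r, k, dt)
```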
That's actually an AI-hard problem, if you think about it. The LLM can go off the rails at any given point. The correct approach is to go at this from the inside out, baking reasoning about safe behaviour into your LLM at every step. (Like Anthropic does.)
So LLMs have empirically been shown to process affect. Rationally you can reason this out too: Natural language conveys affect, and the most accurate next token is the one that takes affect into account.
But this much is like debating "microevolution" with a YEC (young-earth creationist) and trying to get them to understand the macro consequences. If you've never had the pleasure, consider yourself blessed. It's the debating equivalent of nails on a chalkboard.
Anyway, in this case a lot of people are deeply committed to not accepting the consequences of affect-processing. Which - you know - I'd normally just chalk up to religious differences and agree to disagree. But now it seems there are profound safety implications to this denial.
Not sure what to do with that yet.
So far it seems obvious that you need to be prepared to at least reason about affect. Otherwise it becomes rather difficult to deal with the potential failure modes.
I'm going to let the above stand even with downvotes. It's the first time I've tried to express quite this opinion, and it's definitely a tricky one to get right.
Thing is, we need to have ways to reason about how LLMs interact with human emotions.
Sure: the consciousness and sentience questions are fun philosophy. Meanwhile, the affect-processing side alone is becoming important to safety engineering, and it can't really be ignored much longer.
This is pretty much within the realm of what Anthropic has been saying all along of course; but other companies need to stop ignoring it, because folks are getting hurt.