I know we're getting deep into the meta discussion, but the free will you're describing basically involves starving to death. Sure, you can walk away, but unless you're well off, we all live in the same society that makes sure you are ALWAYS dependent on some kind of wage. You cannot live off the land, build housing, or eat food without some kind of income in the modern world. Hence the concept of the wage slave.
But wage slavery, while bad, still isn't slavery. In slavery proper, the option of walking away simply doesn't exist. In fact, in extreme cases, even the option of dying might not be available.
It is slavery. Chattel slavery is much more severe than what we normally consider slavery, yet both are still slavery. The reason what you're saying is so accepted is that we are currently living under a universal liberal world order that says wage slavery is freedom.
I hope you noticed I didn't mention chattel slavery. Even prior to it, all forms of slavery were about removing a person's agency and subjugating the will of the slave to the owner. That requires an active action.
Not hiring someone is a passive action. As many have said, you are not entitled to a wage; in fact, suggesting otherwise would actually require slavery. Wage slavery, instead, is a description of a particular material condition of destitution, not necessarily connected to the ethical evaluation of slavery proper.
No one says "wage slavery is freedom". What the "universal liberal world", that is, the pro-free-market side, says is that people should be free to associate with each other as they see fit. Being hired to provide labor in exchange for a wage, the basis of wage work, is merely an extension of this. While wage work is a prerequisite for wage slavery, at no point have economic liberals said that everyone should live under wage-slavery conditions.
The common, orthodox, sociological/economic meaning of "wage slavery" is being paid, on average, barely enough to make a living, i.e. destitution in the conventional sense.
I suppose you are referring to the Marxist meaning, which is technically (at least as far as I know) the original one. First, Marxist economics is considered heterodox nowadays. Second, it is still about "destitution", in the sense that the working class is formally destitute of the means of production, requiring them to sell their labor to gain access to it. If that's the case, I hope you notice that this weakens your point about wage slavery being a form of slavery, as you lose the analogy to the actual material conditions.
Sounds like having a W-2 is a pretty good deal for you then.
Slavery isn’t defined by “I don’t want to walk away because the deal is too good”; it’s more like “I’m unable to walk away because I’m threatened with force if I do”.
My dad used to refer to that as the golden handcuffs when he worked for GE. Wouldn't compare it to slavery though, he just felt trapped there because nobody else would pay him that well or give him as good of benefits
Have you seen Reddit recently? Every single subreddit is full of AI posts with AI replies. I'm actually convinced a large majority of that is Reddit themselves artificially boosting their engagement metrics. The saddest part is that the engagement makes it obvious that the general population can't differentiate between AI and real humans even with the telltale signs.
> Every single subreddit is full of AI posts with AI replies.
This has really started getting to me.
I used to really enjoy answering technical questions on Reddit when it was clear the asker was invested in a solution. That investment came across as demonstrated understanding and competence, and it was reflected in their writing.
The last several posts I thought to answer though clearly originated through a process of, "Hi ChatGPT, I want to solve a problem and haven't gotten anywhere asking you to do it for me. Please write a reddit post I can copy and paste..."
One of the telltale signs is that the post title will have poor grammar, but the post itself will be spotless, and full of bolded text emphasizing exactly what they need to stick into the AI tool to drive it in the direction they need.
It’s not just technical content. Just the other day I was reading a post on r/seattle by a guy who was newly employed but still homeless, about that exact experience.
The post was full of “this is not a scheduling conflict problem, this is a structural issue with the city”, “this is not me asking for a handout, this is struggling to survive within the system”
While I get that he might have written a paragraph of his experience, and asked ChatGPT to clean it up or reword it, it was just… whatever.
This is exactly the type of thing I'm talking about and why I'm convinced it's about metrics/engagement boosting. I don't believe for a second that real people are using ChatGPT and the like to reword real thoughts, even from another language, because those phrases are not natural even in translation. You'll also notice that the original post always ends with a question that encourages replies. If the original poster even bothers to reply, it's always "you're right" at the beginning and then a rephrasing of the reply. Once you've seen it you can't unsee it.
I just made an account on this site to tell you that after having a "extreme" epiphany about just how crazy the ai bots are on reddit, I've been constantly researching and trying to find some sort of conclusive answer. This is part theory, part public knowledge, and part auditing (which is fucking hilarious that I audited a module for this).

I am absolutely and totally convinced that there is live and active collusion between major AI companies and Reddit, and I'm not talking about handing over old training data, I'm talking about allowing OAI and Googs (this is my bad attempt at hiding the names) to use Reddit as a real live testing cage ACTIVELY AND WITHOUT CONSENT OR KNOWLEDGE. I have reason to believe they are using contractors to hide or shift blame, I believe they have no oversight, and I believe they are using LIVE UPDATING OF MODULES with realtime engagement of users via comments. It is consistent and targeted, with any testing parameter under the sun being experimented live and on flesh (or keyboards used by flesh).

I believe this is contractual with reddit via hidden means, and is mutual due to the increase in "engagement" which benefits Reddit's stock prices, which in turn increases cash flow, which in turn incentivizes increasing cash flow, which involves contractors, etc etc, in and out, in and out. It's egregious.

And I'm quite frankly for the first time about this: scared and saddened. I miss the old Reddit. I miss randomness. I miss runescape chat in 2006. But I wanted you to know that I'm right fucking with you, and I'm glad people are smelling the same funk that I do. Don't really know what else to say. Keep on rockin'.
It's obvious now that you say it, but I never thought about the AI companies themselves doing this for their own benefit, like training. It's a perfect testing ground to see what works for engagement and what real people want to hear back. The reason is pretty clear: these AI/chat services have real people as users, so logically the better-sounding (not necessarily better) results make those users want to keep using them. At the risk of sounding like AI... you're right... they may have been trained on old content, but they are now using live data for fine-tuning and, quite frankly, manipulation.
I miss the organic conversations and real thoughts from real people. I'm the type of person to read the comments before I read the article etc. It always gives more nuanced but also wildly different takes which I find interesting.
Me too my friend. For the record and record's sake only, this is self-theorized and I have not the power, nor the ability, to prove these claims beyond my gut. But as you said, logically (double underline that in your head), from both my own recognition of patterned behavior, and to be honest, from fucking game theory and knowing that people (left unchecked) will naturally squeeze as much juice from the lemon as they can; If I were at a casino, logically and gastrointestinally (gut joke) I would remortgage my own home and drop the deed and keys on the table in order to stake my belief that this is happening. And I fucking hate casinos. Some journalist of much greater reach will hopefully be able to rip back the curtain, but those myopic fuckers have already destroyed trust. We had fun on the playground, we met friends, we learned rumors, we all felt free. But when you find out the jungle gym was greasing the bars on purpose to make us fall, just so they could learn about human bone strength, I doubt you'd visit one again.
And yes, I'd think the value of human to AI dialogue (ironically a single blind study, except the people are blind) is most likely massive. But fomenting? Plus (possible) financial fraud? Woooo boy, what an egregious mistake.
You're absolutely.... that's a tired joke at this point. Sorry.
Just brainstorming, but I suppose that account/karma farming is still useful for the people that do that sort of thing.
Engaging in a heavily on-topic way in larger niche subreddits is probably a really good way to get that done. There's always a motive, it's always money, and it's always idiotic.
I remember having a clear vision of how this tech was going to ruin communities on the internet. I really hate that it has mostly come to pass and there's no good way to fight it.
I’ve been wondering if ChatGPT is actually coming up with the idea of posting to Reddit when the user is asking a question and ChatGPT can’t find a good source to answer it. ChatGPT has never suggested this to me, but it wouldn’t be a completely crazy thing to do. A lot of ChatGPT answers are sourced from Reddit (via search, and also via training data). If everyone starts asking ChatGPT everything instead of Reddit, there won’t be as many new conversations happening. Prompting users to post questions to Reddit would help solve the user’s direct problem, and also make the ensuing answers available to ChatGPT to help with future conversations.
I understand that a lot of people would be very unhappy if this is true, but I can imagine from the perspective of a product person at OpenAI that it helps them in multiple ways.
FWIW I've been saying this since before Covid times. I stopped visiting Reddit when they killed 3rd party clients, but I was certain 50% of conversations there were machine generated _back then_. It's gotta be worse now
I imagine they’ll be fused where moltbook agents become NPCs so that you’re no longer alone in VR but surrounded by a myriad of cognition fragments to feel less alone.
I think, left to their own devices, nobody would have dinner with a convicted sex offender, and at their stages, could have afforded not to have, which makes me way more curious about the people that wouldn’t do business with them unless they did.
> and at their stages, could have afforded not to have,
What do you mean? Specifically. From what I've read so far, people involved were mostly happy to attend and explicitly asking for his time rather than blackmailed.
There’s a difference between blackmail and money networks.
There’s money laundering and credibility laundering. The value to the network isn’t you getting a photo with Epstein, the value to the network is Epstein getting a photo with you.
Is your implication that Zuck and Musk were somehow ... forced ... to visit Epstein's island and hang out with him repeatedly in order for their businesses to succeed?
That doesn't exactly tie in with reality - it's not like Epstein would be the only avenue for multi-billionaires to find partnerships.
And then you have Epstein, after Musk says "Hey, I'm looking for a wild party", saying "There are a bunch of UN diplomats visiting, do you want me to set up dinner for us?" "That sounds like the exact opposite of what I'm looking for. I want something where no one over the age of 25 is attending."
The thing is the billionaires are terrified of US. The point of these surveillance systems isn't to make us safer. Because we're actually pretty safe already. We're not going to be assassinated, kidnapped, or beaten because we pissed someone off.
It's to make people like Garrett Langley feel protected from us.
> The thing is the billionaires are terrified of US.
Are they though? The odds of any kind of coordinated response that could seriously threaten the billionaires seem next-to-none. Flock seems to be a lot more offensive than defensive - it enables the targeting and mass surveillance in order to find and punish the 'right people', as well as mass tracking to create yet another datapoint to understand the way people move, think and coordinate. The defensive side is already covered through internet services, like social media. They don't have much to fear. I reckon that a powerful/rich enough person could kill a stranger on the street in plain view of a huge crowd and have absolutely nothing happen to them.
Friend of mine used to work for a single digit billionaire. No one you know. His name barely comes up in a search. He said he found out after a few years that the guy had been kidnapped and held for ransom.
No he didn't. Luigi turned out to be an anomaly. He proved the public didn't have the stomach for revolution because none was forthcoming. He was reduced to a meme and thrown in prison.
He allegedly murdered a CEO. Regardless of whether it was him or not, a bloodthirsty CEO was murdered by a random member of the public. Other bloodthirsty CEOs no longer feel safe from the public.
Anyone can get shot by a random member of the public, that's the price we all pay for our American freedoms. The fear (and some might say hope) was that Luigi represented something bigger, an actual dawning of class consciousness in the US, but he didn't. He was just a guy with a gun and a grudge and there are literally millions of those.
"some" implies more than one. There was only one. There weren't any more, and there doesn't seem to be any sign of more. And this happened in a city where people get shot to death every day.
Life is literally no less safe for CEOs in the US post-Mangione than prior. The whole narrative that he represented some kind of social or cultural inflection point against CEOs was simply false, and the ones that are actually afraid already hire security, because getting kidnapped for ransom is a much bigger threat than being shot in the street.
Fun fact: I used to automatically screenshot my desktop every few minutes eons ago. This would occasionally save me when I lost some work and could go back to check the screenshots.
I only gave it up because it felt like a liability and, ahem, it was awkward to review screenshots and delete inopportune ones.
Long time ago I had a script that would regularly screenshot my desktop… and display the latest screenshot on a page in my `public_html`, on the public web. Just because I thought it would be fun.
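The periodic-screenshot setup described above can be sketched in a few lines. This is a hedged sketch, not either poster's actual script: the `import -window root` call assumes ImageMagick under X11 (macOS would use `screencapture` instead), and the output directory and five-minute interval are placeholder choices of mine.

```python
import datetime
import pathlib
import subprocess
import time


def shot_path(outdir, now):
    """Build a timestamped filename like desktop-20200102-030405.png."""
    return pathlib.Path(outdir) / f"desktop-{now:%Y%m%d-%H%M%S}.png"


def capture_loop(outdir="~/screenshots", interval=300):
    """Grab the full desktop every `interval` seconds, forever."""
    out = pathlib.Path(outdir).expanduser()
    out.mkdir(parents=True, exist_ok=True)
    while True:
        # ImageMagick's `import -window root` captures the whole X11 screen;
        # the tool choice is an assumption, swap in whatever your OS provides.
        subprocess.run(["import", "-window", "root",
                        str(shot_path(out, datetime.datetime.now()))])
        time.sleep(interval)
```

Timestamped filenames also make the cleanup chore mentioned above easier: sorting by name is sorting by time, so reviewing and deleting the inopportune ones is a simple pass over the directory.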
I don’t plan on using the feature and I don’t plan on using Windows much longer in the first place, but I find that going beyond the ragebait headlines and looking at the actual offering and its privacy policy and security documentation makes it look a lot more reasonable.
Microsoft is very explicit in detailing how the data stays on device and goes to great lengths to detail exactly how it works to keep data private, as well as having a lot of sensible exceptions (e.g., disabled for incognito web browsing sessions) and a high degree of control (users can disable it per app).
On top of all this it’s 100% optional and all of Microsoft’s AI features have global on/off switches.
Until those switches come in the crosshairs of someone's KPIs, and then magically they get flipped in whatever direction makes the engagement line go up. Unfortunately we live in a world where all of these companies have done this exact thing, over and over again. These headlines aren't ragebait, they're prescient.
Well, now you’re just doing the same exact thing I described. You’re basically making up hypothetical things that could happen in the future.
I’ll agree with you the moment Microsoft does that. But they haven’t done it. And again, I’m not their champion, I’m actively migrating away from Microsoft products. I just don’t think this type of philosophy is helpful. It’s basically cynicism for cynicism’s sake.
1. More irrelevant stuff. A kernel level vulnerability can nullify all sorts of good faith security design.
2. I could sue you today for, well, pretty much anything. I don’t have a good case but I can file that lawsuit right now. Would you rather take my settlement offer of $50 or pay a lawyer to go to trial and potentially spend the next months/years of your life in court? You can’t make a blanket statement to say that every company that decides to settle has something to hide, or, similarly, that everyone who exercises their 4th amendment rights has something to hide. I will also point out that companies that make lots of money are huge lawsuit targets, e.g., patent trolls sue large corporations all the time.
Don’t forget we are talking about a fully optional feature that isn’t even turned on by default. I’m not telling you to love Windows Recall; turn it off or switch to Linux if you don’t. My only point is that it’s gotten a lot of news and social media coverage that is factually untrue and designed to get clicks and reinforce feelings.
1. Most people don’t realize kernel hacks undermine their entire mental model of security. Tbh, only after CrowdStrike did I learn it was possible for a security vendor to mass blue-screen a population.
2. I’m very much already on Linux, most of my threat model is: “if it’s technically possible, it’s probable” and I adjust my technology choices accordingly
I’m just saying a $60 cap on Apple’s settlement sets a precedent for future mass-surveillance wrist slaps, and maybe it would be worth the discovery process to uncover the actual global impact.
Anthropic's models are better though. They may not "perform" as well on the LLM task benchmarks, but they're the only ones that actually give semi-intelligent responses and seem aligned with human wants. And yes, Anthropic definitely has much better execution. It's the only one I considered shelling out 20 bucks for.
They should just acquire one of the many agentic code harnesses. Something like opencode works just as well as claude-code and has been around only half as long.
I used opencode happily for a while before switching to Copilot CLI. Been a minute, but I don't detect a major quality difference since they added Plan mode. Seems pretty solid, and first party if that matters to your org.
I read that a few times but from my personal observations, Claude Opus 4.5 is not significantly different in GitHub Copilot. The maximum context size is smaller for sure, but I don’t think the model remembers that well when the context is huge.
We love to hate on Microsoft here, but the fact is they are one of the most diversified tech companies out there. I would say they are probably the most diversified, actually. Operating systems, dev tools, business applications, cloud, consumer apps, SaaS, gaming, hardware. They are everywhere in the stack.
That's a "business" model, not a language model, which I believe is what the poster is referring to. In any case though, MS does have a number of models, most notably Phi. I don't think anyone is using them for significant work though.
Which is kind of a bummer: it'd have helped the standards-based web to have an actually powerful entity maintain a distinct implementation. Firefox is on life support and is basically taking code from Blink wholesale, and WebKit isn't really interested in making a browser that's particularly compliant with web standards.
MS's calculus was obvious - why spend insane amounts of engineering effort to make a browser engine that nobody uses - which is too bad, because if I remember correctly they were not too far behind Chrome in either perf or compatibility for a while.
Then they took their eyes off the ball. Whether it was protecting the Windows fort (why give away for free an app that has all the functionality of an OS, mostly on Windows, some Mac versions, but no Linux support, when people are paying for Windows) or simply diverting the IE devs to some other "hot" product, browser progress stagnated, even after XMLHttpRequest.
I mean. Ask any gamer if the original Xbox One announcement needing a Kinect and persistent internet connection was a feature request from them or a three letter org.
As someone that was there, we saved the Xbox brand by bullying Microsoft out of normalizing spying on kids and their whole families.
These are the right questions and core to the actual “ai race”
Before AI, all we had were compilers and interpreters to take instructions and turn them into machine code and bytecode.
A lot of painstaking politics went into which compilers won out, and even with “better options” there are only really a couple of big-fish workflows for taking an org’s ideas to production.
What passes as a compiler and what passes for a programming language exploded.
I’m very interested in “the final compile target” of these systems AND the output of that still being human readable and influenceable.
On the other hand, I haven’t, and I believe many of us have never, paid Node any money, so it feels weird to dictate their approach.