I guess it's anecdata. Polish engineers I've worked with weren't that good at either the technical stuff or communication (in English). They're overprotective of "their" code, and in general we've had more luck with western/southern Europeans.
I'm from Poland, but I've worked at a multinational company in Europe, and I'd rank Polish people, on average, in the middle of the pack in terms of work ethic.
Behind Germans or Scandinavians, but ahead of most Mediterraneans.
I'm Polish, working for globally remote companies. I second the communication issue. Most Polish devs are so ashamed of their English (even if it's perfectly communicative) that it makes it hard to discuss technical ideas with them. As for technical knowledge, I guess that's cognitive bias; most Polish devs I've met were far better at tech stuff than, e.g., most Germans I worked with.
> And AI clowns will cheer and applaud this, not seeing that they're now doing the job of 5(!) people with the same salary. Why is nobody talking about this?
If you're a regular engineer like me, there's no real upside to using AI in a company setting. They're boiling us. Of course, the HN elite (investors, execs, celebrities, and top-tier engineers) will say otherwise because "how can you be against innovation man?"
AI/LLMs aren't innovation the way TCP/IP, Linux, or Postgres were. To be clear: Claude/Codex/Gemini/Grok/whatever exist for profit, to squeeze the last drop of productivity out of you until there's nothing left, and then you're disposable (laid off).
If you like AI, use open source models, use them in your side projects.
1) The game is not ending, it's changing. AI can sling a lot of code but we still need engineers that actually understand what the hell is going on. That's always been the bottleneck. It could eliminate junior positions, but seniors are fine for now.
2) It's been a hard lesson for me to learn because I'm naturally a contrarian, but you are hired to do what management wants you to do. If you resist, your best bet is to hope they don't notice or care, but it's not going to change much.
The people for whom that's a problem don't understand this fact. Of the ones that do, there's upper management and/or shareholder pressure for profits now. It's a can that gets kicked down the road indefinitely until they hit a dead end.
I see this take all the time, but hiring a junior/intern has never been great ROI, or so I hear. Why did we ever do it in the past? It's not like it was ever likely that hiring a junior meant getting an employee for life. Could it be that economic and shareholder pressures are requiring this, rather than it being a logical thing?
Anecdotal counterpoint, the best teams I've been on have always had a good mix of a couple of really senior/decent intermediate people and a few either totally fresh grads or juniors (at the beginning of the project). Those fresh people have a good chance of becoming pretty formidable pretty quickly with the right mentoring, and without them seniors have a tendency to just remain experts on whatever tech stack they're familiar with but not think out of the box.
Hiring a mediocre senior is much worse than hiring a grad because they will never get any better, and it's very hard to know at hiring time that they're mediocre.
I'd also add that top-heavy engineering organizations are sometimes incapable of delivering anything useful because everyone wants to work on the hard problems, establish the frameworks, define the processes, and so on, and no one wants to operate the damn business. It's good to have a mix of perspectives in a team.
Fully agree, actually. Not sure it's a counterpoint at all, really, but it's a great point. My comment wasn't intended to be "juniors were never worth it", but instead "juniors WERE worth it before, and not because they produced amazing ROI themselves, so why does the introduction of an LLM change that?" I'm solidly against the narrative that now, all of a sudden, juniors aren't worth hiring anymore because a senior with an LLM = 100x engineer.
They're making the bet that seniors won't be needed by then. I think it's a bad bet, but it makes sense to follow through if 40% of the economy is already being occupied by this tech.
Then pay will go up again for those mid-level developers who still remain, and companies will again overhire and overtrain like we saw during COVID years. “We won’t have any seniors in ten years!!1!” is a handwringy problem that self solves by the free market.
I'm your CEO. I see you and the rest of your peers have doubled your productivity in the last 2 months because of Claude. Good job! Now, since we don't really need to go that much faster, I'll fire half of you so I and my investor friends can make more profit.
Now of course, you may think you are such a good engineer that companies will kill for you... perhaps that's true now, but it's not true for 90% of the engineers out there. And as the pool of engineers gets reduced, the chances of you not being as good as you thought go up. So the real question is: can we all still make a good living by not using LLMs, supporting each other, and saying fuck the higher ups? No, we can't. We are full of ourselves, full of elitism (this is HN). We are rational folks, we believe in numbers, in data; we know what we deserve. Fuck the rest. The ones who win are the higher ups, of course, not us.
I understand and share your concerns but (without thinking I'm such a good engineer that companies will kill for me), I just don't share your conclusion.
To me, it's pretty simple. I have things to do. This makes it easier for me to do those things. Sometimes that means I can do more things, and sometimes it means I can spend less time on my work, and often both.
I have no idea what the future will hold. But to me, it would be very odd to avoid using extremely useful tools for my current work, because of that uncertainty about the future.
That’s fine. Some people cannot (or don’t want to) think about the more profound consequences of their actions. No one likes to stop for 10 minutes and think deeply about what they are actually doing. The easiest path is always to stay in “robot mode”: my boss pays me $ for my job, therefore I need to satisfy that contract. No time to think.
No, see, this is the disconnect. Whatever happens with this in the future is not due to "the more profound consequences of [my] actions". Whether I choose to take advantage of these useful tools, or not, has absolutely no bearing on the hypothetical future consequences you're suggesting may come to pass.
If you're proposing an organized boycott, I would certainly entertain that proposal. But for me, the bar would be high for both certainty that the hypothetical consequences are likely and bad and that the boycott would have a chance of being effective.
At this particular moment, I'm pretty skeptical on both counts. And I'm flatly against the kind of vibes and guilt tripping driven "boycotts" that you're attempting here.
(And I'm way more bullish on the normal legislative and regulatory processes. I think organized boycotts are something to think about if those processes fail.)
In reality, I think it's more likely that the lay-offs will be when the marginal rate of growth slows down. Once executives see that growth doesn't change much when hiring, they stop hiring, and once they see that growth doesn't decrease much when firing, they start firing.
There's still an opportunity for engineers to eat their bosses' lunch and just start their own company. It's never been easier to start a lower-cost competitor.
Employment isn't a social law of nature: it's a transaction of money for "units of work", just like the business might have with other vendors. Governments should be making it easier to become a vendor.
It seems like a lot of developers have philosophical disagreements with the direction of AI combined with fear of change and fear that AI makes them less competitive in the job market. I see people regularly boycotting or rejecting AI for a variation of these reasons, and it feels a lot like self-sabotage.
My biggest challenge is to look productive while still having some time and focus left to be a good expert. After all, we are just code reviewers now, and you're no good as a reviewer if you never get any shovel time yourself.
The juniors are eliminated and the seniors indulge in cognitive surrender because it feels good.
Some of us still haven't figured out how to hold it right. So on average it doesn't make anything easier. Sometimes it works, and sometimes it just fails. The net effort change for me is about a wash. I know this is different from most people's experience, and I don't know whether I just suck at using it. But I'm not generally inclined to use it much as a result.
Here’s a thought: consider the potentially analogous case of performance-enhancing drugs for athletes. The drugs unambiguously make them better at their jobs, but the drugs have severe long-term health costs and wreak havoc on the fairness of the playing field. It’s easy to see why an athlete might choose not to use them, even when others are.
Of course, those negative factors alone are not enough to dissuade people en masse who want to get a leg up on their competitors, so the use of performance-enhancing drugs must be further restricted by institutional bans.
There are rules against using performance enhancing drugs in competitive sports, because ultimately the goal of the sport is entertainment, and the entertainment value increases with a rule based even playing field.
Business is not like this, because the value of what a business does is in its actual output, not in its entertainment value for spectators.
Of course there are other rules (legislative and regulatory) that apply, for other good reasons. But their goal is not to create an entertaining competitive environment, but rather to control externalities of what companies do.
I favor AI regulation, but I also don't think treating it like a performance enhancing drug would be a smart way to regulate it. Higher business productivity is useful to society in a way that breaking home run records is not.
Oh right. Along those lines, sometimes I do long-hand multiplication or division instead of using a calculator. I think it's a valuable skill. If I don't do it, I'll probably forget how.
Yep! But if I were doing my job as an accountant, I would not spend time doing any of that work using long hand division, I would use a spreadsheet. To me, it's a professionalism thing. If I'm doing a job, I should use the best tool (for me) for that job. (edit: I'm absolutely not suggesting that you are unprofessional! It sounds like AI tools are not the best tool for the job, for you. But they are for me, at the moment.) If I'm doing a hobby or personal development in general, then I can use whatever tools I want for that.
It's interesting though, for a long time I said that if I were going to do a personal programming project I was excited about, I would write all the code by hand, because I do really miss doing that, and I also worry about forgetting how to. But now I'm not so sure. I find my daydreaming about personal projects to be a lot more focused on the outcome than the process, lately. More like "wow, I could do so much in an hour or two a day now! think of the possibilities!" than an excitement about writing code and creating pleasant abstractions.
If it really does make your job easier, that's great for you, but if it isn't making you more profitable, then the company as a whole is wasting money, and some people will have to go until your job is about as stressful as before.
I think if companies feel that AI usage turns out to be wasting money, with negative ROI, then certainly nobody has anything to worry about here! Companies will definitely turn off this spigot the moment they think it's a net negative to their bottom line.
The revealed preference is very far in the opposite direction at the moment.
Serious question. I think the reason that there's such a disconnect among AI-for-work users about whether it's a panacea or bullshit accelerator is that different software developers have massively different duties and conceptions about what their job is or should be.
There’s an engineering story of being abused by capitalists, but from an executive perspective the whole thing strikes me as insane except for “next quarter’s bonus”.
Anyone remember what SCO did to the industry as it went under?
The part I still don’t get is where Enterprises are dumping internal ‘secrets’ (code, processes, customer needs, internal politics, leadership dreams), into the hands of startups and untrustworthy conglomerates. MS used to be famous for NDA and deal abuse.
I don’t believe for a second the LLM giants would be shy about training on corporate materials and lying about it. And if they start going under? This gold rush might have a long, ugly tail.
This is a really bad take; many on Hacker News seem to have a very skewed idea of what a CEO thinks about their employees, or of why firings happen in the first place.
Quite honestly, the people being fired are the ones who are not adopting the technologies. If that's you, you're quite literally just putting yourself in scope.
Just read about Coinbase today. They are culling those who are not adopting the future because they get in the way of progress. They don't help, they don't push things forward, and they hold back those who do.
"Quite honestly the firings that are happening are the ones who are not adopting the technologies, "
And there is why the hate exists. You as the CEO know nothing about how your business works. You neither actually try to understand, nor do you have the technical background to understand. So you substitute gamed numbers. And in doing this, you set up your company to tank the industry that props up the world economy. And then you act like you are the rational one while doing this. There is nothing rational about how most CEOs act. There is a reason why companies do better under dev founders than in any other circumstance. There is a reason why dev CEOs do better than non-dev CEOs. Yet despite this, you will tank both your company and a substantial part of the industry just so you can get yours. That's why you are getting the hate. Ignorant indifference is just as objectionable as the caricature of a CEO you see in these posts.
>You as the CEO know nothing about how your business works.
That's too broad a statement and, quite honestly, in my opinion and experience, wrong. There may be some who this applies to, but for the vast majority that I know, it does not, and when you get to companies that are actually doing firings based on this, it's even fewer.
Same can be said about Claude, Codex, etc. These tools are amazing (technically speaking) but they don't play in our favor (most of us are regular, replaceable employees). Only the usual suspects benefit from AI (executive layer, investors, etc)
It still amazes me how engineers on HN are in awe of AI and LLMs, knowing that 90% of us will be affected (we won't be able to bring money to the table) once the higher ups further normalize the use of AI to reduce headcount. Not everything is about the technical details, people. Grow up.
It's an iterated prisoner's dilemma with all the other developers in the world, and some are vocally choosing to defect. The only rational strategy then is to also defect.
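The game-theoretic claim can be sketched with toy numbers. The payoff values below are my assumption (a standard prisoner's dilemma matrix, not anything from the thread); the point is just that once the rest of the field has visibly defected, cooperating is strictly dominated:

```python
# Standard (assumed) prisoner's dilemma payoffs from my point of view:
# both cooperate = 3, both defect = 1, I defect while they cooperate = 5,
# I cooperate while they defect = 0.
PAYOFF = {("C", "C"): 3, ("D", "D"): 1, ("D", "C"): 5, ("C", "D"): 0}

def total(my_move: str, their_moves: list[str]) -> int:
    """Sum my payoff playing a fixed move against a sequence of opponent moves."""
    return sum(PAYOFF[(my_move, m)] for m in their_moves)

# If the other developers have already defected, defecting beats cooperating:
their_moves = ["D"] * 10
print(total("D", their_moves))  # 10: mutual-defection payoff each round
print(total("C", their_moves))  # 0: the sucker's payoff each round
```

In a genuinely iterated game with reputation, conditional strategies like tit-for-tat can sustain cooperation, but that requires enough players to punish defection, which is the commenter's point about others "vocally choosing to defect".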
Right. It seems then that all these "elite" engineers on HN aren't as smart as we thought (and yeah, I include myself in that bag).
It's deeply sad to see how our most beloved work (those side projects we pour ourselves into purely for the joy of it) will, in the end, be the very reason most of us lose our jobs (not all of us, but the majority). OpenAI/Anthropic/etc. simply took all of that and turned it to their advantage. It's capitalism, sure, but it's heartbreaking... I wouldn't mind being out of a job for another reason, but not for that one, please.
All is not lost though is it? We can invest our efforts into local models and frontier competitors.
I'm not blind, I have Claude pro (not max) and Cursor subscription. But I'm really hesitant to go balls to the wall on the most powerful models because it isn't sustainable; I don't want it to be. So how much can I get from the older models, the smaller, cheaper ones that will hopefully inevitably be commoditized. I think the harness improvements are making headway. I continue to think Cursor Composer 2 is more than adequate.
Then again if one believes it's a race to the singularity, then that's another story. I don't.
The most concise answer as of now is because AI has no "will".
LLMs are objectively smarter than any one person, so by some definition we've already created super-intelligence. The problem is they just sit there. They have all the answers already, if you think about it. Whenever we ask one something, it gives us the answer; it's amazing; we can even say it can synthesize new information. We can agree with all the claims.
But what does it do with that super-intelligence? Nothing. It can't. It doesn't have will. Or interest. Curiosity? Biological imperative. Who knows.
So we create loops and introspection and set them free. Does giving AI a goal make the AI conscious? That's plainly silly, if you ask me.
(I'm trying really hard not to make this philosophy. I really like the philosophy aspect, but this is my 30 second answer to the question)
It's no more conscious than running that cron job to send you today's weather. That's as far as I understand what this link is. The agent is posting blog updates and such. Because it was told to. It has no will. LLM generative output is incredible. It's also not conscious.
As if Claude and Instagram are remotely similar products. But again, these products make it incredibly easy to cancel. If work requires that you use it, make the next job you get not require it or just use it on the job.
I see engineers addicted to Claude the same way non-tech people (friends of mine) are addicted to instagram. At the end it's all the same: making multibillion dollar companies richer every day
>These tools are amazing (technically speaking) but they don't play in our favor (most of us are regular, replaceable employees).
I'm a mid programmer at best compared to the top guys in the industry, who built stuff like OpenClaw, or those 16-year-old prodigy coders who became millionaires, and yet I don't fear the LLM-assisted coding future. I'm at peace knowing that I will adapt to the LLM programming world using my knowledge in my favor, or adapt to a world where I'm no longer a SW engineer but something else.
Also, I find it ironic and poetic how some SW devs here want us to rise up and fight LLMs and the companies making them for disrupting this profession, when the SW dev profession was so well paid precisely because the SW products it produced disrupted other people's professions, moving the savings from labor costs into the pockets of employers, who used SW to optimize processes and repetitive labor so they didn't have to hire as many people. Those devs never saw an issue with other people losing their jobs. "Learn to code", eh?
With hindsight, it's always easy to say anyone could have done it too, but there's more to product success than just coding and shipping an app out the door.
The first iPhone was built using COTS (commercial off-the-shelf) parts that Nokia, Ericsson, and Motorola also had access to, and SW tools they also had access to, yet Apple won and buried the other companies because its end product was way more popular with the customer base. I'm sure engineers from Nokia, Ericsson, and Motorola also said "we could have done exactly the same thing with the right leadership" when they saw it.
I also say "I could have done that" when I see how the maker of Flappy Bird became a multi millionaire, or how any other top 100 AppStore slop app has 100+ million downloads.
Coding skills are a dime a dozen these days. A lot of people can do 95% of these things now. The differentiator between failure and success comes with the remaining 5%: network effects, market know-how, promotion, timing, outreach, UI, UX, luck, etc.
I agree it was a good idea and there’s more to product success, but you were specifically talking about coding skill level.
There are some things I could easily say I (and many others) could not build even in retrospect. Solidworks, for example, is beyond a lot of people’s skill level and very difficult to build.
If anyone with principles quit the moment a company did something bad, you'd be left with only people who are cynical and/or bad and/or sufficiently indentured to be unable to push back against management, and there would be no hope of the company ever improving.
Sure, everyone probably has their own personal line such as "will quit if my employer is declared complicit in genocide by the UN", but bad customer service seems firmly in the "better to stay and advocate doing better from the inside" category
This is a horrendously bad-faith take. You know full well it’s *not* just a one-off $200 issue: they treat customers like this at scale.
Don’t pretend this is an isolated matter, or that CS/billing is the only arena where Anthropic has such systemic issues.
I don’t know you, but your response honestly reads like it’s coming from someone wrestling with their own moral compromises. If so, please take a good hard look in the mirror. (E: yep — https://news.ycombinator.com/item?id=47953576)
> there would be no hope of the company ever improving.
if they can't do anything about it now, what makes you think that situation will change in the future? if remedial action would be punished by those higher on the ladder, it certainly won't be promoted by those folks, leaving this hypothetical employee in exactly the same position they're currently in.
So far we have an Anthropic bug and what seems like an AI-generated "no refund" response that is hours old, not days or weeks. We have no official corporate comms backing this up, we have no real insight into any internal escalation. If your reaction is to quit before you even have any context on what's happening, your employer would probably be better off if you did quit.
> left with only people who are cynical and/or bad and/or sufficiently indentured to be unable to push back against management, and there would be no hope of the company ever improving.
Not in the slightest. There is robust discourse and vocal objection to bad actions at companies such as Microsoft (I used to work there) and Alphabet (currently do). It may not always change the course, but it has absolutely played into decision-making, changed whether features launch or what they look like, etc.
By your own admission in other comments you work for exactly the type of company that optimizes for amoral hires -- Google, Facebook, etc. Based on their actions, Google, Facebook, et al, do seem amoral.
An IC won't be able to steer a ship like that back to morality. Whole teams can't do it. People at Google organized to stop this sort of shit and were fired IIRC?
Large institutions provide cover for bad actions by people who, without said cover, would not take those actions.
Therefore, I believe that "we'd be left with only people who are cynical and/or bad and/or sufficiently indentured to be unable to push back against management, and there would be no hope of the company ever improving" is the status quo.
Who says they're not advocating? Who says they were aware of this before today?
Extend this to other disciplines - if everyone who cared about security resigned every time leadership pushed to rush something out without proper testing, the world would be a worse place. Sticking around and continuing to try to change the culture is how good companies are made.
They're out of ideas. Quitting is an idea. There are plenty of other things to do but if they're not going to bother, then quitting in protest is better than going along, no?
Already way ahead of you. I never started so I consider myself a winner.
On the other hand, I wonder what other filenames one could include in their repos to cause this sort of behaviour. Kind of a nudge towards people leaving these tools.
Invest in local and open source LLMs. They are not as advanced as proprietary ones, but we can all use them and define them as the standard. We don't need closed models
Which now goes for $300 for 32GB. Fucking insane. But prices will eventually come down once the enterprise and corpo scalpers realize AI is a losing deal for human replacement. Nvidia has already said as much.
So, I typically follow blogs from people I already knew (online) pre 2022. In that regard, I'm sure about the quality of such places.
I don't have social media accounts (does HN count as one?), so whatever happens on IG, YT, Twitter, or Facebook: I simply don't give a fuck.
I don't really follow more of the internet, tbh: I don't use Reddit or read the news... I don't even have an ad blocker (which reveals that I'm bothered very little by ads on the sites I frequently use).
I read a bunch of ebooks, though (but again, they're all pre-2022... there's so much to read out there).
> On the flip side, there are hundreds of ways that these tools cause genuine harm, not just to individuals but to entire systems.
Yeah, agree. I think it's the first time I'm asking myself: OK, so this new cool tech, what is it good for? Like, in terms of art, it's discarded (art is about humans). In terms of assets: sure, but people are getting tired of AI-generated images (and even if we cannot tell whether a given image is AI-generated, we can know whether companies are using AI to generate images in general, so the appeal is decreasing). Ads? C'mon, that's depressing.
What else? In general, I think people are starting to realize that things generated without effort are not worth spending time on (e.g., no one is going to read your 30-page draft generated by AI; no one is going to review your 500-file PR generated by AI; no one is going to be impressed by the images you generate with AI; same goes for music and everything else). I think we are gonna see a renaissance of "human-generated" sooner rather than later. I see it already at work (colleagues writing in Slack "I swear the next message is not AI generated" and the like).
> I think it's the first time I'm asking myself: Ok, so this new cool tech, what is it good for?
I feel like this is something people in the industry should be thinking about a lot, all the time. Too many social ills today are downstream of the 2000s culture of mainstream absolute technoöptimism.
Vide Kranzberg's first law: “Technology is neither good nor bad; nor is it neutral.”
Completely unrelated, but I am curious about your keyboard layout, since you typed ö instead of -. These two symbols are side by side in the Icelandic layout, and the ö is where the - is in the English (US) layout. As such, this is a common type-o for people who regularly switch between the Icelandic and the English (US) layouts (source: I am that person). I am curious whether there are more layouts where that could be common.
This is also a stylistic choice that the New Yorker magazine uses for words with double vowels where you pronounce each one separately, like coöperate, reëlect, preëminent, and naïve. So possibly intentional.
Yes, this is exactly correct, and I will die on this hill. Additionally, I don't like the way a hyphenated "techno-optimism" looks and "technOOPtimism" is a bit too on-the-nose.
That makes sense[1] but it prompts the obvious question: does this style write it as typeö then?
1: Though personally I hate it, I just cannot not read those as completely different vowels (in particular ï → [i:] or the ee in need; ë → [je:] or the first e here; and ö → [ø] or the e in her)
No. Firstly because it is spelled “typo.” Secondly you typically use the diaeresis to tell the reader to not confuse it with a similarly spelled sound or diphthong. So it tells a reader that “reëlect” is not pronounced REEL-ect, “coöperate” is not COOP-uh-ray-t, and “naïve” is not NAY-v.
Because written English makes so much sense normally. God forbid someone has to figure out the ambiguous pronunciation of those particular words. It seems like a silly thing to provide extra guidance on to me.
I can’t design wallpapers/stickers/icons/…, but I can describe what I want to an image generation model verbally or with a source photo, and the new ones yield pretty good results.
For icons in particular, this opens up a completely new way of customizing my home screen and shortcuts.
Not necessary for the survival of society, maybe, but I enjoy this new capability.
So we get a fresh new cheap way to spread propaganda and lies and erode trust all across society while cementing power and control for a few at the top, and in return get a few measly icons (as if there weren’t literally thousands of them freely available already) and silly images for momentary amusement?
For better or worse, the only admissible evidence going forward will probably be either completely physical or originated in attestation-capable recording devices, i.e. something like a "forensics grade" camera with a signing key in trusted hardware issued by somebody deemed trustworthy.
Given the obvious personal safety upsell ("our phone/dashcam/... produces court-admissible evidence!"), I think we'll even see this in consumer devices before too long.
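The attestation idea above can be sketched in a few lines. This is a minimal illustration, not a real device protocol: I use a shared-secret HMAC as a stand-in for the trusted-hardware signing key, whereas an actual device would use an asymmetric signature (e.g. ECDSA) with a vendor-issued certificate so verifiers never hold the key:

```python
import hashlib
import hmac

# Stand-in for a key provisioned into the camera's trusted hardware (assumption).
DEVICE_KEY = b"key-burned-into-trusted-hardware"

def attest(footage: bytes) -> bytes:
    """Device side: bind the footage to this device by tagging its hash."""
    digest = hashlib.sha256(footage).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).digest()

def verify(footage: bytes, tag: bytes) -> bool:
    """Verifier side: recompute the tag and compare in constant time."""
    return hmac.compare_digest(attest(footage), tag)

clip = b"\x00\x01 raw frames..."
tag = attest(clip)
print(verify(clip, tag))              # True: untampered footage checks out
print(verify(clip + b"edit", tag))    # False: any modification breaks the tag
```

The hard parts in practice are exactly the ones the comment hints at: who issues and audits the keys, and how you prevent an attacker from pointing an attested camera at a fabricated scene.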
Yes, that is a major worry of mine, too. CCTV evidence is worth nil now (it could be generated in whole or in part), and even eye-witness testimony can no longer be trusted (sure, a witness may think they saw the alleged perpetrator, but perhaps they just saw an AI-generated video/projection of someone).
Trials have rules for evidence. You can't just pull out some footage out of nowhere. Where did that come from? From what camera? What was the chain of custody on its footage? Etc.
If it means anything, I have a 1990 almanac from an old encyclopedia that warns about the exact same thing regarding digital photo manipulation. I don't think it really matters at this point.
Multiple data sources, considering the trustworthiness of the source of the information, and accountability for lying.
You might generate an AI video of me committing a crime, but the CCTV on the street didn't show it happening, and my phone's cell tower logs show I was at home. For the legal system, I don't think this is going to be the biggest problem. It's going to be social media that is hit hardest, where a fake video can go viral far faster than fact checking can keep up.
AI can also be used to fight propaganda; for instance, BiasScanner makes you aware of potentially manipulative news: https://biasscanner.org
So that makes AI a "dual good", like a kitchen knife: you can cut your tomato or kill your neighbor with it; it's entirely up to the "user". Not all users are good, so we'll see an intense amplification of both good and bad.
It's more work to fight bullshit than it is to generate it, though. Saying "Use AI to fight it" is inherently a losing strategy when the other side also has an AI that is just as powerful.
And no amount of BS detecting tells you what is true. The challenge that I see a lot of people have is they really don't have a framework to incorporate new information into.
They're adrift, every new "fact" (whether true or false) blows them in a new direction. Often they get led in terrible directions from statements that are entirely true (but missing important context).
A lot of financial cons work that way: a long string of true statements that seem to lead to a particular conclusion. I know that if someone is offering me 20% APY, there will usually be some risk or fee that offsets those market-beating gains (it may be a worthwhile risk or a well-earned fee, but that number needs to trigger further investigation).
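The "risk or fee that offsets the gains" point is easy to make concrete with toy numbers (the 2% monthly fee and 7% index return below are my assumptions for illustration): a headline 20% APY can lose to a boring index fund once fees compound alongside the returns.

```python
def grow(principal: float, annual_rate: float, monthly_fee: float, years: int = 1) -> float:
    """Compound monthly at the given annual rate, deducting a fee each month."""
    balance = principal
    for _ in range(12 * years):
        balance *= (1 + annual_rate) ** (1 / 12)  # one month of growth
        balance *= (1 - monthly_fee)              # fee skimmed off each month
    return balance

headline = grow(10_000, 0.20, 0.02)  # "20% APY!" with a 2% monthly fee
boring = grow(10_000, 0.07, 0.00)    # plain 7% with no fee
print(round(headline))  # the fee eats the market-beating rate
print(round(boring))
```

Running the numbers is exactly the "further investigation" the comment recommends: the true number is the after-fee, after-risk return, not the headline rate.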
We need people to be equipped with that sort of framework in as many areas as possible, but we seem to be moving backwards in that area.
AI is certainly a dual good but I think the project is misguided at best.
I put in one of the driest descriptions of the Holocaust I could find and it got a very high score for bias, calling a factual description of a massacre emotional sensationalism because it inevitably contains a lot of loaded words.
It also doesn't differentiate between reporting, commentary, poetry, or anything else. It takes text and spits out a number, which is a very shallow analysis.
That pro forma response grows oh so very tiresome.
For the nth time: scale, ease, and access matter. AI puts propaganda capabilities that were far beyond the reach of those men into the hands of many more people. Do you not understand the difference between one man with a revolver and an army with machine guns? They are not the same.
Nowhere in my comment am I “blaming the tools”. I’ll ask you to engage with the argument honestly instead of simply parroting what you already believe without reading.
Did you do a net-benefit calculation? If not, all these knee-jerk anti-AI comments are tiresome and predictable (see: the Luddites).
> I’ll ask you to engage with the argument honestly instead of simply parroting what you already believe without reading
I did engage with the argument. The argument is a tiresome old one that is knee-jerk anti-tech. You seem to be the thoughtless one in this discourse, repeating for the umpteenth time an anti-tech position that assumes the negatives massively outweigh the positives.
Also, why attack me instead of the argument? Did I touch a logical sore point? I believe so.
> For the nth time: scale, ease, and access matter.
By that logic, was the printing press evil? Remember, Mao/Stalin/Hitler used presses to spread their propaganda.
Also, for the n+1 time, using your own style, don't be lazy:
1. Come up with a net benefit calculation for AI. What? You can't? Then, don't try to claim this is all net negative.
2. Explain how AI is different from other tech, like the printing press, which also had scale, ease, and access.
Are you asking if the 10 seconds it takes AI to generate an image is more costly to the environment than a commissioned graphics artist using a laptop for 5-6 hours, or a painter who uses physical media sourced from all over the world?
A modern laptop is running almost fanless, like a 486 from the days of yore.
A single H200 pumps out 700W continuously in a data center, and you run thousands of them.
Also, don't forget the training and fine tuning runs required for the models.
Mass transportation / global logistics can be very efficient and cheap.
Before the pandemic, it was in some cases cheaper to import fresh tomatoes from half a world away than to grow them locally. A single container of painting supplies is nothing in the grand scheme of things, especially compared with what data centers are consuming and emitting.
This argument is so flawed that its conclusion almost loops back around to being correct again:
No, in terms of unit economics, I'm almost certain that the painting supplies have a bigger ecological/resource footprint than an LLM per icon generated, and I'm pretty sure the cost of shipping tomatoes does not decrease that footprint, even if it possibly dwarfs it.
But yes, due to Jevons paradox, the total resource use might well increase despite all that. I, for example, would never have commissioned a professional icon for my silly little iOS shortcuts on my homescreen, so my silly-icon-related carbon footprint went from exactly zero to slightly above that.
these are unfair comparisons. it's not just a single laptop running all day, it's all the graphic designer laptops that get replaced. it's not a single container of painting supplies, it's all of them (which are toxic, by the way).
so if power were plentiful and environmental you'd be onboard with it?
> these are unfair comparisons. it's not just a single laptop running all day, it's all the graphic designer laptops that get replaced. it's not a single container of painting supplies, it's all of them (which are toxic, by the way).
Please see my other comment about energy consumption and connect the dots with how open loop DLC systems are harmful to fresh water supplies (which is another comment of mine).
> so if power were plentiful and environmental you'd be onboard with it?
This is a pretty loaded way to ask this. Let me put this straight. I'm not against AI. I'm against how this thing is built. Namely:
- Use of copyrighted and copylefted materials to train models while hiding behind "fair use" to exploit people.
- Moreover, the belittling of people who create things with their blood, sweat, and tears, and the poor imitation of their art just for kicks or a quick buck.
- Playing fast and loose with the environment and energy consumption, without trying to make things efficient and sustainable, to cut initial costs and time to market.
- Gaslighting users and the general community about how these things are built and how much of it is theater, again to get people to use them, offload their thinking, atrophy their skills, and become dependent on them.
I work in HPC. I support AI workloads and projects, but the projects we tackle, like ecosystem monitoring, long-term climate science, and water-level warning and prediction systems, have real, tangible benefits for the future of humanity. Moreover, we're part of other projects trying to minimize the environmental impact of computation.
So it's pretty nuanced, and the AI iceberg goes well below OpenAI/Anthropic/Mistral trio.
> I support AI workloads and projects, but the projects we tackle have real benefits [...]
As opposed to the illusory/fake/immoral benefits of using LLMs for entertainment purposes (leaving aside all other applications for now)?
How do you feel about Hollywood, or even your local theater production? I bet the environmental unit economics don't look great on those either, yet I wouldn't be so quick to pass moral judgement.
Why not just focus on the environmental impact instead of moralizing about the utility? It seems hard to impossible to get consensus there, and the impact should be able to speak for itself if it's concerning.
This is a plainly dishonest comparison. A single H200 does not need to run continuously for you to generate a dozen pictures. And then you immediately pivot to comparing the paint usage against "the grand scheme of things": 700W is nothing in the grand scheme of things either.
Many people think that when a piece of hardware is idle, its power consumption becomes irrelevant, and that's true for home appliances and personal computers.
However, the picture is pretty different for datacenter hardware.
Looking now, an idle V100 (I don't have an idle H200 at hand) uses 40 watts, at minimum. That's more than the TDP of many modern consumer laptops and systems. A MacBook Air uses a 35W power supply to charge itself, and it charges pretty quickly even under relatively high load.
I want to clarify a few more things. A modern GPU server houses 4-8 high-end GPUs. That means 3 kW to 5 kW of maximum energy consumption per server. A single rack runs around 75-100 kW, and you house hundreds of these racks. So we're talking about megawatts of energy consumption. To put things in perspective, CERN's main power line on the Swiss side had a capacity of around 10 MW.
Let's assume an H200 uses 60W when idle. That means ~500W of wasted energy per server just for sitting around. If a complete rack is idle, that's 10 kW, i.e. the energy consumption of 3-5 houses wasted on doing nothing.
This computation only accounts for the GPUs. The rest of the server hardware adds around 40% on top of these numbers. Go figure. That's a lot of waste for cat pictures.
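The back-of-envelope arithmetic above can be written out explicitly. This is only a sketch of the commenter's illustrative numbers (60W idle per GPU, 8 GPUs per server, ~20 servers per rack, 40% non-GPU overhead), not measured figures:

```python
# Idle-power arithmetic using the figures assumed in the comment above.
# All numbers are illustrative assumptions, not measurements.

IDLE_W_PER_GPU = 60        # assumed idle draw of one H200, in watts
GPUS_PER_SERVER = 8        # high-end GPU server
SERVERS_PER_RACK = 20      # implied by ~10 kW of idle draw per rack
NON_GPU_OVERHEAD = 0.40    # CPUs, fans, PSU losses, etc.

idle_per_server = IDLE_W_PER_GPU * GPUS_PER_SERVER            # 480 W
idle_per_rack = idle_per_server * SERVERS_PER_RACK            # 9600 W
idle_with_overhead = idle_per_rack * (1 + NON_GPU_OVERHEAD)   # 13440 W

print(f"{idle_per_server} W per server, "
      f"{idle_per_rack / 1000:.1f} kW per rack, "
      f"{idle_with_overhead / 1000:.1f} kW with overhead")
```

At a typical US household draw of 1-3 kW, ~10 kW of idle GPUs per rack is indeed in the "3-5 houses" range the comment cites.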
A: GPUs use a lot of power!
B: Not all of them are running 100% continuously, eh?
A: They waste too much power when they're idle, too!
C: None of the H200s are sitting idle, you knob!
I mean, they are either wasting energy sitting idle or doing barely useful work. I don't know what to say anymore.
We'll cook ourselves, anyway. Why bother? Enjoy the sauna. ¯\_(ツ)_/¯
B is supposed to be me? I said the H200 doesn't need to be running continuously to generate a dozen images. If a million people generate a dozen images, it no longer makes sense to compare to the costs of a single artist for 6 hours. I really don't understand why this is hard and that makes this feel very uncharitable.
I'm not saying that this isn't "true idling", I'm saying that idling H200s simply don't exist, i.e., I disagree with B. Do you, A, even disagree?
> they are either wasting energy sitting idle or doing barely useful work
Now here's a true (inverse) Scotsman, or more accurately, a moved goalpost: work on things you don't deem valuable is basically the same thing as idling?
I'm very concerned about that too, but I don't think we'll avoid the sauna with fatalism or logically unsound appeals to morality about resource consumption.
Cheaper/faster tech increases overall consumption though. Without the friction of commissioning a graphics artist to design something, a user can generate thousands of images (and iterate on those images multiple times to achieve what they want), resulting in way more images overall.
I'm not really well versed on the environmental cost, more just (neutrally) pointing out that comparing a single 10s image to a 5-6 hour commission ignores the fact that the majority of these images probably would never have existed in the first place without AI.
Also, ignoring training when talking about the environmental costs is bad faith. Without training this image would not exist, and if nobody were generating images like these, the training would not happen. So we should really count the 10 seconds of inference plus the weeks or months of high-intensity compute it took to train the model.
I work with direct liquid cooled (DLC) systems. If the datacenter runs open-loop DLC (most AI datacenters in the US in fact do), a lot of water is wasted, 24/7/365.
A mid-tier TOP500 system (think #250-#325) consumes about 0.75 MW. AI data centers consume orders of magnitude more. To cool that behemoth you need to pump tons of water per minute in the inner loop.
The outer loop might be slower, but it's a lot of heated water at the end of the day.
To prevent water wastage, you can go closed loop (for both inner and outer loops), but you can't escape the heat you generate and pump to the atmosphere.
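The "tons of water per minute" claim can be sanity-checked with the specific heat of water: to carry away P watts with coolant that warms by delta-T kelvin, you need a mass flow of P / (c * delta_T) kg/s. The 10 K temperature rise below is an assumed, illustrative figure, not a measured one:

```python
# Rough sanity check of the cooling water flow for a ~0.75 MW system.
# The temperature rise across the loop is an assumed round number.

C_WATER = 4186        # specific heat of water, J/(kg*K)
P_WATTS = 750_000     # ~0.75 MW, the mid-tier system from the comment above
DELTA_T = 10          # assumed coolant temperature rise across the loop, K

kg_per_second = P_WATTS / (C_WATER * DELTA_T)
kg_per_minute = kg_per_second * 60

print(f"{kg_per_second:.1f} kg/s, i.e. about "
      f"{kg_per_minute / 1000:.1f} metric tons of water per minute")
```

That works out to roughly a metric ton per minute for 0.75 MW alone, so datacenters drawing orders of magnitude more power do move the "tons per minute" the comment describes.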
So, the environmental cost is overblown, as in Chernobyl or fallout from a nuclear bomb is overblown.
The problem is you don't just use that water and give it back.
The water gets contaminated and heated, making it unsuitable for organisms to live in, or to be processed and used again.
In short, when you pump that water back into the river, you're both poisoning and cooking the river, destroying the ecosystem.
To reiterate, I work in a closed loop DLC datacenter.
Pipes rust; you can't stop that. That rust seeps into the water. That's inevitable. Moreover, if moss or other growth starts to take over your pipes, you may need to inject chemicals into your outer loop to clean them.
Inner loops already use biocides and other chemicals to keep them clean.
Look at how nuclear power plants fight organism contamination in their outer cooling loops, where they circulate lake/river water.
The environmental cost of Chernobyl is indeed often overblown. Nature in the exclusion zone is arguably much better off now than before!
The cost to humans living in affected areas was massive and high-profile, but it’s very questionable whether it was higher than that of an equivalent amount of coal-burning plants. Fortunately that's not a tradeoff we have to debate anymore, since there are now renewables with far fewer downsides and externalities.
Nuclear bombs (at least those being actually used) by design kill people, so I’m not sure what the externalities even are if the main utility is already to intentionally cause harm.
Depends on if you believe it will ever become cheaper. Either hardware, inspiring more efficient smaller models, or energy itself. The techno optimist believes that that is the inevitable and investable future. But on what horizon and will it get “zip drived” before then?
One is trying to save the future of the planet and humanity with science; the other is mocking a man who devoted his whole life to his art, even if it meant spending years perfecting a three-second sequence, for kicks and monies.
If you see no difference between them, I can't continue to discuss this with you, sorry.
To you. Fortunately nobody elected you chief resource allocator of the planet.
And I say that as somebody that also finds Ghibli knock-off avatars used by AI bros in incredibly bad taste (or, arguably an even worse crime against taste, a dated 2025 vibe).
Passing moral judgement about other people's value preferences seems pretty preposterous to me as well, so I was being a bit glib, but to be clear:
I don't want to live in a world in which people get to decide what others can and can't do with their share of resources (after properly accounting for all externalities, including pollution, the potential future value of non-renewable present resources etc. – this is where today's reality often and massively misses that ideal) based on their subjective moral criteria.
Not even just for ethical/moral reasons, but also for practical ones: It’s infinitely harder to get everybody to additionally agree on value of use than on fairness of allocation alone.
After thoroughly mixing these two quite distinct concerns, you'll also have a very hard time convincing me that your concerns for river pollution etc. (which I take very seriously as potentially unaccounted negative externalities, if they exist) are completely free from motivated reasoning about "immoral usage".
Because I'm not an artist and can't afford to pay one for whatever business I have? This idea that only experts are allowed to do things is just crazy to me. A band poster doesn't have to be a labor-of-love artisanal thing. Were you mad when people made band posters with MS Word instead of hiring a fucking typesetter? I just don't get it.
I dunno, I have some band posters that are pretty cool pieces of art that obviously had a lot of thought put into them (pre-AI era stuff). I don't think I'd hang up an AI generated band poster, even if it was cool; I'd feel weird and tacky about it.
I was hosting a Karaoke event in my town and really went out of my way to ensure my promotional poster looked nothing like AI. I really really really did not want my townfolks thinking I would use AI to design a poster.
My design rules were: no gradients; no purple; prefer muted colors; plenty of sharp corners and overlapping shapes; use the Boba Milky font face.
- The AI has a hard time making geometric shapes regular. You can see the stars have different-sized arms at different intervals in the AI version. It would take a human artist longer to make it look this bad.
- The 5-point stars are still a little rounded in the AI version.
- There is way too much text in the AI version (a human designer might make that mistake, but it is very typical of AI).
- The orange 10-point star on the right with the text “you are the star” still has a gradient (AI really can’t help itself).
- The borders around the title text “Karaoke night!” bleed into the borders of the orange (gradient) 10-point star on the right, but only halfway. This is very sloppy; a human designer would fix it.
- The font face is not Milky Boba but some sort of an AI hybrid of Milky Boba, Boba Milky and comic sans.
- And finally, the QR code has obvious AI artifacts in it.
Point I’m making, it is very hard to prompt your way out of making a poster look like AI, especially when the design is intentional in making it not look like AI.
I hear what you’re saying and at the same time I don’t agree with some of your criticisms. The gradient, yep, it slipped one in. The imperfect stars? I have seen artists do this forever, presumably intentional flair. The few real “glitches” would be trivial to fix in Photoshop.
But they are very different certainly. ChatGPT generated a poster with a very sleek, “produced” style that apes corporate posters whereas you went with a much more personal touch. You are correct that yours does not look like typical AI.
My point is certainly not that the AI poster is better, only that it’s capable of producing surprising results. With minimal guidance it can also generate different styles: https://imgur.com/a/zXfOZaf
I think the trend to intentionally make stuff look “non-AI” is doomed to fail as AI gets better and better. A year or two ago the poster would have been full of nonsense letters.
> And finally, the QR code has obvious AI artifacts in them.
I wonder if this is intentional, to prevent AI from regurgitating someone’s real QR codes.
ETA: Actually, I wonder how much of the “flair” on human-drawn stars is to avoid looking like they are drag-and-drop from a program like Word. Ironic if we’ve circled back around to stars that look perfect to avoid looking like a different computer generated star.
My point is not that the AI version looks bad (although it does); it is that I hate AI, and so do many people around me. I hate AI so much, and I know so many people around me hate it just as much, that I am consciously altering my designs to be as far from AI as I can. This is the creative-design equivalent of moving from Seattle to Florida after a divorce.
About the stars: I know designers paint imperfect stars. I even did that in my design; in particular, I stretched mine and rotated it slightly. A more ambitious designer might go further and drag a couple of vertices around to exaggerate them relative to the others. But usually there is some balance in their decisions. AI, however, just puts the vertices wherever, and it is ugly and unbalanced. A regular geometric shape with a couple of oddities is a normal design choice, but a geometric shape which is all oddities is a lot of work for an ugly design. Humans tend not to do that.
> I am consciously altering my designs to be as far from AI as I can
I don’t think this is a productive choice, but it’s certainly yours to make.
> but a geometric shape which is all oddities is a lot of work for an ugly design. Humans tend not to do that
I find this such an odd thing to say. It’s way easier to draw a wonky star than a symmetrical one. Unless “drawing” here means using a mouse to drag and drop a star that a program draws for you.
Vintage illustrations are full of nonsymmetrical shapes. The classic Batman “POW” and similar were hand drawn and rarely close to symmetrical.
I draw mine in Inkscape (because I like open source more than my sanity), and Inkscape has special tools for drawing regular geometric shapes. You don't need to use those tools; you can use the freehand pen, or the Bezier curve tool, or even hand-code the <path d="M43,32l5.34-2.43l3.54-0.53" />, etc. But using these other tools is suboptimal compared to the regular geometric tool.
Apart from me, my partner also does graphic design, and unlike me she values her sanity more than open source, so she uses Illustrator. In Adobe's walled-garden world of proprietary software it is still the same story: you generally use the dedicated tools to get regular shapes (or patterns) and then alter them after they are drawn. You don't draw them from scratch. If you are familiar with modular analog synthesizers, this is like starting with a square wave and then subtracting from it to shape the signal into a more natural-sounding form.
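The "regular shape first, oddities second" workflow described above is easy to see in code: a regular N-point star is just vertices alternating between an outer and an inner radius at evenly spaced angles, and a designer's "flair" is a small perturbation applied afterwards. This is an illustrative sketch (the function name and parameters are my own, not any Inkscape API):

```python
# Sketch of what a vector editor's "star tool" computes: the vertices of a
# regular N-point star alternate between an outer and an inner radius at
# evenly spaced angles. Function and parameter names are illustrative.
import math

def star_path(points=5, r_outer=100, r_inner=40, cx=0, cy=0):
    """Return an SVG path string ("M x,y L x,y ... Z") for a regular star."""
    coords = []
    for i in range(points * 2):
        r = r_outer if i % 2 == 0 else r_inner
        angle = math.pi * i / points - math.pi / 2  # start at the top
        coords.append((cx + r * math.cos(angle), cy + r * math.sin(angle)))
    return "M" + " L".join(f"{x:.1f},{y:.1f}" for x, y in coords) + " Z"

path = star_path()
print(path)
```

To get the "regular shape with a couple of oddities" the comment describes, you would nudge one or two of these vertices after generation, rather than randomizing all of them the way the AI output appears to.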
> I think the trend to intentionally make stuff look “non-AI” is doomed to fail as AI gets better and better.
What’s the mechanism that makes an AI ‘better’ at looking non-AI? Training on non-ai trend images? It’s not following prompts more closely. Even if that image had no gradients or pointier shapes, it still doesn’t look like it was made by an individual.
To your counterpoints, notice that you are apologizing for the AI by finding humans that may have done something, sometime, that the AI just did. Of course! It’s trained on their art. To be non-AI, art needs to counter all averages and trends that the models are trained on.
> What’s the mechanism that makes an AI ‘better’ at looking non-AI?
I don’t know. Better training data? More training data? The difference over the past year or two is stark so something is improving it.
> Even if that image had no gradients or pointier shapes, it still doesn’t look like it was made by an individual.
The fact that humans are actively trying to make art that does not look like AI makes it clear that AI is not so obvious as many would like to pretend. If it were obvious, no one would need to try to avoid their art looking like AI.
> To your counterpoints, notice that you are apologizing for the AI by finding humans that may have done something, sometime, that the AI just did. Of course! It’s trained on their art.
Obviously.
> To be non-AI, art needs to counter all averages and trends that the models are trained on.
So in order to not look like AI, art just has to be so unique that it’s unlike any training data. That’s a high bar. Tough time to be an artist.
I don't know why you're downvoted, I think that's a reasonable use of AI and it looks pretty good.
Edit: I think I misread what you were saying, but I do think it's a nice poster! I get that design is going to have to avoid doing things that AI does, which is kind of unfortunate, because AI is likely trained on a lot of things that are generally good ideas.
> can't afford to pay one for whatever business I have
At small scales what "art" does your business need? If you can't afford to hire an artist (which is completely fine, I couldn't for my business!) do you really need the art or are you trying to make your "brand" look more polished than it actually is? Leverage your small scale while you can because there isn't as much of an expectation for polish.
And no, a band poster doesn't have to be a labor of love. But it also doesn't have to be some big showy art either. If I saw a small band with a clearly AI generated poster it would make me question the sources for their music as well.
I think you're misunderstanding - most people's beef with AI art isn't that it "isn't made by experts", it's that
1) it's made from copyrighted works, and the original authors receive no credit;
2) it is (typically) low-effort;
3) there are numerous negative environmental effects of the AI industry in general;
4) there are numerous negative social effects of AI in general, and more specifically AI generated imagery is used a lot for spreading misinformation;
5) there are numerous negative economic effects of AI, and specifically with art, it means real human artists are being replaced by AI slop, which is of significantly lower quality than the equivalent human output. Also, instead of supporting multiple different artists, you're siphoning your money to a few billion dollar companies (this is terrible for the economy)
As a side note, if you have a business which truly cannot afford to pay any artists, there are a lot of cheaper, (sometimes free!) pre-paid art bundles that are much less morally dubious than AI. Plus, then you're not siphoning all of your cash to tech oligarchs.
I agree, and who's to say your life experience isn't as valid as that of someone with fewer years but more time at the traditional tools? I'd think either extreme could produce real art if the tool moat were reduced with AI.
I actually love MS word posters. It's a million times more authentic and enjoyable than a slop generation. If a band put up an AI poster I'd assume they lack any kind of taste which is the whole reason I'd want to listen to a band anyway.
I know this is controversial in tech spaces. But most people, particularly those in art spaces like music actually appreciate creativity, taste, effort, and personal connection. Not just ruthless efficiency creating a poster for the lowest cost and fastest time possible.
How about going without? I can’t afford an artist, either, so I don’t have art. Don’t foist slop on people because you are trying to be something that you aren’t.
I'm not saying it's worthless for yourself, it's worthless to me as a viewer. AI content is great for your own usage, but there is no point posting and distributing AI generation.
I could have generated my own content, so just send the prompt rather than the output to save everyone time.
And when the distilled knowledge/product is the result of multiple prompts, revisions, and reiterations? Shall we send all 30+ of those as well so as to reproduce each step along the way?
This doesn't make sense, if I want to see a lego-cat slopimage I can just prompt a model myself (and have it be of my own cat). There's no reason for you to be involved in any part of that process, because the point of this stuff is that you are not doing anything.
The claim is that people don't / shouldn't want to see something if humans can't be bothered to make it. I provided a counter example. So the claim is nonsense.
Exactly how I feel. There is already more art, movies, music, books, video games and more made by human beings than I can experience in my lifetime. Why should I waste any time on content generated by the word guessing machine?
The issue is that the signalling makes sense when human generated work is better than AI generated. Soon AI generated work will be better across the board with the rare exception of stuff the top X% of humans put a lot of bespoke highly personalized effort into. Preferring human work will be luxury status-signalling just like it is for clothing, food, etc.
I'm probably in a weird subgroup that isn't representative of the general public, but I've found myself preferring "rough" art/logos/images/etc, basically because it signals a human put time into it. Or maybe not preferring, but at least noticing it more than the generally highly refined/polished AI artwork that I've been seeing.
There’s no reason to think people broadly want “better” writing, images, whatever. Look at the indie game scene, it’s been booming for years despite simpler graphics, lower fidelity assets, etc. Same for retro music, slam poetry, local coffee shops, ugly farmers market produce, etc.
There is a mass, bland appeal to “better” things but it’s not ubiquitously desired and there will always be people looking outside of that purely because “better” is entirely subjective and means nothing at all.
I think "better" is doing a lot of heavy lifting in this argument. Better how?
Is an AI generated photo of your app/site going to be more accurate than a screenshot? Or is an AI generated image of your product going to convey the quality of it more than a photo would?
I think Sora also showed that the novelty of generating just "content" is pretty fleeting.
I would be interested to see if any of the next round of ChatGPT advertisements use AI generated images. Because if not, they don’t even believe in their own product.
The issue being, it's not an expression of anything. Merely a random sensation, maybe some readable intent, but generic in execution, which isn't what even corporate art should be about. Are we going to give up on art altogether?
Edit: One possible outcome may be living in a world like "They Live", with the glasses on. Since no expression has any meaning anymore, the message is just there as a signal of some kind (a generic "BUY" plus the associated brand name in small print, etc.).
Can't the expression come from the person prompting the AI, sometimes taking hours inpainting or tweaking the prompt to try to get the exact image/expression they had in mind? A good use I've found is turning scenes from a dream I had into an image. If that's not an expression of something, then I'm not sure anything is.
Notably, this process of struggle is meant to go away, to make room for instant satisfaction. This is really about some kind of expression consumerism. (And what will be lost along the way is meaning.)
I always find this argument to ring hollow. Maybe it's because I've been through it with too many technologies already. Digital photography took out the art of film photography. CGI took out the wonder of practical effects. Digital art takes out the important brush strokes of someone actually painting. The real answer always is the mediums can coexist and each will be good for expression in their own way.
I'm not sure you immediately lose meaning if someone can easily make a highly personalized version of something. The percentage of completely meaningless video skyrocketed after YouTube and TikTok came about. The amount of good stuff to watch has gone up as well, though.
Only novel art is interesting. AI can't really do novel. It's a prediction algorithm; it imitates. You can add noise, but that mostly just makes it worse. It can be used to facilitate original stuff though.
But so many people want to make art, and it's so cheap to distribute it, that art is already commoditized. If people prefer human-created art, satisfying that preference is practically free.
AI can be novel, there is nothing in the transformer architecture which prohibits novelty, it's just that structurally it much prefers pattern-matching.
But I think the idea of novelty is a red herring. Any random number generator can arbitrarily create a "novel" output that no human has ever seen before. The issue is whether something is both novel and useful, which is hard for even humans to do consistently.
Anthropic recently changed their take-home test specifically to be more “out-of-distribution” and therefore more resistant to AI so they can assess humans.
I’m so tired of “there’s nothing preventing” and “humans do that too”. Modern AI is just not there. It’s not like humans, and it has difficulty adapting to novelty.
Whether transformers can overcome that remains to be seen, but it is not a guarantee. We’ve been dealing with these same issues for decades and AI still struggles with them.
> Preferring human work will be luxury status-signalling just like it is for clothing, food, etc.
What? Those items are luxuries when made by humans because they are physical goods where every single item comes with a production and distribution cost.
I just recently used image generation to design my balcony.
It was a great way to see design ideas imagined in place and decide what to do.
There are many cases people would hire an artist to illustrate an idea or early prototype. AI generated images make that something you can do by yourself or 10x faster than a few years ago.
Notwithstanding a few code violations, it generated some good ideas we were then able to tweak. The main thing was we had no idea what we wanted to do, but seeing a lot of possibilities overlaid on the existing non-garden got us going. We were then able to extend the theme to other parts of the yard.
100%. A picture is worth a thousand words only when it conveys something. I love to see the pictures from my family even when they are taken with no care to quality or composition but I would look at someone else’s (as in gallery/exhibitions) only when they are stunning and captured beautifully. The medium is only a channel to communicate.
Also, this can’t be real. How many publications did they train this stuff on, and why is there no acknowledgment, even if just to say "we partnered with xyz manga house to make our model smarter at manga"? Like, what’s wrong with this company?
I'm working on an edutech game. Before, I would've had much less of a product because I don't have the budget to hire an artist, and it would've been much less interactive. Because of this I'm able to build a much more engaging experience, so that's one thing. For what it's worth.
We need to flip the script. AI is doing its own marketing: adding "illegal usage will lead to X" is a gateway to sparking curiosity. There is a saying that censoring games for young adults ensures they will buy them like crazy by circumventing the restrictions, because danger is cool.
There is nothing that cannot cause harm: knives, cars, alcohol, drugs. A society needs to balance risks and benefits. Words can be used to do harm, so can email, anything - it depends on the intention and the type of use.
I see your point, but reconsider: we will, and need to, see. Time will tell, and this is simply economics: is it useful or not?
I became totally indifferent after auditing my own spending habits for unnecessary stuff, right after watching world championships for niche sports. For some, this is a calling; for others, waste. It's a numbers game, then.
The technically (in both senses) astonishing and amazing output is not far off from some of the qualities of real advertising: staged, attention-grabbing, artificially created, superficially demanded, commercially attractive. These align, and lots of similarities in the functions and outcomes of the two spheres come to mind.
>and even if we cannot tell if an image is AI-generated, we can know if companies are using AI to generate images in general, so the appealing is decreasing
Is that true? Don't think I'd get tired of images that are as good as human made ones just because I know/suspect there may have been AI involved
I think there's real value to be had in using this for diagrams.
Visual explanations are useful, but most people don't have the talent and/or the time to produce them.
This new model (and Nano Banana Pro before it) has tipped across the quality boundary where it actually can produce a visual explanation that moves beyond space-filling slop and helps people understand a concept.
I've never used an AI-generated image in a presentation or document before, but I'm teetering on the edge of considering it now provided it genuinely elevates the material and helps explain a concept that otherwise wouldn't be clear.
Are there any models that are specifically trained to produce diagrams as SVG? I'd much prefer that to diffusion-based raster image generation models for a few reasons:
- The usual advantages of vector graphics: resolution-independence, zoom without jagged edges, etc.
- As a consequence of the above, vector graphics (particularly SVG) can more easily be converted to useful tactile graphics for blind people.
This is the key point. In my view it's just like anything else, if AI can help humans create better work, it's a good thing.
I think what we'll find is that visual design is no longer as much of a moat for expressing concepts, branding, etc. In a way, AI-generated design opens the door for more competition on merits, not just those who can afford the top tier design firm.
I tend to share your view. But is there really a line like you describe? Maybe AI just needs to get a few iterations better and we'll all love what it generates. And how is it really any different from any Photoshop output of the past?
>In general, I think people are starting to realize that things generated without effort are not worth spending time with
Agreed mostly, BUT
I'm building tools for myself. The end goal isn't the intermediate tool, they're enabling other things. I have a suspicion that I could sell the tools, I don't particularly want to. There's a gap between "does everything I want it to" and "polished enough to justify sale", and that gap doesn't excite me.
They're definitely not generated without effort... but they are generated with 1% of the human effort they would require.
I feel very much empowered by AI to do the things I've always wanted to do. (when I mention this there's always someone who comes out effectively calling me delusional for being satisfied with something built with LLMs)
I used to have an assistant make little index-card sized agendas for get-togethers when folks were in town or I was organising a holiday or offsite. They used to be physical; now it's a cute thing I can text around so everyone knows when they should be up by (and by when, if they've slept in, they can go back to bed). AI has been good at making these. They don't need to be works of art, just cute and silly and maybe embedded with an inside joke.
I'm not seeing how it takes more than 5 minutes to type up an itinerary. If you want to make it cute and silly, just change up the font and color and add some clip art.
If this is the best use case that exists for AI image generation, I'm only further convinced the tech is at best largely useless.
> not seeing how it takes more than 5 minutes to type up an itinerary
Because I’ll then spend hours playing with the typography (because it’s fun) and making it look like whatever design style I’ve most recently read about (again, because it’s fun) and then fighting Word or Latex because I don’t actually know what I’m doing (less fun). Outsourcing it is the right move, particularly if someone else is handling requests for schedules to be adjusted. An AI handles that outsourcing quicker for low-value (but frequent) tasks.
> If this is the best use case that exists for AI image generation
I’ve also had good luck sketching a map or diagram and then having the AI turn it into something that looks clean.
Look, 99% of my use cases are e.g. making my cat gnaw on the Tetons or making a concert of lobsters watching Lady Gaga singing “I do it for the claws” or whatever so I can send two friends something stupid at 1AM. But there does appear to be a veneer of productivity there, and worst case it makes the world look a bit nicer.
I’m not giving my friends AI maps and diagrams. And yes, they don’t look great. But they work. If I want to communicate something spatial, I can spend an hour in R or five minutes in Claude. The point is to communicate that information, and for a quick task, AI means the other person gets a map versus a block of text they have to reason through.
I don't care how many times you write "cute," having my vacation time programmed with that level of granularity and imposed obligation sounds like the definition of "dystopian."
If I got one of your cute schedule cards while visiting you, I'd tear it up, check into a cheap motel, and spend the rest of my vacation actually enjoying myself.
Edit: I'm not an outlier here. There have even been sitcom episodes about overbearing hosts over-programming their guests' visits, going back at least to the Brady Bunch.
> If I got one of your cute schedule cards while visiting you, I'd tear it up, check into a cheap motel, and spend the rest of my vacation actually enjoying myself
Okay. I'd be confused why you didn't speak up while we were planning everything as a group, but those people absolutely exist. (Unless it's someone's, read: a best friend's or my partner's, birthday. Then I'm a dictator and nobody gets a choice over or preview of anything.)
I like to have a group activity planned on most days. If we're going to drive out to get an afternoon hike in before a dinner reservation (and if I have 6+ people in town, I need a dinner reservation, because no, I'm not cooking every single evening), or if I've paid for a snowmobile tour or a friend is bringing out their telescope for stargazing, there are hard no-later-than departure times to either not miss the activity or be respectful of others' time.
My family used to resolve that by constantly reminding everyone the day before and morning of, followed by constantly shouting at each other in the hours and minutes preceding and–inevitably–through that deadline. I prefer the way I've found. If someone wants to fuck off from an activity, myself included, that's also perfectly fine.
(I also grew up in a family that overplanned vacations. And I've since recovered from the rebound instinct, which involves not planning anything and leaving everything to serendipity. It works gorgeously, sometimes. But a lot of other times I wonder why I didn't bother googling the cool festival one town over beforehand, or regretted sleeping in through a parade.)
> There have even been sitcom episodes about overbearing hosts over-programming their guests' visits
Sure. And different groups have different strokes. When it comes to my friends and I, generally speaking, a scheduled activity every other day with dinners planned in advance (they all get hangry, every single fucking one of them) works best.
It's good that my friends don't make a coffee date feel like a board meeting (with an agenda shared by post 14 working days ahead of the meeting, form for proxy voting attached).
>Like, in terms of art, it's discarded (art is about humans)
I dunno how long this is going to hold up. In 50 years, when OpenAI has long become a memory, post-bubble burst, and a half-century of bitrot has claimed much of what was generated in this era, how valuable do you think an AI image file from 2023 - with provenance - might be, as an emblem and artifact of our current cultural moment, of those first few years when a human could tell a computer, "Hey, make this," and it did? And many of the early tools are gone; you can't use them anymore.
Consider: there will never be another DallE-2 image generation. Ever.
While I agree with you, the Hacker News audience is not in the middle of the bell curve.
I get that this sounds elitist, but a tremendous percentage of the population is happily and eagerly engaging with fake religious images, funny AI videos, horrible AI memes, etc. Mentioning that this video of a puppy is completely AI-generated results in a vicious defense and mansplaining of why the video is totally real (I love it when the video has, e.g., Sora watermarks... This does not stop the defenders).
I agree with you that human connection and artist intent is what I'm looking for in art, music, video games, etc... But gawd, lowest common denominator is and always has been SO much lower than we want to admit to ourselves.
Very few people want thoughtful analysis that contradicts their world view, very few people care about privacy or rights or future or using the right tool, very few people are interested in moral frameworks or ethical philosophy, and very few people care about real and verifiable human connection in their "content" :-/
HN is absolutely not more critical of AI output than the norm.
It's been true for various technologies that HN (and tech audiences in general) have a more nuanced view, but AI flips the script on that entirely. It's the tech world who are amazed by this, producing and being delighted by endless blogposts and 7-second concept trailers.
I think HN probably uses GenAI more than average population.
But I think HN consumes less GenAI content than average population.
Look at Facebook, Instagram, Youtube, TikTok, etc. All I see is my non-techie friends being amazed and mesmerized by cute animals, creepy animals, political events, jokes, comedy, outrage, events, speeches - that never ever happened. As if we don't have actual real puppies that are cute, my acquaintances and family are oohing and awwing at fake howling huskies, fake animals being jump-scared by fake surprises.
HN may be amazed by the potential of AI output to improve the world more than the average person is. But hustlers are laughing their way to the bank as they use AI to make a ridiculous, and I do mean ridiculous, amount of "content" for cheap, which is, absolutely is, being consumed at a prodigious rate with no sign of stopping. This is not 7-second trailers and concepts for some future year - this is mega-years of actual content being liked, shared, engaged with, and consumed, right now. This is what OP is hoping the tide will turn against, and I see no sign of rejection in my non-techie/non-geeky circles :(
You're on a site where the commenters read AI-generated articles about how they can generate new images to include in their generated websites that they themselves generate more articles about.
Sure, the weird cat-people adverts aren't aimed at HN's commentariat, but every 'democratise art and build that game you've dreamt of' pitch is. Every breathless paean to AI assistants/companions/partners is targeted at the users here.
Usage is a form of consumption; thinking of yourself as a creator while you consume doesn't mean you consume less.
Non-tech users are being fed fake images when they browse idly. Tech users are restructuring their entire lives around these tools.
I recently shoulder-surfed a family member scrolling away on their social media feed, and every single image was obvious AI slop. But it didn't matter. She loved every single one, watched videos all the way through, liked and commented on them... just total zombie-consumption mode and it was all 100% AI generated. I've tried in the past pointing out that it's all AI generated and nothing is real, and they simply don't care. People are just pac-man gobbling up "content". It's pretty sad/scary.
I'd be a bit more humble rather than terrified, because I enjoy some AI slop too, especially funny animals that remind me of my old pets' antics. There are levels of slop. But tasteless stuff with crap graphics plastered all over, loud edits or badly calibrated tts voices were already all over reels/tiktok long before AI, and people still liked that.
The unsettling thing on social media is the mind hijacking with the recommendation algo and scrolling motion that resembles a slot machine, more than the content itself.
Seems good enough to generate 2D sprites. If that means a wave of pixel-art games I count it as a net win.
I don't think gamers hate AI; it's just a vocal minority, imo. What most people dislike is sloppy work, as they should, but that can happen with or without AI. The industry has been using AI for textures, voices, and more for over a decade.
It’s really not. That's actually a pet peeve of mine as someone who used to spend a lot of time messing with pixel art in Aseprite.
Nobody takes the time to understand that the style of pixel art is not the same thing as actual pixel art. So you end up with these high-definition, high-resolution images that people try to pass off as pixel art, but if you zoom in even a tiny bit, you see all this terrible fringing and fraying.
That happens because the palette is way outside the bounds of what pixel art should use; proper pixel art is generally limited to maybe 8 to 32 colors.
There are plenty of ways to post-process generative images to make them look more like real pixel art (square grid alignment, palette reduction, etc.), but it does require a bit more manual finesse [1], and unfortunately most people just can’t be bothered.
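To make the two steps above concrete, here is a rough, self-contained sketch of grid alignment and palette reduction on a toy image of RGB tuples. The 2x2 cell size and four-color palette are arbitrary assumptions for illustration; real workflows operate on actual image buffers (e.g. with a library like Pillow), not nested lists.

```python
# Sketch of pixel-art cleanup: snap to a coarse grid, then reduce the palette.
# Toy image is a 2D list of (R, G, B) tuples; values below are made up.

def snap_to_grid(img, cell):
    """Downsample by averaging each cell x cell block into one 'pixel'."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h, cell):
        row = []
        for x in range(0, w, cell):
            block = [img[j][i]
                     for j in range(y, min(y + cell, h))
                     for i in range(x, min(x + cell, w))]
            n = len(block)
            row.append(tuple(sum(c[k] for c in block) // n for k in range(3)))
        out.append(row)
    return out

def quantize(img, palette):
    """Map every pixel to the nearest color (squared distance) in a fixed palette."""
    def nearest(p):
        return min(palette, key=lambda c: sum((a - b) ** 2 for a, b in zip(p, c)))
    return [[nearest(p) for p in row] for row in img]

PALETTE = [(0, 0, 0), (255, 255, 255), (200, 50, 50), (50, 120, 200)]

# 4x4 toy image: a noisy red-ish half next to a noisy blue-ish half.
toy = [
    [(190, 60, 55), (210, 40, 45), (60, 110, 190), (45, 130, 210)],
    [(205, 55, 50), (195, 45, 55), (55, 125, 205), (40, 115, 195)],
    [(188, 52, 48), (212, 48, 52), (58, 118, 198), (52, 122, 202)],
    [(200, 50, 50), (200, 50, 50), (50, 120, 200), (50, 120, 200)],
]

cleaned = quantize(snap_to_grid(toy, 2), PALETTE)
# Each noisy 2x2 block collapses to a single palette color:
# [[(200, 50, 50), (50, 120, 200)], [(200, 50, 50), (50, 120, 200)]]
```

This is the "manual finesse" part: you still have to pick a sensible grid size and a palette that suits the style, which is exactly what most people skip.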
Don't you think it's a huge stretch to compare those to modern generative AI in this context? Those don't raise any of the questions that make current usage questionable.
Are you kidding? I think I see more vitriol for AI in gaming communities than anywhere else, to the point where Steam now requires you to disclose its usage.
The connection with the artist, directly, or across space and time, is a critical part of any artwork. It is one human attempting to communicate some emotional experience to another human.
When I watch a Lynch film I feel some connection to the man David Lynch. When I see an AI artwork, there is nothing to connect with; no emotional experience is being communicated. It is just empty. Its highest aspiration is elevator music: something vaguely stimulating in the background.
I don't agree. If a poem is moving, it's moving. It doesn't matter who wrote it.
I understand these are fundamental questions about aesthetics that people differ over. But that's how it works for me. However, ultimately, I think people will realize that I'm right around the time that AI does start generating good art.
Provenance is part of the work. If a roomful of monkeys banged out something that looked like anything, I'd absolutely hang it on my wall. I would not say the same for 99% of AI generated art.
The Human Renaissance is something I've been thinking of too and I hope it comes to pass. Of course, I feel like societally, things are gonna get worse for a lot of folks. You already see it in entire towns losing water or their water becoming polluted.
You'd think the kickbacks the leaders of these towns are getting for allowing data centers to be built would go toward improving infrastructure, but hah, that's unrealistic.
The article tries to play sleight of hand with the specific instance it cites, but it seems the loss of water is alleged to be caused by sediment from construction rather than by water use.
It's not great that it happened and it is something local government should take action on, but it is also something that could have been caused by any form of industrial construction. I suspect there are already laws in place that cover this. If they are not being enforced that's another issue entirely.
Data center construction exposing weaknesses in local infrastructure is a double-edged sword; you wanna know if things need upgrading but you don't wanna be negatively affected by it.
Maybe there should be some clause in these contracts that mandate tech companies foot the bill for local infrastructure improvements.
I completely disagree; this replaces art as a job. Why does human art need monetary feedback to be shared? If people require a paycheck to make art, then it was never anything different from what AI-generated images are.
As for advertising being depressing: it's a little late to get up on the anti-ads high horse after two decades of ad-based technology dominating everything. Go outside; see all those bright, shiny, glittery lights? Those aren't images society created to embolden the spirit and dazzle the senses; those are ads.
North Korea looks weird and depressing because they don't have ads. Welcome to the West.