Like others mentioned, letting the agent touch the code makes learning difficult and induces anxiety. By introducing doubt it actually increases the burden of revision, negating the fast apparent progress. The way I found around this is to use LLMs for designing and auditing, not programming per se. Even more so because it’s terrible at keeping the coding style. Call it skill issue, but I’m happier treating it as a lousy assistant rather than as a dependable peer.
Adobe won’t be hurt by this in the professional market because they have inter-app compatibility and a somewhat consistent language, plus you need their software to work with legacy files. Adobe is cheap; you can get the full suite for a very reasonable price. Competing software is always niche, and you need to learn each one individually since they don’t share UX principles or ontologies. They might be free now, but imagine managing individual subscriptions for each one later on; a nightmare for individuals and companies alike. Just needing to sign up for multiple apps individually is a headache: all the emails, updates, etc. Unless someone makes a comparable and comprehensive suite, they won’t actually be competing with Adobe.
Typst is unfairly good for doing systematic designs. I wrote a template system for a complex product catalog in a couple of days. Then I mapped the client’s product list (exported from their ERP) to the schema and instantly generated a hundred-page catalog with flawless layout. Traditional catalog design in InDesign is extremely prone to errors and inconsistencies, not to mention time-consuming if done by hand and very brittle if done with the native automation, which does not handle tabular data well and requires arcane non-UTF8 encodings. With Typst, if it’s done right and the input data is properly treated once, you can wholly skip the review phase, which represents a massive cost reduction. IMO this kind of parametric design from a DSL, whether for print or digital, is massively underrated. Surely feels like cheating. Organizing the media files is a bit more time-consuming, though, even with automation. But once you organize and standardize the media repo you’re set, as you just need to do the plumbing once.
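For the curious, the "treat the input data once" step can be sketched in a few lines. Everything below is invented for illustration (the ERP field names, the decimal-comma prices, the sample rows); the idea is just to normalize a messy export into clean JSON that a Typst template could then load with `json()` and loop over:

```python
import json

# Hypothetical sketch (field names invented): normalize a raw ERP export
# into clean records that a Typst template could consume via json("products.json").

def normalize(rows):
    products = []
    for row in rows:
        products.append({
            "sku": row["SKU"].strip(),
            "name": row["Description"].strip().title(),
            # This imaginary ERP exports prices with decimal commas.
            "price": float(row["Price"].replace(",", ".")),
            "category": row["Category"].strip() or "Uncategorized",
        })
    # Stable ordering so the generated catalog is deterministic.
    products.sort(key=lambda p: (p["category"], p["name"]))
    return products

# Fake rows standing in for the real export:
raw = [
    {"SKU": " B-2 ", "Description": "steel bracket ", "Price": "4,50", "Category": "Hardware"},
    {"SKU": " A-1 ", "Description": "wood screw",     "Price": "0,12", "Category": "Hardware"},
]

catalog = normalize(raw)
print(json.dumps(catalog, indent=2))
```

Once the data is clean, a layout change touches one template rather than a hundred pages, which is where the "skip the review phase" payoff comes from.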
In Notes on the Synthesis of Form, Alexander defines design as the rationalization of the forces that define a problem. You won’t find a better definition. But people tend to think design is the synthesis and its results. This misunderstanding of the role of design and the designer is responsible for all the unfit designs we encounter on a daily basis. Anyone equipped with a synthesis tool and feeling empowered to quickly and cheaply generate forms will almost inevitably become blind to the very nature of the underlying problems they set out to solve. They’ll be fitting the problem to the available forms. They’ll skip the understanding, the conversations, the conflicts and disagreements, and happily and wrongly assume a design problem can be solved hermetically, in isolation. They’ll think quality is a factor of aesthetics, when in truth aesthetics is an effect; nevertheless, these effects are all they’ll have control over, as they’re all the tool can do. The tool will hinder their ability to be rational; to see the inner structures; to find the hidden but essential semantics; to create the ontologies that’ll support not only the immediate synthesis but also sustain the evolution of the design over its lifetime. They’ll be denied the enlightenment that comes with gradual, slow construction; the only place and moment where innovative ideas reveal themselves. They’ll be impoverished and confuse output with agency. I feel sorry for anyone that will think using tools equals doing design, because of the truly marvelous human experiences that they’ll miss, and that could never be replaced by the shallow pride of empty achievement.
This is a really verbose way to say that using generative AI has a detrimental effect on the user because one deprives themselves of the learning experience.
Agreed on your take on the parent, although I have to say I feel that AI has had the opposite effect for me. It has only accelerated learning quite significantly. In fact not only is learning more effective/efficient, I have more time for it because I am not spending nearly as much time tracking down stupid issues.
> I have more time for it because I am not spending nearly as much time tracking down stupid issues.
It is a truism that the majority of effort and time a software dev spends is allocated toward boilerplate, plumbing, and other tedious and intellectually uninteresting drudgery. LLMs can alleviate much of that, and if used wisely, function as a tool for aiding the understanding of principles, which is ultimately what knowledge concerns, not absorbing the mind in ephemeral and essentially arbitrary fluff. In fact, the occupational hazard is that you'll become so absorbed in some bit of minutia, you'll forget the context you were operating in. You'll forget what the point of it all was.
Life is short. While knowing how to calculate mentally and/or with pen and paper is good for mastering principles and basic facility (the same is true of programming, btw), no one is clamoring to go back to the days before the calculator. There's a reason physicists would outsource the numerical bullshit to teams of human computers.
Sounds like you're talking about research AI and not generative AI. You can't learn artistic/creative techniques when you're not practicing those techniques. You can have a vision, but the AI will execute that vision, and you only get the end result without learning the techniques used to execute it.
Okay, this is a pet peeve of mine, so forgive me if I come off a little curt here, but-- I disagree strongly with how this was phrased.
"Generative AI" isn't just an adjective applied to a noun, it's a specific marketing term that's used as the collective category for language models and image/video models -- things which "generate" content.
What I assume you mean is "I think <term> is misleading, and would prefer to make a distinction".
But how you actually phrased it reads as "<term> doesn't mean <accepted definition of the term>, but rather <definition I made up which contains only the subset of the original definition I dislike>. What you mean is <term made up on the spot to distinguish the 'good' subset of the accepted definition>"
I see this all the time in politics, and it muddies the discussion so much because you can't have a coherent conversation. (And AI is very much a political topic these days.) It's the illusion of nuance -- which actually just serves as an excuse to avoid engaging with the nuance that actually exists in the real category. (Research AI is generative AI; they are not cleanly separable categories which you can define without artificial/external distinctions.)
That's a really useful distinction to have explicitly articulated. It's also why plan mode feels like a superpower. Research vs. generative AI are genuinely different modes: I'm going to use this.
I guess I was more referring to just using generative AI when learning new subjects and exploring new ideas. It's a really efficient tutor and/or sidekick who can either explain topics in more depth, find better sources, or help me explore new theories. I was thinking beyond just generating code, which is incredibly useful but only mildly interesting.
Well, the research is sometimes 10x quicker with an AI assistant. But not always. The building phase is maybe 20-100% quicker for me at least, depending on the complexity of the project. Greenfield work without 15 years of legacy that's never allowed to break is many times faster; always has been.
It really really really depends on how you are using it and what you are using it for.
I can get LLMs to write most CSS I need by treating it like a slot machine and pulling the handle until it spits out what I need; this doesn't cause me to learn CSS at all.
I find it a lot more useful for diving into bugs involving multiple layers and versions of third-party dependencies. Deep issues where, when I see the answer, I completely understand what it did to find it and what the problem was (so in essence I wouldn't have learned anything diving deep into the issue myself), but it was able to do so in a much more efficient fashion than me cross-referencing code across multiple commits on GitHub, docs, etc.
This allows me to focus my attention on important learning endeavors: things I actually want to learn, not things forced on me simply because a vendor was sloppy and introduced a bug in v3.4.1.3.
LLMs excel when you can give them a lot of relevant context; they behave like an intelligent search function.
Indeed, many if not most bugs are intellectually dull. They're just lodged within a layered morass of cruft and require a lot of effort to unearth. It is rarely intellectually stimulating, and when it is as a matter of methodology, it is often uninteresting as a matter of acquired knowledge.
The real fun of programming is when it becomes a vector for modeling something, communicating that model to others, and talking about that model with others. That is what programming is: modeling. There's a domain you're operating within. Programming is a language you use to talk about part of it. It's annoying when a distracting and unessential detail derails this conversation.
Pure vibe coding is lazy, but I see no problem with AI assistants. They're not a difference in kind, but of degree. No one argues that we should throw away type checking because it reduces the cognitive load of inferring the types of expressions in your head, as dynamic languages demand. The reduction in wasteful cognitive load is precisely the point.
Quoting Aristotle's Politics, "all paid employments [..] absorb and degrade the mind". There's a scale, arguably. There are intellectual activities that are more worthy and better elevate the mind, and there are those that absorb its attention, mold it according to base concerns, drag it into triviality, and take time away from higher pursuits.
I agree with your definition of programming (and I’ve been saying the same thing here), but
> It's annoying when a distracting and unessential detail derails this conversation
there are no such details.
The model (the program) and the simulation (the process) are intrinsically linked, as the latter is what gives the former its semantics. The simulation apparatus may be noisy (when its own model blends into ours), but corrective and transformative models exist (abstraction).
> No one argues that we should throw away type checking,…
That’s not a good comparison. Type checking helps with cognitive load when verifying correctness, but it increases it when you’re not sure of the final shape of the solution. It’s a bit like pen vs. pencil in drawing. Pen is more durable and cleaner, while pencil feels more adventurous.
As long as you can pattern match to get a solution, an LLM can help you, but that does require having encountered the pattern before in order to describe it. It can remove tediousness, but any creative usage is problematic, as it has no restraints.
Qua formal system, yes, but this is a pedantic point, as the aim - the what - of a system is more important than the how. This framing makes the distinction between domain-relevant features and implementation details more conspicuous. If I wish to predict the relative positions of the objects of our solar system, then in relation to that end and that domain concern, it matters not whether the underlying model assumes a geocentric or heliocentric stance (that, tacitly, is the deeper value of Copernicus's work; he didn't vindicate heliocentrism, he showed that a heliocentric model is just as explanatory and preserves appearances equally well, and I would say that this mathematical and even philosophical stance toward scientific modeling is the real Copernican revolution, not all the later pamphleteer mythology).
Of course, in relation to other ends and contexts, what were implementation details in one case become the domain in the other. If you are, say, aiming for model simplicity, then you might prefer heliocentrism over geocentrism with all its baroque explanatory or predictive devices.
The underlying implementation is, from a design point-of-view, virtually within the composite. The implementation model is not of equal rank and importance as the domain model, even if the former constrains the latter. (It's also why we talk about rabbit-holing; we can get distracted from our domain-specific aim, but distraction presupposes a distinction between domain-specific aim and something that isn't.) When woodworking, we aren't talking about quantum mechanical phenomena in the wood, because while you cannot separate the wood from the quantum mechanical phenomena as a factual matter - distinction is not separation - the quantum is virtual, not actual with respect to the wood, and it is irrelevant within the domain concerning the woodworker.
So, if there is a bug in a library, that is, in some sense, a distraction from our domain. LLMs can help keep us on task, because our abstractions don't care how they're implemented as long as they work and work the way we want. This can actually encourage clearer thinking. Category mistakes occur in part because of a failure to maintain clear domain distinctions.
> That’s not a good comparison. Type checking [...]
It reduces cognitive load vis-a-vis understanding code. When I want to understand a function in a dynamic language, I often have to drill down into composing functions, or look at callers, e.g., in test cases, to build up a bunch of constraints in my mind about what the domain and codomain are. (This can become increasingly difficult when the dynamic language has some form of generics, because if you care about the concrete type/class in some case, you need even more information.)
This cognitive load distracts us from the domain. The domain is effectively blurred without types. Usually, modeling something using types first actually liberates us, because it encourages clearer thinking upfront about the what instead of jumping right into how. (I don't pretend that types never increase certain kinds of burdens, at least in the short term, but I am talking about a specific affordance. In any case, LLMs play very nicely with statically-typed languages, and so this actually reduces one of the argued benefits of dynamic languages as ostensibly better at prototyping.)
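A toy illustration of that affordance (every name here is invented): without annotations you'd trace callers and tests to learn what `apply_discount` accepts and returns; with annotated types, the domain and codomain sit right in the signature:

```python
from dataclasses import dataclass

# Invented example: the signature alone tells you the function maps
# (Order, Discount) -> Order, no caller-spelunking required.

@dataclass(frozen=True)
class Order:
    subtotal_cents: int

@dataclass(frozen=True)
class Discount:
    percent: float  # e.g. 10.0 means 10% off

def apply_discount(order: Order, discount: Discount) -> Order:
    reduced = round(order.subtotal_cents * (1 - discount.percent / 100))
    return Order(subtotal_cents=reduced)

print(apply_discount(Order(1000), Discount(10.0)).subtotal_cents)  # 900
```

In a dynamic, annotation-free version, nothing stops a caller from passing a bare float for `discount`; the constraint lives only in your head (or in a test suite), which is exactly the cognitive load being described.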
> As long as you can pattern match to get a solution [...]
Indeed, and that's the point. LLMs work so well precisely because our abstractions suck. We have a lot of boilerplate and repetitive plumbing that is time-consuming and tedious and pulls us away from the domain. Years of programming research and practice have not resolved this problem, which suggests that such abstractions are either impractical or unattainable. (The problem is related to the philosophical question of whether you can formalize all of reality; you cannot, and certainly not under one formal system.)
I don't claim that LLMs don't have drawbacks or tradeoffs, or require new methodologies to operate. My stance is a moderate one.
> Yes but that’s why you ask it to teach you what it just did.
Are you really going to do that though? The whole point of using AI for coding is to crank shit out as fast as possible. If you’re gonna stop and try to “learn” everything, why not take that approach to begin with? You’re fooling yourself if you think “ok, give me the answer first then teach me” is the same as learning and being able to figure out the answer yourself.
I would consider this a benefit. I've been a professional for 10 years and have successfully avoided CSS for all of it. Now I can do even more things and still successfully avoid it.
This isn’t necessarily a bad thing. I know a little css and have zero desire or motivation to know more; the things I’d like done that need css just wouldn’t have been done without LLMs.
I find it intellectually exhausting to describe to a machine what I want, when I could build something better in the same amount of time, and it isn't for lack of understanding how the LLM works.
It takes a lot of cajoling to get an LLM to produce a result I want to use. It takes no cajoling for me to do it myself.
The only time "AI" helps is in domains that I am unfamiliar with, and even then it's more miss than hit.
> I find it intellectually exhausting to describe to a machine what I want, when I could build something better in the same amount of time, and it isn't for lack of understanding how the LLM works.
I don’t even bother. Most of my use cases have been when I’m sure I’ve done the same type of work before (tests, CRUD queries, …). I describe the structure of the code and let it replicate the pattern.
For any fundamental alteration, I bring out my vim/emacs-fu. But after a while, you start to have good abstractions, and you spend your time more on thinking than on coding (most solutions are a few lines of code).
It is better than doomscrolling Instagram for hours like the newer generations do. At least the brain is active, creating ideas or reading text nonstop to keep itself engaged.
Are you sure that is not the illusion of learning? If you don't know the domain, how can you know how much you now know? Especially considering that these models are all Dunning-Kruger-inducing machines.
Agree on that too. And I use these as tools. I don't think I'm missing out on anything if I use this drill press to put a hole through an inch of steel instead of trying to spend a day doing it wobbly with a hand-drill.
"Verbose" is the wrong adjective. Yours is a terse projection into a lower space, valid in itself, but lacking the power and precision of its archetype.
The argument is not that only designers can design, nor that everyone should design like a designer. It’s to not confuse shopping for or generating generic solutions with the activity of problem solving. Per Alexander, trivial problems, those that can be solved without balancing interactions between conflicting requirements, are not design problems. So, don’t worry and just pick what you need and like!
Presumably you care about the quality of your marketing; otherwise, why do it at all? Worst case scenario, your marketing turns people off your music who would otherwise have been listeners.
Actually there’s some interesting problems here because a huge part of music marketing is in a visual medium, like a poster or album cover. It is literally impossible to include a clip of your sound.
So you should be really interested in how to capture the “vibe” of your music in a visual medium.
But if you don’t care at all whether people actually listen to your music, then yeah, you don’t have to deep dive.
"Actually there’s some interesting problems here because a huge part of music marketing is in a visual medium, like a poster or album cover. It is literally impossible to include a clip of your sound."
The term you are looking for is 'aesthetic'.
And indeed, music is far more than just a sound or whatever simple thing one tries to boil it down to.
I'm convinced many (especially here) really dislike that: they want it to just be a case of typing a few things into an LLM and bam... there you go. They have zero clue about the nature of the economy, what's really going on in various markets, etc.
I think that the beauty of the human experience is that all you need to do to learn is practice. You automatically improve at what you're doing. The kinds of skills that atrophy when you use AI are skills that AI can already automate. And nobody is going to pay you to do slowly what a machine can do quickly and cheaply.
When you deploy AI to build something, you wind up doing the work that the AI itself can't do: holding large amounts of context, maintaining a vision, writing APIs and defining interfaces. Alongside, like, project management: how much time is spent on features vs. refactoring vs. testing.
> using generative AI has a detrimental effect on the user because one deprives themselves of the learning experience
Or it lets folks focus. My coding skills have gotten damn rough over the years. But I still like the math. Using AI to build visualizations while I work on the model math with paper and pen is the best of both worlds. I can rapidly model something I’m working on out algebraically and analytically.
Does that mean my R skills are deteriorating? Absolutely. But I think that’s fine. My total skillset’s power is increasing.
Was thinking similarly... Without the friction, you're unable to explore the space, the space doesn't even exist at all... So it's not even clear where you're going from or where you'll arrive at.
Not really. It’s saying that most people in tech have no fucking idea what designers do, but somehow feel qualified to evaluate their output, and think tools that make things that look nice are designing things. What you reference is one effect of what the comment is about. Another effect is developers, combining this with engineer’s disease, being incredibly irritating to work with because they constantly make reductive comments that completely miss the point while other developers nod and say “yeah that sounds right.” I was a developer for ten years— I’ve seen this from both sides.
> I feel sorry for anyone that will think using tools equals doing design, because of the truly marvelous human experiences that they’ll miss, and that could never be replaced by the shallow pride of empty achievement.
What if you don’t give a shit about design and it’s a means to an end for a project that involves something different that you do care about?
I think maybe how you are conceptualizing design and how the GP meant it are not in agreement, and if you came to agreement on what it meant you wouldn't really disagree about the point either.
For example, I think design, as they mean it, could be described as "how to get that thing we care about". The correct amount of design depends on how exacting the outcome and outputs need to be across different dimensions (how fast, how accurate, how easy to interpret, how easy to utilize as an input for some other system). For generalized things where there aren't exacting standards, AI works well. For systems with exacting standards along one or more of those dimensions, the process of design allows for the needed control and accuracy, as the person or people doing the work are in a constant feedback loop and can dial in what's needed. If you give up control of the inside of that loop, you lose the fine-grained control required for even knowing how far you are from the theoretical maximums for those aspects.
Balancing requirements to achieve something you care about is doing design. I take that by “design” here you mean perhaps a particular interface or media, and you reckon that such element is not critical to your solution. If that’s the case then there’s no conflict at all. By reaching that conclusion you isolated what’s important and are correctly applying energy where it matters. This happens a lot in design, where producing or perfecting media interfaces is not necessary.
> what if you don't give a shit about design and it's a means to an end…
the parent's point is that it doesn't work that way. The point is self-reinforcing. Design is not a thing; it's the earned scars from the process. Fine to disagree, but it reinforces the point.
> What if you don’t give a shit about design and it’s a means to an end for a project that involves something different that you do care about?
Thank you for so succinctly demonstrating the problem with using AI for everything. You used to have to either care enough to do the design yourself or find someone who cared and specialized in that to do it for you. Now you quickly and cheaply fill in the parts you don't personally care about with sawdust, and as this becomes normalized you deprive yourself and others from discovering that they care about the design part. You'll ship your thing now, and it'll be fine. The damage is delayed and externalized.
I won't advocate against use of new technology to make yourself more productive, but it's important to at least understand what you're losing.
> You used to have to either care enough to do the design yourself or find someone who cared and specialized in that to do it for you.
You think most UI/UX designers, or the artists creating slop for content marketing spam factories for the past decades, cared? Some, maybe. Most probably had higher ambitions, but are doing what actually pays their bills.
It's similar to software developers. Most of those being paid to code couldn't care less; they're in it for the fat paycheck. Everyone else mostly complains the work is boring or dumb (or worse), but once you have those skills, it makes no economic sense to switch careers (unless, of course, you're into management, or into playing the entrepreneurship roulette).
I think the more you industrialize a process, the more those involved become cogs (or get replaced with actual or metaphorical cogs in a machine). This is fine, even desirable, for anything we can produce en masse and apply quality control to. I do not mind that my rivets and screws are not artisanal. We figured out how to make a useful and reliable widget and can churn them out on an industrial scale no problem. I do not see the value in doing the same with software. We already get mass-production for free because the product is bytes. Why are we industrializing the process of making millions of variations of the same thing? Surely the effort would be better spent finding the "screw" of software, perfecting it, and making it trivial for users to accomplish whatever task they want without having to generate the gaps between with untested code. I want modularity and better design, not automated design.
The paychecks weren’t great. Everyone was offering to pay designers with “exposure”. If they didn’t innately care about the field they would have done something more lucrative.
Man so much of this thread is full of such high minded philosophizing, it's like we're debating wine instead of talking about interfaces for doing things.
Like, maybe I just want to make an interface to configure my homemade espresso doohickey. Do I have to wear a turtleneck and read Christopher Alexander now? I just wanted a couple of buttons and some sliders.
We don't all have to be experts in everything, some people just need a means to an end, and that's ok. I won't like the wave of slop that's coming, but the antidote certainly isn't this.
Why do you want sliders when a config file would do the same just fine?
It's true that design theory writing is annoyingly verbose and intangible, but that doesn't make it wrong. Give someone a concrete language spec and they will not really know how it feels to use the language, and even once they do experience its use they will not be able to explain that feeling using the language spec. Invariably the language will tend to become intangible and likely very verbose.
But to answer your question: no, it's of course perfectly serviceable to just copy the interface others have created, and if the needs aren't exactly the same you can just put up with the inevitable discomfort from where the original doesn't translate into the copy.
Don’t be so anti-intellectual, there’s enough of that around. A simple problem is going to have a small set of simple design solutions; the philosophising readily admits that. Nothing’s getting in your way.
I’m not being anti-intellectual, I’m being anti-elitist and anti-obfuscation.
It’s not the science and intellect I take issue with, engineering has plenty of that. It’s the art-adjacent navel gazing post modern bullshit I don’t like.
Well I think that art is good, thinking about design is good, that using a couple of terms that aren’t immediately clear to you isn’t “obfuscation”, and that postmodernism can be a useful analytical lens. Seeing the world only through science and engineering (as useful as they can be when applied well) is cold, dead, and sad.
I agree, though I'd offer a counter-point to the implied idea that tools like this stifle exploration and creativity.
I'm an engineer who also loves design. I've read a lot of the books (including the one referenced), I know some concepts and terminology, and I understand the general process — but I'll never be a professional designer. My knowledge is limited, and I find most design tools so complex they actually get in the way of problem exploration and creativity.
For people like me, these tools remove the friction that actually prevents me from focusing on the valuable parts of the design process. I can more easily discover and learn new concepts, and ultimately spend more time being creative and exploring the problem space.
The issue is that UI design has different constraints from general graphic design, just as product design is not sculpture. Most UI designers only care about the visual aspects while neglecting the interactive ones.
A whiteboard or wireframing software would be better, because it lets you focus on the interactive part first. And once that’s solved, the visual part is easier.
There’s no conflict here. Using a tool to automate what you have validated to be the trivial parts of a production process is the proper use of the tool. Professional designers also use this bias. For instance, I might recognize that creating a custom font or illustration is not core to my solution, so I can employ an off-the-shelf font or illustration and focus, say, on the written content. Same principle. The problem is most people won’t even acknowledge or validate the essential aspects of the solution and just iterate mindlessly.
The tool just allows them to synthesize an implementation, but if it's designed badly, it will fail, and they will have to get better at design anyway. I don't see how that itself is the problem. The tractor didn't make farmers worse at farming, even if they lost the strength to work an old-school plow.
I think the challenge will be everything else the person will be doing. Will this person also try coding? And financial management? And marketing? And operational planning? Just because there are tools out there that let them synthesize implementations of those things? If so, then they won't be able to get good at any of them. But I think the backward pressure from failing at those things will bring it back to a stable equilibrium, where you have specialists who are good at the abstract ideas of their field leveraging these things as a new abstraction layer of work, analogous to the compiler.
However, that’s not to say that many designer jobs won’t be going away, simply because in many cases, cost beats quality. We’ll just have more things of much lower quality.
You can compare it to mass manufacturing. While some things are better had than not, even at low quality, we’d probably be better off with some things made to last, in lower quantities. But for 99% of the population, low-quality clothing, for example, is the norm.
This is such a beautiful distillation of everything I believe about the dangers of over-reliance on AI. I implore thee, good sir, to write a longer essay on this.
Creativity is a very big part of design, and these gen AI tools allow for stepping through a lot of variations and creative ideas very quickly, even creating working artifacts and prototypes on the fly and iterating rapidly.
This speed and variation wins for me. But yes, without a designer's eye, laziness can lead to slop design too.
To me, the value of gen AI is as an accelerant (not a slop factory) for ideation and solutions, not a replacement for the human owning the process... but laziness usually wins.
> because of the truly marvelous human experiences that they’ll miss
when people wax philosophical/poetical about what is essentially capital production already i'm always so perplexed - do you not realize that you're not doing art/you're not an artisan? your labor is always actively being transformed into a product sold on a market. there are no "marvelous human experiences", there is only production and consumption.
> They’ll be impoverished and confuse output with agency
> your labor is always actively being transformed into a product sold on a market. there are no "marvelous human experiences", there is only production and consumption.
The first time I used Mac OS X, circa 2004-2005, I was blown away by the design and by how they managed to expose the power of the underlying Unix-ish kernel without making it hurt for people who didn't want that experience. My SO couldn't have cared less about Terminal.app, but loved the UI. I also loved the UI and appreciated how they took the time to integrate CLI tools with it.
I would say it was a marvelous human experience _for me_.
Sure, it was the Apple engineers' and designers' labor transformed into a product, but it was a fucking great product and something that I'm sure those teams were very proud of. The same was true of the iPod and the iPhone.
I work on niche products, so I've never done something as widely appreciated as those examples, but on the products I've worked on, I can easily say that I really enjoy making things that other people want to use, even if it's just an internal tool. I also enjoy getting paid for my labor. I've found that this is often a win-win situation.
Work doesn't have to be exploitative. Products don't have to exploit their users.
Viewing everything through the lens of production and consumption is like viewing the whole world as a big constraint optimization problem: (1) you end up torturing the meaning of words to fit your preconceived ideas, and (2) by doing so you miss hearing what other people are saying.
> Sure, it was the Apple engineers' and designers' labor transformed into a product, but it was a fucking great product and something that I'm sure those teams were very proud of. The same was true of the iPod and the iPhone.
...
> Work doesn't have to be exploitative. Products don't have to exploit their users.
bruh, do people have any idea what they're writing as they write it? you're talking about "work doesn't have to be exploitative" in the same breath as Apple, which is the third-largest company in the world by market cap and which is well known for exploiting child labor to produce its products. like, has this comment "jumped the shark"?
> Viewing everything through the lens of production and consumption
i don't view everything through any lens - i view work through the lens of work (and therefore production/consumption). i very clearly delineated between this lens and at least one other lens (art).
The guys in Cupertino aren't the ones behind bars, so they can't jump to their deaths; for someone who supposedly "clearly delineated", you sure are mixing up those who are being exploited with the people who benefited.
Ultimately the exploitative pyramid always terminates in a peak, and the guys working up there can for sure be having a hecking great time doing their jobs.
Maybe you'll dismiss it as another poetic waxing but what I understand they're saying is that capitalism hasn't yet captured all the inefficiencies of the human experience.
just repeating the same mistake as OP: sadness/happiness is completely outside the scope here. these are aspects of a job - "design" explicitly relates to products, not art. and wondering about the sadness/happiness of a job is like wondering about the marketability of a piece of art - it's completely beside the point!
OP never talked about art. Design is not art, it's problem solving. And good design according to Dieter Rams:
1. Good design is innovative
2. Good design makes a product useful
3. Good design is aesthetic
4. Good design makes a product understandable
5. Good design is unobtrusive
6. Good design is honest
7. Good design is long-lasting
8. Good design is thorough down to the last detail
9. Good design is environmentally friendly
10. Good design is as little design as possible
Generative AI just tries to predict based on its training data.
A product can be a piece of art, and design can and does, in practice, often go hand in hand with art. Practically speaking, most designers practice the artistic role in addition to the utilitarian one. Whether you would group art within design as one is a matter of definitions.
Whatever the merits or demerits of 'marvelous human experiences' are from the point of view of production and consumption, the OP's conclusion leaves out the important point that Alexander's 'rationalization of forces that define a problem' produces designs that come closer to solving real-life problems (even in production and consumption) than simply putting attractive lipstick on an economic utility pig. If production isn't solving real human problems, consumers will go elsewhere.
> If production isn't solving real human problems, consumers will go elsewhere.
of course, but that's well within the scope of the whole paradigm (as opposed to how it was originally phrased, in relation to a loss of "marvelous human experiences"): if i use a bad tool to solve my customers' problems in an unsatisfactory way, then my customers will no longer be my customers (assuming the all-knowing guiding hand of the free market). so there's no new observation whatsoever in OP.
Just a thought: the fact that the found kernel vulnerability went decades without a fix says nothing about the sophistication needed to find it, just that nobody was looking. So it says nothing about the model’s capability. That LLMs can find vulnerabilities is a given and expected, considering they are trained on code. What worries me is the public buying the idea that they could in any way be a comprehensive security solution. The most likely outcome is that they’re as good at hacking as they are at development: mediocre on average; untrustworthy at scale.
Regardless of how impressive you find the vulnerabilities themselves, the fact that the model is able to make exploits without human guidance will enable vastly more people to create them. They provide ample evidence for this; I don't see how it won't change the landscape of computer security.
Yeah, the marginal cost of discovery going toward zero (I mean, not there yet, but directionally) is the problem; it doesn't really matter if the agent isn't equivalent to a human's artisanal, hand-crafted bug discovery if it can make it up on volume. Mass production of exploits!
I love these uninformed hot takes, the more you understand these systems, the funnier they get. Stop imagining and start engineering, you’ll see what I mean. Your vision of this tech is clearly shaped by blog posts. Go build stuff with it
This comment is just a personal attack. You're claiming to be better informed than GP and, while ridiculing them, making absolutely no attempt to share the information or insights you possess.
Maybe because there’s no critical and widely used software written by LLMs so far? Which says a lot about how LLMs are failing to even approach the level of capability you would expect from all the hype. The goal has always been, even before LLMs, to find something smarter than our smartest humans. So far the success at that is really minuscule. Humans are still the benchmark, all things considered. Now they’re saying LLMs are going to be better than our best vulnerability researchers in a few months (literally what an Anthropic researcher said at a conference). OK, that might happen. But the funny part is that the LLMs will definitely be the ones writing most of those vulnerabilities. So, to hedge against LLMs you must use LLMs. And that is going to cost you more.
So today, most of the vulnerabilities being found by these tools are in code written by humans. Your hypothesis is that down the road, most of the vulnerabilities will be in code written by LLMs.
What seems more probable is that the same advances that LLMs are shipping to find vulnerabilities will end up baked into developer tooling. So you'll be writing code and using an LLM that knows how to write secure code.
You’re absolutely right. But consider that big brands make up a minor percentage of sites on the web. Also recall that all those big brands have standard profiles on social media, and these share the very same layout as your local dog shelter’s. They have no problem with that.
They do have a problem with that. I don't see companies bigger than the dog shelter directing users to their Facebook page anymore. They all have unique looking websites.
Nice analogy with movies, but essentially it’s a category error. Movies are media, not interfaces. You consume movies, but _use_ websites. A movie is immutable. A website is dynamic. As a matter of fact, even movies follow a very common structure, from narrative, to format specs and credits. Directors and actors fit their performance to these constraints. Movies are arguably way more standard than websites.
It could work if it makes production and distribution of content easier and cheaper. All social media sites, without exception, have standard layout and usability. There, brands encode their aesthetics through media, and brands are much more alive in those channels than on their own websites, which often lag behind their platform profiles. Company websites are expensive to build, maintain, and update. Even for a design company, say Pentagram, it’s much better to follow their work on the standard architecture of Instagram than on their own handcrafted and “beautiful” website.

The relevance of corporate websites as a means to retrieve essential information is decaying, and economic factors ultimately drive decisions. If something like this existed in a solid form, it would be hard to justify spending thousands of dollars on a website. As a matter of personal opinion, UI should never be a place to express creativity; media is a much better substrate for expressing personality than user interface affordances. Nowadays all my corporate clients develop websites in the expectation that they will grant them legitimacy, and they don’t actually expect anyone to use or read them. As a user, I actually do prefer when a supplier has an Instagram page, because their sites, if they even have one, are 100% going to be awful to read and navigate, not to mention almost certainly outdated.

The greatest barrier to something like this is simply tradition. The general idea is perfectly defensible and logical. We should be reminded that standard websites are never going away, so this is not meant to be a replacement, but it could open the doors for small businesses and non-profits to spread rich structured information in a cheap and sovereign manner. The argument that businesses are averse to being scraped is only true for elitist corporations. Most businesses stand to gain tremendously from having their data highly accessible from anywhere.
And it’s damn easy to convince them of the benefits. Even more so considering that, if they want, they could also have their handcrafted website, which by the way would simply be a thematic structure over the very same API. You could argue that this is inevitable long term. But regarding the OP’s prescribed timeline of a couple of years, I think it’s just naive. For this to become mainstream would take at least a decade, if not more. Just writing the specs and tools for it would take years, easily.