I went to a meeting with a prototype once. It was a single happy path with stubbed data, coded in the most naïve way possible. It was, after all, a prototype just to give a feel for what the interactions would be like.
It put enormous pressure on delivery, since leadership had "already seen it working — how hard could it be to get it to production?"
It's funny (tragicomic) to watch the industry learn the same lessons over and over again (such as "'cheap' overseas outsourcing requires unrealistically precise specs, otherwise what would take minutes will take days").
This one sounds like "...and this is precisely why we started using wireframes"
Did the same thing early in my career. Built a quick Bootstrap website with like 5 pages, and all the data was static. The backend was a year off. It was great for end users, but the non-IT managers were dumb. Same issue: seeing something working and expecting the world.
Again, these are systems that have been explicitly given the ability to perform these actions. Trying to claim that it was somehow the AI’s fault is sheer incompetence and/or self-serving deceptiveness.
You can’t authorize a system to take some action and then complain when it takes that action. The “approval” you quoted is not a security constraint. Someone who confuses it for a security constraint is incompetent.
The ridiculous anthropomorphism is killing me. Software "agents" can't ask for "approval"; they're not people. That's like saying my script didn't ask me for approval to modify the system after I ran it with sudo privileges.
The developer is solely responsible for what APIs they expose to a bot. No, you can't say your software agent was grumpy and mean and had a bad day. It is not a human intern; it is an unreliable chatbot that someone ran with permissions it should not have had.
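To make the point concrete, here's a minimal sketch (all names hypothetical) of a tool-dispatch loop for an agent. The real security constraint is the allowlist of exposed functions, not any prompt-level "ask for approval first" instruction the model may or may not follow:

```python
import os

def read_file(path: str) -> str:
    """A harmless tool the developer chooses to expose."""
    with open(path) as f:
        return f.read()

def delete_file(path: str) -> str:
    """A dangerous tool the developer chooses NOT to expose."""
    os.remove(path)
    return f"deleted {path}"

# This allowlist is the actual boundary. If delete_file were in it,
# no amount of prompt wording could stop the model from calling it.
EXPOSED_TOOLS = {"read_file": read_file}

def dispatch(tool_name: str, **kwargs):
    """Run a tool the model requested — only if it was ever exposed."""
    tool = EXPOSED_TOOLS.get(tool_name)
    if tool is None:
        raise PermissionError(f"tool {tool_name!r} was never exposed")
    return tool(**kwargs)
```

If the model "decides" to delete a file, `dispatch` simply refuses, because the capability was never granted — that enforcement lives in the developer's code, not in the model's behavior.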
Your argument fails where it equates someone who only codes in one language with an LLM that is usually trained on many languages.
In my experience, a software engineer knows how to program and has experience in multiple languages. Someone with that level of experience tends to pick up new languages very quickly because they can apply the same abstract concepts and algorithms.
If an LLM that has a similar (or broader) data set of languages cannot generalise to an unknown language, then it stands to reason that it is indeed only capable of reproducing what’s already in its training data.
You do realize the vast majority of jobs are not in industrial hubs but in urban cities, right? Urban cities tend to have more than one employer too. Before I moved across the country, there were maybe a dozen or so companies hiring devs where I lived; now, after moving, there are around 800 companies that hire devs.
In this city that's new to me, Boston, it's quite hard to build housing, for reasons that seem to favor only those who already own houses.
An urban city can be, or can contain, an industrial hub. By "industrial" I don't necessarily mean manufacturing.
> Urban cities tend to have more than one employer too
They can also have those employers a couple hours' commute (not walking) from each other. That helps my point, not hinders it.
> there were maybe a dozen or so companies that were hiring devs where I lived
Assuming those were all within 15 minutes' walking distance, that would qualify as the kind of industrial hub I mentioned — although a dozen or so may not be enough for an employee to shop around and get good deals.
> now after moving there are around 800 companies that hire devs
Again, within a 15-minute walk of each other? I find that hard to believe, but assuming so, that is an industrial hub.
Indeed it does! https://github.com/ading2210/doompdf