The main reason for me is that it's layer upon layer on top of the base LLM calls, with not much to show for it. Also, a lot of native features (like, for example, Gemini's native structured responses) aren't well supported.
We're incapable of putting an accurate, standardized value on developer productivity, yet there often seems to be consensus among senior engineers about who the high performers and low performers are. I can certainly tell this about the people I work with.
> Point at a problem, and measure the cost of solving it.
The problem with this is that AI will create worse code that is going to cause more problems in the future, but the measurements won’t take that into account.
Measure or estimate? What ways? Honest question, because virtually all AI discussions _conveniently_ become vague a few steps short of actually answering the question.
Unironically, an AI evaluating the impact of those lines might be getting close to a metric that measures output better than having everyone print out their last six months of work for the new boss to look at.
I'm honestly not that worried. There are some obvious problems (exfiltrating data labeled as sensitive, taking costly actions, deleting/changing sensitive resources), but if you have properly compliant infrastructure, all of these actions require confirmations, logging, etc. For humans this seemed more like a nuisance, but now it seems essential. And all these systems are actually much, much easier to set up.
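A minimal sketch of what such a gate could look like: every action tagged as sensitive is logged and must pass a confirmation policy before it runs. All names here (`sensitive_action`, `policy`, the example functions) are hypothetical, not from any particular framework.

```python
import logging
from functools import wraps

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

def sensitive_action(confirm):
    """Hypothetical gate: log every request and require confirmation
    (from a human or a policy engine) before executing."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            log.info("sensitive action requested: %s args=%r kwargs=%r",
                     fn.__name__, args, kwargs)
            if not confirm(fn.__name__, args, kwargs):
                log.warning("denied: %s", fn.__name__)
                raise PermissionError(f"{fn.__name__} not confirmed")
            result = fn(*args, **kwargs)
            log.info("executed: %s", fn.__name__)
            return result
        return wrapper
    return decorator

def policy(name, args, kwargs):
    """Illustrative policy: auto-deny destructive actions, allow reads."""
    return not name.startswith("delete")

@sensitive_action(policy)
def delete_bucket(name):
    return f"deleted {name}"

@sensitive_action(policy)
def read_report(name):
    return f"contents of {name}"
```

In a real deployment the `confirm` callback would block on a human approval step or a policy service rather than a local function, but the shape, intercept, log, confirm, execute, is the same.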
Because it's just using structured responses, it should be doable with Gemini 3? (We are using Gemini 3 for some docs processing and its visual understanding is just incredible.)
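For context, a structured-response request amounts to attaching a JSON schema to the call and validating what comes back. The sketch below shows the general shape, `response_mime_type` and `response_schema` are real Gemini API config fields, but the invoice schema, the sample payload, and `parse_response` are illustrative assumptions, and no API call is made here.

```python
import json

# Illustrative schema for a docs-processing task (hypothetical fields).
invoice_schema = {
    "type": "object",
    "properties": {
        "vendor": {"type": "string"},
        "total": {"type": "number"},
        "line_items": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "description": {"type": "string"},
                    "amount": {"type": "number"},
                },
            },
        },
    },
    "required": ["vendor", "total"],
}

# Shape of the generation config you would pass alongside the prompt.
request_config = {
    "response_mime_type": "application/json",
    "response_schema": invoice_schema,
}

def parse_response(text):
    """Parse the model's JSON reply and check required fields are present."""
    data = json.loads(text)
    missing = [k for k in invoice_schema["required"] if k not in data]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return data
```

With the schema constraint in place the model returns JSON matching the schema, so the client-side check is mostly a safety net against truncated or malformed replies.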
> Image segmentation: Image segmentation capabilities (returning pixel-level masks for objects) are not supported in Gemini 3 Pro or Gemini 3 Flash. For workloads requiring native image segmentation, we recommend continuing to utilize Gemini 2.5 Flash with thinking turned off or Gemini Robotics-ER 1.5.
A few days ago I was trying to unsubscribe from a service (notably an AI 3D modeling tool that I was curious about).
I spent 5 minutes trying to find a way to unsubscribe and couldn't. Finally, I found it buried in the plan page, behind one of those low-contrast ellipsis menus on the plan card.
Instead of unsubscribing me or taking me to a form, it opened a conversation with an AI chatbot using a preconfigured "unsubscribe" prompt. I have never been angrier with a UI: I had to waste more time talking to a robot before it would render the unsubscribe button in the chat.
Why would we bring the most hated feature of automated phone calls to apps? As a frontend engineer I am horrified by these trends.
There might be some confusion about the transition to what some call the post-literate era: an era where text is not the primary medium. That's not necessarily bad, because you get the advantages of other mediums, oral and visual, but it is something to keep in mind.
I'm a bit skeptical that a post-literate era is happening. I gather it appears in some sci-fi, but I don't see much sign of it in reality. I mean, here we are on a text-only site. If anything we seem to be heading for a 100% literate society. Literacy graphs here: https://ourworldindata.org/grapher/cross-country-literacy-ra...
I don't think the post-literate era means that text will disappear, just that it won't be dominant anymore. But I also have my reservations, since I do prefer the text medium.