Hacker News | RussianCow's comments

> the moat any single org has is somewhat limited

I disagree. The models are going to become commodities (we're already almost there), but the tooling and integrations will be the moat. Reproducing everything Anthropic has already built with Claude Code, Cowork, and all their connectors would be nontrivial, and they're just getting started.

Anyone can implement an AI chatbot. But few will be able to provide AI that's deeply integrated into our daily lives.


How would it be nontrivial? Assuming the AI can replace a programmer, "reproduce app/api/ecosystem Y" is just tokens. And a negligible amount of them for trillion-dollar companies that have their own data centers.

> Reproducing everything Anthropic has already built with Claude Code, Cowork, and all their connectors would be nontrivial, and they're just getting started.

They're one org with presumably some specific direction. As the actual models get better, expect a large part of the dev community to iterate on tools far more easily, sometimes building ones that Anthropic doesn't quite have an equivalent to. For example, Cline recently released their Kanban solution to dish out tasks to agents (https://cline.bot/kanban), and OpenCode has been around for a while for the agentic stuff (https://opencode.ai/) and now has desktop and web versions as well, alongside dozens of others. Cline and KiloCode also have decent browser automation.

I will admit that everyone working on everything at the same time definitely means limitless reinvention of the wheel and some genuinely good initiatives dying off along the way (I personally liked RooCode more than both Cline and KiloCode for Visual Studio Code, so it was sad to see them go), but I doubt we're gonna see a lack of software. Maybe a lack of good software, though; it's not like Anthropic or any other org has a moat there either, since they're under the additional pressure of having to do a shitload of PR, release new models, and keep up appearances, compared to your average dev just pushing to GitHub (unless they want corporate money, in which case they do need some polish).


Didn't Anthropic vibe code all of those integrations? If AI coding is as useful and successful as it's touted to be, then those integrations should be no moat at all.

But the fact that they've added an ad-supported tier this early into their life as a company means they're desperate for revenue. You start inserting ads when you're optimizing for profit, not when you're still growing. It took how long for Netflix to introduce an ad-supported plan?

When did Netflix offer a free tier?

I didn't say free. They've had a highly discounted, ad-supported plan for a few years now. It's relevant because OpenAI also introduced a cheaper monthly plan that includes ads.

OpenAI also has a free plan, which is the one used by >90% of its users. The cheaper monthly plan just provides higher limits.

I'm not the OP, but it's the latter. I'm currently using the "Lite" GLM subscription with OpenCode, for example. I'm not using it very heavily, but I haven't come close to hitting the limits, whereas I burned through my weekly limits with Claude very regularly.


I have a vivid memory of once looking over someone's shoulder in the IE days and being horrified to see toolbars taking up about 80% of the available screen real estate, leaving only maybe 150-200 pixels of vertical space for actual web browsing. I have no idea how they got anything done, and my guess was they never actually used any of the installed toolbars and just thought that was normal.


You can see this today on macOS. I see people with this at work all the time. The default display scaling is quite inflated, and the Dock sits at the bottom. The vertical space left for a website after the address bar is hardly anything.


I have this memory too lol. I was really quite young but it's like a core memory. Similar to when a middle school teacher told me about Firefox and I discovered tabs.


> Didn't a court in the US declare that AI generated content cannot be copyrighted?

No, my understanding is that AI generated content can't be copyrighted by the AI. A human can still copyright it, however.


It's obvious that a computer program cannot have copyright because computer programs are not persons in any currently existing jurisdiction.

Whether a person can claim copyright of the output of a computer program is generally understood as depending on whether there was sufficient creative effort from said person, and it doesn't really matter whether the program is Photoshop or ChatGPT.


Just thinking out loud... why can't an algorithm be an artificial person in the legal sense that a corporation is? Why not legally incorporate the AI as a corporation so it can operate in the real world: have accounts, create and hold copyrights...


Corporations are required to have human directors with full operational authority over the corporation's actions. This allows a court to summon them and compel them to do or not do things in the physical world. There's no reason a corporation can't choose to have an AI operate their accounts, but this won't affect the copyright status, and if the directors try to claim they can't override the AI's control of the accounts they'll find themselves in jail for contempt the first time the corporation faces a lawsuit.


Because the law doesn't say it can. It's that simple.


So if creative effort was put into writing the prompt, then whoever wrote the prompt should have the copyright to the output produced by ChatGPT?


Sure, but the prompt wasn't the only input… there was considerable effort put into the training data as well :)


I honestly can't say I've ever seen a non-techie expand a window to full screen using the green button on macOS. I'm not sure why, because in theory, I agree with you.


In my experience supporting Mac users, it's about 50/50. I think a lot of them have been conditioned to not maximize windows because it hides everything else, and they don't understand how to get back to their other windows.


I don't maximize windows because it means a one-second delay: for some reason, macOS still plays the hardcoded workspace-switching animation even for that. That means entering and leaving fullscreen in a video player is also delayed every time. I don't get it; not even the accessibility settings can disable this waste of time.


> Theirs simply no reason for any normal person to buy anything else.

My wife currently has an old MacBook with 8GB of memory, and she hits the memory limit somewhat regularly just from web browsing and light productivity work. But whether more breathing room in terms of memory is worth almost double the price...


Intel or Apple Silicon? The latter manages memory much better.


Intel. That's good to know! Do you know why this is? Presumably because of the shared memory pool across CPU/GPU, or are there other factors?


The next one might have the SSDs of the current Pros, making swapping less problematic.


I know this is happening with external customer support, but is this really happening internally at big companies? Preventing you from talking to a human in the correct department about an issue feels like a bomb waiting to explode.


There is at least an effect where chatbots have become the primary line of support, and even if you are not prevented from talking to a human, the managers of the humans you would talk to have decided that, since the chatbot is there, it is inappropriate for them to spend much time supporting coworkers in other departments when the chatbot can do it.

So to a degree, corporate politics can sort of discourage it.


I'm sure it is. Thankfully I don't work for a company this large any more, but when I was employed by a multinational with 30K+ employees, our IT department was outsourced to India and you had to get through a couple layers of phone tree/webchat hell to actually talk to a real person. I could easily see companies of this size replacing their support with LLM nonsense.


Teams are heavily incentivized to incorporate AI in their internal workflows. At Meta it is a requirement, and will come up in your performance review if you fail to do so.


I'm excited to try this. My coding workflow lately has been to whip up detailed plans with Opus, leaving little to no ambiguity, and hand them off to Composer 1.5 to execute. Composer isn't the smartest model and ends up needing some hand-holding sometimes, but it does a good enough job, and it's so damn fast that I can iterate on the result a few times before Opus would have finished. (And that's not to mention the cost difference, especially with Composer now being charged from the much larger "Auto" pool.)

If Composer 2 is as big a leap as they claim, I might start using it exclusively for anything that's not terribly complicated, including planning. The speed and cost effectiveness are just hard to beat.


That's a completely separate codebase that purposefully breaks backwards compatibility in specific areas to achieve their goals. That's not the same as having a first-class JIT in CPython, the actual Python implementation that ~everyone uses.


Definitely agree that it’s better to have JIT in the mainline Python, but it’s not like there weren’t options if you needed higher performance before.

Including simply implementing the slow parts in C, which is how the high-performance machine learning ecosystem in Python works.
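As a rough illustration of the "slow parts in C" idea, here's a minimal sketch comparing a hand-written Python loop against CPython's built-in `sum`, whose loop runs in C. It's a toy stand-in for what NumPy and the ML libraries do at much larger scale; the exact timings will vary by machine.

```python
import timeit

data = list(range(1_000_000))

def py_sum(xs):
    # Pure-Python loop: every iteration and addition goes
    # through the bytecode interpreter.
    total = 0
    for x in xs:
        total += x
    return total

# Built-in sum() iterates in C inside CPython, so the
# per-element interpreter overhead disappears.
t_py = timeit.timeit(lambda: py_sum(data), number=5)
t_c = timeit.timeit(lambda: sum(data), number=5)

print(f"pure Python loop: {t_py:.3f}s, C-backed sum(): {t_c:.3f}s")
```

The same principle applies to writing a C extension or calling into a library like NumPy: keep the hot loop out of the interpreter and Python's slowness mostly stops mattering.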

