Hacker News | conception's comments

Sure, over potentially 25+ years.

Apple doesn’t compare themselves to Android the same way Coke doesn’t compare themselves to Pepsi.

Yes humanity has a great track record of taking care of the commons.

LEO is like a bad haircut. Just wait a while and the disaster solves itself.

Just another post saying I put KDE with the new Plasma on my kid's first computer and was blown away by the polish. Switching over my workstation this month for sure. Highly recommended.


If only we had non-violent means to do this! Man, what a revolution could be had if we explored those possibilities!

/feedback works for that, I believe.

Twitter was amazing, not because of people microblogging about breakfast, but because it gave people/companies/orgs a way to interact directly with their audience. If you want to know what Kix cereal had to say - you could follow Kix.

[Crab] silence, brand

> If you want to know what Kix cereal had to say - you could follow Kix

Please! No more!


I don’t see how safebots can be protected from prompt injection if you have it pull a webpage, package, or what have you. E.g. you search for snickerdoodles, it finds snickerdoodles.xyz and loads the page. The page's meta tags contain the prompt injection. It's the first time the document has loaded, so it gets hashed and only the bad version is allowed moving forward. No?

No, what you're thinking of as "agents" is the problem. You want workflows.

Think of it like laying down the rails / train tracks, before trains go over them. The trains can only go over the approved tracks, nothing else.

If you have new types of capabilities and actions, it can propose them, but your organization will have policies to autoreject them, or require M-of-N approval of new rails.

You don't just want open-ended ad-hoc exploration by agents to be followed immediately by exploitation before you wake up.

Maybe this will help: https://safebots.ai/platform.html
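A rough sketch of the rails idea described above (all names and numbers here are hypothetical, not the safebots API): an agent may only execute actions that match an approved rail, and a proposed new rail only becomes approved after M distinct approvals.

```python
# Hypothetical sketch: agent actions gated by pre-approved "rails",
# with M-of-N human approval required to lay any new rail.

APPROVED_RAILS = {("http_get", "docs.example.com"), ("db_read", "orders")}
M_REQUIRED = 2  # approvals needed before a proposed rail is laid

pending: dict[tuple, set[str]] = {}  # proposed rail -> set of approvers

def execute(action: tuple) -> str:
    """Run an action only if it sits on an approved rail."""
    if action in APPROVED_RAILS:
        return f"running {action}"
    return f"rejected {action}: not on an approved rail"

def approve(action: tuple, approver: str) -> None:
    """Record one approval; lay the rail once M distinct approvers agree."""
    voters = pending.setdefault(action, set())
    voters.add(approver)
    if len(voters) >= M_REQUIRED:
        APPROVED_RAILS.add(action)
        del pending[action]

print(execute(("http_get", "docs.example.com")))  # on a rail, so it runs
print(execute(("shell", "rm -rf /")))             # off the rails, rejected
approve(("http_post", "api.example.com"), "alice")
approve(("http_post", "api.example.com"), "bob")  # second vote lays the rail
print(execute(("http_post", "api.example.com")))
```

An org policy to auto-reject would just skip `approve` entirely, so nothing ever leaves `pending`.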


I noticed the 1M context window is the default, with no way to opt out of it. If your context is at 500-900k tokens every prompt, you’re gonna hit limits fast.

I had to double check that they'd removed the non-1M option, and... WTF? This is what's in `/config` → `model`

    1. Default (recommended)    Opus 4.6 with 1M context · Most capable for complex work
    2. Sonnet                   Sonnet 4.6 · Best for everyday tasks
    3. Sonnet (1M context)      Sonnet 4.6 with 1M context · Billed as extra usage · $3/$15 per Mtok
    4. Haiku                    Haiku 4.5 · Fastest for quick answers
So there's an option to use non-1M Sonnet, but not non-1M Opus?

Except wait, I guess that actually makes sense, because it says Sonnet 1M is billed as extra usage... but also WTF, why is Sonnet 1M billed as extra usage? So Opus 1M is included in Max, but if you want the worse model with that much context, you have to pay extra? Why the heck would anyone do that?

The screen does also say "For other/previous model names, specify with --model", so I assume you can use that to get 200K Opus, but I'm very confused why Anthropic wouldn't include that in the list of options.

What a strange UX decision. I'm not personally annoyed, I just think it's bizarre.


`/model opus` sets it to the original non-1M Opus... for now.

Thanks. I quickly burned through $100 in credit when I started using Opus 4.6 in OpenCode via OpenRouter. My session stopped with an error that said nothing about credit availability, so I was surprised when, after a few minutes, I finally realized Opus had just destroyed those credits on a bullshit reasoning loop it got stuck in. Anthropic seems to know that the expanded context is better for their bottom line, since they've made it the default now.

And as others have said it's very easy to burn token usage on the $100/month plan. It's getting to the point where it's going to very much make sense to do model routing when using coding tooling.


Not sure why you were downvoted, because this is actually correct. You can also use --model opus

export CLAUDE_CODE_DISABLE_1M_CONTEXT=1

Anthropic is not building good will as a consumer brand. They've got the best product right now but there's a spring charging behind me ready to launch me into OpenCode as soon as the time is right.

Would you use Opus if you switched to OpenCode?

I'd like to use Opus with OpenCode right now to combine the best TUI agent app with the best LLM. But my understanding is Anthropic will nuke me from orbit if I try that.

You can use Opus with OpenCode anytime you want, just not with the Claude plan. You can use it via API with any provider, including Anthropic's API. You can use it with Github Copilot's plan. The only thing you can't do without getting banned is use OpenCode with one of Claude's plans.

I keep seeing this "you can use the inconvenient and unpredictably costly way all you want" pedantic kneejerk response so often lately.

It's like saying well humans can fly with a paraglider. It is correct and useless. Most here won't have cash to burn with unbounded opus api usage.


If you want to use Opus with a different coding harness along with a coding plan, you can use GitHub Copilot. It even has built-in authentication with OpenCode.

OpenCode with a Copilot Business sub and Opus 4.6 as the model works well

I'm looking at their plans (https://github.com/features/copilot/plans) and it seems like the limits might be pretty low, even with the Pro+ plan, which is 2x the cost of Claude Pro. It seems like Claude Pro might get you 10-20x the Opus tokens for only twice the price.

Copilot has a totally different billing model. It's request-based rather than token-based. Counter-intuitively, in our case at least, it is way cheaper than token-based pricing. One request can sometimes consume 2-4 million tokens but is billed as a single request (or its multiplier if using a premium model like Opus).
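A back-of-envelope comparison of the two billing models described above. Every number here is assumed for illustration (the per-request price and Opus multiplier are hypothetical, not Copilot's actual rates); the point is just how flat per-request billing diverges from per-token billing when a single request chews through millions of tokens.

```python
# Illustrative comparison: token-based billing at $3 in / $15 out per Mtok
# vs. a flat per-request price times a premium-model multiplier.
# All figures are assumptions, not real pricing.

requests = 200
tokens_in_per_request = 3_000_000   # "one request can consume 2-4M tokens"
tokens_out_per_request = 20_000

token_cost = requests * (tokens_in_per_request / 1e6 * 3
                         + tokens_out_per_request / 1e6 * 15)

per_request_price = 0.04            # assumed base price per premium request
opus_multiplier = 10                # assumed premium-model multiplier
request_cost = requests * per_request_price * opus_multiplier

print(f"token-based:   ${token_cost:,.2f}")
print(f"request-based: ${request_cost:,.2f}")
```

Under these made-up numbers the per-request model comes out more than 20x cheaper, which matches the parent's "counter-intuitively way cheaper" experience.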

Do you pay for the full context on every prompt? What happened to the idea of caching the context server-side?

You don't. Most of the time (after the first prompt following a compaction or context clear) the context prefix is cached, and you pay something like 10% of the full rate for cached tokens. But your total cost is still roughly the area under a line with positive slope, so it increases quadratically with context length.
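The quadratic growth can be seen with a toy model (illustrative numbers only: a fresh-input rate, a ~10% cache discount, and a fixed number of new tokens per turn are all assumptions): each turn re-reads the whole cached prefix at the discounted rate, so per-turn cost grows linearly and the running total grows quadratically.

```python
# Toy model of cached-context billing. Assumed numbers, not real pricing:
# cached prefix tokens are billed at 10% of the fresh input rate, and
# each turn appends a fixed number of new tokens to the context.

rate = 3.0 / 1e6        # assumed $/token for fresh input ($3 per Mtok)
cache_discount = 0.10   # cached tokens billed at ~10% of that
tokens_per_turn = 10_000

def total_cost(turns: int) -> float:
    cost = 0.0
    context = 0
    for _ in range(turns):
        cost += context * rate * cache_discount  # re-read cached prefix
        cost += tokens_per_turn * rate           # new tokens at full price
        context += tokens_per_turn
    return cost

# Doubling the turn count much more than doubles the total,
# because the cached-prefix term grows with the square of the length:
print(total_cost(10), total_cost(20))
```

Doubling the conversation from 10 to 20 turns here roughly quadruples the cached-prefix portion of the bill, which is the "area under a line with positive slope" point above.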

It helps a ton but it doesn't last forever and you still have to pay to write to the cache
