Hacker News | miroljub's comments

Europe slowly becomes a totalitarian fascist federation.

The social media and child-protection bullshit serves only one purpose: to introduce mandatory identification for accessing the internet.

And we all laughed at the "conspiracy theorists" who were constantly warning us.


Don't forget the hate speech laws. It's just ridiculous. A state in Germany wants to criminalize questioning a certain country's existence, with penalties of up to four years in prison.

A foreign state's existence, that is.

This sounds like a religious cult priest blaming the common people for not understanding the cult leader's wish, which he never clearly stated.

A strange view. The trade-off has nothing to do with a specific ideology or notable selfishness. It is an intrinsic limitation of the algorithms, which anybody could reasonably learn about.

Sure, the exact choice on the trade-off, changing that choice, and having a pretty product-breaking bug as a result, are much more opaque. But I was responding to somebody who was surprised there's any trade-off at all. Computers don't give you infinite resources, whether or not they're "servers," "in the cloud," or "AI."


He was surprised because it was not clearly communicated. There's a lot of theory behind a product that you could (or could not) better understand, but in the end, something like price doesn't have much to do with the theoretical and practical behavior of the actual application.

There are other methods broadly classified as self-defense that an employee can apply against a company and its officials who attack their privacy.

Let Meta and its officials feel the consequences of their actions.


If it's company equipment, it's fair game. Literally every time I sign in, it states that I have no expectation of privacy while using the equipment.

The American mind can't comprehend privacy. Just because it's company equipment doesn't make it "fair game" to spy on and track you.

I wouldn't blame it on Americans. They are not different from the others. Just take a look at what Europeans are putting up with.

The only difference: in the USA it's private companies that spy, while in the EU the governments are leading the spying game.


> if they don't have to.

That's the only reason.

In many enterprises, you'd need to be very lucky to get approval for any service that doesn't come from MS.


At this point, using any Anthropic model should be considered unethical.

Just to clarify for people focusing on the $180/month price tag:

OpenClaw is not a CC-only product. You can configure it to use any API endpoint.

Paying $180/month to Anthropic is a personal choice, not a requirement to use OpenClaw.


So that leads to a question: is there a physical box I could buy and amortize over 5-7 years to come in at half the API cost?

In other words, assuming no price increase, 7 years of that pricing is $15k. Is there hardware I could buy for $7k or less that would be able to replace those API calls or alternative subs entirely?

I've personally been trying to determine whether I should buy a new GPU for my aging desktop(s), since their graphics cards can't really handle LLMs.
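For what it's worth, the arithmetic behind that question is simple enough to sketch out. All numbers below are taken from the question itself ($180/month, a 7-year horizon, a $7k hardware budget):

```python
# Back-of-the-envelope amortization check, using the figures above:
# a $180/month subscription vs. a one-time hardware purchase.
MONTHLY_API_COST = 180     # $/month, assuming no price increase
HORIZON_YEARS = 7
HARDWARE_BUDGET = 7_000    # the "half the API cost" target

total_api_cost = MONTHLY_API_COST * 12 * HORIZON_YEARS   # $15,120 over 7 years
breakeven_months = HARDWARE_BUDGET / MONTHLY_API_COST    # ~39 months to break even

print(f"7-year API spend: ${total_api_cost:,}")
print(f"Break-even on a ${HARDWARE_BUDGET:,} box: {breakeven_months:.1f} months")
```

Of course this ignores electricity, depreciation, and whether the box can actually match the hosted models.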


You can't realistically replace a frontier coding model on any local hardware that costs less than a nice house, and even then it's not going to be quite as good.

But if you don't need frontier coding abilities, there are several nice models that you can run on a video card with 24GB to 32GB of VRAM. (So a 5090 or a used 3090.) Try Gemma4 and Qwen3.5 with 4-bit quantization from Unsloth, and look at models in the 20B to 35B range. You can try before you buy if you drop $20 on OpenRouter. I have a setup like this that I built for $2500 last year, before things got expensive, and it's a nice little "home lab."

If you want to go bigger than this, you're looking at an RTX 6000 card, or a Mac Studio with 128GB to 512GB of RAM. These are outside your budget. Or you could look at Mac Minis, a DGX Spark, or a Strix Halo. These mostly let you run bigger models, much more slowly.
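As a rough sanity check on why the 20B-35B range fits in 24GB to 32GB of VRAM: a 4-bit quant stores roughly half a byte per parameter, plus KV cache and runtime overhead. A quick sketch (the 20% overhead factor is my own loose assumption, not a measured number):

```python
def vram_gib_4bit(params_billions: float, overhead: float = 0.20) -> float:
    """Rough VRAM estimate for a 4-bit quantized model:
    ~0.5 bytes per parameter for the weights, plus a fudge
    factor for KV cache, activations, and runtime overhead."""
    weight_gib = params_billions * 1e9 * 0.5 / (1024 ** 3)
    return weight_gib * (1 + overhead)

# The 20B-35B range mentioned above:
for size_b in (20, 35):
    print(f"{size_b}B params -> ~{vram_gib_4bit(size_b):.1f} GiB")
```

So even a 35B model at 4-bit lands under 24GB, though with long contexts the headroom gets thin.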


Thanks. That is what I suspected. The 3090s in my area seem pretty expensive for a several-year-old second-hand card; they go for the same price as a new 5080.

A 5090 is pretty expensive (~$4000) to justify over a $10-50 sub. I guess the nice thing is that the API side becomes "included," if I ever want to go that route. But if I weigh a $40/month GHCP sub against a $4000 card that matches it, on hardware alone the payoff is at 8 years. If I add in electricity, the payoff is probably never.

Sure, the sub can go up in price, but the value proposition for self-running doesn't seem to make sense - especially if I can't at least match Sonnet on GHCP or something like that.

I hope to self-run some not-useless LLMs/agents at some point, but I think this market needs to stabilize first. I just don't like waiting.


For what it's worth, eBay in the US currently has some used 3090s for about $1,300, including some marked "Buy it now." I got mine used for about $1,000, and I'm really happy with it—it's a very solid gaming card for Steam on Linux (if you don't need ray tracing), and it allows me to experiment with models up to about 35B parameters. I'm not saying it's a good investment for you in particular, of course! But it's solid at that price, and you can just chuck it in any consumer gaming rig and get a really fun AI "home lab".

As for models, I'm really genuinely impressed with Gemma4 26B A4B and Qwen3.6 35B A3B right now. Between them, I've seen solid image analysis, good medium-image OCR on very tough images, very good understanding of short stories, good structured data extraction from documents, extremely good language translation, etc. If you wanted to build a custom tool which summarized your inbox/RSS feeds/local news every day, or extracted information from emails and entered it into a database, or automatically captioned images, those tasks are all viable locally. The quality of the results is up dramatically in the last 12 months. At this point, my old personal non-agentic LLM benchmarks are "saturated": All the current leading models score extremely well on literally anything I was asking last year.

It's the true agentic coding workflows where the big models really stand out. And those models are all large enough that the hardware needs to be amortized over enough users to keep it running 24 hours/day.
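The daily-summary idea above is mostly a small amount of glue code once a local server is running. A sketch, assuming an OpenAI-compatible endpoint (llama.cpp, LM Studio, and similar servers expose one); the URL and model name here are placeholders, not anything standard:

```python
import json
import urllib.request

# Placeholder address for a local OpenAI-compatible server.
LOCAL_ENDPOINT = "http://localhost:8080/v1/chat/completions"

def build_summary_payload(items: list[str], model: str = "local-model") -> bytes:
    """Pack a batch of feed/inbox items into one chat-completion request."""
    prompt = ("Summarize the following items in a few bullet points:\n\n"
              + "\n---\n".join(items))
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }).encode("utf-8")

def summarize(items: list[str]) -> str:
    """Send the batch to the local server and return the model's summary."""
    req = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=build_summary_payload(items),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Point a cron job at `summarize(...)` with the day's items and you have the inbox/RSS digest described above, fully local.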


> or a Mac Studio with 128GB to 512GB of RAM. These are outside your budget.

An M3 Ultra with an 80-core GPU and 256GB of RAM is $7,500; that's right at the edge of the budget, but it fits. If you can get an edu discount through a kid or a friend, you're even better off!


You can buy a roughly $40k GPU (the H100), which will cost about $100/mo in electricity on top of that, to get about 30-80% of the performance of OpenAI or Anthropic frontier models, depending on what you're doing.

Over 5 years, that works out to ~$45k vs ~$10k, and during that time it's possible that better open models will become available, making the GPU a better deal, but it's far more likely that the VC-fueled companies will advance quicker (since that's been the trend so far).

In other words, the local economics do not work out at a personal scale at all unless you're _really_ maxing out the GPU, at close to 50% utilization literally 24/7, and you're okay accepting worse results.
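Spelled out, with the figures from the comparison above (a ~$40k GPU, ~$100/month electricity, and a rough ~$10k of API spend over the same five years):

```python
# Five-year local-vs-API cost comparison, using the numbers above.
GPU_PRICE = 40_000           # rough H100 price
ELECTRICITY_PER_MONTH = 100  # rough running cost
YEARS = 5
API_SPEND = 10_000           # rough 5-year API figure

local_total = GPU_PRICE + ELECTRICITY_PER_MONTH * 12 * YEARS
print(f"Local: ${local_total:,}  vs  API: ${API_SPEND:,}")
```

That gap only closes if the GPU is shared across users or kept busy nearly around the clock.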

As long as proprietary models keep advancing as quickly as they are, I think it makes no sense to try to run them locally. You could buy an H100, and suddenly a new model that's too large to run on it could be the state of the art; the resale value plummets, and the card is useless compared to using that new model via APIs or buying a new $90k GPU with twice the memory, or whatever.


This feels like it should be state infrastructure, the way roads, railroads and the postal system are.

This feels like a market which hasn't settled into long-term profitability and is being subsidized by investors.

And who is doing the research on this, training the models, and building new frontier models in your version of the world?

Note that the (edit: US) postal system is a for-profit system.

Given the trends of the capitalist US government, which constantly cedes more and more power to the private sector, especially google and apple, I assume we'll end up with a state-run model infrastructure as soon as we replace the government with Google, at which point Gemini simply becomes state infrastructure.


> Note that the (edit: US) postal system is a for-profit system.

That's not correct. If USPS brings in more revenue than its expenses in a given year, it can't pay the surplus out as profits to anyone.

It's true that USPS is intended to be self-funded, covering its costs through postage and services sold rather than tax revenue. That doesn't mean there's profit anywhere.


> Note that the (edit: US) postal system is a for-profit system.

Pricing in the US postal system is not based on maximizing profit. The US postal system is not a for-profit system at all. It is a delivery system (more or less) that happened to start turning a profit in 2006, until PAEA; after that, the next time it made a profit was 2025.


The USPS is self funding, not for-profit. The difference is both significant and consequential.

> Note that the postal system is a for-profit system.

That depends on the country in question :-)


For something like OpenClaw you realistically only need rather slow inference, so use SSD offload as described by adrian_b here: https://news.ycombinator.com/item?id=47832249 (though I'm not sure the support in the main inference frameworks, or arguably even in the GGUF format itself, is up to the task just yet).

You can use models several times cheaper than Claude as well; it's not like you need anything big to handle all the use cases listed above.

Yeah, something like MiniMax m2.7 should be perfectly adequate for this sort of thing, and is 10-20x cheaper.

You can get quite good models running on a Mac Studio, but these will not rival a frontier model.

$3,699.00

M4 Max 16c/40c, 128GB of RAM, 1TB SSD.

LM Studio is free and can act as an LLM server or as a chat interface, and it provides GUI management of your models and such. It's a nice, easy, cheap setup.
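To illustrate the server mode: LM Studio's local server speaks the OpenAI-compatible API, by default on localhost:1234 (configurable in the app). A minimal sketch for listing the loaded models; the parsing helper assumes the standard OpenAI-style `/v1/models` response shape:

```python
import json
import urllib.request

# LM Studio's default local server address; adjust if you changed the port.
MODELS_URL = "http://localhost:1234/v1/models"

def parse_model_ids(models_json: str) -> list[str]:
    """Extract model ids from an OpenAI-style /v1/models response body."""
    return [m["id"] for m in json.loads(models_json)["data"]]

def list_local_models() -> list[str]:
    """Ask the running LM Studio server which models it is serving."""
    with urllib.request.urlopen(MODELS_URL) as resp:
        return parse_model_ids(resp.read().decode("utf-8"))
```

From there, any OpenAI-compatible client can be pointed at the same base URL for chat completions.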


For something the size of Claude, probably not. But for smaller models, maybe (though tokens for those are also much cheaper to buy).

Or 'mudo', Microsoft sudo.

With the added benefit of having an appropriate meaning in some Slavic languages.


How about ms-sudo/mssudo and ms-curl/mscurl?

I like the name MS-DOS. MicroSoft DO Superuser

I must say I'm quite disappointed.

I expected something useful for application development. All it offers is some wrapper around the basic Android setup command that LLMs are already good at. What, initial empty project creation now takes 5 minutes instead of 10? Big deal, who cares?

For a moment I hoped that at least the skills might be useful. But except for a few migration recipes, there's nothing of value for day-to-day Android development.

Bottom line: I'll skip installing another Google app whose only purpose is more spying on me and keep developing Android apps the way I already do.

TLDR: Nothing to see here. Move on.


Most of the salespeople in any company are spammers.

No, you don't understand. The people at my company are auto-opt-in premium-communication value-add customer-relationship-establishment specialists. But otherwise, I agree with you: everyone else is a spammer.

In most of the EU dictatorships, there's no legal way to obtain a phone number without registering with your real identity.
