Hacker News | alexellisuk's comments

I think it says something about the current focus and mindset that this got 12 upvotes, despite you having posted it three times.

We also care about security for CI and production workloads (actuated/slicervm). I would have liked to see more people become aware of this and take action.

The CLAUDE_CODE_OAUTH_TOKEN exfil is interesting. When our code review bot runs, it thinks it has a valid LLM token, but it's a dummy API key that's replaced through MITM on egress. (Not a product, just something we've found very valuable internally.)

https://blog.alexellis.io/ai-code-review-bot/
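The dummy-key swap can be sketched roughly like this - a minimal illustration of the idea, not the actual implementation (the token value and header format here are assumptions):

```python
# Sketch: the agent only ever sees a placeholder token; an egress
# proxy rewrites the Authorization header to carry the real key.
DUMMY_TOKEN = "sk-dummy-not-a-real-key"

def rewrite_auth_header(headers: dict, real_token: str) -> dict:
    """Return a copy of the headers with the dummy token swapped for the real one."""
    out = dict(headers)
    auth = out.get("Authorization", "")
    if DUMMY_TOKEN in auth:
        out["Authorization"] = auth.replace(DUMMY_TOKEN, real_token)
    return out

hdrs = {"Authorization": f"Bearer {DUMMY_TOKEN}"}
print(rewrite_auth_header(hdrs, "sk-real")["Authorization"])  # → Bearer sk-real
```

The point is that a leaked CLAUDE_CODE_OAUTH_TOKEN-style secret never exists inside the sandbox at all.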


We built SlicerVM for this in 2022, but not just for sandboxing. It's for servers + API-launched VMs (i.e. what we now like to call a 'sandbox'). Feel free to take a look; a lot of our early users are saying things like this.


AFAIK that's not possible at the moment. Apple limits the full GPU acceleration for macOS guests.


Classic Apple


Hi, I'm the author of the article (OpenFaaS founder).

Slicer came about as an internal tool (2022) for rapid customer support where we needed real Linux or Kubernetes clusters, not in a few minutes, but as quickly as possible. It shares a lot of code with actuated (ephemeral, self-hosted CI runners).

It's grown and evolved and gained a Mac version which I mention in the post (with a video demo).

We're seeing fragmentation in the world of sandboxes - both local and SaaS. We've seen that before with FaaS, and OpenFaaS gave a consolidated UX.

We think slicervm can do the same today.

There are many SaaS and OSS tools in the VM and sandbox space. Slicer is a premium experience that's fast and just works, with opinionated defaults and a full kernel, Ubuntu LTS, and systemd.

Free trial available, or individual tier with commercial usage allowed.


This is great news for folks who use microVMs - "we only use AWS" has been an issue for our stuff (slicer services/sandboxes/actuated self-hosted GitHub runners).

If anyone here can't wait (it looks like there's very little info on this at the moment):

I wrote up detailed instructions for Ant Group's KVM-PVM patches. Performance is OK for background servers/tasks, but it takes a hit of up to 50% on complex builds like kernels or Go with the K8s client.

DIY/detailed option:

https://blog.alexellis.io/how-to-run-firecracker-without-kvm...

Fully working, pre-built host and guest kernel and rootfs:

https://docs.slicervm.com/tasks/pvm/
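As a quick sanity check before going the PVM route - a generic sketch, not taken from either guide above:

```shell
# Sketch: see whether the host can run Firecracker natively via KVM.
# If /dev/kvm is absent (e.g. a cloud VM without nested virt), the
# KVM-PVM patches are the workaround the links above cover.
check_kvm() {
  if [ -e /dev/kvm ]; then
    echo "kvm: available - Firecracker can run natively"
  else
    echo "kvm: missing - consider the KVM-PVM patches"
  fi
}
check_kvm
```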

I'll definitely be testing this and comparing as soon as it's available. Hopefully it'll be somewhat accelerated compared to the PVM approach. There's still no sign of whether those patches will ever be merged upstream into the Linux kernel. If you know differently, I'd appreciate a link.

Azure, OCI, DigitalOcean, and GCE all support nested virt as an option, and all take a bit of a performance hit, but it makes for very easy testing/exploration. Bare-metal on Hetzner now has a setup fee of up to 350 EUR.. you can find some stuff with 0 setup fee, but it's usually quite old kit.

Edit: this doesn't look quite as good as the headline suggests.. options for instances look a bit limited. Someone found some more info here: https://x.com/nanovms/status/2022141660143165598/photo/1


Why would we need PVM if AWS now supports nested virt?


> Bare-metal on Hetzner now has a setup fee of up to 350 EUR.. you can find some stuff with 0 setup fee, but it's usually quite old kit.

I don't understand what you are paying for here; nested virtualization doesn't need any extra hardware setup compared to normal virtualization.

... or are you saying Hetzner wants 350 EUR for turning on the normal virtualization option in the BIOS?
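For what it's worth, whether the CPU exposes hardware virtualization at all is a one-liner to check on Linux (a generic check, not specific to Hetzner):

```shell
# vmx = Intel VT-x, svm = AMD-V. If neither flag appears in
# /proc/cpuinfo, virtualization is unsupported or disabled in firmware.
grep -m1 -oE 'vmx|svm' /proc/cpuinfo || echo "no hw virt flags visible"
```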


Hetzner charges a fee for setting up your bare-metal machine - often zero for their smaller machines and for those in the auction. Probably they don't want someone to order a large fleet of machines for one month and then cancel; they might not get another customer for those machines soon.



Good context. They're only commenting on why they are increasing some setup fees, though, not justifying their existence. The Hetzner setup fees were in place before the RAM price hike.


...but servers have come with virtualization on by default for at least a decade, if not more.

So they literally want money to fix what they fucked up the first time


They used to charge a fair admin fee, like 30-70 EUR, for most bare-metal hosts.. now it's 99 EUR for the most basic/cheapest option, up to 350 EUR for something modest like a 16-core Ryzen. Monthly fees haven't changed much.

https://www.hetzner.com/dedicated-rootserver/matrix-ex/ https://www.hetzner.com/dedicated-rootserver/matrix-ax/


I've never used Hetzner because their terms of service didn't make any sense to me, but a 350 EUR fee for each setup? That almost seems like they don't want business. Every bare-metal host I've used had a management interface I could submit a job to in order to reprovision my host at any time. Some even offer a recovery console through this. It takes 1-10 minutes, but I'm assuming it was out-of-band management, not human interaction.

Worst case I ever had: a hard drive failed and I had to wait, I think, a week for OVH to physically replace it.


Hetzner offers uniquely cheap dedicated hosting, even beating OVH. Per their statement about the fees, they're having to do this because without the setup fees, recent hardware price increases would otherwise raise the price of acquiring new hardware so high that they would essentially never make a profit on the hardware they would have to buy for new orders. They're also saying that their overall prices are going to have to increase if the hardware prices don't change soon. Thus they are charging more for setup while keeping their monthly prices low, or at least trying to for now: https://www.hetzner.com/pressroom/statement-setup-fees-adjus...


That seems counter to a "pay as you go" or "pay what you use for" model. I'd rather have sky high monthly fees, so that I don't have a sunk cost.


Bare metal has never been a pay-as-you-go model; it's so much cheaper that you usually over-provision by a factor of 10-100 and still spend less than you would on the cloud if you have moderate needs. You are trading ops tax for money tax.


You'll still pay 10x less than any of the cloud platforms.


Feels weird to roll it into the setup fee vs the monthly price.


I'm seeing 429s cascading when downloading things like setup-buildx on self-hosted runners. That seems odd/off.

Anyone else having issues? It's blocking any kind of release.


Is this going to need 1x or 2x of those RTX PRO 6000s to allow for a decent KV cache at an active context length of 64-100k?

It's one thing running the model without any context, but coding agents build it up close to the max, and that slows down generation massively in my experience.
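A rough way to reason about the question is a back-of-envelope KV cache calculation - the model dimensions below are illustrative assumptions, not the specs of any particular model:

```python
# KV cache memory: 2 tensors (K and V) per layer, each sized
# [context_length, kv_heads, head_dim], at bytes_per_elem precision
# (2 for fp16, 1 for an 8-bit quantized cache).
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

# Hypothetical 48-layer model, 8 KV heads of dim 128, 100k context, fp16:
gib = kv_cache_bytes(48, 8, 128, 100_000) / 2**30
print(f"{gib:.1f} GiB")  # → 18.3 GiB
```

So at long contexts the cache alone can eat a large slice of a 96 GB card, before the weights, which is why quantized KV (as mentioned below) matters so much.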


I have a 3090 and a 4090 and it all fits in VRAM with Q4_0 and quantized KV, 96k ctx. 1400 pp, 80 tps.


1 6000 should be fine, Q6_K_XL gguf will be almost on par with the raw weights and should let you have 128k-256k context.


Author of the post here. I have to caveat this - I do use cloud-hosted SOTA models extensively, but whilst I am British, I am not Simon Willison and do not live and breathe local models!

I'm curious what results you can get from your own kit, if you can tune the setup better, using the same inputs or even a different model.

Do test and experiment for yourself. Do not take me as an expert on all things local LLM.

I did use one of our products to sandbox Ollama and Claude - other sandboxes are available, many cloud-hosted; I'm sure they are also excellent choices.


Disclaimer: author of the post here. Many people are considering Firecracker, and other sandboxing for AI agents, background tasks, and other processing. This post is a quick tour of what is possible - hopefully to get you thinking of what you could do with this kind of tech too. Of course, we're interested in users, please feel free to reach out on X to @alexellisuk or @slicervm


This looks handy.. along with the odd gist of "convert mkv to mp4" that I have to use every other week.

Quite telling that these tools need to exist to make ffmpeg actually usable by humans (including very experienced developers).


I figure out the niche ffmpeg commands, various filter chains, etc., then expose them from my Python CLI tool with words, similar to what the gentleman above has done.

If one has fewer such commands, it's as simple as bash aliases and just adding them to ~/.bashrc:

alias convertmkvtomp4='ffmpeg command'

Then just run it anytime with that alias phrase. I use ffmpeg a lot, so I have my own dedicated CLI snippet tool to quickly build out complex pipelines in an easier language.

The best part is I have --dry-run, which exposes the flow + the explicit commands being used at each step, if I need details on what's happening and verbose output at each step.
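For the mkv-to-mp4 case specifically, a shell function is handier than an alias since it can take the filename as an argument - a sketch, where `-c copy` remuxes without re-encoding and assumes the mkv's codecs are mp4-compatible:

```shell
# Remux input.mkv to input.mp4 without re-encoding.
# Assumes the streams (e.g. H.264/AAC) are valid in an mp4 container.
mkv2mp4() {
  ffmpeg -i "$1" -c copy "${1%.mkv}.mp4"
}
```

Usage: `mkv2mp4 movie.mkv` produces `movie.mp4` in seconds, since no transcoding happens.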


I have a text file with some common commands, so no tools needed.

But yeah, ffmpeg is awesome software, one of the great OSS projects IMO. Working with video is hellish and it makes it possible.

