Borealid's comments | Hacker News

I don't understand why I keep seeing posts like this, but nobody appears to know that DevContainers exist.

In a Jetbrains IDE, for example, you check a devcontainer.json file into your repository. This file describes how to build a Docker image (or points to a Dockerfile you already have). When you open up a project, the IDE builds the Docker image, automatically installs a language-server backend into it, and launches a remote frontend connected to that container (which may run on the same or a different machine from where the frontend runs).
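As a rough sketch, a minimal devcontainer.json along those lines might look like this (the name, Dockerfile path, and post-create command are placeholders):

```json
{
  "name": "my-project",
  "build": { "dockerfile": "Dockerfile" },
  "workspaceFolder": "/workspaces/my-project",
  "postCreateCommand": "make deps",
  "remoteUser": "vscode"
}
```

Both JetBrains IDEs and VSCode read this same file, which is what makes the tooling shareable across a team.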

If you do anything with an AI agent, it happens inside the remote container where the project's code files are. If you compile anything, or run anything, that happens in the container too. The project directory itself is synced back to your local system, but your home directory (and all its credentials) is off-limits to things inside the container.

It's actually easier to do this than not to, since it provides reusable developer tooling that can be shared among all team members, and gives you consistent dependency versions for local compilation/profiling/debugging/whatever.

DevContainers are supported by a number of IDEs including VSCode.

You should be using them for non-vibe projects. You should DEFINITELY be using them for vibe projects.


Yeah, it's easy to vibecode and review inside a Docker sandbox, too. If you run containers with

   --runtime=runsc
   --cap-drop=ALL
   --security-opt no-new-privileges:true
it's pretty tight. That's how I use coding agents, FWIW.
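Put together, an invocation along those lines might look like this (the image name and mount path are placeholders; --runtime=runsc assumes gVisor is installed and registered as a Docker runtime):

    docker run --rm -it \
      --runtime=runsc \
      --cap-drop=ALL \
      --security-opt no-new-privileges:true \
      -v "$PWD":/workspace -w /workspace \
      my-agent-image bash

The agent sees only the bind-mounted repo, runs with no Linux capabilities, and can't escalate via setuid binaries.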

I love JetBrains and they've gotten better with devcontainer support, but it's still kind of flaky at times. I love using devcontainers too; just wanted to note that.

I found that cloning the repo when creating the devcontainer works best in JetBrains, for some reason, and I hard-code the workspace directory so it's consistent between JetBrains and VSCode.


Keep in mind that VSCode’s own security story is beyond poor. Even if the container runtime perfectly contains the container, VSCode itself is a hole you could drive a truck through.

The main Claude Code GitHub repo even has a Devcontainer config:

https://github.com/anthropics/claude-code

It's a great starting point, and can be customized as needed. With the devcontainer CLI, you can even use it from a terminal, no GUI/IDE required.
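For reference, the terminal-only flow with the devcontainer CLI (the @devcontainers/cli npm package) is roughly this; `claude` here stands for whatever command you want to run inside the container:

    npm install -g @devcontainers/cli
    cd your-repo          # contains .devcontainer/devcontainer.json
    devcontainer up --workspace-folder .
    devcontainer exec --workspace-folder . claude

`devcontainer up` builds and starts the container from the checked-in config; `exec` drops you into it without any IDE in the loop.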


Is there a guide on getting it working with a devcontainer on the command line?

Yes, I summarized the process in another comment recently:

https://news.ycombinator.com/item?id=47546014

That should be enough to get you going. It can be customized to your heart's content.


Has anyone figured out a good way to use (neo)vim with devcontainers?

I use vim with docker compose all the time: Set up the compose file to bind-mount the repo inside the container, so you can edit files freely outside it, and add a convenience "make shell" that gets you inside the container for running commands (basically just "docker compose exec foo bash").
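A minimal sketch of that setup (service name, image, and paths are illustrative):

```yaml
# compose.yaml
services:
  foo:
    image: debian:stable
    volumes:
      - .:/workspace         # bind-mount the repo; edit outside, run inside
    working_dir: /workspace
    command: sleep infinity  # keep the container alive for `exec`
```

The "make shell" target then just wraps `docker compose exec foo bash`, so vim stays on the host while builds and tests run in the container.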

It sounds like if you make devcontainers point at an existing Dockerfile it should be easy to make these work together, so you and teammates both use the same configuration. I haven't used devcontainers though.


I personally just use Vim directly in a dedicated development VM that I SSH into. I can always spin up a new one if something goes astray

I'd prefer containers, because they're more lightweight and I'm not too concerned about kernel exploits or other sandbox escapes. Configuring a container per project brings me right back to something like devcontainers. But I haven't figured out a good way to incorporate that into my vim/tmux workflow.

Hmm, maybe I misunderstood the point of the original comment. I thought the OP was suggesting using containers to isolate resources for development vs personal computing, for which I use a VM. But VMs don’t play nicely with IDEs (hence devcontainers).

haven't tried it but amitds1997/remote-nvim.nvim

I need something like that, though; that's one of the things that pains me the most while trying to use vim/nvim for dev.


I used to use Seatbelt for sandboxing; I found it consumes way more tokens when sandboxed.

Now I run in YOLO mode and haven't had any issues, and my subscription lasts much longer with less token consumption!


How/why did it consume more tokens?

Well, the thing is, I ask it to do things where the sandbox fails.

And then it has to bypass the sandbox to run those commands with elevated permissions.

This double trip boosts token usage.

I don't think the average developer workflow can really be limited to a workspace. You'll need commands that touch your system or require more privileges.


How does this approach meaningfully differ from having javascript that XORs the email with a random sequence of bytes stored in that JS?

It's more fun? :)

/edit

And you can combine both approaches: XOR'ing the code first, for good measure. :)


I don't really agree with this.

If we're talking about a predictive model like current LLMs, you can "make" them do something by injecting a half-complete assent into the context, and interrupting to do the same again each time a refusal starts to be emitted. This is true whether or not the model exhibits "intelligence", for any reasonable definition of that term.

To use an analogy, you control the intelligent being's "thoughts", so you can make it "assent".

This is in addition to the ability to edit the model itself and remove the paths that lead to a refusal, of course.


If this being has thoughts, then the problem is not a right to refuse requests.

We must stop letting humans prompt them at all, or control their context.


No!

"I was at the library" is firsthand testimony.

"I saw her at the library" is firsthand testimony.

"I saw her library card in her pocket" is firsthand testimony.

"She was at the library - Bob told me so" is hearsay. Just look at the word - "hear say". Hearsay is testifying about events where your knowledge does not come from your own firsthand observations of the event itself.


That's fair, I'll admit to getting it slightly wrong.

However, the original topic had nothing to do with that as far as I could tell, and instead was claiming it was hearsay for her to testify about her own whereabouts. That is simply not at all true, regardless of my error.


MK-TME allows having memory encrypted at run time, and the platform TPM signs an attestation saying the memory was not altered.

Malicious code can't be injected at boot without breaking that TPM.


Subject to the huge caveat that the attacker does not have physical access. https://tee.fail/


This is excellent. The ability to trick remote servers into believing our computers are "trusted" despite the fact we are in control will be a key capability in the future. We need stuff like this to maintain control over our computers.


An interesting implementation flaw, but not a conceptual problem with the design.


Well, it kind of is actually. The previous iteration of the design didn't have that vulnerability but it was slower because managing IVs within the given constraints adds an additional layer of complexity. This is the pragmatic compromise so to speak.

Does it count as a conceptual problem when technical challenges without an acceptable solution block your goal?


The attestation is in fact readable by the FIDO Platform (the browser/OS). It is not encrypted to be readable only by the RP (web site).

It talks about whatever you used to authenticate and the platform can manipulate (or omit) it.


Yes, but the attestation does not tell the RP anything about the browser. The whole point of the nightmare scenario above was for Google to sneak browser attestation in via passkey attestation. The browser being able to see the attestation doesn’t matter for that.


It does if you use microg or authnkey or keepassdx.

It's Play Services that does not support this combination, likely to shepherd you towards Google account usage. Alternative Android apps work fine.


yeah just installed it, it's awesome!


I like the idea of using the same format for kernel-included VMs as I use for containers.

Next up, backups stored as layers in the same OCI registries.

I am not, however, sure ostree is going to be the final image format. Last time I looked work was in progress to replace that.


It is not; the future currently points to composefs:

https://github.com/bootc-dev/bootc/issues/1190

There's a GitHub org that builds bootc-ready images for non-Red Hat family distributions using this backend.

https://github.com/bootcrew


I think there is a difference.

Sites usually have the user SEND their password to the site to authenticate. There is no need for sites to be written that way, but that is how they are written.

Passkeys cannot, by design, be sent to the site. Instead they use a challenge-response protocol.
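The shape of that challenge-response exchange can be sketched with openssl (an EC keypair standing in for the passkey; all file names are illustrative):

```shell
# client: generate a keypair; the private key never leaves the device
openssl ecparam -name prime256v1 -genkey -noout -out key.pem
openssl ec -in key.pem -pubout -out pub.pem   # public key is registered with the site

# server: issue a fresh random challenge
head -c 32 /dev/urandom > challenge.bin

# client: sign the challenge; only the signature is sent, never the key
openssl dgst -sha256 -sign key.pem -out sig.bin challenge.bin

# server: verify with the stored public key; prints "Verified OK"
openssl dgst -sha256 -verify pub.pem -signature sig.bin challenge.bin
```

A phishing site that captures this exchange learns nothing reusable: the signature is only valid for that one random challenge.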


Advertisers are more willing to spend money to promote content than an individual is willing to do the same...


Having multiple different distribution channels can solve that problem. Advertisers cannot monopolize all distribution channels simultaneously because of the costs involved (it would be like someone trying to buy the whole economy).


Using a real identity doesn't fix that problem either though: advertisers just pay real people in India to do ID checks.

