Which isn't particularly difficult - the language docs and std source come with the installation, so all you need to do is tell Claude where those directories are in your skill/plugin/CLAUDE.md.
> and guide it closely (in which case it's useful for focused work)
It does struggle sometimes with writing code that compiles and uses the APIs correctly. My approach so far has been to write test blocks describing the desired interface + semantics, then ask Claude to loop (`zig test` -> fix errors) until all the tests pass.
You're already at a disadvantage: you have to stuff the context and spend extra tokens coercing the model in the right direction, compared to it already knowing what to do (Rust, TS, Go, etc.).
Here, I just did a quick test with claude.
1. "make a simple tcp echo server that uses rust"
compiles and runs - took a few seconds to generate.
2. "make a simple tcp echo server that uses zig"
result: compile error, took literal minutes of spinning and thinking to generate
response: "ziglang.org isn't in the allowed domains. Let me check if there's another way, or just verify the code compiles conceptually and present it clean."
/opt/homebrew/Cellar/zig/0.15.2/lib/zig/std/Io/Writer.zig:1200:9: error: ambiguous format string; specify {f} to call format method, or {any} to skip it
@compileError("ambiguous format string; specify {f} to call format method, or {any} to skip it");
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
3. "make a simple tcp echo server that uses zig 0.16"
result: compile error:
zig build-exe main.zig
main.zig:30:21: error: no field named 'io' in struct 'process.Init.Minimal'
const io = init.io;
^~
4. "make a simple tcp echo server that uses zig 0.15"
result: compile error
zig build-exe main.zig
/nix/store/as1zlvrrwwh69ii56xg6yd7f6xyjx8mv-zig-0.15.2/lib/std/Io/Writer.zig:1200:9: error: ambiguous format string; specify {f} to call format method, or {any} to skip it
@compileError("ambiguous format string; specify {f} to call format method, or {any} to skip it");
Rust took seconds and just works. The Zig examples took minutes and don't work out of the box. The DX & velocity aren't even close.
i mean, if zig is doing its best (inadvertently) at shooing off slop jockeys, then i already have more confidence that:
1. the language and stdlib are written by people who know what they're doing
2. packages in the ecosystem, at the barest level, are written by those who didn't leave after a few compile errors they couldn't reason about
The agents will churn their way through the errors. The new users whose learning material is out of date, as well as the existing users that have an insurmountable task in updating their code, will give up instead.
I think the changes are improvements, but there's a real cost to language churn, and every time it happens, the graveyard of projects grows just that little bit larger.
The planner's cost estimate is a crude approximation of the actual query cost, and sometimes the query planner makes a terrible guess. Your resident DBA won't appreciate occasionally being paged at 3 AM on a Sunday. A good strategy is to freeze the query plan once you have a sufficient sample size of data in the involved tables.
Perhaps not every aspect of the query plan can be dictated, but both MySQL and Postgres (with pg_hint_plan) let you specify hints that enforce a specific join order and scan behavior for the tables in your query, which is where the majority of "unexpected change in query plan" problems will arise. As for SQLite, I'm less familiar with the knobs available for query tuning, but a cursory Google tells me that join order is respected when using CROSS JOIN, and index usage can be forced with INDEXED BY/NOT INDEXED.
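The SQLite part is easy to try from Python's built-in sqlite3 module. A quick sketch of forcing index usage with INDEXED BY (the table and index names here are made up):

```python
# Sketch: pinning index usage in SQLite with INDEXED BY. If the named
# index can't be used, SQLite raises an error instead of silently
# picking another plan - which is the point.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    CREATE INDEX idx_orders_customer ON orders(customer_id);
""")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 10, i * 1.5) for i in range(100)])

# EXPLAIN QUERY PLAN shows which index the pinned query actually uses.
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT * FROM orders INDEXED BY idx_orders_customer
    WHERE customer_id = 3
""").fetchall()
print(plan)  # the detail column should mention idx_orders_customer
```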
As a corollary, will we see a recurrence of congestion in the middle as FttH sees increased adoption? It's easy to believe that 10 Gbps ought to be enough for everyone, but history tells us that people will find a way to saturate any unused bandwidth (8K video with crazy bitrates, 1 TB video game installs, etc).
A content delivery network (CDN) or content distribution network is a geographically distributed network of proxy servers and corresponding data centers. CDNs provide high availability and performance ("speed") through geographical distribution relative to end users, and arose in the late 1990s to alleviate the performance bottlenecks of the Internet[1][2] as it was becoming a critical medium. Since then, CDNs have grown to serve a large portion of Internet content, including text, graphics and scripts, downloadable objects (media files, software, and documents), applications (e-commerce, portals), live streaming media, on-demand streaming media, and social media services.
That's why you sandbox. You can mitigate most low-hanging DoS fruits by running your server side hooks in a per-tenant cgroup that limits CPU and memory usage. One tenant per public key for trusted contributors, and one general-purpose tenant shared by all new/unknown contributors.
Exponential search is useful when you're querying a REST API that addresses resources with sequential IDs, and need the last ID, but there's no dedicated endpoint for it:
HEAD /users/1 -> 200 OK
HEAD /users/2 -> 200 OK
HEAD /users/4 -> 200 OK
...
HEAD /users/2048 -> 200 OK
HEAD /users/4096 -> 404 Not Found
And then a binary search between 2048 and 4096 to find the most recent user (and incidentally, the number of users). Great info to have if you're researching competing SaaS companies.
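The probe sequence above is just exponential search followed by binary search. A sketch, where `exists` stands in for the HEAD request (200 -> True, 404 -> False) and the 3000-user service is hypothetical:

```python
# Exponential-then-binary search for the highest existing sequential ID,
# using O(log n) probes instead of n.
def find_last_id(exists):
    if not exists(1):
        return 0
    # Exponential phase: double until the probe overshoots.
    hi = 1
    while exists(hi * 2):
        hi *= 2
    lo, hi = hi, hi * 2  # invariant: exists(lo) is True, exists(hi) is False
    # Binary phase: narrow (lo, hi) down to the last existing ID.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if exists(mid):
            lo = mid
        else:
            hi = mid
    return lo

# Pretend the service has 3000 users.
print(find_last_id(lambda user_id: user_id <= 3000))  # -> 3000
```

In the real version, `exists` would issue the HEAD request and treat 200 as True and 404 as False.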
I'm guessing you can't really do this for users, since the response should be 401 for any user you aren't logged in as? I would argue that even for IDs that don't exist, you should get the same error whether they don't exist or you just aren't authorised to see them. It's been a few years since I worked in web, but I think that's what I would have done; GitHub does something similar for private repos.
You can use a pre-receive hook on a git server to reject pushes that fail compilation. Downside is that it requires admin access on git forges, so you're only able to do this if you self-host.
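A minimal sketch of such a hook, assuming Python as the hook language and a Zig project (`zig build`) as the build step - both arbitrary choices, adapt to your setup:

```python
#!/usr/bin/env python3
# Sketch of a pre-receive hook that rejects pushes failing compilation.
# Git runs the hook in the bare repo and feeds "old-sha new-sha refname"
# lines on stdin; a non-zero exit rejects the whole push.
import subprocess
import sys
import tempfile

def parse_updates(stdin_lines):
    """Parse the 'old new ref' lines git passes on stdin."""
    return [tuple(line.split()) for line in stdin_lines if line.strip()]

def build_ok(new_sha, workdir):
    # Export the pushed tree into a scratch dir and try to compile it.
    subprocess.run(f"git archive {new_sha} | tar -x -C {workdir}",
                   shell=True, check=True)
    return subprocess.run(["zig", "build"], cwd=workdir).returncode == 0

def main(stdin=sys.stdin):
    for old, new, ref in parse_updates(stdin):
        with tempfile.TemporaryDirectory() as tmp:
            if not build_ok(new, tmp):
                print(f"rejected {ref}: {new[:8]} does not compile",
                      file=sys.stderr)
                return 1
    return 0

# In the real hook, end the script with: sys.exit(main())
```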
It can work. Old School RuneScape runs almost entirely on nostalgia, but the community voting system they have for introducing new content keeps the game alive and fresh, even after 20 years.
That could lead to other subtle problems elsewhere though, because it requires synchronizing the seed. If you can't do that, you get problems like comparing offline speedruns where everyone has a different seed: some players could have more luck than others even with the same inputs, which would be unfair. (Though I can't think of anything else at the moment.)
If you synchronize the seed at game start for speedruns, the seed is the same for everyone, and players can again manipulate their luck, so nothing was gained.
If you run a game entirely between colluding parties, cheating speed runners can just hack it to do whatever they want anyway. See the Dream Minecraft thing from several years back. Speed running claims may be cheated in a thousand ways. It's up to the people who care about it to establish and enforce rules.
But if you're running a multiplayer game with random elements and aren't colluding, you don't have to let a malicious party set the RNG seed to whatever they like just because you agree on it at game start. There's any number of simple cryptographic protocols that allow each peer to contribute equally to the RNG state based on having a separate commitment phase. And it's a lot easier to run a quick cryptographic setup than it is to have constant input-driven adjustment.
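The simplest such protocol is commit-reveal. A minimal in-process sketch, where the three "peers" stand in for real networked ones:

```python
# Commit-reveal seed agreement: each peer commits to a random value via
# its hash, then reveals it; the shared seed is the XOR of all values.
# No single peer can steer the seed, because each commits before seeing
# the others' contributions.
import hashlib
import secrets
from functools import reduce

def commit(value: bytes) -> bytes:
    return hashlib.sha256(value).digest()

# Commitment phase: each peer picks a secret and broadcasts only its hash.
contributions = [secrets.token_bytes(32) for _ in range(3)]
commitments = [commit(c) for c in contributions]

# Reveal phase: peers broadcast their secrets; everyone verifies them
# against the earlier commitments before accepting.
assert all(commit(c) == h for c, h in zip(contributions, commitments))

# Shared seed: XOR of all contributions.
seed = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), contributions)
print(seed.hex())
```

A production version would also need to handle a peer refusing to reveal (abort the game, or treat it as a forfeit), but the two-phase structure is the whole idea.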
Typical deterministic game engines will do this: generate the seed once, send it to every machine as part of the initial game state, and also check state across machines on every simulation frame (or periodically) to detect desyncs.
I'd be curious to see what the IPv4/IPv6 breakdown looks like for HTTP/2 and HTTP/3 connections only, which should exclude the vast majority of crawlers.