Hacker News | jstanley's comments

You could still push your position to the edge even if you don't rotate the map.

In theory, yes, but it’s not been an option on any I have used.

In the absence of the insider trader selling you oil futures on the basis of his insider knowledge, what would you do?

You'd buy oil futures at broadly the same price from someone else (maybe a worse price! Because the presence of the insider selling is already driving the price down). So how exactly do you lose?

The only people who lose out are those whose limit orders don't get filled because the insider outbid them. The counterparty benefits from trading with the insider.


This is completely ignorant of market microstructure. For big swaths of the market, most parties interacting with the market do so by trading with a market maker who ensures that everyone's trades clear immediately in exchange for a fee that covers the risk of getting caught offsides. If you're big enough, it makes sense to take that risk in house, but the risk and its very real financial cost remains. In both cases, trades that increase the risk increase the fee. So no, corruption is a tax and everyone pays the tax even though the mechanics have fallen through the rather large cracks in your understanding.

If you think this tax is de minimis, great, glad to hear it, let's put a government tax of similar magnitude in there and resume the peanut butter rations to starving African kids that DOGE cut.


Sometimes trading with people who have better information than you is part of the risk you take on when you run a market-making strategy.

If market makers figure out that a handful of market participants are being fed inside information, they’ll widen the bid/ask, leading to increased transaction costs for everyone.

It also discourages speculators from entering trades if they suspect they’ll be run over by insiders trading on non-public information.

Fewer market participants leads to worse price discovery.

It’s probably bad.


I'd say the entire market loses out on insider trading except for the two parties involved in the insider trade. The insider trades take away part of the profit margin that other good-faith futures providers need to justify the risk they take on by offering the futures. This leads to futures providers needing to raise prices to remain profitable.

Or you could provide the RAW and the JPEG and it would start you off at a point that most closely matches the JPEG?

That's exactly what I'm talking about, how do you imagine that working? Metadata is not compatible by design, because processing pipelines are all subtly different and your result will always look different in your editor. Trying to match some basic parameters with the JPEG is possible and some RAW software can do that, but the result is going to be subtly different for the same reason.

I've never had to write such software but in my imagination there is the sensor data, potentially from several exposures, and some static data about the camera, and a list of edits and parameters that the photo app is using to produce the in-camera JPEG. And I just want a way to intervene in that list of edits and parameters to produce my own result. There must be SOME way to do this otherwise how do I edit raws from my real camera? The starting point for camera raw in my photo editor always looks great if the file came from my camera, and always looks ghastly if it came from my mobile.

Search over all values of all parameters and choose the ones that minimise the mean square error of pixel values.

Obviously that will be slow, so probably do some kind of gradient descent, or perhaps depending on what the parameters are there may be a closed-form solution, I don't know.

Yes the result will be subtly different but it's just a starting point.
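As a sketch of that idea (pure Python, with a toy gain/offset "pipeline" standing in for real RAW processing; all parameter names and the target are illustrative, not any actual camera pipeline):

```python
def mse(a, b):
    """Mean squared error between two equal-length pixel lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def fit_params(render, target, init, step=0.1, sweeps=50):
    """Greedy coordinate search: nudge each parameter up or down, keep any
    move that lowers the MSE against the target, and halve the step size
    whenever a full sweep makes no progress."""
    params = dict(init)
    for _ in range(sweeps):
        improved = False
        for k in list(params):
            base = mse(render(params), target)
            for delta in (step, -step):
                trial = dict(params, **{k: params[k] + delta})
                if mse(render(trial), target) < base:
                    params, improved = trial, True
                    break
        if not improved:
            step /= 2
    return params

# Toy stand-in: the "JPEG" is the sensor data with a gain around mid-grey
# plus an offset; the search recovers gain=1.8, offset=0.1.
raw = [i / 63 for i in range(64)]                       # fake sensor values
target = [1.8 * (v - 0.5) + 0.1 for v in raw]           # fake in-camera JPEG
render = lambda p: [p["gain"] * (v - 0.5) + p["offset"] for v in raw]
fitted = fit_params(render, target, {"gain": 1.0, "offset": 0.0})
```

A real pipeline has far more (and non-linear) parameters, so gradient descent or a per-stage closed-form fit would replace this naive sweep, but the starting-point idea is the same.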


Isn't that what the "auto" button does, and then you can tweak from there?

> How Do You Handle Deployments?

This section misses the one thing I was interested in: how do you avoid downtime in a deployment?

I like to write web applications with Perl and Mojolicious, and a deployment is just "hypnotoad app", and then hypnotoad gracefully starts up new worker processes to handle new requests and lets the other ones exit once they've finished handling their in-flight requests.

When I switched to Docker I found that there was no good way to handle this.


Record the existing container id, rescale the service to 2 instances (hence bringing a second container up), wait for the second one to be healthy, (optional) stop directing traffic to the old container, wait a few seconds, stop the old container, rescale the service back to 1 instance.
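Assuming Docker Compose, those steps can be sketched as a command sequence (service name and drain time are placeholders, and the health check is left as a manual `ps`; a real script would poll it):

```python
def rollout_commands(service="app", drain_seconds=5):
    """Return the shell commands for the zero-downtime rollout steps above,
    in order. Dry run: this builds the commands without executing them."""
    return [
        f"old=$(docker compose ps -q {service})",                   # record the old container id
        f"docker compose up -d --scale {service}=2 --no-recreate",  # bring a second container up
        f"docker compose ps {service}",                             # check that the new one is healthy
        f"sleep {drain_seconds}",                                   # let in-flight requests drain
        "docker stop $old && docker rm $old",                       # retire the old container
        f"docker compose up -d --scale {service}=1 --no-recreate",  # settle back to 1 instance
    ]

for cmd in rollout_commands("web", drain_seconds=3):
    print(cmd)
```

This only removes downtime on the Docker side; you still need a proxy in front that stops routing to the old container before it is stopped.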

Here's a CLI plugin that automates this: https://github.com/wowu/docker-rollout

Another vote for this - we’ve been using it for years without issues.

Blue Green Deployment. There must be a docker container to handle this or at least a bash script.

edit: thanks to next comment for referencing one


What is a "pass"?

A generic name for a collection of things used to gain access to something.

That’s not really helping to explain it, so here are some examples:

Airplane tickets, library membership barcode, sports tickets, loyalty cards for your local coffee shop, conference tickets, etc.

First and foremost, essentially anything with a barcode. The website that this blog post is about allows you to generate your own passes.


Can I use this feature to generate an airplane ticket? :P

I think "create" is the confusing part. It should be "digitize" or something. Either that, or "pass" means something else here.


I literally don't get what this new feature is adding or why it would be part of an iPhone wallet.

If you want to issue tickets is your wallet the most obvious place to do it from? Why would an airline issue tickets from an iPhone?

Or, if this is just for storing tickets issued by other people, why does it benefit from going into the wallet app?


This is for storing tickets issued by other people.

It's handy because it provides an organizational tool. Airplane tickets are in wallet, concert tickets are in wallet. Maybe ferry passes and store discount ids should be too.

And also because you get better results from scanning a regenerated 2d/3d barcode after decoding the original vs scanning a photo of the original.


I agree with what you're saying, of course, simplicity is better, etc.

But the nav on your blog is a terrible example.

Firstly, you don't get to just click on the links to go to where you want to go, you first have to click the three-lines button, even on a desktop with an enormous screen.

And secondly, despite your claims about an "enhanced experience with a modern browser", it seems to work exactly as if there was no enhancement at all? I click the three-lines menu and it takes me to a new page listing the links I can click. The "X" button to "close" the menu navigates me back particularly quickly, but that is all that I can tell that is unusual.

I'm using Firefox 136 on Ubuntu.

And in any event, this is all unnecessary, because you can make a nav by just putting a bunch of links at the top of the page, like HN does.


Why did you choose to have Claude write it in assembly language?

There are big benefits to using a language that has good static analysis with LLMs.


seriously.... we already have a constellation of good deterministic tooling for taking a relatively high concept spec to low level assembly. what does an llm offer in generating optimized asm that rust wouldn't??

Less memory footprint. No reliance on libs. Pure first-person control. No wasted CPU cycles is the target here for me. And if you read the post, the asm set is only for the desktop itself. The tools I use are in Rust. Result is: Laptop now runs at between 5-6W (down from ~9W) [XPS14 latest hw] on Ubuntu 26.04 - giving me around 3.5h extra battery life.

My guess is you're likely to waste more cycles on development time, and on suboptimal algorithms because the implementation is harder, than you would waste on rust-related bloat.

Still a cool project, thanks for sharing.

I have wondered about having LLMs output machine code directly and skipping the compiler/assembler altogether. Then you'd just commit your spec/prompt and run it through the LLM to get your binary.


> Less memory footprint. No reliance on libs.

Rust can do that. You can run a hyper-stripped-down Rust build made for embedded devices, specifically because those devices don't have room for a runtime.


I'm sure I can. The original challenge was more in line of "I wonder if CC can do this now?"

And it apparently can. And very well.

One advantage seems to be that the complete asm file fits easily into CC context window.


> The original challenge was more in line of "I wonder if CC can do this now?"

well, I can respect that for sure


+3.5h extra battery life is a real measurable result! well done.

I know this comment will get ignored by the true believers, and likely pasted directly into Claude by the author in order to "further improve" the code, but here's some small excerpts from the terminal emulator (glass.asm, 19360 lines, 555 KiB):

    cmp dword [rax], 'XAUT'
    jne .rxa_next
    cmp dword [rax+4], 'HORI'
    jne .rxa_next
    cmp word [rax+8], 'TY'
    jne .rxa_next
    cmp byte [rax+10], '='
    jne .rxa_next
    ; Found XAUTHORITY=path
Okay, this is setup code that only runs once at startup - but that would be a reason to optimize it for size and/or readability! REPE CMPSB exists, and while it may not be the fastest, it is certainly the most compact and idiomatic way to compare strings. Or write a subroutine to do it!

This pattern is used everywhere for copying or comparing strings, this was just one example of it.

There's a state variable that's used to keep track of whether the input is text to be displayed or part of a control sequence. It's a full 64 bits, probably not because we need 18 quintillion states? Here's how it is evaluated:

    ; Dispatch based on state
    mov rcx, [vt_state]
    cmp rcx, VT_ESC
    je .vtp_esc
    cmp rcx, VT_CSI
    je .vtp_csi
    cmp rcx, VT_CSI_PARAM
    ...
In total, there are 7 compares + conditional jumps, one after another. Compilers would generate a jump table for this, and a better option in assembly might be to make vt_state a pointer to the label we want to go to. Branch predictors nowadays can handle indirect jumps, and may actually have more trouble with such tightly clustered conditionals as seen in this code.
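The "pointer to the label" idea is ordinary table dispatch. As a high-level sketch (Python, with state names of my own invention mirroring the constants in the listing): the state indexes straight into a handler table instead of being compared against each constant in turn, which is the equivalent of a single `jmp [table + state*8]` in assembly.

```python
# Hypothetical VT parser states mirroring the assembly constants.
VT_NORMAL, VT_ESC, VT_CSI, VT_CSI_PARAM = range(4)

def on_normal(ch):    return f"print {ch}"
def on_esc(ch):       return f"start escape on {ch}"
def on_csi(ch):       return f"start CSI on {ch}"
def on_csi_param(ch): return f"collect param {ch}"

# One table lookup and one indirect call replace the chain of cmp/je pairs.
HANDLERS = (on_normal, on_esc, on_csi, on_csi_param)

def dispatch(vt_state, ch):
    return HANDLERS[vt_state](ch)
```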

This code is on the "slow" path, there's a faster one for 7-bit ASCII outside of control sequences, with a lengthy comment by Claude at the top on how it optimized this. Even this one starts with a bunch of conditionals though:

    cmp qword [vt_state], 0            ; VT_NORMAL == 0
    jne .vtp_loop_slow
    cmp dword [utf8_remaining], 0
    jne .vtp_loop_slow
    cmp byte [pending_wrap], 0
    jne .vtp_loop_slow
These could likely all be condensed into a single test or indirect jump via the state variable, by introducing just a few more states for UTF-8 decoding and wrap. Following this, here's a "useless use of TEST" (the subtraction already set the flags):

    mov rbx, [grid_cols]
    sub rbx, [cursor_col]              ; rbx = cells left on this row
    test rbx, rbx
    jle .vtp_loop_slow                 ; no room (or already past)
This also again shows the compulsive use of 64-bit registers and variables for values that should never be this big. It's not the "natural" data size on x86-64 at all, every such instruction requires an extra prefix byte.

I freely confess that I'm a "Luddite", and was explicitly looking for bad (and obviously so) code, but this took me just a few minutes of scrolling through the nearly 20K lines in this file, so it should be somewhat representative of the whole.


Thanks for the improvement. Highly appreciated.

Maybe I misunderstand you, how is Claude doing commits where you don't use Claude?

That is a very different case to VS Code which is something you can in fact use without Copilot.


That is not what dmitriv claimed. He said this was a bug; the behavior should have been to add it only when AI was involved, which, indeed, is what Claude does by default.

(Neither is fine with me.)


Natural language is a fully general system and can define and describe everything.

You could deterministically process any UML diagram into a prose equivalent.

And in fact you couldn't go the other way (arbitrary prose -> UML), because UML is less powerful than natural language and actually can't express everything that natural language can.


> can define and describe everything.

Can it also fully describe a composition by Bach or a Rembrandt painting? In some weird, overly complex way it probably 'could', but it would be very painful. That's why we pick other forms of expression: to compact and optimise information delivery, and to cut out the noise. So yes, UML cannot describe everything natural language can, but then again, why should it? It was designed as a specific framework for describing relations between objects, no more and no less. Similar for sequence diagrams or other forms of communicating ideas efficiently.


There are also diagram notation languages and LLMs are happy to both consume and produce e.g. Mermaid.

lahfir, I vouched your (currently still dead) comment because it was interesting to me.

I expect the reason it is dead is that it seems LLM-generated (you "quietly" launched it on github? Who says that?).

Also, your comment claims that the tool is cross-platform and implies that it works on Mac, Windows, and Linux, but the graphic on the github README says it only works on Mac.


It looks hybrid human/LLM at best, but definitely possible that it's mostly human, from someone who is earnestly learning how to use "pitch" language. I got the feeling that some parts, like the bullet points, maybe originated from AI-generated documentation/readme's.

My intuition tells me that it could have been AI-generated, but if that's the case then it was heavily edited by a human, and I think anyone who edited it that thoroughly would have changed other things as well. That's why I suspect it's human writing in an imitation of AI "pitch" style, with some (mostly lightly edited) copy/paste of AI bullet points.

Then again, I can't find snippets of this language in the repo, so maybe I'm losing my discernment as LLMs advance (as well as the humans who are learning how to use them).


Hey, thanks for the comment. Yes, it's hybrid. AI wrote it from what I gave as input. If it's better at articulating a message than I am, why not use it, right?

I think this guy is using AI for pretty much everything - he says as much in his GH profile. In fact his photo bears a Gemini watermark, meaning that is AI too.

Wouldn't the opposite be true? That an LLM would use well-known terms for general-purpose writing. I think it's much more likely that a human would half-remember "silent launch" or "stealth launch" and use "quietly" as a substitute.

I feel very strongly that comment wasn't AI generated.

Also, there's a bunch of normal comments that seem to be wrongfully flagged.


> Wouldn't the opposite be true? That an LLM would use well-known terms for general-purpose writing.

You'd think, and yet LLMs do in fact have a particular style, and lots of it is common across all LLMs.


3 fake comments in the thread also

Hi. Mac is GA. Windows and Linux are on the roadmap.

Why is Claude always pointing out or assuming what is done quietly?
