It's possible that use of `contenteditable` and ability to save the file could help but that has a lot of limitations/gotchas, so I'm inclined to agree.
Well, at least the metal part of any Type-C plug will inherently be more fragile due to its hollow design and manufacture by stamping from sheet metal, whereas for Lightning it’s a solid machined part.
But as soon as you get to the chip housing and the rest of the cable, it’s anyone’s game I suppose.
I've been very curious about that too. I wonder if it's actually much better at admitting when it doesn't know something, because it thinks it's a "dumber model". But I haven't played with this at all myself.
I was hoping someone else had written about it here.
As far as I know, there are three different takes on git being worked on that look interesting:
- JJ
- GitButler
- Zed
Zed's version system doesn't have much public info yet, but they wanted to build a database for storing code versions for AI agents. I'm not sure if this is still the direction, and I'm a bit skeptical, but I'm interested to see what they come up with.
Even though git works well enough, I'm certain there will be another preferred tool at some point in the future. Some aspects of git are simply not intuitive, and the CLI itself is not up to the standard of today's DX.
This seems awesome. It seems to address many of my armchair complaints about both Go (inexpressive) and Rust (bloated/complex).
I'm curious what compilation times are like. Are there theoretical reasons it'd be an order of magnitude slower than Go? I assume it does much less than the Rust compiler...
Relatedly, I'd be curious to see which things from Rust this doesn't include, ideally in the docs. E.g. I assume borrow checking, various data types, and maybe async are intentionally omitted?
Problem: DOM-based text measurement (getBoundingClientRect, offsetHeight)
forces synchronous layout reflow. When components independently measure text,
each measurement triggers a reflow of the entire document. This creates
read/write interleaving that can cost 30ms+ per frame for 500 text blocks.
Solution: two-phase measurement centered around canvas measureText.
prepare(text, font) — segments text via Intl.Segmenter, measures each word
via canvas, caches widths, and does one cached DOM calibration read per
font when emoji correction is needed. Call once when text first appears.
layout(prepared, maxWidth, lineHeight) — walks cached word widths with pure
arithmetic to count lines and compute height. Call on every resize.
~0.0002ms per text.
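The layout phase described above can be sketched as pure arithmetic over cached word widths. This is a minimal illustration, not the actual implementation: the `prepared` shape (`spaceWidth`, `wordWidths`) is a hypothetical structure standing in for whatever `prepare()` caches, and the canvas/`Intl.Segmenter` measurement step is omitted since it needs a browser.

```javascript
// Sketch of the layout() phase: greedy word wrapping with no DOM reads.
// `prepared` is assumed to look like { spaceWidth, wordWidths } — field
// names are hypothetical, standing in for prepare()'s cached output.
function layout(prepared, maxWidth, lineHeight) {
  const { spaceWidth, wordWidths } = prepared;
  let lines = 1;
  let lineWidth = 0;
  for (const w of wordWidths) {
    // Width if this word joins the current line (space needed unless empty).
    const needed = lineWidth === 0 ? w : lineWidth + spaceWidth + w;
    if (needed <= maxWidth) {
      lineWidth = needed;
    } else {
      lines += 1;     // wrap: start a new line holding just this word
      lineWidth = w;
    }
  }
  return { lines, height: lines * lineHeight };
}

// Example: three words of width 50, 30, 40 with a 4px space in a 90px box
// wrap onto two lines.
const result = layout({ spaceWidth: 4, wordWidths: [50, 30, 40] }, 90, 20);
```

Because this touches no DOM APIs, it can run on every resize without triggering reflow, which is where the claimed per-call cost comes from.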