I noticed borkdude posted this thread *and* he is listed as a contributor for this release.
For the longest time, I recall the opposition to async/await support being twofold:
1. adding support would require deep changes across the CLJS compiler (thheller, creator of shadow-cljs, once tried and concluded this)
2. macros from libraries like Promesa provided similar convenience
There were some other arguments brought up at the time (e.g. just use core.async, expression-oriented languages aren't a good fit with async/await, etc.), but they were usually specific to one person rather than something you'd see repeated in forums.
In the Clojurians Slack, borkdude once stated he wasn't convinced it'd be impractical to add support. It seems that he eventually took the time and made it happen. Extremely thankful for that.
I believe borkdude showed it was possible by first implementing async/await in Squint, his alternative implementation of ClojureScript, and then bringing those learnings to the core CLJS compiler.
MPEG-2 TS is a container. H.264 is a coding specification. They are totally different things.
One can find MPEG-2 TS in surprising places (see: DOCSIS encapsulating Ethernet frames into TS packets).
If I had to guess why MPEG-2 TS, it'd probably be for the fact that it's a well-supported streaming format in both hardware and software. If you tried using QuickTime or MPEG-4 containers, you'd have to rely on hacks like ensuring the moov atom precedes mdat.
Matroska may be worth considering (especially the subset used by WebM to make it stream-friendly and quicker to seek), but no idea how widespread hardware support is for (de)muxing that.
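Part of what makes TS so hardware-friendly is its fixed framing: every packet is exactly 188 bytes, starts with a sync byte, and carries a 13-bit PID, so a demuxer can resynchronize anywhere in the stream. A minimal sketch of parsing that header (the packet below is synthetic, with an arbitrary illustrative PID):

```python
SYNC_BYTE = 0x47
PACKET_SIZE = 188  # fixed MPEG-2 TS packet size

def parse_ts_header(packet: bytes) -> dict:
    """Parse the 4-byte MPEG-2 TS packet header (ISO/IEC 13818-1)."""
    if len(packet) != PACKET_SIZE or packet[0] != SYNC_BYTE:
        raise ValueError("not a valid 188-byte TS packet")
    return {
        # Payload Unit Start Indicator: set when a new PES packet/section begins
        "pusi": bool(packet[1] & 0x40),
        # 13-bit Packet Identifier, split across bytes 1-2
        "pid": ((packet[1] & 0x1F) << 8) | packet[2],
        # 4-bit continuity counter, used to detect packet loss
        "continuity": packet[3] & 0x0F,
    }

# A synthetic packet carrying PID 0x100 with continuity counter 7
pkt = bytes([0x47, 0x41, 0x00, 0x07]) + bytes(184)
print(parse_ts_header(pkt))  # {'pusi': True, 'pid': 256, 'continuity': 7}
```

That constant-size, self-synchronizing layout is also why TS slots so neatly into things like DOCSIS framing.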
Before ProRes, we captured HD content as 100 Mbps MPEG-2 video with PCM audio wrapped in a SMPTE 302M stream, muxed together into an MP2TS wrapper. The 302M made it even more difficult, as not all MP2TS tools could handle it correctly, and some would not allow custom program stream IDs, so the output had to be remuxed by other tools as a post process to get custom PIDs.
But seeing how many uses people have come up with for MP2TS just shows its flexibility and resilience.
I don't think MKV is better suited for low-latency streaming production (as opposed to consumption) than MP4. That's really more of the domain of MPEG-TS, RTP etc., and presumably this is a reliable transport alternative to that.
HLS, MPEG-DASH etc. do successfully work around much of that, but they're really mostly that – workarounds to present stream-like semantics over an HTTP + CDN based delivery mechanism.
There are significant gaps on the production/distribution side of things, i.e., everything that happens before the CDN (and for very low latency even beyond), and I suppose this is an attempt at filling those.
Sure, but there are a lot of media appliances that don't really have upgradeable software. I suppose it can be pretty useful to just (un)wrap MoQ and feed the result over a local interface into something that just expects M2TS, and vice versa.
For what it's worth, Kine (software that k3s uses to replace etcd with SQL databases) implements etcd watches on SQLite through polling[1], the reason being that SQLite does not offer NOTIFY/LISTEN like MySQL and Postgres do. Ironically, Honkey attempts to implement NOTIFY/LISTEN through polling.
k3s has been running on my home server for about three years now (using the default SQLite backend), and there doesn't seem to be excessive CPU usage despite dozens of watches existing in the simulated etcd. Of course, this doesn't say much about Honker, but it's nonetheless worth pointing out that sometimes the choice of database forces one towards a certain design.
With SQLite, you're basically funneled towards a single-writer / single-process design anyway ... in which case why not use a more traditional condvar + mutex rather than polling?
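In a single-process design, the condvar approach is straightforward: writers bump a revision and notify, and watchers block until the revision moves, with no polling interval at all. A minimal sketch (hypothetical structure, not Kine's or any real project's implementation):

```python
import threading

class WatchableStore:
    """Toy single-process KV store whose watchers are woken by a
    condition variable on write, instead of polling the database."""

    def __init__(self):
        self._cond = threading.Condition()
        self._revision = 0
        self._data = {}

    def put(self, key, value):
        with self._cond:
            self._revision += 1
            self._data[key] = (value, self._revision)
            self._cond.notify_all()  # wake every blocked watcher immediately

    def wait_for_change(self, since_revision, timeout=None):
        """Block until any write past `since_revision`; no polling loop."""
        with self._cond:
            self._cond.wait_for(lambda: self._revision > since_revision,
                                timeout=timeout)
            return self._revision

store = WatchableStore()
writer = threading.Thread(target=lambda: store.put("a", 1))
writer.start()
rev = store.wait_for_change(0)  # returns as soon as the write lands
writer.join()
print(rev)  # 1
```

`wait_for` re-checks the predicate under the lock, so a write that lands before the watcher starts waiting is not missed. The trade-off is that this only works while everything shares one process, which is exactly the constraint SQLite pushes you toward anyway.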
Most of the rail has to get around mountainous, uneven terrain subject to earthquakes, strong winds, and heavy rain. California should be able to build rail parallel to the I-5, through long, flat terrain without extreme weather or strong earthquakes. The problem seems to be a political one, not an engineering one. In fact, if the Interstate Highway System did not already exist, I doubt the U.S. today would be able to accept and complete it.
> one long line with a few branches
I currently live in Japan, and that does not really match what I've observed. There are three distinct railway companies in my area (JR, Tokyu, Yokohama Municipal Subway), each with their own dedicated rail, trains, power supply, etc.
The situation is more like "a disjoint union of graphs, where some of the graphs are connected".
At my current job, I sometimes set up a Nix shell with the GitHub CLI, since that lets Claude Code associate a feature branch with a pull request. The LLM can then retrieve the PR description, workflow results, review comments, etc.
Also, I believe GitHub Actions cache cannot be bulk deleted outside of the CLI. The first time I [hesitantly] used the gh CLI was to empty GitHub Actions cache. At the time it wasn't possible with the REST API or web interface.
That's right: its tokenization and fragment rules use fairly simple heuristics that assume whitespace-delimited words plus English-style punctuation. Proper CJK support would require language-specific tokenization and morphological parsing. Correcting rules like "≤4 words = dramatic fragment" would be difficult. The more complex rules already require LLM roundtrips, so supporting all languages in one pass would need to rely on the LLM alone, I imagine.
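To illustrate why the whitespace assumption breaks down (a toy sketch, not the tool's actual tokenizer): Japanese doesn't put spaces between words, so naive splitting sees an entire sentence as a single "word".

```python
def naive_word_count(text: str) -> int:
    # Whitespace tokenization: fine for English, useless for Japanese.
    return len(text.split())

english = "This changes everything."
japanese = "これはすべてを変えます。"  # same sentence, no spaces between words

print(naive_word_count(english))   # 3
print(naive_word_count(japanese))  # 1
```

Under a "≤4 words = dramatic fragment" heuristic, virtually every Japanese sentence would be flagged, which is why proper support needs a morphological analyzer rather than a threshold tweak.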
Which brings up an interesting point - what do these LLM clichés look like in Japanese?
> what do these LLM clichés look like in Japanese?
Besides text reading like a machine translation, the tell-tale signs often involve things like:
- itemized lists (I know, it's ironic that I'm using them here)
- frequent use of conjunctions
- use of demonstratives that feels redundant
- full-width colons, especially in titles
- subheadings that always end in abstract nouns
- bold text, especially at the beginning of a line
The demonstrative bit may be hard to express, but to give you an idea: when communicating in Japanese, words that can be understood from context may be omitted. Explicitly writing out words understood from context can sometimes make a sentence sound redundant.
Before LLMs were widespread, SEO spam on the Japanese net tended to be affiliate sites with predictable, template paragraphs. I get reminded of those sites whenever GPT starts a response with 「結論から言うと、〇〇」 (roughly, "To start with the conclusion: ..."), since that's exactly how those affiliate sites wrote back in the day.
Yeah, the title made me think the author found a bug in the Lean kernel, thus making an invalid proof pass Lean's checks. The article instead uncovers bugs in the Lean runtime and lean-zip; these are less damning than a bug in the kernel, which must be trusted to be correct, or else you can't trust any proof in Lean.
When the Lean runtime has bugs, all Lean applications using the Lean runtime also have those bugs. I can't understand people trying to make a distinction here. Is your intent to have a bug-free application, or just to show the Lean proof kernel is solid? The latter is only useful to Lean developers; end users should only care about the former!
The intent is to have a proof of some proposition. The Lean runtime crashing doesn't stop the lean-zip developers from formally modelling zlib and proving certain correctness statements under this model. On the other hand, the Lean kernel having a bug would mean we may discover entire classes of proofs that were just wrong; if those statements were used as corollaries/lemmas/etc. for other proofs, then we'd be in a lot of trouble.
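The kernel-vs-runtime split is visible in Lean itself. As a rough illustration: the `decide` tactic produces a proof term that the trusted kernel re-checks, while `native_decide` compiles the decision procedure and runs it natively, extending the trusted base to the compiler and runtime.

```lean
-- `decide` builds a proof term that the trusted kernel re-checks.
theorem small : 2 + 2 = 4 := by decide

-- `native_decide` instead compiles and runs the check natively,
-- so a compiler/runtime bug here could let a false statement through.
theorem large : 104729 * 2 = 209458 := by native_decide
```

A runtime bug can therefore taint `native_decide`-style proofs, but a proof checked purely by the kernel survives it.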
When I see a title transitioning from "Lean said this proof is okay" to "I found a bug in Lean", I'm intuitively going to think the author just found a soundness (or consistency) issue in Lean.
There are no Lean applications other than Lean. This is an important point most of the comments are missing. Lean is for proving math. Yes, you can use it for other things; but no, no one is.
Still good to have found, but drawing conclusions past “someone could cheat at proving the continuum hypothesis” isn’t really warranted.
> What's preventing Japanese engineers from doing the same?
The fact they don't really need it in their life (or job). English is definitely necessary if you work service jobs in Tokyo (to deal with tourists), but not much anywhere else.
Japanese is one of a handful of languages in which one can complete a postdoc entirely within the language. Many languages are not like this: in the Philippines, for example, STEM subjects are almost entirely taught in English, since Tagalog simply doesn't have words to describe most of the concepts. The result is something like 90% of the coursework being in English, with random Tagalog words mixed in. The concept is called "Taglish" if I recall correctly.
This is unnecessary in countries like Japan, China, and South Korea. If you're applying to a graduate school in Japan (or China, or Korea), expecting to receive education in English is actually the edge case, not the norm.
Also, at least in my company, there is an interesting trend where people are deciding learning English isn't really necessary since AI translation has gotten "good enough" for most use cases.
> The result is something like 90% of the coursework being in English, with random Tagalog words mixed in. The concept is called "Taglish" if I recall correctly.
Spoken Tagalog has always impressed me (though I can't really say I know any) with how freely English is mixed in, in varying ratios, and how well it's pronounced, such that you notice the shift in phonology. I'm quite sure there's deliberate code-switching going on.
> people are deciding learning English isn't really necessary since AI translation has gotten "good enough" for most use cases.
It's honestly really impressive. Although I'm told it can occasionally glitch and treat the text as a prompt instead of just translating it.
> The fact they don't really need it in their life (or job). English is definitely necessary if you work service jobs in Tokyo (to deal with tourists), but not much anywhere else.
But the linked article seems to imply the opposite. I mean, working with an English-speaking PM sure sounds like the language is one of the job's core competencies.
I wonder how the figures look for countries outside of the United States.
For what it's worth, I ended up getting a tech job in Japan instead. Ironically, the requirements at U.S. startups are much higher, and U.S. startups fit the stereotype of Japanese work culture more than Japanese companies do nowadays.