> But as a standard library abstraction, it’s too opinionated. It categorically excludes cases where sources form a tree: a validation error with multiple field failures, a timeout with partial results. These scenarios exist, and the standard trait offers no way to represent them.
This seems akin to complaining that the CPU core has only one instruction pointer. There is nothing preventing a struct implementing `Error` from aggregating other errors (such as validation results) and still exposing them via the `Error` trait. The fact of the matter is that the call stack is linear, so the interior node in the tree the author wants still needs to provide the aggregate error reporting that reflects the call stack that was lost with the various returns. Nothing about that error type implementing `Error` prevents it from also implementing another error reporting trait that reflects the aggregate errors in all of the underlying richness with which they were collected.
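To make the point concrete, here is a minimal sketch (all names hypothetical) of a struct that implements the standard `Error` trait while also aggregating multiple field failures: `source()` exposes the linear chain, and a separate method exposes the full tree.

```rust
use std::error::Error;
use std::fmt;

// A single field failure: a leaf in the error tree.
#[derive(Debug)]
struct FieldError {
    field: String,
    message: String,
}

impl fmt::Display for FieldError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}: {}", self.field, self.message)
    }
}

impl Error for FieldError {}

// The interior node: aggregates many failures, yet still implements Error.
#[derive(Debug)]
struct ValidationError {
    failures: Vec<FieldError>,
}

impl fmt::Display for ValidationError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "validation failed ({} field(s))", self.failures.len())
    }
}

impl Error for ValidationError {
    // source() can only surface one cause, mirroring the linear call stack...
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        self.failures.first().map(|e| e as &(dyn Error + 'static))
    }
}

impl ValidationError {
    // ...but nothing stops the type from exposing the full tree via its own API.
    fn failures(&self) -> impl Iterator<Item = &FieldError> {
        self.failures.iter()
    }
}

fn main() {
    let err = ValidationError {
        failures: vec![
            FieldError { field: "email".into(), message: "missing @".into() },
            FieldError { field: "age".into(), message: "not a number".into() },
        ],
    };
    // The aggregate view preserves all the richness...
    assert_eq!(err.failures().count(), 2);
    // ...while the standard trait still works for generic consumers.
    println!("{}", err);
}
```

A richer design might define a separate trait (say, `AggregateError`) that any tree-shaped error type implements alongside `Error`, which is exactly the kind of layering the standard trait leaves room for.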
My understanding of the claim is that the mainframe data is stored in a high fidelity format but some records simply lack a full year. At the time social security was instituted it was common for people not to know their own age precisely. Beyond memory issues arising from old age, most people just didn't have a pressing reason to track it. So social security benefits were granted to people who seemed old enough even though they didn't have a birth certificate or similar. The claim, as I've heard it, is that the DOGE dolts transferred this data to a more modern system where they blindly passed incomplete or placeholder values into an ISO 8601 library that uses the 2004 standard's reference date of 1875-05-20 by default.
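A hypothetical sketch of the failure mode being described: a parser that fills in missing date components with the ISO 8601:2004 reference date (1875-05-20) rather than rejecting the record. The function name and signature are illustrative, not from any real library.

```rust
// Illustrative only: how a lenient parser might "complete" a partial date
// by defaulting missing components to the ISO 8601:2004 reference date.
fn complete_date(year: Option<i32>, month: Option<u32>, day: Option<u32>) -> (i32, u32, u32) {
    (year.unwrap_or(1875), month.unwrap_or(5), day.unwrap_or(20))
}

fn main() {
    // A record with no birth year at all collapses to 1875,
    // which then surfaces downstream as a 150-year-old beneficiary.
    assert_eq!(complete_date(None, None, None), (1875, 5, 20));
    // Even a record with a known month and day still reports year 1875.
    assert_eq!(complete_date(None, Some(3), Some(1)), (1875, 3, 1));
}
```

The safer design, of course, is to keep the "year unknown" state explicit (an `Option` all the way through) instead of silently substituting a sentinel.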
Unfortunately, paying for these services to avoid ads will never work. It was first promised by cable TV when coaxial was scaled out around the country: you paid for TV in part to not have ads. That worked great until the advertisers increased their bid. It was tried when VHS kicked off, but eventually even tapes rented from Blockbuster carried ads once the advertisers increased their bid. And now it is happening to streaming services. For over a decade I paid Netflix specifically to avoid ads, but as more people do that it decreases the supply of passive attention, which prompts advertisers to increase their bid again, and now it's almost impossible to keep paying to avoid ads. Now I have to pay a fee and watch ads anyway. I would gladly pay YouTube to avoid watching ads, but it just won't work: they will take my money each month, and then, after I've paid them consistently for a long time, they will push ads at me anyway. We're well beyond "fool me twice" territory.
I've been using YouTube Premium to not see ads for years now. It's great and apparently the video makers earn more too. I don't love Google's domination and some of their practices, but this is pretty reasonable.
Agreed, but the subscription is generally month to month, so I take advantage until that happens and then cancel, like I did with the other crummy streaming services that have done this (Netflix, Prime, etc).
That said, while I find those services pretty scummy for what they've done, I've fled back to spending a lot more time with books. There's plenty of them to read before I die and it's unlikely they'll be similarly molested.
One of the nice things about TV/movies that you don't get with books is a "shared experience": you can't read a book with your girlfriend or your family, but sitting on the couch with your girlfriend and watching a movie is totally normal and enjoyable.
Both TST and Sidebery are leagues ahead of tab groups IMHO. Frankly sidebar tabs just work better for 16:9 aspect ratio because with a maximized window most sites don't make good use of 1/4 of the width anyway.
When I initially read the htmx documentation I was confused because it kept talking about a hypermedia client. The context clues suggested they were referring to htmx but my brain kept saying "isn't the browser the hypermedia client?" Eventually it sank in that htmx is an extension of the hypermedia client. When I first tried to use htmx I experienced a lot of discomfort regarding areas where htmx feels non-standard, such as redirects in the hx- headers on a 200 response. Once I understood that htmx is explicitly trying to move the boundary of the hypermedia client, a lot of that discomfort melted away.
Yes htmx is an augmentation of the browser, specifically through enhancing HTML by way of JS. The idea is that JS frameworks became popular due to the lack of additional hypermedia controls which are the basis for how agents (users) interact with websites through hypermedia clients (browsers).
If you're going to get that picky (and please be aware I'm only doing this for the sake of the argument) media can never be hypermedia in the absence of the client. HTML opened in notepad is just text. Cat GIFs, rendered in the correct client, would absolutely be hypermedia (you could inline link data as QR codes, if you felt like being perverse).
Hypermedia starts with the client, not with the file format.
I agree that a hypermedia can't act properly within the uniform interface constraint without a hypermedia client; that is, you can't have a hypermedia system without a proper hypermedia client:
On the other hand, there is a real difference between plain text and HTML (or HXML, don't shoot!) which is a subset of text with additional concepts layered on top of it. This is akin to how JSON (or XML) is not hypermedia, but can be used to create hypermedia such as Siren or HXML.
So I still think it makes sense to discuss whether a media is or is not hypermedia without reference to the client, whereas it doesn't make sense to claim it is being used as hypermedia unless it is being consumed by a properly written hypermedia client. To make my thinking concrete, I believe Siren would continue to be hypermedia even if it wasn't being consumed properly by a client, but then you also could not describe that pairing as a hypermedia system. (This is one reason I focus on the systemic nature of hypermedia, rather than solely on hypermedia formats.)
Semantic nitpicking perhaps, but then hypermedia discussions appear to tend to invite this sort of thing.
> HTML (or HXML, don't shoot!) which is a subset of text with additional concepts layered on top of it. This is akin to how JSON (or XML) is not hypermedia
So, HTML is different from plain text because it "has concepts layered on top of text", whereas JSON is not hypermedia despite "having concepts layered on top of text". And the only reason is because you said so.
> So I still think it makes sense to discuss if a media is or is not hypermedia without reference to the client
Then JSON is just as much hypermedia as HTML. Both are structured text unusable without a specific client to display them or work with them.
> Semantic nitpicking perhaps, but then hypermedia discussions appear to tend to invite this sort of thing.
They only invite them because of your insistence on calling only HTML the "natural hypermedia" etc.
No, the reason is because HTML qua HTML has hypermedia controls and JSON qua JSON does not. Recall that, before I pointed out the widely used and accepted definition of hypermedia controls, and in particular that links and forms are hypermedia controls, you did not understand that concept, so you might spend some time quietly reflecting on that idea. It may help clarify things for you.
> Then JSON is just as much hypermedia as HTML. Both are structured text unusable without a specific client to display them or work with them.
As I have said and written previously (https://htmx.org/essays/hypermedia-clients/, https://hypermedia.systems/hypermedia-components/) I agree that a hypermedia client is necessary for a properly functioning hypermedia system that adheres to the uniform interface. However, I think that there is a good argument that Siren, for example, is hypermedia, even if it isn't being consumed correctly, just as I think HTML is hypermedia, even if someone is screen scraping it (i.e. not using a hypermedia client to consume it).
I don't think you can call those uses a hypermedia system, but I also don't think that changes the fact that the underlying formats, Siren & HTML, are hypermedia, due to the fact that they have hypermedia controls. That might be a subtle distinction, but I think it is a valid one. Again, perhaps as you reflect more on this concept, new to you, of hypermedia controls, the distinction will become easier to understand.
> They only invite them because of your insistence on calling only HTML the "natural hypermedia" etc.
I'm very sorry that you feel that way.
I would call HTML, "a natural hypermedia", rather than "the natural hypermedia". I would also call HXML & Siren natural hypermedia, due to the presence of hypermedia controls (a concept new to you) in their specifications.
Yeah, I went overboard with the example. The issue is that HTMX tries to take over the concept of hypermedia as if it means only HTMX and whatever HTMX is doing :)
Htmx posits that current browsers aren't "truly" hypermedia clients, since only anchor tags and forms can initiate GET/POST requests. It is more of a tech demo showing what a client in which ANY tag can issue requests would look like.
That's why whether it is a library/framework is beside the point. The author posits that these features should be in the spec, and tries as closely as possible to show what something might look like if we had them in the spec.
> The author posits that these features should be in the spec
Does he? The author pretends that his library is what hypertext and hypermedia are as envisioned by Tim Berners-Lee and Roy Fielding, and that his approach is the only true representation of both. And that's about it. Nothing about "this should be in the spec".
Um, yes friend, that’s exactly what it’s trying to be. Carson has said numerous times that in an ideal world, the html spec would evolve to the point that htmx becomes redundant.
It’s not about htmx or any library/framework - it’s about extending html.
If that doesn’t convince you, then I’ve got nothing and suggest we both just go and enjoy some lazer horse/buffalo/pickle memes in the htmx twitter account
> The author pretends that his library is what hypertext and hypermedia are as envisioned by Tim Berners-Lee and Roy Fielding, and that his approach is the only true representation of both.
Of course you're not. And I already pointed it out to you elsewhere. Your entire writing and marketing revolves around one idea, and one idea only: HTML is "natural hypermedia", and everything else is not.
OK, this is just completely unreasonable of you. HTML is a natural hypermedia in that it has native hypermedia controls. JSON & XML are not natural hypermedia because they do not, however hypermedia controls can be added on top of them, as in the case of HXML/hyperview, which, again I include in my book on hypermedia systems.
There are many other hypermedias, such as Siren, which uses JSON as a base, and I have never claimed otherwise. Mike Amundsen, perhaps the world's foremost expert on hypermedia, wrote the foreword to my book, Hypermedia Systems, and found nothing objectionable and much worthwhile in it.
I hate to be rude but you didn't understand, or refused to acknowledge, the basic meaning and usage of the term 'hypermedia control' until I cited a W3C document using it. While I certainly understand people can dislike the conceptual basis of htmx, its admittedly idiosyncratic implementation or the way we talk about it, at this point I have tried to engage you multiple times in good faith here and have been rewarded with baseless accusations of things I haven't said and don't believe.
At this point, to be an honest person, you need to apologize for misrepresenting what I am saying multiple times to other people. It is dishonest and it makes you a liar, over something as dumb as a technical disagreement.
> I hate to be rude but you didn't understand, or refused to acknowledge, the basic meaning and usage of the term 'hypermedia control' until I cited a W3C document using it.
Just because you were correct in one small detail (citing a 2019 standard retrofitting definitions for use in RDF etc.) doesn't make you correct in the grand scheme of things.
> with baseless accusations of things I haven't said and don't believe.
I literally quoted your own words at you.
> you need to apologize for misrepresenting what I am saying multiple times to other people.
I will not apologize for things that I even quoted from your own writing and words.
the presence of hypermedia controls is a defining characteristic of a hypermedia format
> I literally quoted your own words at you.
You took an essay I wrote in which I defined the term HDA specifically to contrast with the term SPA in the context of web development and spun that into an imagined philosophy where HTML is the only hypermedia in the world. You persisted in this after I pointed out that I included HXML in my book on hypermedia, and gave a clear definition of what I consider the defining characteristics of hypermedia & clarified specific examples of other formats that are hypermedia.
You have confused "X is A" with "Only X is A" and then, when large gaps in your understanding of hypermedia have been brought to your attention, you have dismissed them as small details.
> I will not apologize
I did not expect you to.
At this point I think I have taken goodwill as far as it can go. I encourage any other readers who have made it to this point in this hellthread to simply read my essays & perhaps my book, and judge them on their own merits:
At this point you are very loudly and publicly grinding your axe to the point that you’re telling someone to their face that they don’t understand their own viewpoint. Putting aside briefly the insanity and futility of that, it makes for a bad experience for literally everyone else.
I feel the same way. Legacy vimscript certainly had some warts but vim9script has been much nicer IME than lua. Elsewhere I've likened vim9script to typescript 1.0. It has just enough types to be useful but not enough to let you go down a rabbit hole. That's sort of the sweet spot when scripting another application.
Specific plugins I use all the time: bufexplorer (though I've mostly rewritten it), ctrlp, colorizer, syntastic, vim-commentary, vim-projectroot, vim-surround, zeavim (Zeal integration, similar to Dash on macos).
I don't disagree with anything you've said but I would personally take this line of argument a step further. I think Bram learned from the neovim fork. Neovim bolstered the existing lua support by reducing the impedance mismatch between the lua language and the vim host. A lot of people enjoy writing lua more than vimscript but the value-add hasn't proved compelling enough for people who didn't mind vimscript. The official neovim repo still has twice as much vimscript code as it has lua code. Even if you claim vimscript is 2x as verbose that would still leave them on equal footing within the project that is putting lua forth as an equal competitor.
My impression of vim9script is that Bram had a list of goals (clearly outlined in the docs) and saw the typescript project as a model to follow. The docs themselves mention typescript as a source of inspiration for some features. The typescript project had a similar set of goals and managed to gain massive adoption both among people who had previously disliked javascript (due to the improved semantics) and people who had previously liked javascript (due to the speed improvements, transpiler ergonomics, etc).
> The official neovim repo still has twice as much vimscript code as it has lua code
Much of that is "just" the runtime files (things that define language syntax, etc), which are x-compatible (broadly) between nvim and vim. Seems it would be foolhardy to just re-write all that in lua, "just because".
This is all correct. To quote from the official description of vim9script:
Vim9 script and legacy Vim script can be mixed. There is no requirement to
rewrite old scripts, they keep working as before. You may want to use a few
:def functions for code that needs to be fast.