Humans will not regurgitate longer segments of code verbatim. Even if we wanted to, we couldn’t, because our memory doesn’t work that way. An LLM, on the other hand, totally can, and there’s nothing you can do to prevent it.
LLMs can, but do they? Is there any evidence that they spit out a piece of code verbatim without being explicitly prompted to do so? In NYT v. OpenAI, for example, the NYT intentionally crafted prompts to circumvent OpenAI's guardrails and get the model to reproduce NYT articles.
If the censoring is at the DNS level, can the admin please replace the domain name in the URL with the IP address to which it should resolve? Thank you.
Your country's broken internet is your problem. If your DNS queries are being censored, change the DNS resolver on your client side. If you still get intercepted, look into DoH (DNS over HTTPS).
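For reference, a DoH lookup is just an HTTPS request, so you can sanity-check it yourself. A minimal sketch using Cloudflare's JSON endpoint (any DoH provider with a JSON API works the same way), run from a browser console or an ES module:

```js
// Sketch only: resolve a hostname over DoH, bypassing the system resolver
// entirely. Cloudflare's JSON endpoint is used here; other providers
// (e.g. https://dns.google/resolve) work the same way.
const res = await fetch(
  'https://cloudflare-dns.com/dns-query?name=example.com&type=A',
  { headers: { accept: 'application/dns-json' } }
);
const data = await res.json();
console.log(data.Answer?.map(a => a.data)); // the resolved IP addresses
```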
It’s actually interesting that Carlsen (likely the best classical player of all time) hasn’t overfit to classical chess to the point where it hurts his ability to play other variants.
Sure, the market chose Markdown, but this simply led me to the conclusion that the market isn’t worth following. Of course the mismatch creates some friction, but the benefits of org-mode, for me personally, easily outweigh that.
I think it is really just the difference between chemically refining something and electrically refining something.
Raw AC comes in, then gets stepped down, filtered, converted into DC rails, gated, timed, and pulsed. That’s already an industrial refinement process. The "crude" incoming power is shaped into the precise, stable forms that CPUs, GPUs, RAM, storage, and networking can actually use.
Then those stable voltages get flipped billions of times per second into ordered states, which become instructions, models, inferences, and other high-value "product."
It sure seems like a series of processes for refining something.
What problem is this trying to solve, and does it actually succeed at solving it? I’m struggling to see the appeal, given that the JS still needs to model the internal structure of the template in order to fill the slots.
The Shadow DOM can auto-fill the slots when a web component has slot fillers in the light DOM (the component's regular children). You still need JS to create the shadow root, but in that case your JS might be minimal and not need to model the interior structure.
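A minimal sketch of what that looks like (the element and class names here are made up for illustration): the JS only attaches the shadow root, and the browser projects the component's children into the slots on its own.

```js
// The JS attaches the shadow root; the browser fills the <slot>s from the
// component's own light-DOM children with no further scripting.
class UserCard extends HTMLElement {
  constructor() {
    super();
    const shadow = this.attachShadow({ mode: 'open' });
    shadow.innerHTML = `
      <h2><slot name="title">(no title)</slot></h2>
      <p><slot></slot></p>
    `;
  }
}
customElements.define('user-card', UserCard);

// Usage in the main document -- the children are slotted automatically:
// <user-card>
//   <span slot="title">Ada Lovelace</span>
//   Wrote the first published algorithm.
// </user-card>
```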
But the big problem that `template` tries to solve is building DOM fragments that are parsed but not "live" in the open document. Before the template tag there was no good way to do that other than JS and createElement/createElementNS, and that has always been slower than the browser's well-optimized HTML parser.
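For example (the template id and markup below are hypothetical), the content is parsed up front by the HTML parser but stays inert until you clone it into the document:

```js
// Assumes a template somewhere in the page like:
// <template id="row-tpl"><tr><td class="name"></td></tr></template>
// The template's content is parsed but inert (not rendered, scripts don't
// run) until it's cloned and inserted.
const tpl = document.getElementById('row-tpl');
const row = tpl.content.cloneNode(true);        // deep-clone the inert fragment
row.querySelector('.name').textContent = 'Ada'; // fill it in before it goes live
document.querySelector('tbody').appendChild(row);
```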
Also, the slot tag does solve a minor problem: it's the first tag whose out-of-the-box (browser default CSS) behavior is `display: contents;`. It's obviously not a huge lift over the CSS one-liner, but there are still some uses for it even outside of templates.
Isn't this exactly the point of this model? No need to memorize everything (which is what makes transformers expensive), just keep the relevant info. SSMs are essentially recurrent models.
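Roughly, the recurrence looks like this (a toy scalar sketch, not any particular SSM): the state is fixed-size, so the model has to decide at each step what to keep.

```js
// Toy scalar sketch: each step folds the input into a fixed-size state,
// so memory doesn't grow with sequence length the way a transformer's
// KV cache does.
function ssmStep(state, x, A = 0.9, B = 0.1, C = 1.0) {
  const next = A * state + B * x; // history compressed into the state
  return [next, C * next];        // new state and the output read off it
}

let state = 0;
for (const x of [0.5, -1.2, 0.3]) {
  let y;
  [state, y] = ssmStep(state, x);
  console.log(y);
}
// Whatever wasn't kept in `state` is gone; there's no attending back
// over the full input the way a transformer can.
```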
You can't always know what will be "relevant info" in the future. Even humans can't do this, but whenever that's an issue, we just go back and re-read, re-watch, etc.
None of these modern recurrent architectures have a way to do this.
How often do you go back and rewatch earlier parts of a movie? I hardly ever do this. In the cinema, theater, or when listening to the radio it’s simply impossible, and it still works.
You are mentioning avenues that are largely for entertainment. Sure, you might not go back and re-attend for those. But if you're going to be tested, or are doing research, are you really looking at a large source only once?