Hacker News | 18al's comments

The wish for an _AI revolution_ in learning seems to have been granted by a monkey's paw. Articles like this, or [0], or browsing r/Teachers [1], or even talking to close ones in college, give a rather grim view of AI use.

A paragraph from [0] makes it seem that students understand that LLM use doesn't lead to learning, but they use it anyway. Do they not see the effort put into learning as worthwhile?

  A few months ago, I overheard some college students talking about their classes.
  One was complaining about an assignment they needed to do that night, and
  another incredulously asked why they wouldn’t just have ChatGPT do it. The first
  replied, “This is my major, I actually need to learn stuff in this class. I use
  AI for my other classes.”

I myself use LLMs for learning (ChatGPT's study mode, for instance, r.i.p.) and can see that there's a right way to use them: you reach for the LLM when you hit a wall, not to avoid the friction of developing an understanding.

From what I understand though, most LLM use for learning is just the LLM used as a tool for cheating. Even TFA mentions something of the sort:

  few of Musall’s most advanced students have taken advantage of AI to learn new
  topics. But, as far as she can tell, more students are using it to just find
  answers

The article attributes a _skill issue_ as part of the problem, but how much of that is a motivation or awareness issue? How do you make students realize that learning is worth it?

[0] https://arstechnica.com/science/2026/04/to-teach-in-the-time...

[1] https://www.reddit.com/r/Teachers/


You never realize the beauty of just learning cool stuff in college and exploring around until you're like 26 and have been graduated for 4 years.

I don't even think it is a monkey's paw. That implies that it looked like it would solve a big problem in our initial estimation.

But "students will use the cheating machine to cheat" was obvious from the release of ChatGPT. There was never some period of time when AI looked like a net positive for students, only to be revealed to have an unexpected harm.

Even from the folks who claim to use LLMs to learn rather than cheat or avoid work, I've seen so many people admit that they are actually using them to harm themselves. "Oh, I only ask ChatGPT for the answer to really hard problems." Yeah man, doing the hard problems is how you learn.


It depends on how the transformer has been trained. If it has seen 11-digit examples during training it might work; otherwise the input will be out of distribution and it will respond with a nonsensical number.

For instance, the current high-score model (311 params [0]), when given 12345678900 + 1, responds with 96913456789.

An interesting experiment would be: what's the minimum number of parameters required to handle unbounded addition (without offloading it to tool calls)?

Of course, memory constraints would preclude such an experiment, so a sensible proxy would be: what kind of neural-net architecture and training would allow a model to handle number lengths it hasn't been trained on? I suspect this may not be possible.

[0] https://github.com/rezabyt/digit-addition-311p


> what kind of neural-net architecture and training would allow a model to handle numbers lengths it hasn't been trained on

A recurrent neural network implementing binary addition with carry could do this, and one can derive the correct weights with pen and paper without too much effort.

Whether gradient descent will find them too is another matter entirely.
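As a sketch of those pen-and-paper weights: a full adder at each time step, with the carry bit as the recurrent state, handles little-endian bit pairs of any length. Everything below (the function names, the specific thresholds) is a hypothetical illustration of hand-derived weights, not a trained model:

```typescript
// Recurrent cell with hand-derived weights implementing binary addition.
// State = carry bit; one bit of each operand is fed per time step,
// little-endian. With t = a_t + b_t + carry, three threshold units fire
// at 0.5, 1.5, 2.5; then sum bit = h0 - h1 + h2 and new carry = h1.
function step(x: number): number {
  return x > 0 ? 1 : 0; // Heaviside activation
}

function rnnAdd(a: bigint, b: bigint): bigint {
  let carry = 0;
  let out = 0n;
  for (let i = 0n; (a >> i) > 0n || (b >> i) > 0n || carry > 0; i++) {
    const at = Number((a >> i) & 1n);
    const bt = Number((b >> i) & 1n);
    const t = at + bt + carry;                               // all input weights are 1
    const h = [step(t - 0.5), step(t - 1.5), step(t - 2.5)]; // hidden threshold layer
    out |= BigInt(h[0] - h[1] + h[2]) << i;                  // sum bit = t mod 2
    carry = h[1];                                            // new state: carry = (t >= 2)
  }
  return out;
}
```

Because the same weights are reused at every step, the cell generalizes to any number of digits; the open question is whether gradient descent would ever land on this solution.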


If the neural network had moveable tape heads which could seek between invocations, and the inputs were provided in little-endian format, a fairly small model could implement arbitrary addition with carry, and you'd only need to add a few redundant dimensions to get something that could be trained.


Derailleurs are hard to debug.

The rear derailleur on my cycle wasn't shifting as expected, so I spent an afternoon following YouTube tutorials and adjusting its limit screws and barrel adjuster, to no avail.

I finally gave up and took it to a shop; the mechanic took the cable out of the housing, wiped it down, greased it, and put it back in, and the derailleur started shifting normally.


You make a good point that's often overlooked.

"Easy to repair" doesn't necessarily mean you or I can repair it easily. It might mean someone, preferably a local, independent business, can repair it easily.


Hey, Frappe Books developer here. Not as of now, but it is something that we will be adding; there have been a lot of requests for multi-user support.


> My most complicated problems were reactive DOM elements based on nested loops and recursive components

Agreed. I've tried solving it by setting an attribute `sb-mark`, which allows syncing just the branch of DOM elements that maps to that particular key in the reactive object.

This removes the need for VDOM diffing, but unless I use a `MutationObserver` external updates to marked branches will probably mess it up.

I haven't yet tested it with recursive components; it should work for nested loops.
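A minimal sketch of the idea, using a plain object tree in place of real DOM nodes (the `El` shape, the `syncMarked` name, and text-only patching are my assumptions for illustration; only the `sb-mark` attribute comes from the approach described above):

```typescript
// Stand-in for a DOM node so the sketch is self-contained.
interface El {
  attrs: Record<string, string>; // would be getAttribute() on a real element
  textContent: string;
  children: El[];
}

// Walk the tree; when a branch's `sb-mark` names a changed key in the
// reactive state, patch that branch in place and stop descending.
// No VDOM diff is needed, because the mark says exactly what maps where.
function syncMarked(node: El, state: Record<string, string>, changed: Set<string>): void {
  const mark = node.attrs["sb-mark"];
  if (mark !== undefined && changed.has(mark)) {
    node.textContent = state[mark]; // the marked branch is owned by this key
    return;
  }
  for (const child of node.children) syncMarked(child, state, changed);
}
```

An external mutation inside a marked branch is invisible to this walk, which is where a `MutationObserver` would have to come in.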

> and it is simple but sometimes confusing

I understand what you mean; my approach has the aforementioned `sb-mark` attribute/directive, which syncs primitives, lists, and objects.

I've started feeling that the convenience of having just one attribute to remember is outweighed by the confusion of its implications not being immediately apparent from context.


> This removes the need for VDOM diffing, but unless I use a `MutationObserver` external updates to marked branches will probably mess it up.

It's similar in Reken. It controls all the DOM; DOM updates made outside Reken will get things out of sync. After a model change, all managed DOM gets directly updated by a generated controller. It does first check the DOM to see whether a textContent or attribute change is necessary, and most DOM state checks are cheap. Another optimization is that all hidden DOM trees get skipped, which is great in SPA apps with multiple pages.


The biggest hurdle to using TypeScript is the build step before it can actually be run. If the TC39 type annotations proposal [0] comes to pass, this would be largely taken care of; _hype_ waxes. (Unfortunately, the proposal has been stagnant for more than a year now.)

A lot of the new frontend codebases involve a build step before running. For such codebases, TypeScript's build hurdle has already been overcome.
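To illustrate what the proposal would mean: a file like the following, which sticks to erasable annotation syntax, could be handed straight to the engine, with the annotations parsed but ignored at runtime. This is a sketch of the proposal's intent, not its final grammar:

```typescript
// Under the type-annotations proposal, a JS engine would treat the
// `: string` / `: number` annotations as comment-like syntax and run
// this file directly, with no build step; tools like tsc would still
// type-check it ahead of time.
function greet(name: string, times: number): string {
  return `hello ${name}! `.repeat(times).trim();
}

const msg: string = greet("hn", 2);
```

Today, the same file needs a transpile (or strip) pass before Node or a browser will accept it, which is exactly the hurdle the proposal removes.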

[0] https://github.com/tc39/proposal-type-annotations


I too tried the gym, but now I'm muscular and awkward. Now, before an interaction, people don't expect me to be awkward, and it feels as though they are more forgiving of my awkwardness, but that could just be my perception.

Going to the gym and losing weight did help a lot with self esteem (and posture) issues though.

After observing my interactions, I found that if I'm unfamiliar with the person, I'll miss social cues or there'll be a delay before I perceive them. Also, my brain goes into some kind of _fight or flight_, causing slightly impaired speech and memory.

What I do to _fix_ this is watch how others interact with the person and try to mimic them, while adjusting for unfamiliarity; assuming familiarity could be perceived as rude.

For me, building familiarity lets me interact with decreasing awkwardness, so I just try to find the fastest way to do that.


Using the rate of speed-up is probably a bad metric due to varying information densities, but even if one were to account for that with some kind of smart speed-up app that maintains constant information throughput, the issue is with not taking pauses to ruminate.

It's more of an information-retention problem than an information-loss one, i.e. not committing things to long-term memory, as the author states.

Not unlike consuming food without chewing.


Yeah, I've noticed that with text I make more pauses to think about what I just read (especially with printed text, for some reason). Video is the worst, while audio is somewhere in the middle. Maybe because of clunky controls?


Although the emulations are getting really good. For instance, this VSCode plugin [0] isn't even an emulation: it embeds Neovim into VSC and even loads your init.vim file. It's snappy too.

[0] https://github.com/asvetliakov/vscode-neovim


I've used it quite a bit, but I've had it desync from the server state on multiple occasions, making the experience not all that pleasant.

