I enjoyed Werner Herzog's "Encounters at the End of the World" at many levels, not the least of which was how different it was from "Aguirre, the Wrath of God".
Well, he didn't have the psychopathic Kinski to mess with everything and everybody, throwing childish tantrums over trivial things, did he? I mean, during filming the locals offered to murder Kinski for Herzog as a favor, seeing him as an evil spirit... you can't go much further than that.
I also recommend The Dark Glow of the Mountains, about Messner and Kammerlander doing a properly hardcore expedition, the Gasherbrum traverse in Pakistan, which had never been done before. He interviews them before they depart and after they come back, having survived by a series of mere chances in an extreme environment, pushed to the absolute limits of human bodies and minds. Since I do a bit of mountaineering I can truly appreciate the characters and the insights, which no Hollywood fantasies can ever come close to. True documentaries.
A bit of a controversial side note: these are the efforts I have huge respect for, not done for the money, not chasing sponsors with every move. Watching some world championships or the Olympics, I can't have much respect for those very rich sportsmen who focus more on chasing new sponsors than on the actual spirit of the game.
I used to ask myself the same question, but then I realized that for these people it doesn't matter how much they spend. When you are worth billions of dollars, the difference between spending $10M or $50M on your home Does Not Matter. You still have many other $M to spend on other things. It's perfectly rational for them to spend what seems like a large amount of money for an apparently small marginal improvement.
"You can learn anything now. I mean anything." This was true before LLMs. What's changed is how much work it is to get an "answer". If the LLM hands you that answer, you've foregone learning that you might otherwise have gotten by (painfully) working out the answer yourself. There is a trade-off: getting an answer now versus learning for the future. I recently used an LLM to translate a Linux program to Windows because I wanted the program Right Now and decided that was more important than learning those Windows APIs. But I did give up a learning opportunity.
I'm conflicted about this. On one hand, I think LLMs make it easier to discover explanations that, at least superficially, "click" for you. Sure, they were available before, but maybe in textbooks you needed to pay for (how quaint), or on websites that appeared on the fifth page of search results. Whatever the externalities of that, in the short term, that part may be a net positive for learners.
On the other hand, learning is doing; if it's not at least a tiny bit hard, it's probably not learning. This is not strictly an LLM problem; it's the same issue I have with YouTube educators. You can watch dazzling visualizations of problems in mathematics or physics, and it feels like you're learning, but you're probably not walking away from that any wiser because you have not flexed any problem-solving muscles and have not built that muscle memory.
I've had multiple interactions like that. Someone asked an LLM for an ELI5 and tried to leverage that in a conversation, and... the abstraction they came back with feels profound to them, but is useless and wrong.
This. I feel this all the time. I love 3Blue1Brown's videos and when I watch them I feel like I really get a concept. But I don't retain it as well as I do things I learned in school.
It's possible my brain is not as elastic now in my 40s. Or maybe there's no substitute for doing something yourself (practice problems) and that's the missing part.
One factor in favor of the use of LLM as a learning tool is the poor quality of documentation. It seems we've forgotten how to write usable explanations that help readers to build a coherent model of the topic at hand.
> On one hand, I think LLMs make it easier to discover explanations that, at least superficially, "click" for you.
The other benefit is that LLMs, for superficial topics, are the most patient teachers ever.
I can ask it to explain a concept multiple times, hoping that it'll eventually click for me, and not be worried that I'd look stupid, or that it'll be annoyed or lose patience.
It always comes down to economics and then the person and their attitude towards themselves.
Some things are worth learning deeply, in other cases the easy / fast solution is what the situation calls for.
I've thought recently that some kinds of 'learning' with AI are not really that different from using Cliffs Notes back in the day. Sometimes getting the Cliffs Notes summary was the way to get a paper done OR a way to quickly get through a boring/challenging book (Scarlet Letter, amirite?). And in some cases reading the summary is actually better than the book itself.
BUT - I think everyone could agree that if you ONLY read Cliffs Notes, you're just cheating yourself out of an education.
That's a different and deeper issue because some people simply do not care to invest in themselves. They want to do minimum work for maximum money and then go "enjoy themselves."
Getting a person to take an interest in themselves, in their own growth and development, to invite curiosity, that's a timeless problem.
So I've actually been putting more effort into deliberate practice since I started using AI in programming.
I've been a fan of Zed Shaw's method for years, of typing out interesting programs by hand. But I've been appreciating it even more now, as a way to stave off the feeling of my brain melting :)
The gross feeling I have if I go for too long without doing cardio, is a similar feeling to when I go for too long without actually writing a substantial amount of code myself.
I think that the feeling of making a sustained effort is itself something necessary and healthy, and rapidly disappearing from the world.
I’ve always liked the essential/accidental complexity split. It can be hard to find, but from a problem-solving perspective, it often defines what’s fun and what’s a chore.
I’ve been reading the OpenBSD source lately and it’s quite nice how they’ve split the general OS concepts from the machine-dependent needs. And the general way they’ve separated interfaces and implementation.
I believe that once you’ve solved the essential problem, the rest becomes way easier, as you have a direction. But doing accidental problem solving without having done the essential one is pure misery.
That's not what the author means. Multiple times a day, I have conversations with LLMs about specific code or general technologies. It is very similar to having the same conversation with a colleague. Yes, the LLM may be wrong. Which is why I'm constantly looking at the code myself to see if the explanation makes sense, or finding external docs to see if the concepts check out.
Importantly, the LLM is not writing code for me. It's explaining things, and I'm coming away with verifiable facts and conceptual frameworks I can apply to my work.
Yeah, it's a great way for me to reduce activation energy to get started on a specific topic. Certainly doesn't get me all the way home, but cracks it open enough to get started.
I've managed to go my whole career using regex and never fully grokking it, and now I finally feel free to never learn!
I've also wanted to play with C and Raylib for a long time and now I'm confident in coding by hand and struggling with it, I just use LLMs as a backstop for when I get frustrated, like a TA during lab hours.
> my whole career using regex and never fully grokking it
Sorry to hear that, nobody ever told me either. Had you invested a bit of time earlier in your career, it would have paid dividends a hundredfold. The key is knowing what’s wheat and what’s chaff. Regex is wheat.
With that said, maybe you tried... everyone has their limits.
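To make those dividends concrete, here's the kind of thing a grokked pattern buys you: pulling key=value pairs (quoted or not) out of a line in one pass instead of a page of split/indexOf fiddling. The log format here is invented for illustration.

```typescript
// One pattern: a word key, '=', then either a quoted string or a bare token.
const PAIR_RE = /(\w+)=("[^"]*"|\S+)/g;

// Collect the pairs into an object, stripping surrounding quotes.
function parsePairs(line: string): Record<string, string> {
  const out: Record<string, string> = {};
  for (const m of line.matchAll(PAIR_RE)) {
    out[m[1]] = m[2].replace(/^"|"$/g, '');
  }
  return out;
}
```

Two lines of pattern versus the alternative: tokenizing by hand while special-casing quoted values.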
If you're going to deploy what you make with them to production without accidentally blowing your feet off, then yes, 100%, learn them, be they RegExp or useEffect(). If you can't even tell which way the gun is pointing, how are you supposed to know which way the LLM has oriented it?
Picking useEffect() as my second example because it took down Cloudflare, and if you see one with a tell-tale LLM comment attached to it in a PR from your coworkers, who are now _never_ going to learn how it works, you can be almost certain it's either unnecessary or buggy.
For things I'm working on seriously for work, for sure, I spend time understanding them, and LLMs help with that. I suppose, also, having experience, I'm already prone to asking questions about things I suspect can go wrong.
But there are also a ton of times when something isn't at all important to me and I don't want to waste 3 hours on it.
I am beginning to disagree with this, or at least to question its universal truth. For instance, there are so many times when "learning" is an exercise in applying wrong advice over and over until something finally succeeds.
For instance, retrieving the absolute path an Angular app is running at in a way that is safe both on the client and in SSR contexts has a very clear answer, but there are a myriad of wrong ways people accomplish that task before they stumble upon the Location injectable.
In cases like the above, the LLM is often able to tell you not only the correct answer the first time (which means a lot less "noise" in the process trying to teach you wrong things) but also is often able to explain how the answer applies in a way that teaches me something I'd never have learned otherwise.
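For illustration, a minimal sketch of the Location-based approach, assuming Angular's Location service from @angular/common; the stand-in interface and the helper name are made up so the logic stays testable outside Angular.

```typescript
// Sketch only: in a real Angular app you would inject Location from
// @angular/common. This minimal stand-in interface mirrors the one
// method we need.
interface LocationLike {
  path(includeHash?: boolean): string;
}

// Location.path() is backed by PlatformLocation, which (as I understand
// it) has server-side providers under SSR, so the same call is safe in
// both browser and server contexts.
function currentAppPath(location: LocationLike): string {
  const p = location.path(); // e.g. "/users/42?tab=posts"
  return p.split('?')[0] || '/';
}
```

In a component you'd write something like `constructor(private location: Location) {}` and call `currentAppPath(this.location)`; no reaching for `window.location`, which is exactly what breaks under SSR.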
We have spent the last 3 decades refining what it means to "learn" into buckets that held a lot of truth as long as the search engine was our interface to learning (and before that, reading textbooks). Some of this rhetoric begins to sound like "seniority" at a union job or some similar form of gatekeeping.
That said, there are also absolutely times (and sometimes it's not always clear that a particular example is one of those times!!) when learning something the "long" way builds our long term/muscle memory or expands our understanding in a valuable way.
And this is where using LLMs is still a difficult choice for me. I think it's less difficult a choice for those with more experience, since we can more confidently distinguish between the two, but I no longer think learning/accomplishing things via the LLM is always a self-damaging route.
Is this maybe more about the quality of the documentation? I say this 'cause my thinking is that reading is reading, it takes the same time to read the information.
How is this faster than just reading the documentation? Given that LLMs hallucinate, you have to double check everything it says against the docs anyway
I learn fastest from the examples, from application of the skill/knowledge - with explanations.
AIs allowed me to get on with Python MUCH faster than I was managing on my own, and to understand more of the arcane secrets of jq in 6 months than I had in a few years before.
And the AI's mistakes are a brilliant opportunity to debug, to analyse, and to go back to it saying "I beg your pardon, wth is this" :) pointing at the elementary mistakes you now see because you understand the flow better.
Recently I had a fantastic back and forth with Claude about one of my precious tools written in Python. I was trying to understand the specifics of a particular function's behaviour, discussing typing, arguing about trade-offs and portability. The thing I really like is that I always get pushback or things to consider if I come up with something stupid.
It's a tailored team exercise and I'm enjoying it.
Windows API docs for the older Win32 stuff are extremely barebones. WinRT is better, but can still be confusing.
I think AI is really great for getting started with systems programming, as you can tailor the responses to your level, ask it to solve specific build issues, and so on. You can also ask more obscure questions and it will at least point you in the right direction.
Apple docs are also not the best for learning, so I think AI is great as a documentation browser with auto-generated examples.
Human teachers make mistakes too. If you aren't consuming information with a skeptical eye you're not learning as effectively as you could be no matter what the source is.
The trick to learning with LLMs is to treat them as one of multiple sources of information, and work with those sources to build your own robust mental model of how things work.
If you exclusively rely on official documentation you'll miss out on things that the documentation doesn't cover.
If I have to treat LLMs as a fallible source of information, why wouldn't I just go right to the source though? Having an extra step in between me and the actual truth seems pointless
If the WinAPI docs are solid you can do things like copy and paste pages of them into Claude and ask a question, rather than manually scanning through them looking for the answer yourself.
Apple's developer documentation is mostly awful - try finding out how to use the sips or sandbox-exec CLI tools for example. LLMs have unlocked those for me.
If you're good at programming you can usually tell exactly why it worked or didn't work. That's how we've all worked before coding agents came along too - you don't blindly assume the snippet you pasted off StackOverflow will work, you try it and poke at it and use it to build a firm mental model of whether it's the right thing or not.
Sure. A big part of how I'd know that the function I'm calling does what I think it does, is by reading the source documentation associated with it
Does it have any threading preconditions? Any weird quirks? Any strange UB? That's stuff you can't find out just by testing. You can ask the LLM, but then you have to read the docs anyway to check its answer
Except you have no idea if what the LLM is telling you is true
I do a lot of astrophysics. Universally, LLMs are wrong about nearly every astrophysics question I've asked them, even the basic ones, in every model I've ever tested. It's terrifying that people take these at face value.
For research at a PhD level, they have absolutely no idea what's going on. They just make up plausible sounding rubbish
Astrophysicist David Kipping had a podcast episode a month ago reporting that LLMs are working shockingly well for him, as well as for the faculty at the IAS.[1]
It's curious how different people come to very different conclusions about the usefulness of LLMs.
The answer it gave was totally wrong. It's not a hard question. I asked it this question again today, and some of it was right (!). This is such a low bar for basic questions.
Why does it matter? We have table of contents, index and references for books and other contents. That’s a lot of navigational aid. Also they help in providing you a general overview of the domain.
Bam, that's the single source of truth right there. Microsoft's docs are pretty great
If I use an LLM, I have to ask it for the documentation about "GetQueuedCompletionStatus". Then I have to double-check its output, because LLMs hallucinate.
Double-checking its output involves googling "GetQueuedCompletionStatus", finding this page:
I have not done win32 programming in 12 years. Maybe you've done it more recently. I'll use an LLM and you look up things manually. We can see who can build a win32 admin UI that shows a realtime view of every open file by process, with sorting, filtering and search on both the files and process/command names.
I estimate this will take me 5 minutes
Would you like to race?
This mentality is fundamentally why I think AI is not that useful, it completely underscores everything that's wrong with software engineering and what makes a very poor quality senior developer
I'll write an application without AI that has to be maintained for 5 years with an ever evolving featureset, and you can write your own with AI, and see which codebase is easiest to maintain, the most productive to add new features to, and has the fewest bugs and best performance
Sure let's do it. I am pretty confident mine will be more maintainable, because I am an extremely good software engineer, AI is a powerful tool, and I use AI very effectively
I would literally claim that with AI I can work faster and produce higher quality output than any other software engineer who is not using AI. Soon that will be true for all software engineers using AI.
I don't know, most shit I learned programming (and subsequently get paid for) is meaningless arcana. For example, Kubernetes. And for you, it's Windows APIs.
For programming in general, most learning is worthless. This is where I disagree with you. If you belong to a certain set of cultures, you overindex on the idea that math (for example) is the best way to solve problems, that you must learn all this stuff by a certain pedagogy, and that the people who are best at this are the best at solving problems, which of course is not true. This is why we have politics, and why we have great politicians who hail from cultures that are underrepresented at high levels of math study: getting elected, having popular ideas, and convincing people is the best way to solve far more of the problems people actually have than math is.

This isn't to say that procedural thinking isn't valuable. It's just that, well, joke's on you. ChatGPT will lose elections. But you can have it do procedural thinking pretty well, and what does the learning and economic order look like now? I reject this form of generalization, but there is tremendous schadenfreude about the math people destroying their own relevance.
All that said, my actual expertise, people don't pay for. Nobody pays for good game design or art direction (my field). They pay because you know Unity and they don't. They can't tell (and do not pay for) the difference between a good and bad game.
Another way of stating this for the average CRUD developer is, most enterprise IT projects fail, so yeah, the learning didn't really matter anyway. It's not useful to learn how to deliver better failed enterprise IT project, other than to make money.
One more POV: the effortlessness of agentic programming makes me more sympathetic to anti intellectualism. Most people do not want to learn anything, including people at fancy colleges, including your bosses and your customers, though many fewer in the academic category than say in the corporate world. If you told me, a chatbot could achieve in hours what would take a world expert days or weeks, I would wisely spend more time playing with my kids and just wait. The waiters are winning. Even in game development (cultural product development generally). It's better to wait for these tools to get more powerful than to learn meaningless arcana.
> But no amount of politics and charisma will calculate the motions of the planets or put satellites in orbit.
the government invented computers. you need politics to fund all of this. you are talking about triumphs of politics as much as invention. i don't know why you think i am pro influencer or charlatan...
I do disagree with the notion that you have to slog through a problem to learn efficiently. That it's either "the easy way [bad, you don't learn]" or "the hard way [good, you do learn]" is a false dichotomy. Agents/LLMs are like having an always-on, highly adept teacher who can synthesize information in an intuitive way, and whom you can explore a topic with. That's extremely efficient and effective for learning. There may be some tradeoff in some things, but the idea that LLMs make you not learn doesn't feel right; they allow you to learn _as much as you want and about the things that you want_, which wasn't possible before. You had to learn, inefficiently(!), a bunch of crap you didn't want to in order to learn the thing you _did_ want to. I will not miss those days.
I don't think you're saying the same thing. AI can help you get through the hard stuff efficiently and you'll learn. It acts as a guide, but you still do the work.
Offloading completely the hard work and just getting a summary isn't really learning.
There's a mystique around Mathematica's math engine. Is this groundless, or will you eventually run into problems getting correct, identical answers -- especially for answers that Mathematica derives symbolically? The capabilities and results of the computer algebra systems that I've used varied widely.
Hard to tell, honestly. So far there was always some surprisingly straightforward solution if I had any problems with the math engine. There is actually a lot of public research on how equations can be solved/simplified with computer algorithms. So I'm optimistic.
I also stumbled upon a few cases where Mathematica itself didn't quite do things correctly (rounding errors, missing simplifications, etc.). So maybe it's actually a little overhyped …
Scmutils from MIT does a very good -- arguably better -- job for correctness. No symbolic integration, by ideology, and not identical. Sussman and Wisdom. Amazing attention to detail and correctness. Claude could probably bridge Scheme to Wolfram.
I'm not sure how important bug-for-bug identical output really is.
The surge in laptops contributed, too. The opportunity or need for expansion cards, additional memory or storage upgrades, and peripherals disappeared or shrank.
I used to think of the sales staff as the United Nations of Fry's. It was always thrilling to see someone starting their American dream, even if the service was haphazard.
>The surge in laptops contributed, too. The opportunity or need for expansion cards, additional memory or storage upgrades, and peripherals disappeared or shrank.
We were once able to upgrade CPUs, RAM, video cards, HDD, network cards and replace batteries in laptops, too.
Pepperidge Farms, oops I mean Framework, remembers.
Bought this Framework 16 laptop less than a year ago so I haven't upgraded anything yet. But if I decide I want a GPU (I don't play games on this laptop so I bought it GPU-less) I can add one. If they come out with a new motherboard that I decide is worth buying, I can swap it out and keep the rest of the laptop. And I can customize all six of the side ports at any time; they're hot-swappable. Currently I have three USB-C slots, one USB-A for when I need a thumbdrive, one HDMI, and one SD card reader slot. I bought a second USB-A slot and an Ethernet slot, so if I need two USB-A ports or if I need to plug into Ethernet, I can just slide the physical locking tab on the appropriate side of the laptop, slide out one of the slots, and slide in the Ethernet or USB-A slot. Then relock the tab so the expansion slot fillers are physically held in place and I can carry on. No rebooting needed, but now I have two USB-C ports and two USB-A. Or three USB-C, no USB-A, and one Ethernet. Whatever configuration I need at the moment.
It's great. I currently don't have any plans to buy a new laptop in the near future (my wife's laptop is just two years old and has plenty of life left in it), but next time I need a new laptop, I plan to buy a Framework again.
P.S. No affiliation with Framework, just a customer.
Sure but I don't mind the current outcome. I want my laptop to be small and light and if the tradeoff is the ram and battery have to be glued, I'd take it.
Because the existing parser is written in truly Emacs style: no formal grammar, just Lisp code with a regexp at each turn. Theoretically nothing forbids you from writing a parser, but in practice there are no full-blown parsers of org-mode except the reference one.
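To caricature the "regexp at each turn" style: this is not the actual elisp, just a TypeScript sketch of the approach, handling only headlines.

```typescript
interface OrgHeadline {
  level: number;   // number of leading stars
  todo?: string;   // TODO/DONE keyword, if present
  title: string;
}

// One regexp per construct, applied line by line: stars, an optional
// TODO keyword, then the title. Everything else is ignored.
const HEADLINE_RE = /^(\*+)\s+(?:(TODO|DONE)\s+)?(.*)$/;

function parseHeadlines(src: string): OrgHeadline[] {
  const out: OrgHeadline[] = [];
  for (const line of src.split('\n')) {
    const m = HEADLINE_RE.exec(line);
    if (m) out.push({ level: m[1].length, todo: m[2], title: m[3].trim() });
  }
  return out;
}
```

It works until it doesn't: verbatim blocks, inline markup, and context-sensitive rules are exactly where this style stops composing, which is part of why no full reimplementation exists.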
For many software businesses, licensing is an issue. The spec is GFDL with GPL code samples, a non-cleanroom translation of the elisp parser would (likely) be GPL (or at least arguably enough so to keep lawyers busy), so going and doing some other roughly equivalent markup language instead avoids the copyleft requirements.
So, yes, “too much trouble”, much of it nontechnical.
I was at G when "mobile first" was the slogan, and it led to "odd" choices such as designing and leading with a travel app rather than the web site. Perhaps locally suboptimal, but in the long run brutal forcing functions were needed to move a company as big and successful as Google into something new. I hear that going all-in on AI was internally disruptive and probably had some bad side-effects that I'm ignoring, but in hindsight it was the right thing to do. When ChatGPT, perplexity, and you.com came out, my immediate thought was "Google is toast", but they've recovered.
> I hear that going all-in on AI was internally disruptive and probably had some bad side-effects that I'm ignoring, but in hindsight it was the right thing to do.
That's the opposite in my experience. It is driving long term google audience away from google's paying products.
Why? I don't base my YouTube subscription or my Drive subscription on my AI subscriptions.
Sure, I get Gemini for free now for a year since I bought a Pixel, but I have no intention to renew; I'll likely just leech off the ones my employer pays for.
When YouTube is replacing translations with AI-generated ones or if Drive is using all your personal documents as training data, that can definitely drive people away.
My takeaway from mobile-first G was "sites need to be fast, right guys, for mobile?" -> AMP -> actually, let's hostilely take over the web, oh actually, we'll rework Chrome auto sign-in, oh actually … just a long string of user hostility.
You’d be protected from this particular exploit if you used a package manager rather than the updater, though of course you’d still be vulnerable to the installer binary itself getting compromised.
Wonder how many packages in community package repos are compromised. Surely "Hubbleexplorer" can be trusted to provide Arch users with an honest, clean version of npp.
Standard answer to a potentially compromised machine is to start with a factory reset machine and add the software and data you need to do your work/use the machine. Do not take executables from the compromised machine and use them any where since they too could be compromised.
There are more steps you can take to ensure greater safety. The above is the minimum I do for myself and the minimum my company's IT department executes.
My minimum is to start with a freshly formatted hard drive, then reinstall the OS, software (fresh, not transferred), and the data required for your use.
> There are more steps you can take to ensure greater safety.
There are firmware infections that can persist even after a hard drive format, though to my understanding OS/user-space-to-firmware infections are rare. As far as I know, a 'factory reset' on phones and some laptops does not reinstall firmware and clear out firmware infections. So to my understanding, the 'factory reset' found on phones is analogous to formatting your hard drive and reinstalling the OS, software, and data required for your use.