I’m good with zero-story puzzle games. I’ve spent many hours in Simon Tatham’s Puzzles [1] on my iPhone, just for the 100% pure logic goodness that they are.
Do yourself a favor, if you haven't yet: go into the instructions for the games you like and find out the original game's name (for example, "Light Up" is actually called Akari), then go online and find hand-crafted puzzles for that game. I love that Simon Tatham's Puzzles exists, but nothing beats hand-crafted puzzles made by good designers. There's a sense of purpose in the order you discover the solution, and some "eureka!" moments that randomly generated puzzles will never give you!
You must not have kids. If you did, I’m speculating that you’d get it.
If you do have kids (or if you can empathize having kids), would you be ok with tech that super easily allows your kids’ peers to share/laugh about nudes of your kids?
I have kids, I would not be OK with anyone doing that to my kids. That is much different from being OK with the technology existing. Kids/people doing terrible things and kids/people allowing it to continue is the problem.
You can kill with a gun, a knife, a fork. Removing the tool does not change the situation of being around a person who wants to kill you.
We homeschool our kids and they have grown up to be respectful people.
Well I just canceled my Claude Pro subscription because of the mysterious limits that I don't experience with codex, even after paying for "extra usage". If Anthropic can't figure out their capacity problems they are in trouble.
The preference rankings keep fluctuating on every release for me. A year ago it was Gemini dominating coding tasks, then it was Claude, now it is the latest Codex again. With the next point release(s) the cycle will continue.
You might be. Or at least I feel like Gemini is actually dumber than a house of bricks - I have multiple examples, just from last week, where following its advice would have led to damage to equipment and could have hurt someone. That's just from trying to work on an electronics project and asking Gemini for advice based on pictures and schematics - it just confidently states stuff that is 100000% bullshit, and I'm so glad that I have at least a basic understanding of how this stuff works, or I would have easily hurt myself.
It's somewhat decent at putting together meal plans for me every week, but it just doesn't follow instructions and keeps repeating itself. It hardly feels worth any money right now, like it's some kind of giant joke that all these companies are playing on us, spending billions on these talking boxes that don't seem that intelligent.
I also use claude at work, and for C++ programming it behaves like someone who read a C++ book once and knows all the keywords, but has never actually written anything in C++ - the code it produces is barely usable, and only in very very small portions.
Edit: I just remembered another one that made me incredibly angry. I've been reading Neuromancer on and off, and I got back into it, but to remind myself of the plot I asked Gemini to summarise the plot only up to chapter 14, and I specifically included the instruction that it should double check it's not spoiling anything from the rest of the book. Lo and behold, it just printed out a summary of the ending and how the characters' actions up to chapter 14 relate to it. And that was in the "Pro" setting too. Absolute travesty. If a real-life person did that I'd stop being friends with them, but somehow I'm paying money for this. Maybe I'm the clown here.
I just asked, like I said: give me a plot summary up to chapter 14, don't spoil the rest of the book. And of course when I told it what it had just done, it went "oh, I'm sorry, here's a summary without the spoilers for the ending". So clearly it could do it without additional context.
>>Do they even have direct access to published works to use as reference material?
I mean, clearly, given that it did answer my question eventually. Also, wasn't it a whole thing that these models got trained on entire book libraries (without necessarily paying for them)?
>>I wouldn't expect any LLM to be able to respect such a request
Why though? They seem to know everything about everything, why not this specifically? You can ask it to tell you the plot of pretty much any book/film/game made in the last 100 years and it will tell you. Maybe asking about specific chapters was too much, but Neuromancer exists in free copies all over the internet and has been discussed to death. If it were a book that came out last year, then ok, fair enough, but LLMs had 40 years of discussions about Neuromancer to train on.
But regardless of everything else - if I say "don't spoil the rest of the book" and your response includes "in the last chapter character X dies", then you have just failed at basic comprehension. Whether an LLM has any knowledge of the book or not, and whether that claim is even true or not, that should be an unacceptable outcome.
Why though? They seem to know everything about everything, why not this specifically.
The problem with this line of reasoning is that it is unscientific. "They seem to" is not good enough for an operational understanding of how LLMs work. The whole point of training is to forget details in order to form general capability, so it is not surprising if a model forgets things about books when the training process deemed other properties more important to remember.
>> if they forget things about books if the system deemed other properties as more important to remember.
I will repeat for the 3rd time that it's not a problem with the system forgetting the details, quite the opposite.
>>The problem with this line of reasoning is that it is unscientific.
How do you scientifically figure out if the LLM knows something before actually asking the question, in case of a publicly accessible model like Gemini?
Just to be clear - I would be about 1000000x less upset if it just said "I don't know" or "I can't do that". But these models are fundamentally incapable of realizing their own limits, but that alone is forgivable - them literally ignoring instructions is not.
I wouldn't expect an AI to know exactly what happens in every chapter of a book.
Knowing the plot of Neuromancer isn't the same as being able to recite a chapter by chapter summary.
I tried this Neuromancer query a few times and results vary greatly with each regeneration, but "do not include spoilers" seems to make Gemini give more spoilers, not fewer.
Not really; if you had examined the output closely, you probably would have noticed it conflated chapters 13 and 14, or 14 and 15. Or you got very lucky on a generation. It definitely doesn't know exactly what happens in each chapter unless it has a reference to check.
I noticed that Apple speech to text has gotten pretty good lately. Is that because they’re paying Google? Not sure I use other AI features from Apple as I have my Siri turned off.
“After adjusting for potential confounders and pooling results across cohorts, higher caffeinated coffee intake was significantly associated with lower dementia risk (141 vs 330 cases per 100 000 person-years comparing the fourth [highest] quartile of consumption with the first [lowest] quartile; hazard ratio, 0.82 [95% CI, 0.76 to 0.89]) and lower prevalence of subjective cognitive decline (7.8% vs 9.5%, respectively; prevalence ratio, 0.85 [95% CI, 0.78 to 0.93]).”
So about an 18% relative reduction. But if your risk is already low (e.g. you're active and eat a healthy diet), the same relative reduction has a smaller absolute impact (e.g. 4% drops only to 3.28%).
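The arithmetic above can be sketched quickly; this is just an illustration of relative vs. absolute risk reduction, assuming a hypothetical 4% baseline risk and the study's 0.82 hazard ratio treated as a simple risk multiplier:

```python
# Illustrative: an ~18% relative risk reduction (hazard ratio 0.82)
# applied to different hypothetical baseline risks.

hazard_ratio = 0.82  # pooled estimate quoted above

for baseline_risk in (0.04, 0.10):  # hypothetical baseline risks
    adjusted = baseline_risk * hazard_ratio
    absolute_reduction = baseline_risk - adjusted
    print(f"baseline {baseline_risk:.0%} -> {adjusted:.2%} "
          f"(absolute reduction {absolute_reduction:.2%})")
```

The point: the relative reduction stays 18% either way, but the absolute benefit shrinks with the baseline (0.72 percentage points at a 4% baseline vs. 1.8 at 10%).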
I wonder how cool it would be to have a live ephemeral chat for each channel?
One thing I love(d) about live TV (or even live radio) was the community around knowing other people were watching the exact same thing I was watching (and then the watercooler chat around it afterwards).
If there was live chat attached to each of these "stations", it could spark some interesting chatter/community.
I know this already exists OOTB with YouTube Live, FB Live, etc.
But this would be for things that were simply uploaded and are now streamed live, like you're doing here.
Obviously, that only works if there's enough viewership/participation.
Can we also add “Don’t complain about AI-generated content. It does not promote interesting discussion.”?
I see this all the time, and even if I find the topic interesting, I don’t want to see comments littered with discussion about how the content was AI generated.
To be clear, I'm not condoning AI-generated content. I'm completely fine if the community chooses not to upvote AI-generated content, or to flag it off the FP.
But many threads can turn into nothing but AI complaints, and it’s just not interesting.
In my experience, it usually happens when people are too brazen about it, with boring stuff like "Interesting! Now here's what Gemini said about the above...". IMHO, complaining is an entirely adequate reaction in that case.
I’m mostly referring to responses to the article itself (allegedly) being AI-written. Then the top half of the thread is derailed by a discussion about that instead of the topic.
[1] https://apps.apple.com/us/app/simon-tathams-puzzles/id622220...