> any such galactic intelligence would probably recognize that its predecessor were meat
Perhaps its predecessor was just advanced enough to build self-modifying replicators and fire them out into space. Eventually one hits a planet or asteroid and gradually becomes sentient and intelligent, with no trace of how it originated.
The concern for me about LLMs confabulating is not that humans don't do it. It's that the massive scale at which LLMs will inevitably be deployed makes even the smallest confabulation extremely risky.
I don't understand this. Many small errors distributed across a large deployment sounds a lot like the normal failure mode of error-prone humans / cogs / whatevers distributed over a wide deployment.
There's a difference between 1000 diverse humans with varied traits making errors that should cancel out because of the law of large numbers vs 10 AIs with the same training data making errors that would likely correlate and compound upon each other.
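A toy simulation makes the distinction concrete (the error magnitudes and the 0.9 correlation weight are made-up illustrative numbers, not measurements of any real system): averaging 1000 independent errors nearly cancels them, while 10 errors sharing a common component do not.

```python
import random
import statistics

random.seed(0)

def mean_error(n_agents, shared_fraction):
    # Each agent's error = a shared component + an independent component.
    # shared_fraction=0 models diverse, independent humans; near 1 models
    # agents trained on the same data, whose errors correlate.
    shared = random.gauss(0, 1)
    return statistics.fmean(
        shared_fraction * shared + (1 - shared_fraction) * random.gauss(0, 1)
        for _ in range(n_agents)
    )

# Average the magnitude of the aggregate error over many trials.
indep = statistics.fmean(abs(mean_error(1000, 0.0)) for _ in range(500))
corr = statistics.fmean(abs(mean_error(10, 0.9)) for _ in range(500))
print(indep, corr)  # the correlated group's aggregate error stays large
```

The independent errors shrink roughly as 1/sqrt(n); the shared component doesn't shrink at all, no matter how many agents you add.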
I have yet to see a comparison of human vs. LLM confabulation errors at scale.
"Many small errors" makes a presumption about LLM confabulation/hallucination that seems unwarranted. Pre-LLM humans (and our computers) have managed vast nuclear arsenals, bioweapons research, and ubiquitous global transport - as a few examples - without any catastrophic mistakes, so far. What can we reasonably expect as a likely worst-case scenario if LLMs replace all the relevant expertise and execution?
Your project vue-skuilder has 6 GitHub Actions steps devoted to checking the work you do before it's allowed to go out. You do not trust yourself to get things right 100% of the time.
I am watching people trust LLM-based analysis and actions 100% of the time without checking.
There’s been enough divergence between words and actions from Amodei for me to also consider him deceitful, if that’s really the low bar you want to set. I’m not saying he’s worse than Altman, just to be clear.
Yep. That is why doing both can be beneficial. Alerts are more proactive if acted upon, but they're often too easy to ignore, meaning ballast is more fail-safe in that respect.
I'm sure this impacts certain cultures in the US, but I've got to imagine that's a pretty tiny impact compared to pragmatic concerns like "where can I charge this" and "how often do I need to charge this".
Oh I see. So yes, for pedestrians standing nearby when you start up. That probably represents 0.01% of the people I encounter on my drives. The other 99.99% don't notice.
This is how I was taught. Use ( ) or -- -- here, and the Oxford comma for lists of 3 or more.
I get lazy with adding the comma before the "and" in lists, and without fail I hear my grandmother/father/teachers pointing out how wrong I am for doing so. Same for my use of semicolons followed by "and" or "but".
I never realized the Oxford comma was even something up for debate.
Many years ago working on natural language to SQL, when we had ambiguities this is how we’d clarify things with the user (albeit with the minimal amount of brackets necessary).
It eliminated some ambiguity. It should be quite self-evident, even without an example, that it is quite impossible to eliminate all ambiguity (it's a feature of human language).
The more important property is that it never introduces more ambiguity. I.e., at worst it doesn't help, but it never makes things worse.
As written it is perfectly clear that Betty is neither the maid nor the cook, neither of whom the author bothered to name in this sentence. If that wasn't the author's intention they should grammar better.
Sure, I guess that's an option for youth sports in the prepubescent age groups. As a practical matter most youth sports leagues and schools aren't going to hassle with sex screening tests for little kids.
But once puberty hits everything changes. My teenage daughter played travel club volleyball on a pretty good team, and during practice they would occasionally run drills with the boys team. Even at that age the difference in hitting power and vertical was enormous, and those differences only grow larger with age. Men and women are literally playing different games. Beyond just fairness, forcing girls to compete against biological males becomes a safety risk due to concussions from taking a ball to the head.
Males competing against males are also at risk of taking a ball to the head :).
I think male, female, trans, etc. can compete together if analysed on a sport-by-sport basis. Male vs. female in contact sports like karate, boxing, and taekwondo is not fair. However, I think the difference is negligible in shooting, archery, curling, etc.
“Women tend to have thinner skulls than men, along with smaller neck muscles, which can predispose female athletes to getting a concussion,” says Sarah Menacho, MD, a neurosurgeon and neurocritical care specialist at University of Utah Health. “Data shows that women are also more likely than men to report concussion-related symptoms, and these symptoms can persist for a longer time period prior to recovery than in male athletes.”
In my experience competing in different things, that's typical: Local organizations are free to set their own local rules, but once you cross over into events that make you eligible for higher level competition they have to strictly abide by the national level rules. I couldn't use my results from grassroots competitions to qualify for national level events, generally.
They are trained and evaluated on correctness benchmarks. But correctness on benchmark questions is only loosely coupled to correctness outside the benchmark, in part because LLMs aren't grounded to the same biological reality as humans. You can't easily convince an average person to cut off their own hand and this has little to do with higher-level thought. In contrast, it only takes a bit of creativity to convince an LLM to say or do almost anything.
At a recent AI workshop management made clear that they see AI as rendering sprints and scrums obsolete, that Kanban makes a lot more sense, and that estimating effort/story-points is also becoming meaningless. Which is a strong silver lining if you ask me.
I think it's to do with the bottleneck shifting away from code generation and towards specifying and reviewing and integrating code. The process of working with AI agents to produce specs, tech specs, code, and reviews lends itself more to a flow-based structure (like kanban).
Bear in mind this is a B2B enterprise company with a mix of legacy and greenfield. And management has invested heavily into designing a robust spec/context-based workflow for using agents. Might be different elsewhere.
Personally I don't think scrums, planning, retros etc were better than kanban even before AI, at least if you have switched-on, motivated and smart people on your team. They actually made things less agile, and story-points give a false sense of predictability. Imo the crucial factor may be that AI agents are smart and switched-on (with the right context).
It's a good excuse to move away from a shitty process, I'll take it! Fuck SCRUM, fuck Agile. No one was doing it anyway. I had to quit an Agile job because I was shipping shit without ever getting a lick of feedback, and this was not some low-stakes webdev work; it was for planning expensive real-world installations.
The abstract suggests they're proposing speed-up techniques for the assignment and centroid update stages of the classic k-means algorithm. Which would therefore also apply to k-means++.
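For reference, here is a minimal sketch of the two stages in question, per the classic Lloyd iteration. This is not the paper's optimized version, just the baseline loops the speed-ups would target; k-means++ would differ only in the initialization line, which is why the optimizations carry over.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Classic (Lloyd's) k-means on a list of coordinate tuples."""
    rng = random.Random(seed)
    # Plain random initialization; k-means++ changes only this step.
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment stage: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])),
            )
            clusters[nearest].append(p)
        # Centroid-update stage: move each centroid to its cluster's mean.
        for j, cluster in enumerate(clusters):
            if cluster:  # leave a centroid in place if its cluster emptied
                centroids[j] = tuple(sum(c) / len(cluster) for c in zip(*cluster))
    return centroids

# Two well-separated clusters converge to their means.
result = sorted(kmeans([(0, 0), (0, 1), (10, 10), (10, 11)], k=2))
print(result)
```

Both stages are O(n·k·d) per iteration in this naive form, which is why they're the natural targets for pruning or acceleration.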