Aren't Mac minis flying off the shelves for "local models" because people have no clue what they're doing?
All those people who bought them for openclaw just bought them because it was the trendy thing to do. None of those people are running local models on them.
CUDA was built during the time AMD was focusing every resource on becoming competitive in the CPU market again. Today they dominate the CPU industry - but CUDA was first to market and therefore there's a ton of inertia behind it. Even if ROCm gets very good, it'll still struggle to overcome the vast amount of support (read "moat") CUDA enjoys.
True. After all, Nvidia didn't build TensorFlow or PyTorch. That stuff was bound to be built on the first somewhat viable platform. ROCm is probably far ahead of where CUDA was back then, but the goalposts have moved.
It's not even hard, just slow. You could do that on a single cheap server (compared to a rack full of GPUs): run a CPU LLM inference engine and limit it to a single thread.
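A minimal sketch of that setup, assuming the llama-cpp-python bindings and a GGUF model file on disk (the model path is a placeholder):

```python
from llama_cpp import Llama

# Pin inference to a single CPU thread: correct output, just slow.
llm = Llama(
    model_path="model.gguf",  # placeholder; any GGUF model works
    n_threads=1,              # the "limit it to a single thread" part
)

out = llm("Write a haiku about patience.", max_tokens=48)
print(out["choices"][0]["text"])
```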
I was replying purely to 'Oh really, there was a vote?'
Most places have votes every few years, and the elected representatives can generally make, amend, or keep laws. Candidates can also generally make any promises they wish, and if the general public wants some specific law changed, it's often a good idea for candidates to make that part of their platform. And if people generally don't want a law changed, candidates tend to ignore the issue. Basic representative democracy stuff.
Comments explaining what the code does, which is what an LLM could answer, are basically useless. Comments that describe why the code is the way it is are more valuable, but that's also something LLMs can't reliably infer just by looking at the code.
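To illustrate the difference (a toy example, not from any particular codebase; the vendor is made up):

```python
import time

price, quantity = 9.99, 3

# A "what" comment just restates the code -- exactly what an LLM could answer:
total = price * quantity  # multiply price by quantity

# A "why" comment records context that lives outside the code, which no
# model can reliably recover from the code alone:
time.sleep(0.05)  # AcmePay's API rate-limits bursts; 50 ms keeps us under it
```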
Mathematics is the FORTRAN of the real source. Closer to the real source are probably "real" things like atoms and other universal things.
If I remember correctly, Stargate SG-1 at one point had some ideas about this sort of universal language that multiple species could use for communication, since any sufficiently intelligent species has probably been able to observe atoms and so on, but may have a completely different way of doing "math-like" stuff.
Game developers sometimes make the “randomness” favor the player, because of how we perceive randomness and chance.
For example, this is mentioned in Sid Meier's Memoir.
Quoting from a review of said book:
> People hate randomness: To placate people's busted sense of randomness and overdeveloped sense of fairness, Civ Revolutions had to implement some interesting decisions: any 3:1 battle in favor of human became a guaranteed win. Too many randomly bad outcomes in a row were mitigated.
The original link being discussed in that thread 404s now, but archived copies exist, for example https://archive.is/8eVqt
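A sketch of what those two adjustments from the quote might look like in code. The function and the 10%-per-loss figure are invented for illustration; this is not Firaxis's actual implementation:

```python
import random

def player_wins_battle(player_str, enemy_str, recent_losses, rng=random.random):
    # Any 3:1 battle in the human's favor becomes a guaranteed win.
    if player_str >= 3 * enemy_str:
        return True
    # Base odds proportional to relative strength.
    p_win = player_str / (player_str + enemy_str)
    # Mitigate streaks: each recent bad outcome nudges the odds up.
    p_win += 0.10 * recent_losses  # 10% per loss is a made-up tuning value
    return rng() < min(p_win, 1.0)
```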
I used to get so many comments about how the computer opponent in a tile-based board game of mine cheats and got all the high numbers while they always got low numbers, and I'd be like "that's mathematically impossible: I divide the number of spaces on the board in half, generate a deck of tiles to go into a 'bag', and then give a copy of those same tiles to the other player. So over the course of the game you'll get the exact same tiles, just in a different random order."
Now, to be fair, I didn't make it clear to the player that this was what was happening; they were just seeing numbers come up. But it was still amazing to see how they perceived themselves as getting lower numbers overall than the opponent, all the time.
Meanwhile, on the base game difficulty I was beating the computer opponent pretty much every game, because its A.I. was so basic that it placed its tiles almost totally at random (basically, I built an array of all possible moves that would increase its score, and it picked one at random from those possibilities, not the best one).
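The scheme described above, as a rough sketch (names invented):

```python
import random

def deal_mirrored_bags(board_spaces, tile_values, seed=None):
    """Each player gets an identical copy of one half-board deck,
    shuffled independently: the exact same tiles over the whole game,
    just drawn in a different random order."""
    rng = random.Random(seed)
    deck = [rng.choice(tile_values) for _ in range(board_spaces // 2)]
    player_bag, cpu_bag = list(deck), list(deck)
    rng.shuffle(player_bag)
    rng.shuffle(cpu_bag)
    return player_bag, cpu_bag

def pick_cpu_move(score_increasing_moves, rng=random):
    # The "basic A.I.": any score-increasing move at random,
    # not the best one available.
    return rng.choice(score_increasing_moves)
```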
My dad used to play a lot of online poker, and he'd complain when other players got lucky with their hands: "I know the chances of them getting that are like 5%! They shouldn't have gotten that!" It always reminded me of those people.
In games like Battle for Wesnoth, which have the randomness implemented right, you'll look at a 90-10 scenario with 2 attacks and end up with the 1% outcome (a 10% miss, twice over). Enough to make a man rage. I have degrees in mathematics, I am aware of statistics and all that. And yet when I played that game I would still have an instant "wait, what? that's super unlikely" reaction before I had to mentally control for the fact that so many battles happen in a single map.
It was good because it identified a personal mental flaw.
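The back-of-the-envelope check for that intuition:

```python
# Both attacks missing at 90% accuracy each: 0.1 * 0.1 = 1%.
p_double_miss = 0.1 ** 2

# A map contains many such exchanges, so "1% events" become routine.
# Chance of at least one double miss over n exchanges:
for n in (10, 50, 100):
    print(n, round(1 - (1 - p_double_miss) ** n, 2))
# -> roughly 10% after 10 exchanges, 39% after 50, 63% after 100
```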
I worked on a game where we added a "fairness" factor to the randomness: if you were unlucky in one battle, you were lucky in the next, and vice versa. Mathematically it ended up completely fair. (The game designer hated it, though, and it wasn't shipped like that.)
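One common way to implement that kind of fairness factor (a sketch of the general technique, not necessarily what that game did): carry the "luck debt" from each roll into the next, so the realized hit rate converges on the stated odds.

```python
import random

class CompensatingRNG:
    def __init__(self, seed=None):
        self.debt = 0.0  # positive means the player is owed luck
        self.rng = random.Random(seed)

    def roll(self, p):
        # Bias the stated probability by the accumulated debt.
        hit = self.rng.random() < min(max(p + self.debt, 0.0), 1.0)
        # A miss accrues debt (+p); a hit pays it back (p - 1).
        self.debt += (p - 1.0) if hit else p
        return hit
```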
The better option would be to just increase the flat odds. DQM: The Dark Prince is brutal with its odds, but fair. A 45% chance is 45%.
In games like Civ/EU/Stellaris/Sins/etc., it makes sense that a 3:1 battle wouldn't scale linearly, especially if you have higher morale/tech/etc. Bullets have a miss ratio; 3x as many bullets at the same target narrows that gap and gives the larger side an advantage at destroying the other side more quickly. So just scale the base (1:1) odds by an oversized ratio.
That keeps "losing" realistic... a once-in-a-while happenstance of luck/bad tactics/etc., but it also keeps the outcome generally very favorable and reliable for the stronger side.
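One classic way to get that oversized scaling is Lanchester's square law, where effective combat power grows with the square of force size; the exponent here is a tunable assumption, not a rule from any of those games:

```python
def win_probability(own, enemy, exponent=2.0):
    # exponent=1.0 is plain linear odds; 2.0 mirrors Lanchester's
    # square law, where concentrated fire compounds.
    a, b = own ** exponent, enemy ** exponent
    return a / (a + b)

print(win_probability(3, 1, exponent=1.0))  # 0.75: linear 3:1 odds
print(win_probability(3, 1, exponent=2.0))  # 0.90: squared, i.e. 9:1 odds
```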
For a while now I've had the idea of a [game engine/fantasy console/Scratch clone?] that comes packed with a bunch of example games. The example games should be good enough that people download it just to play them, but they'd also be encouraged to peek into the source code. I'd hope for it to be a sneaky gateway into programming.
For that, I'll keep this in mind: "Unlucky players may look at the source code of a chance-based effect to check if the odds are actually as stated."
The Steam version was created by one guy, but the platform ports have a couple of different authors. The Google Play and Xbox PC versions, for instance, have diverged.
I wonder how the ports influence the upstream and each other. How do they keep the codebases in sync, while also accounting for platform differences?
Can't say for sure how Balatro did it, but typically you build one shared core, and each platform uses that core in its own suitable way. Considering it's Lua, it would feel very natural and be relatively simple for Balatro to do it this way too. There's not much to keep in sync, just ensuring the core remains reusable in the ways the platforms need.
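A minimal sketch of that shared-core pattern (hypothetical names, and in Python rather than Lua for brevity; not Balatro's actual code):

```python
class SaveBridge:
    """The interface each platform port implements."""
    def write(self, slot: str, data: bytes) -> None: ...
    def read(self, slot: str) -> bytes: ...

class GameCore:
    """Platform-agnostic game logic; knows nothing about Android or Xbox."""
    def __init__(self, saves: SaveBridge):
        self.saves = saves

    def save_run(self, state: bytes) -> None:
        self.saves.write("current_run", state)

class InMemorySaves(SaveBridge):
    """Stand-in bridge; a real port would back this with Android
    app storage or Xbox connected storage instead."""
    def __init__(self):
        self.store = {}
    def write(self, slot, data):
        self.store[slot] = data
    def read(self, slot):
        return self.store[slot]

core = GameCore(InMemorySaves())
core.save_run(b"ante-3-seed-XYZ")
```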
The Android and Xbox PC versions look more like forks of a shared codebase. Most of the platform-specific code is abstracted behind a bridge, but even the bridges aren't consistent across the codebases. (Android's save system uses different methods than Xbox's.)