> In terms of strength he's the weakest player to win in half a century even in absolute terms.
Gukesh is arguably stronger than any of Khalifman, Kasimdzhanov, and Ponomariov, who won the FIDE title before it was re-unified. Also, his current rating is higher than either Karpov’s or Kasparov’s was when they first won the title, and his rating when he first won was about the same as Fischer’s when Fischer first won. Neither Kramnik nor Anand was clearly the best player throughout the entirety of his reign, and both of their rankings fluctuated within the top ten.
> Also, his current rating is higher than either Karpov’s or Kasparov’s was when they first won the title, and his rating when he first won was about the same as Fischer’s when Fischer first won.
This doesn't really mean anything. Elo is a purely relative system: the only quantity that matters when performing Elo calculations is the difference in rating between the two players. The absolute value of an Elo rating carries no real meaning and drifts over time based on the volume, skill level, and initial ratings of players entering the pool. Since these change frequently, it is pretty much useless to compare ratings separated in time by more than a decade or so, maybe less. 50+ years is certainly far too long.
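To make the relativity point concrete, here is a minimal sketch of the standard Elo expected-score formula. Shifting both ratings by the same constant (which is what pool-wide inflation or deflation amounts to) leaves the prediction unchanged:

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Standard Elo expected score for player A against player B."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

# Only the rating difference matters; the absolute level cancels out.
print(expected_score(2800, 2700))  # ~0.64
print(expected_score(2400, 2300))  # ~0.64, identical
```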
My views on this, which are mature and long-held, are mostly informed by the results obtained by Kenneth Regan and Guy Haworth in their paper “Intrinsic Chess Ratings”, which, unless you have intelligence to the contrary, is the only rigorous treatment of this issue performed to date, and it remains the only argument that holds any persuasive sway over me.
You say that ratings drift over time to such an extent that using them in comparisons across long time spans is meaningless, yet their analysis found that chess ratings, as a measure of the intrinsic quality of move choice (which must be highly correlated with playing strength), are stable over several decades, with only slight indications that a small amount of deflation has occurred.
Your argument, by comparison, amounts to informal speculation. If I were to share my own, I would say that those potentially error-inducing considerations are statistically insignificant compared to the sheer number of games, that is to say corrective and informative exchanges of points, that occur. Further, I would add that the absolute values of ratings were anchored to the playing strengths of the original rated players and that this calibration has been well preserved even as the player pool has evolved.
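To sketch what I mean by “corrective exchanges of points”: under the standard Elo update rule, the points one player gains are exactly the points the other forfeits (assuming, for simplicity, that both share the same K-factor), so in a closed pool the total is conserved and any drift has to enter through players joining or leaving:

```python
import math

K = 20  # K-factor; assumed equal for both players in this sketch

def update(r_a: float, r_b: float, score_a: float) -> tuple[float, float]:
    """One Elo update; score_a is 1 (A wins), 0.5 (draw), or 0 (A loses)."""
    e_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    delta = K * (score_a - e_a)
    return r_a + delta, r_b - delta  # zero-sum: points merely change hands

new_a, new_b = update(2500, 2400, 1.0)
print(math.isclose(new_a + new_b, 2500 + 2400))  # True: total is conserved
```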
I have heard many such arguments in my time, yet not a single proponent has cared to demonstrate them. What I find amusing is that those same proponents will often readily accept, without controversy, a comparison of a single player (often themselves) across similar time spans as evidence of that player’s progress; for instance, comparing Carlsen’s rating today with one from early in his career, say from 2003 or 2004, which at this point is more than 20 years ago.
I have been into computer chess for many years and was fully expecting those concessionary statements. I have seen plenty of programs in this genre, where a great deal of attention can be gained by fraudulently claiming to have implemented chess in a seemingly impossibly small size. When confronted, the charlatans will often claim, senselessly, that the omissions were in fact superfluous. This is a behaviour I have unfortunately also observed in other areas of computing.
If anyone reading this is interested in small and efficient chess programs that are still reasonably strong: there was an x86 assembly port of Stockfish called asmFish from a couple of years ago (the Win64 release binary was about 130 KiB). Also see OliThink (~1000 LOC) and Xiphos, which has some of the simplest C code I have seen for an engine of its strength. I have not investigated the supposedly 4K-sized engines that participated in TCEC too closely, but from what I have seen so far there are a few asterisks to be attached to those claims.
A 5B market cap would imply a P/E ratio of 1.3 and a P/FCF ratio of 0.8, which essentially would be saying “this business is only worth approximately what it made last year”. The corresponding multiples for other auto makers are typically in the high single digits. Even if you believed Tesla’s whole business would collapse tomorrow (i.e. revenue goes to zero) book value is ~83B and net cash is ~29B.
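As back-of-the-envelope arithmetic (the multiples are simply market cap divided by trailing earnings and free cash flow; the figures are the ones quoted above, treated as illustrative):

```python
# All figures in $B, taken from the comment above.
market_cap = 5.0
pe_ratio, pfcf_ratio = 1.3, 0.8

implied_earnings = market_cap / pe_ratio  # ~3.85: implied trailing earnings
implied_fcf = market_cap / pfcf_ratio     # 6.25: implied trailing free cash flow
book_value, net_cash = 83.0, 29.0         # stated balance-sheet figures

print(f"implied earnings: ~${implied_earnings:.2f}B")
print(f"implied FCF:      ~${implied_fcf:.2f}B")
print(f"cap / net cash:   {market_cap / net_cash:.2f}x")  # ~0.17x
```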
You may think that sounds right, but I can assure you that convincing others that ~$29B of accessible pure cash or ~$83B of equity is really worth only $5B will be a more difficult venture. You can dispute the carrying value of Tesla's assets and liabilities, but the cash is cash, which is why I included that metric as a baseline. At the end of the day, $29B is worth $29B and nothing else.
> YouTube is an absolute clown show. It's so bad that I'm certain Google devs are actively making it terrible on purpose.
Exactly, which is why I thought this was a terrible and meaningless benchmark. It completely obscures the actual video playback performance of these machines; it is more a measure of how awful and inefficient YouTube is. I am surprised that the author did not remark on this, or even seem to be aware of it.
The latest version of Crafty has a significantly higher rating on CCRL than Fritz 10, the version that defeated Kramnik in 2006, when he was World Champion and rated 2750. I do not know what source you used for Crafty’s rating, but ratings from different lists are not comparable. It is highly probable that Crafty running on a Ryzen could defeat any human.
I am also of the opinion that with an optimised program the CRAY-1 would have been on par with Karpov and Fischer. I also think that Stockfish or some other strong program running on an original Pentium could be on par with Carlsen. I am not sure if Crafty’s licence would count as FOSS.
Why does the current design paradigm in image coding formats emphasise supporting as many features as possible in order to have “one image format to rule them all”? You do not see this in audio: does anybody think that Opus and FLAC should be combined into one format? Does the fact that Opus does not support lossless encoding make it worse?
> You can pretty much draw a parallel line with hardware advancement and the bloating of software.
I do not think it is surprising that there is a Jevons-paradox-like phenomenon with computer memory, and, as with other instances of it, it does not necessarily follow that this must be the result of a corresponding decline in resource-usage efficiency.