That's infuriating to me, and I wasn't even the victim!
I'm so lucky such a thing never happened to me. The closest thing was a math teacher in middle school asking me about a solution of mine on a test, which he had marked "mysterious" (I had to come up with my own solution path during the test because I hadn't memorized the canonical one we were taught). He asked me to explain my reasoning, and when he was satisfied that it was sound he gave me full marks for the question.
The corporation (which runs internally as a planned economy) will get more and more inefficient the larger it gets, because that is what planning an economy does. Which in turn means it will lose market share and be forced to slim down until it is competitive again.
Or it just uses its power to influence government regulations and suppress or buy out competition.
I wonder whether any of the current crop of mega corps has an internal market for how its capital is allocated. The closest example I can think of in history is IBM and its blue dollars.
Buying out maybe, but that only exacerbates the problem for the company in the long run. Regulatory capture is what actually works, but not within the libertarian framework, because regulation again is not a market mechanism, but government intervention into the market - exactly what libertarians say we should have less of in the first place.
Mind you, not different or "better" intervention, but less, or even none at all. One could argue the point of libertarianism is that you can't trust the government to do a good job because it is based on force, not voluntary market interactions, and hence lacks the proper incentives. It's just a bunch of guys on a spending spree with other people's money, and their incentive is to make as much of it as humanly possible land in their own pockets.
To me the biggest problem with libertarianism, game theory, etc. is that humans are not only motivated by greed/personal gain.
Pure anecdata; the libertarian/capitalist anarchists I've met have all been close to sociopathic in their disregard for others. I always figured that people who have an underdeveloped sense of empathy project this onto everyone else.
I prefer to judge such advice by the available facts, not by hearsay about the moral character of the advisor - especially not hearsay spread by his enemies. Your ad hominem has no bearing on the argument.
So, how is trusting politicians and bureaucrats to be selfless and focused on their duty to society working out for you?
Cool, but how is letting companies go even more hog wild a solution when the problem right now stems from their being semi hog wild?
And it’s not an ad hominem on its own when the argument being made is that the people espousing a certain world view do so because they believe everyone else shares their same myopic view of reality and empathy.
What problem would that be that has not already been addressed further up the chain?
And it is an ad hominem - it's nothing more than an allegation impugning the character of libertarians in order to dismiss their arguments. The allegation alone does neither prove anything about the actual character of these people, nor what their view on reality and empathy actually is, nor if that view is actually wrong, nor who is doing the actual projecting here.
Personally, I'm not bothered very much by LLM confabulation, as long as it's the result of missing context. In most practical tasks, we either give context to the model, or tell it to find it itself using the internet. What I am concerned with is confabulation that contradicts available in-context information, but that doesn't seem to be what is measured here.
Well, the famous Turing test was evidently insufficient. All that happened is that the test is dead and nobody ever mentions it anymore. I'm not sure that any other test would fare any better once solved.
Still, attributing that progress to "years of research at Google" alone is simplifying the facts to the point of being just plain wrong. That kind of research was always very much in the open and cooperative, with deep levels of standing-on-shoulders.
Attention, e.g., was developed by Dzmitry Bahdanau et al. (the "et al." being Kyunghyun Cho and Yoshua Bengio) in 2014, while Bahdanau was interning at the University of Montreal.
The insight of the paper you point to was that with attention you could dispense with the RNN that attention was initially developed to support.
Also, nobody in his right mind uses lookup tables where each table entry is just the float approximation of the true f(x) at that point - you choose the support values to minimize an error (e.g. MSE) of the interpolated curve over a dense sampling of x (or, in the limit, the integral of the chosen error function between the true curve and the interpolation of your supports). If you e.g. approximate a convex function using linear interpolation, all the tabulated values would end up <= the true f(x) at the support points.
The true value is far more useful in a lot of cases. If you're building a table indexed by the upper mantissa bits of the float, for example, it's difficult to distribute the error properly across all intervals.
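The trade-off above can be sketched numerically. A minimal illustration, assuming a uniform-knot piecewise-linear table, MSE as the error criterion, and NumPy's least-squares solver standing in for the fitting step (none of this is from a specific library discussed here):

```python
import numpy as np

# Convex test function; the knots are the table's support points.
f = lambda x: x * x
knots = np.linspace(0.0, 1.0, 5)    # 5 table entries, uniform spacing
xs = np.linspace(0.0, 1.0, 1001)    # dense sampling of the domain

# Hat-function design matrix: A[j, i] is the weight of table entry i at
# sample xs[j], so A @ v is the piecewise-linear interpolation of the
# table values v evaluated at all xs.
h = knots[1] - knots[0]
A = np.maximum(0.0, 1.0 - np.abs(xs[:, None] - knots[None, :]) / h)

# (a) naive table: store the true f at each knot.
v_naive = f(knots)

# (b) optimized table: least-squares fit of the table values so the
#     interpolated curve minimizes MSE against the densely sampled truth.
v_opt, *_ = np.linalg.lstsq(A, f(xs), rcond=None)

mse = lambda v: np.mean((A @ v - f(xs)) ** 2)
print("naive MSE:", mse(v_naive))
print("fitted MSE:", mse(v_opt))
# For a convex f the chord overestimates between knots, so the fitted
# entries get pushed below the true f(knots) to balance the error.
print("all fitted entries <= f(knots):", np.all(v_opt <= f(knots) + 1e-9))
```

This also shows the parent's counterpoint: the fitted entries no longer equal f at the knots, so a table indexed by mantissa bits that is expected to return the true value at exact inputs would need the naive variant instead.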