The US fired many of its government-employed economists [1]. The head of the administration tried to fire people at NOAA during his first term, until he eventually installed a yes-man. Data were deliberately buried in a mad rush during the first weeks and months of 2025 [0]. I'm not sure Mr. Sharif's opinion is well-founded given the known facts.
I worked on a couple of projects with state workforce development agencies and federal agencies. I was always impressed with how much focus there was on the integrity of unemployment numbers, and especially with the emphasis on methodologies that ensure data from the late 1800s can be compared against modern data.
These are a lot of similar-sounding-but-different problems.
EDIT:
For those not aware that China has a long history of less-trustworthy statistics, along with Iran and some other governments, here is some reading to consider:
For Iran, China, and the USSR, for example, you had to back out estimates from observable benchmark information that the regime couldn't contaminate. You didn't have to do that with the US.
The US standard has been to document and standardize approaches -- and to identify when things change and why. This was not common across all economies. It also gives us several parallel data streams, e.g., the alternative official measures of unemployment (U-1 through U-6).
"Attempt" is doing a lot of work there. Companies are driven by a profit motive and are practically required to renege on promises that are not legally enforced.
In a different world they would have earned trust and deserve the benefit of the doubt. This is not that world.
You'll notice that I did not advocate against buildout and grid reconfiguration. Indeed, my company does microgrids. I do, however, believe strongly in being aware of tradeoffs.
In short, I'm very much in favor of building the right solution to a problem.
I'm unsure what triggered the reflexive response of "this is NIMBYism!", and would welcome a follow-up comment so I can understand your train of thought.
I meant visual patterns, too. You're reading what I said at too granular a level. JEPA is visual, ultimately based on pixels. The tokens may be aggregated from pixels until they're as large as whole recognizable objects, but the tokens are not whole mental models themselves.
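To make the pixels-to-tokens point concrete, here is a minimal sketch of the kind of patch tokenization vision models typically start from. This is not JEPA's actual code; the function name and toy image are hypothetical, and the point is only that a "token" begins life as a fixed-size pixel fragment, not a scenario-level model.

```python
# Hypothetical illustration: vision-model tokens start as pixel patches.

def patchify(image, patch):
    """Split an H x W grid of pixel values into non-overlapping
    patch x patch tokens, each flattened into a list."""
    h, w = len(image), len(image[0])
    tokens = []
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            tokens.append([image[r + dr][c + dc]
                           for dr in range(patch)
                           for dc in range(patch)])
    return tokens

# A 4x4 "image" becomes four 2x2 tokens -- low-level fragments,
# nowhere near an object-level or scenario-level representation.
img = [[0, 1, 2, 3],
       [4, 5, 6, 7],
       [8, 9, 10, 11],
       [12, 13, 14, 15]]
toks = patchify(img, 2)  # [[0, 1, 4, 5], [2, 3, 6, 7], ...]
```

Deeper layers can merge such tokens into object-sized chunks, but that hierarchy is still built from pixels, which is the distinction I'm drawing against whole mental models.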
Here's an example of humans evaluating competing mental models as tokens: You see a car, it's white, it's got some blood stains on the door, and it's traveling towards a red light at 90 miles an hour in a 30 mph residential zone, while you're about to make a left turn. A human foot is dangling from the trunk.
You draw on several mental models you have about high-speed chases, drug cartels in the area, murders, etc., and compare these models to determine the next action the car might take.
What were the tokens in this scenario? The color of the car, the pixels of blood, the speed, the traffic pattern? Or whole models of understanding behavior where you had to choose between a normal driver's behavior and that of someone with a dead body fleeing a crime scene?
[0] https://digitalgovernmenthub.org/library/federal-data-are-di...
[1] https://www.govexec.com/workforce/2026/03/report-nearly-95k-...