Being in this industry for over 20 years has probably jaded me a lot. I understand that's the plan, but it's almost always the plan (or at least what's publicly stated).
Only time will tell whether it affects the ecosystem negatively. Best of luck, though; I really hope this time is different™.
I've been in the industry for similarly long, and I understand and sympathize with this view. All I can say is that _right now_, we're committed to maintaining our open-source tools with the same level of effort, care, and attention to detail as before. That does not change with this acquisition. No one can guarantee how motives, incentives, and decisions might change years down the line. But that's why we bake optionality into it with the tools being permissively licensed. That makes the worst-case scenarios have the shape of "fork and move on", and not "software disappears forever".
I personally get a lot of confidence in the permissive licensing (both in the current code quality, and the "backup plan" that I can keep using it in the event of an Astralnomical emergency); thank you for being open source!
>Seems like the big AI players love buying up the good dev tooling companies.
Would be good mustache-twirling cartoon-villain tactics, you know, trying to prevent advances in developer experience to make vibecoding more attractive =)
It also hints that even The Big Guys can't LLM their tooling fully, and that the current bleeding-edge "AI" companies are doing that IT thing of making IT for IT (i.e. dev components, tooling, etc.), instead of conquering some entire market on one continent or the other…
Makes you really think about true productivity. If these companies have beyond-cutting-edge unreleased models, and thus the best possible tools, shouldn't they be able to poach just the few most important people for cheaper? Those people could then use AI to build a superior product very quickly. There's also the value of buying a userbase. But I wonder how the key-talent purchase strategy would compare...
I gave a talk about the paper in our internal journal club recently (we work on similar problems, usually using stereo imagery though).
It's a nice piece of work. I especially like the sections on data cleaning and registration, as that seemed to have been one of the limiting factors of the previous approaches.
I am sceptical about how accurately you can predict heights for specific trees from mono-images, but I think for cases where you just need to be right on average (e.g. biomass estimation, fuel load estimates) it's a great approach.
> We additionally release a global GeoTIFF of input image acquisition date, where pixel values encode year minus 2000 (e.g., 18.25 indicates April 2018)
That being said, I am sceptical of how accurate mono-depth models can be on a single-tree basis. I would probably trust them to do large-scale biomass estimates, but probably not single-tree height assessments.
> CHMv2 is derived from single-date imagery, where the acquisition process selects the best available image within a target period (2017–2020). This limits the direct use of the released CHMv2 data for attributing canopy height to a specified year of interest. To support change applications, we provide the image acquisition date associated with each prediction in the dataset metadata.
So generally a few years out of date, but the dataset is transparent about when each image was taken.
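That "year minus 2000" encoding is easy to decode. A minimal sketch, assuming fractional years map linearly onto days (the function name is hypothetical, not from the dataset's tooling):

```python
from datetime import date, timedelta

def decode_acquisition_date(pixel_value: float) -> date:
    """Decode a date-layer pixel (years since 2000) into a calendar date.

    A pixel value of 18.25 means 2000 + 18.25 years, i.e. roughly April 2018.
    """
    start = date(2000, 1, 1)
    # Convert fractional years to days (365.25 days/year; leap-year
    # subtleties are ignored, which is fine at this precision).
    return start + timedelta(days=pixel_value * 365.25)

print(decode_acquisition_date(18.25))  # roughly April 2018
```

This matches the example in the paper's release notes (18.25 → April 2018).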
Marginal Revolution has been talking up prediction markets since before they existed. In fact, Polymarket was probably created after its founder read Cowen's thoughts on prediction markets.
It may be cherry-picking, but I think some commenters misunderstand this (or maybe I do).
The implication seems to be "12 hours before the resolution, things are obvious anyway". But if that were the case, I could pick some wager that is obviously true but priced at, for example, 70%, and put my money on that. If it were true that "12 hours before the resolution it's obvious what the result is", everything would be in the 0% or 100% buckets. I believe getting events predicted at 30% confidence right exactly 30% of the time is impressive whether that's 12h or 120h before.
Disclaimer: I don't know much about prediction markets, just what I understood from the blog post.
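The calibration claim above can be made concrete: bucket predictions by stated probability, then compare each bucket's mean forecast against the fraction of events that actually resolved "yes". A minimal sketch with hypothetical data (not Polymarket's; the function name is mine):

```python
from collections import defaultdict

def calibration_table(forecasts, outcomes, n_buckets=10):
    """For each probability bucket, compare the mean stated forecast
    with the empirical frequency of the event resolving 'yes'."""
    buckets = defaultdict(list)
    for p, hit in zip(forecasts, outcomes):
        buckets[min(int(p * n_buckets), n_buckets - 1)].append((p, hit))
    return {
        b: (sum(p for p, _ in pairs) / len(pairs),   # mean forecast
            sum(h for _, h in pairs) / len(pairs))   # empirical hit rate
        for b, pairs in sorted(buckets.items())
    }

# Hypothetical data: ten events all forecast at 33%. A well-calibrated
# forecaster should see roughly 3 of the 10 resolve 'yes'.
forecasts = [0.33] * 10
outcomes = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
print(calibration_table(forecasts, outcomes))
```

A market is well calibrated when the two numbers in each bucket roughly agree; the blog post's point (and the objection above) is about whether that agreement in the middle buckets is impressive or merely an artifact of near-certain events.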
1. As pointed out in the comments, you can inflate your prediction success rate by predicting things that are 100% to happen. There are plenty of 99.9% bets on Polymarket with degens betting on some 0.01% lottery event trying to strike a jackpot against the odds, and those will inflate the perceived accuracy.
2. All you're really doing is paying insiders to leak information a few hours ahead of time. Insofar as Polymarket is unusually accurate on things that aren't ~100% to happen, it's likely because the 12-hour window that post measures is when all of the insiders place their last-minute bets telling you what will happen. This is extremely bad for society. It's wealth redistribution from stupid people to unethical people[1], and it could completely compromise national security when, e.g., an insider tells you 12 hours ahead of time that the US is about to launch an invasion of Venezuela. There is no societal benefit to this.
[1] Even if you have no sympathy for idiots who bet their life savings on markets without having insider information, gambling addiction has extremely detrimental effects on society and directly results in increased crime rates, divorce rates, etc. as people lose all of their money and do bad things in desperation, so it is a problem that becomes everyone else's problem.
I think there are lots of plots that revolve around rigging sports betting (e.g. a sub-plot in Pulp Fiction), but I can't really think of a case that is not about manipulating the actual act that occurred, but rather about manipulating the news reporting of the act.
I think you might be over-fixated on a very prediction-market-esque framing of this plot device... if you broaden it slightly, the idea of someone in a fictional world manipulating the news reporting of an act or set of acts, rather than caring so much about the root act itself, is, as stated before, quite common.
> Pollyhop is a fictional, Google-esque search engine that according to Leann’s polling expert is being exploited by the Republican candidate Will Conway in ways that suggest Underwood can’t possibly beat him in the general election. The explanation of how Pollyhop works is convoluted at best, but the gist is that Conway and his people are manipulating search engine results so that only positive coverage of their side appears.
Or more recent examples of what essentially boils down to the plot device of "media manipulation" aka manipulating the "news reporting of the act":
- See the most recent season of Industry, which included several plot points about manipulating news coverage as a short-seller and the company being targeted fought back and forth (including specific focus on the individual journalists involved)
- See Andor, everything about how the Empire twists perception of what's happening on Ghorman, leading up to the Ghorman massacre itself, and then culminating in Mon Mothma's speech in the Senate denouncing "the death of truth is the ultimate victory of evil"
- See The Orville, a particular episode: https://orville.fandom.com/wiki/Majority_Rule which includes the plot point of hacking that society's "master feed" to plant false manipulative stories to curry public favor and save a character from being punished
- See The Boys, how Vought manipulates the media to twist coverage of their "heroes" even when they commit atrocities
- See other House of Cards plotlines involving Zoe Barnes and being a direct mouthpiece for Frank Underwood
I think the only real difference, if any, is that in the most common form of portrayal, less attention is paid to the journalist as the point of leverage, and how they deal with threats or bribes or whatever. The fact that such manipulation occurs is commonly accepted as a trope, without requiring too much of a deep dive. Whether a story chooses to focus on the "reporter's perspective" is perhaps less common, but not uncommon IMO.