I feel some "commoditize your complements" (Spolsky) vibes hearing about these acquisitions. Or, potentially, "control your complements"?
If you find your popular, expensive tool leans heavily on third-party tools, it doesn't seem a crazy idea to purchase them for peanuts (compared to your overall worth) to both optimize your tool to use them better and, maybe, reduce the efficacy of how your competitors use them (changing the API over time, controlling the feature roadmap, etc.). Or maybe I'm being paranoid :-)
Not allowing AI assistance on PRs will likely decimate the project in the future.
I can't help but wonder if this matter could result in an io.js-like fork, splitting Node into two worlds: safe-but-slow-moving and AI-all-the-things. It would be historically interesting, as the GP poster was, I seem to recall, the initial creator of the io.js fork.
Context is important, but isn’t HN’s social context, in particular, that the site is entirely public, easily crawled through its API (which apparently has next to no rate limits) and/or Algolia, and has been archived and mirrored in numerous places for years already?
I don’t know how their behavior differs, but I live in a country with patchy mobile coverage, and Spotify and Apple Music are night and day in how they cope with this. Spotify is far more robust, and I assume it prefetches more aggressively.
I've been working on a pure Ruby JPEG encoder, and a bug led me to an effect I wanted. The output looked just like the "crunchy" JPEGs my 2000-era Kodak digital camera used to put out, but it turned out the encoder wasn't following the zig-zag pattern properly and was just writing coefficients in raster order. I'm now on a quest to figure out whether some early digital cameras had similar encoding bugs, because their JPEG output was often horrendous compared to what you'd expect for the file size.
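For anyone curious what the bug amounts to: the zig-zag scan order can be sketched in a few lines of Ruby (a toy illustration, not code from my encoder; `zigzag_order` is just a name I've made up here):

```ruby
# JPEG expects the 64 quantized DCT coefficients of an 8x8 block to be
# written in zig-zag order: walk the anti-diagonals, snaking direction
# each time, so low frequencies come first. Writing them in plain
# raster order instead scrambles which frequencies the decoder
# reconstructs, hence the blocky "crunchy" look.
def zigzag_order(n = 8)
  (0...n).to_a.product((0...n).to_a).sort_by do |r, c|
    d = r + c
    # even diagonals run bottom-left to top-right, odd ones the reverse
    [d, d.even? ? -r : r]
  end
end

raster = (0...8).to_a.product((0...8).to_a)
# raster starts [0,0], [0,1], [0,2], ...  (left to right, top to bottom)
# zigzag starts [0,0], [0,1], [1,0], [2,0], [1,1], [0,2], ...
```

Because both scans visit the same 64 positions, the buggy output is still structurally valid JPEG; the coefficients just land in the wrong frequency slots.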
The interesting thing about the situation I mentioned is that while the encoding algorithm was wrong, the actual output was valid JPEG that simply didn't look quite visually correct. But you're right, invalid encoding can be a problem too, and I have noticed during testing that a lot of decoders are quite forgiving of it.
Ah yeah, that is an interesting case. In my situation, a thermal camera I got was inserting an invalid frame at the start, which was likely used by their proprietary program to store data. VLC would play it, but most Mac/iOS apps would refuse to play the video until I used ffmpeg to delete the invalid frame.
Yes. The size is slightly different here to show it off better, but compare left and right in https://peterc.org/misc/queen.png (both are the same "quality" and file size, but the right-hand one has incorrect quantization ordering).
Also, it turns out it wasn't a Kodak camera, but a Casio QV-10 I had much earlier. You can sorta see what I meant here: https://medium.com/people-gadgets/the-gadget-we-miss-the-cas... .. though the resolution is far lower too. The weird blocky color aberrations aren't just JPEG artefacts but related to the same quantization ordering issue, I think.
My assumption also, but once this thing spat out those images by accident, I started to wonder if maybe broken code ended up in the firmware of these cameras and just.. stayed there :-D Might be a fun project one day if I can get hold of one and rip out the firmware!
As for you, OP, I have no idea why age is a factor to consider here.
This is only one data point, but my dad was a programmer and frequently complained about cognitive decline once he hit his mid 50s. From talking to him, he remained sharp at a conceptual, high level, knowing what he wanted to do and how it would be done, but struggled with the tooling, the logistical details, etc. He didn't make it to the AI era, alas, but AI could be a godsend for people who have the proven technical chops and background but find juggling a lot of minutiae is becoming difficult.
I'm sure there are cognitive declines as you age, but even discounting those there's some fundamental change happening to the opportunity space.
I'm in my mid 40s, I've had a really fulfilling career working on interesting things and making decent money, and over that time have accumulated a few passion projects that I knew were always out of my reach.
Well, technically within my reach, but I'd need to somehow find someone to pay for me and a team for some period of time to work on them.
When I started playing around with these tools, it started feeling like maybe some of my ideas were within reach. Some time after, it felt plausible enough that I've decided to go for it. I'm actively in the middle of some deep performance research that I simply would not have the bandwidth or capacity for without these tools.
I've also managed to acquire enough confidence in the likelihood of some degree of success that I'm investing in starting a company (self-funded) to develop, release, and license the stuff I'm building.
I don't know exactly how my ideas will turn out, but that's part of the excitement and anticipation. The point is, I never felt I had enough breathing room to really go for it (between normal life obligations like the mortgage, feeding the kids, etc.).
These tools have changed the equation enough that it's become feasible for me to pursue some of these ideas on my own. Things I would probably have shelved for the rest of my life, or maybe tried to encourage and interest others in doing.
Right. If these tools are so good (and they are) there should be numerous better-than-Slack apps by now that let you do exactly what you want. It doesn't take Anthropic to make it. (At our company, we cheated and edited 37signals' Campfire instead because we got sick of Slack's ads pushed into our paid instance.)
Yes, if I go to Slackbot we get this: https://pbs.twimg.com/media/HCowV2GXsAAmpXN?format=jpg&name=... - there's no X and no way to get rid of it. Just an ad for their pro plan every time. This is on whatever the normal paid plan is. (We're keeping it around for a few months before we cancel to see if we need to go back on there to find things.)
The actual meaning of a "clean room implementation" is that it is derived from an API and not from an implementation.
I know you were simplifying, and not to take away from your well-made broader point, but an API-derived implementation can still result in problems, as in Google vs Oracle [1]. The Supreme Court found in favor of Google (6-2) along "fair use" lines, but the case dodged setting any precedent on the nature of API copyrightability. I'm unaware if future cases have set any precedent yet, but it just came to mind.
Yeah, a cleanroom rewrite, or even "just" a copy of the API spec, is something to raise as a defense during a trial (along with all the other evidence); it's not a categorical exemption from the law.
Also, I think it's important that the API here is really minimal (compared to the Java standard library); the real value of the library is in the internal detection logic.
Common for them. Even for projects on the main Google GitHub org, more often than not there'll be stuff like "not an official Google product" and "it is just code that happens to be owned by Google" in the README.