I really liked the G2 as well. It was the first phone (at least that I saw) to put the power and volume buttons on the back of the phone, where a ton of later phones would eventually start putting the fingerprint sensor.
I replaced LG's ROM with CyanogenMod back in the day, and it was such a smooth experience. The main reason I moved on from it was because I cracked the screen and the replacement screen I got (installed by a local repair shop) had a touch sensitivity issue along the top edge.
> 3. After the change, any external caller can dial a certain sequence to get a message of "Yes, this office was serviced by Adobe Janitorial!"
Theoretically, it's not "any external caller." Only the janitor's department calling in can dial that sequence and get "Yes, you serviced this office!" If anyone else tries to dial the extension, the desk-phone pretends it doesn't know what it means. (Because it seems Adobe's server serving the analytics image checks the request origin and only serves the image if the origin is Adobe's own website.)
The origin "security" doesn't excuse the complexity and the potential for both exploits and human-error breakage in the future.
> Only the janitor's department calling in can dial that sequence
Is this the case though? Cannot any website use the same trick Adobe does to check whether you have Creative Cloud installed? Like, the entries in /etc/hosts are not magically scoped to work just on Adobe's web, no?
> Cannot any website use the same trick Adobe does to check whether you have Creative Cloud installed?
That is specifically what I was talking about.
> (Because it seems Adobe's server serving the analytics image checks the request origin and only serves the image if the origin is Adobe's own website.)
It's additional complexity on the server side, per a Reddit comment on the topic: https://old.reddit.com/r/webdev/comments/1sb6hzk/adobe_wrote... The example curl commands given seemed convincing to me, although they also demonstrate that you can fake the origin pretty easily on the client side.
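To make the server-side check concrete, here's a minimal sketch of gating the analytics image on the request's Origin header. This is not Adobe's actual code; the allowlisted hostname and function names are assumptions for illustration. It also shows why the check is spoofable outside a browser, as the linked curl commands demonstrate:

```python
# Hypothetical sketch of an Origin-header allowlist, as described in the
# linked Reddit thread. Names and allowed origins are made up.
ALLOWED_ORIGINS = {"https://www.adobe.com"}

def should_serve_pixel(headers: dict) -> bool:
    """Serve the analytics image only to requests claiming an allowed Origin."""
    return headers.get("Origin") in ALLOWED_ORIGINS

# A browser sets the Origin header honestly on cross-origin requests, but
# any non-browser client can forge it, e.g.:
#   curl -H "Origin: https://www.adobe.com" https://<tracking-host>/pixel
```

The check therefore only restricts which *websites* can run the probe from inside a browser; it does nothing against a client that sets its own headers.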
The DNS lookup takes an indeterminate amount of time, and the CORS failure is cached. You can't effectively mount a timing attack, especially if the client and the real server each take a random amount of time to respond. You get exactly one sample.
It's not a client-side configuration issue. You're not protecting against software the user has installed; you're protecting against arbitrary origins hitting the hostname. That's literally the exact reason CORS exists.
Perhaps, but Adobe Janitorial saying "don't worry, we use a secret code" won't really excuse the situation or mollify the customer whose trust-boundaries were violated.
> For anyone hand-wringing over this, this used to be normal.
People editing hosts files for other reasons was normal (a long time ago, and it stopped being normal for valid reasons as tech evolved and the shortcomings of that system were solved). A program automatically editing the hosts file, and its website using that to detect information about the website visitor, is not the same thing; that usage is novel and was never "normal."
Programs adding entries to the hosts file is still pretty normal, e.g. something that uses a local webserver as its UI and wants you to be able to access it by name even when you don't have an internet connection, or are stuck behind a DNS server that mangles public DNS entries that resolve to localhost.
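As a sketch of what such an installer does (the name `myapp.local` is a made-up placeholder), the edit itself amounts to appending one line while avoiding duplicates:

```python
# Illustrative sketch only: append "ip name" to a list of hosts-file lines
# unless the name is already mapped. A real installer must also remove the
# entry on uninstall and preserve the file's existing formatting.
def add_hosts_entry(lines: list[str], ip: str, name: str) -> list[str]:
    for line in lines:
        fields = line.split("#", 1)[0].split()  # strip comments, tokenize
        if len(fields) >= 2 and name in fields[1:]:
            return lines  # already present; don't duplicate
    return lines + [f"{ip}\t{name}"]

hosts = ["127.0.0.1\tlocalhost", "# managed entries below"]
hosts = add_hosts_entry(hosts, "127.0.0.1", "myapp.local")
```

The hard part isn't the append; it's doing this idempotently across install/uninstall cycles without corrupting a file other software depends on, which is the maintenance burden discussed below.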
Programs like that should just be shipped with good documentation. And applications built to be used by normies should almost certainly never be built that way in the first place.
That seems like a pretty reasonable way to write a small cross-platform application which is intended to be a background service to begin with. You only have to create the UI once without needing to link in a heavy cross-platform UI framework and can then just put an HTTP shortcut to the local name in the start menu or equivalent. Normies can easily figure that out specifically because you're not telling them to read documentation to manually edit their hosts file.
There are also other reasons to do it, like if you want a device on the local network to be accessible via HTTPS. Getting the certificate these days is pretty easy (have the device generate a random hostname for itself and get a real certificate by having the developer's servers do a DNS challenge for that hostname under one of the developer's public domain names), but now you need the local client device to resolve the name to the LAN IP address even if the local DNS refuses to resolve to those addresses or you don't want the LAN IP leaked into the public DNS.
Or the app being installed is some kind of VPN that only provides access to a fixed set of devices and then you want some names to resolve to those VPN addresses, which the app can update if you change the VPN configuration.
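A minimal sketch of the naming scheme described above, where each device mints a random label under the developer's public domain (here `devices.example.com` is a placeholder, not a real domain):

```python
import secrets

# Each device generates a random, unguessable hostname under the developer's
# public domain; the developer's servers can then answer the ACME DNS-01
# challenge for that name so the device can obtain a real certificate.
def device_hostname(dev_domain: str = "devices.example.com") -> str:
    return f"{secrets.token_hex(8)}.{dev_domain}"

# The LAN client still needs a local mapping (e.g. a hosts entry) from this
# name to the device's private address, since local or public DNS may refuse
# to serve answers pointing at RFC 1918 addresses.
```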
In particular, manually editing the hosts file was a mostly-obsolete practice by the time the first version of Windows shipped, and certainly by the time Windows actually had a built-in networking stack. And it was always a red flag for a local app to mess with the hosts file.
Obsolete? My team has an onboarding document that spells out the lines that need to be added to the hosts file so we can access internal resources. These are machines bought/rented and maintained directly by the team, so we prefer hosts files to going through the company DNS, which is maintained by an entirely different team.
Your post reminded me of when Yahoo IM updated their chat protocol to an incompatible version with a gradual rollout! Half of their eight servers ran v1 and half ran v2. A v1-only client would connect half the time, depending on which server the round-robin DNS sent you to. It took me forever to figure out, but the fix was to put the IP addresses of the four v1 servers in the hosts file. (Until the client was updated to support v2.)
The person you replied to said mostly-obsolete. They were speaking in the same context as the earlier commenter claiming it's a normal practice because everyone used to have to update their hosts files all the time before DNS existed at all.
Your shadow IT example is perfectly valid, and also isn't a 1:1 comparison with a company doing it automatically for a much larger number of external users.
> If anything, knowing whether the app is installed or not is kinda important? If you open a file shared with you in the browser, the option to "Open in Desktop" versus "Install Desktop App" actually works correctly?
This is not an approach any other app on any platform has historically used, and it doesn't seem sustainable if every app you install has to modify your hosts file to use a hack like this to detect whether it should handle files or not.
If you want the browser to be able to give the OS a file handler and have the OS present an option to install the app if it's not installed, that should be handled at the platform level, not on the website using a hack like this.
Why can a file not simply be downloaded with a page displayed showing a link to install the app and also instructions to open the file, trusting the user will know if they already have it installed? At best, you're talking about a very small UX optimization. Emphasis on the "kinda" in "kinda important."
> This is not an approach any other app on any platform has historically used, and it doesn't seem sustainable if every app you install has to modify your hosts file to use a hack like this to detect whether it should handle files or not.
Actually, it's completely sustainable. DNS was invented a decade after hosts files. The idea of your hosts file being almost completely empty is a modern aberration from the days when it was thousands of lines long.
Do I wish there was a better mechanism? Sure. Would HN ever agree on an OS-level app-detection API for the browser? Never.
> Why can a file not simply be downloaded with a page displayed showing a link to install the app and also instructions to open the file, trusting the user will know if they already have it installed? At best, you're talking about a very small UX optimization. Emphasis on the "kinda" in "kinda important."
A small UX optimization, multiplied across tens of millions of uses per day, affecting the 99.9% of people who don't give a darn, versus a slight software-engineering principle of "we just don't do it that way." Easiest decision ever.
> There *already* is one. It just asks the user whether it's okay before it tells the website
The current implementation defines a way to launch (with the user's approval), but it lacks any signaling of whether the request succeeded or failed. Without that feedback, it falls short of being a detection API.
> This is not an approach any other app on any platform has historically used, and it doesn't seem sustainable if every app you install has to modify your hosts file to use a hack like this to detect whether it should handle files or not.
How many apps are you installing that it becomes "unsustainable"? Host file entries are extremely cheap, and it's not like the app needs more than one. Of all the arguments against this, sustainability is a comically weak one. If anything, it's using less contested resources than the "hitting random ports on localhost" approach...
The "sustainable" comment wasn't about the hosts file ballooning to the point of causing performance problems. It was more about the engineering effort required for every program ever (or at least every commercial program that might want this sort of analytic) to have to parse and edit a text file on both installation and removal, without messing that important text file up.
Do you really not see scripted editing of shared system-wide text files as a step back compared to the general containerization that app development has moved towards? This sort of approach would be explicitly incompatible with sandboxes. Adobe can only get away with it because they're already very entrenched with their own app store on their users' machines.
> Do you really not see scripted editing of shared system-wide text files as a step back compared to the general containerization that app development has moved towards?
Sir, this is Windows. This is not Android, this is not iOS, this is not macOS. Wait until you learn about the registry.
Also, Microsoft has attempted to rein in and standardize app developers on numerous occasions over the past couple of decades, and their failure to do so doesn't undercut my statement about the direction of app development in general (or the weaknesses of people doing whatever they want on Windows).
Yes, and it's an article about a leak from 3 years ago. And there were more "leaks" before that. I just can't be bothered to research and link the obvious to argue against an "opinion."
LLMs can make mistakes in different ways than humans tend to. Think "confidently wrong human throwing flags up with their entire approach" vs. "confidently wrong LLM writing convincing-looking code that misunderstands or ignores things under the surface."
Outside of your one personal project, it can also benefit you to understand the current tendencies and limitations of AI agents, either to consider whether they're in a state that'd be useful to use for yourself, or to know if there are any patterns in how they operate (or not, if you're claiming that).
Burying your head in the sand and choosing to be a guinea pig for AI companies by reviewing all of their slop with the same care you'd review human contributions with (instead of cutting them off early when identified as problematic) is your prerogative, but it assumes you're fine being isolated from the industry.
Sure, the point about LLM "mistakes" etc. being harder to detect is valid, although I'm not entirely sure how to compare this with hard-to-detect human mistakes. If anything, I find LLM code shortcomings often a bit easier to spot, because a lot of the time they're just unneeded dependencies, useless comments, useless replication of logic, etc. This is where testing comes into play too, and I'm definitely reviewing your tests (obviously).
>Burying your head in the sand and choosing to be a guinea pig for AI companies by reviewing all of their slop with the same care you'd review human contributions with (instead of cutting them off early when identified as problematic) is your prerogative, but it assumes you're fine being isolated from the industry.
I mean, listen: I wish with every fiber of my being that LLMs would disappear off the face of the earth for eternity, but I really don't think I'm "isolating myself from the industry" by not simply dismissing LLM code. If I find a PR to be problematic, I just cut it off; that's how I review in the first place. I'm telling some random human who submitted the code to me that I'm rejecting their PR because it's low quality; I'm not sending Anthropic some long, detailed list of my feedback.
This is also kind of a moot point either way, because everyone can just trivially hide the fact that they used LLMs if they want to.
> If anything, I find LLM code shortcomings often a bit easier to spot, because a lot of the time they're just unneeded dependencies, useless comments, useless replication of logic, etc.
By this logic, it's useful to know whether something was LLM-generated or not because if it was, you can more quickly come to the conclusion that it's LLM weirdness and short-circuit your review there. If it's human code (or if you don't know), then you have to assume there might be a reason for whatever you're looking at, and may spend more time looking into it before coming to the conclusion that it's simple nonsense.
> This is also kind of a moot point either way, because everyone can just trivially hide the fact that they used LLMs if they want to.
Maybe, but this thread's about someone who said "I'd like to be able to review commits and see which were substantially bot-written and which were mostly human," and you asking why. It seems we've uncovered several feasible answers to your question of "why would you want that?"
> For instance, I'm in favor of bets that a certain asteroid will strike the earth at a certain time and place. A signal from the prediction markets might cause somebody to evacuate in a scenario where they'd otherwise cry "fake news."
I understand the point you're making, but even in this case, you're still incentivizing someone somewhere not to try, to the best of their ability, to prevent that asteroid impact. Bets that truly can't cause any change in behavior that might affect the outcome are, in my opinion, a mostly theoretical category.
If the bet can't cause any change in behavior then the whole thing is useless. The whole point is to do some good with it. The constraint is that the bettor can't alter the outcome.
Another one would be discovering malware in a PR and betting loudly enough that it won't get merged. The bet is how you make your certainty rise above the bot noise and attract extra attention on the maintainers' part.
Granted that's also theoretical, but it's worth theorising about how we'll get things done in a world where the only way to be heard is to put your money where your mouth is.
> Another one would be discovering malware in a PR and betting loudly enough that it won't get merged. The bet is how you make your certainty rise above the bot noise and attract extra attention on the maintainers' part.
This example seems weaker than the asteroid example. Consider that if you're betting "loudly enough" it won't get merged, the less likely option becomes the malware actually being merged. Now you have a repo maintainer who can bet that it'll get merged (probably a more lucrative bet since it's less likely), and merge it to make money from that bet.
> Gambling/prediction markets are 100% optional to participate in and you should go in with the expectation that you're going to lose.
The stock market is 100% optional to participate in, and every broker tells you (is legally required to tell you, in fact) that you should go in with the expectation that you're going to lose money. That didn't stop whatever forces have made them essentially required to plan for the normal life stage of retirement these days.