In the context of what Elon has done, the only real discussion should be condemnation. If that leads to Elon fans feeling embattled, well, they should get better role models to look up to.
Plenty of the scientists involved in the Manhattan Project had immediate regrets. Plenty of rich people working in tech don't. That's the difference between having morals and not having morals, and the latter group needs to be judged and shunned.
How would this work? What happens if a child picks up my unlocked phone and copies the authentication data to another device?
I guess you can put the proof-generating code inside some kind of secure enclave? But then it's still not any better than a classic asymmetric exchange, except that the government provides you a certificate for the public key whose private half is held inside the TPM.
Or are you thinking about using a ZKP for a biometric proof? But then this still doesn't solve the issue of a malicious user just taking biometric pictures once, and then re-feeding them to the verifier.
I don't think this is solvable without some kind of trusted computing environment, at which point classic asymmetric crypto is fine anyway.
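A minimal sketch of the challenge-response idea underlying the thread's replay concern (all names hypothetical): if the verifier issues a fresh nonce per session, a proof captured from an earlier session cannot be re-fed to it. HMAC over a shared secret stands in here for the enclave-held asymmetric signing key, since the point is the freshness of the challenge, not the signature scheme.

```python
import hashlib
import hmac
import secrets

# Stand-in for the key that, in a real scheme, never leaves the TPM/enclave
# and whose public half would be certified by the issuing authority.
ENCLAVE_KEY = secrets.token_bytes(32)

def enclave_sign(challenge: bytes) -> bytes:
    # Runs inside the trusted environment: proves possession of the key.
    return hmac.new(ENCLAVE_KEY, challenge, hashlib.sha256).digest()

class Verifier:
    def new_challenge(self) -> bytes:
        # A fresh random nonce per session defeats replayed proofs.
        self.challenge = secrets.token_bytes(16)
        return self.challenge

    def check(self, proof: bytes) -> bool:
        expected = hmac.new(ENCLAVE_KEY, self.challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, proof)

v = Verifier()
old_proof = enclave_sign(v.new_challenge())
print(v.check(old_proof))   # fresh proof passes
v.new_challenge()           # new session, new nonce
print(v.check(old_proof))   # replayed proof fails
```

Note this does nothing about the biometric-capture problem above: if the attacker controls the sensor input, freshness of the challenge only binds the proof to the session, not to a live human.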
What's stupid about using a soft approach, instead of a violent approach, to take away a driver's license from a drunk driver?
Why do police so frequently resort to violence that you're probably not surprised to hear bystanders in NYC were shot by cops pursuing a subway turnstile hopper? Let the implications of that sink in for a moment.
Why have I heard so many times about people losing their life after being pulled over for speeding?
> What's stupid about using a soft approach, instead of a violent approach
The options aren't soft vs violent.
The problem with the soft approach is that it's all about giving the suspected impaired driver more chances to prove they aren't impaired. It's about avoiding removing them from the road, not avoiding a violent confrontation.
While cops shouldn't be dicks to everyone and they should always work to de-escalate, what they shouldn't do is let someone they think is impaired drive off. And that's what the "soft" approach is all about. It's about letting the arresting officer make excuses like "well, they don't seem THAT drunk" or "Well, they seem a little buzzed, but not that bad."
For a regular citizen, the cops would do a field sobriety test, a breathalyzer blow, and then arrest if it comes back high. That's what they should do for everyone they suspect is impaired.
If we wanted to argue for a softer approach, then I could see removing the criminal aspects of a DUI and instead just focusing on getting that person off the road and potentially revoking their license. But in no case should a cop let someone drive off that they suspect isn't fully sober.
> [Letting someone they think is impaired drive off is] what the "soft" approach is all about. [...] But in no case should a cop let someone drive off that they suspect isn't fully sober.
You are reading more into the vague "softly" term than is present in this thread, instead of "respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize." https://news.ycombinator.com/newsguidelines.html
> The options aren't soft vs violent.
That there is a spectrum instead of a binary choice is what I discussed, though maybe it's a regional language quirk: "What's stupid about using a soft[er] approach, instead of a [more] violent approach..."
How are you framing this? It’s an Electron app so it exists, but it doesn’t integrate or perform great. Last I recall you were still required to provide a SIM to sign up, and you needed an iOS or Android primary device to even use the desktop client. Can you use a standalone, fast desktop application like you can with these other protocols? I would say no, so “support” has shades of gray to it.
This is how I got kicked off LINE… they had a Chromium app that I could use tethered to the phone app, but they disabled support for LINE Lite (which had light/dark theme, E2EE, texting, voice/video calls, debatable trackers (Firebase), even stickers & sending a location, all at 8 MiB instead of the 200 MiB+ of the “heavy app”). I refused to “upgrade” as it was a downgrade to me, & since I was no longer registered with a “primary” device, I was booted from the network. I don’t think I want these mobile-duopoly-required apps to be my primary means of communication with folks—especially now that my primary phone isn’t Apple or Google (luckily Open Whisper lets WhisperFish exist).
Not GP but I've also had issues with the Signal Desktop app (installed from the Arch repos).
It's a little sluggish in general (like most Electron apps, in fairness), and occasionally clicking and dragging images onto the application will cause it to freeze and eventually crash.
Plus, there are the general usability issues present in all variants of the Signal client (like no easy way of restoring previous messages on a new device).
It's not terrible or anything, but it's just a solid 6/10 application. I personally wish they were more open to 3rd party clients, so I could have something that integrates with my desktop environment a little better and is snappier, like my Matrix clients.
I'll have to try clicking and dragging images onto the Signal application and see if I notice any difference. I usually actually click the button to add an attachment and then browse to it. I'm also on Win11 but I would hope the experience between OSs wouldn't be too drastically different.
I haven't used Signal desktop, but I find Electron apps in general to be very wasteful of system resources. Out of curiosity, I once compared an Electron-based chat app to a C++ alternative, and found that the former used about 25 times the RAM and generated more CPU load.
If GP's system resources are usually dedicated to other tasks, perhaps trying to run an Electron app on top of those led to resource contention, and poor performance. You wouldn't notice this if your hardware is overprovisioned for the things you do with it.
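For what it's worth, one rough way to make that kind of RAM comparison on Linux (an illustrative sketch, not necessarily how the comparison above was done) is to read each process's resident set size out of /proc:

```python
def rss_kib(pid="self"):
    # Parse resident set size (VmRSS) from /proc/<pid>/status; Linux-only.
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # value is reported in KiB
    return None

# Compare two running apps by PID, e.g. rss_kib("1234") vs rss_kib("5678").
print(rss_kib(), "KiB for this Python process")
```

Note that RSS overstates Electron's unique cost somewhat, since shared libraries are counted in full for each process; PSS (from /proc/&lt;pid&gt;/smaps_rollup) is fairer for multi-process apps.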
The Signal desktop app works fine, but you are right, it is still tied to a mobile account and a phone number. This is the main downside to Signal. I read that the Molly fork will support multiple accounts and a self-hostable server. It probably won't be federated, but that is not really a problem when you can use multiple accounts, and it avoids a lot of the headaches that come with federation.
The other downside of the Desktop is that it requires periodic re-verification with the device you used to set it up. Desktop users are definitely second class citizens in the Signal ecosystem.
Has done for years now, but its desktop support is far inferior to even Matrix chat clients. It works in a pinch but you have to lower your standards quite a lot to use it as a true alternative.
Typically, real humans have some agency on their own existence.
A simulated human is entirely at the mercy of the simulator; it is essentially a slave. As a society, we have decided that slavery is illegal for real humans; what would distinguish simulated humans from that?
It's my experience that a major part of the "anti covid vax and measures" point of view depends on refusing to understand that people who got a grave form of covid but didn't die from it still saturated hospitals, causing additional deaths from other causes.
A computer generating a compiler is nothing new. Unzip has done this many, many times. The key difference is that unzip extracts data from an archive in a deterministic way, while LLMs recover data from the training dataset using a lossy statistical model. Pair that with a feedback loop and a rich test suite, and you get exactly what Anthropic has achieved.
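A toy sketch of that feedback loop (everything here is a stand-in, not Anthropic's actual setup): a deliberately lossy generator keeps emitting plausible-but-sometimes-wrong candidates, and the test suite acts as the filter that decides which one survives.

```python
import random

def run_tests(candidate):
    # A tiny "test suite": the generated function must double its input.
    cases = [(0, 0), (2, 4), (5, 10)]
    return all(candidate(x) == y for x, y in cases)

def lossy_generator():
    # Stand-in for the LLM: samples plausible candidates, most of them wrong.
    candidates = [lambda x: x + 2, lambda x: x * 2, lambda x: x * 3]
    return random.choice(candidates)

def feedback_loop(max_iters=100):
    # Keep generating until a candidate passes the whole suite.
    for _ in range(max_iters):
        candidate = lossy_generator()
        if run_tests(candidate):
            return candidate
    return None

fn = feedback_loop()
```

The quality of the result is bounded by the richness of the suite: any behavior the tests don't exercise is unconstrained.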
While I agree that the technology behind this is impressive, the biggest issue is license infringement. Everyone knows there's GPL code in the training data, yet there's no trace of acknowledgment of the original authors.
It's already bad enough that people are using non-GPL compilers like LLVM (which make malicious behavior like proprietary, incompatible forks possible), so yet another compiler not under the GPL, one that even AI-washes GPL code, is not a good thing.
These tools do not compete against the lone programmer who writes everything from scratch; they compete with the existing tooling. Compiler generators already existed five years ago, as they did in the previous decades. That is a solved problem. People still like to handroll their parsers, not because generating them wouldn't work, but because it has other benefits (maintainability, adaptation, better diagnostics). Perfectly fine working code is routinely thrown away and reimplemented because there are not enough people around anymore who know the code by heart. "The Big Rewrite" is a meme for a reason.
That’s not true. It didn’t have access to the internet and no LLM has the fidelity to reproduce code verbatim from its training data at the project level.
In this case, it’s true that compilers were in its training data, but they only helped at the conceptual level, not by spitting out verbatim gcc code.
How do I know that? The code is not similar to GCC at any level except conceptual. If you can point out the similarity at any level I might agree with you.
> I have a feeling, you didn't look at the code at all.
And you originally asked how someone knew that it wasn't just spitting out gcc. So you reject their statement that it's not like gcc at all with your "you didn't look at the code at all", when it's clear that you haven't looked at it yourself.
Yeah, it's pretty amazing it can do this. The problem is the gaslighting by the companies making this. Companies: "see, we can create compilers, we won't need programmers"; programmers: "this is crap, are you insane?" Classic gaslighting.
It’s giving you an idea of what Claude is capable of: creating a project with the complexity of a small compiler. I don’t know if it can replace programmers, but it can definitely handle tasks of smaller complexity autonomously.
I regularly have it produce 10k+ lines of code that is working and passing extensive test suites. If you give it a prompt and no agent loop and test harness, then sure, you'll need to waste your time babysitting it.
TDD does not require you to know everything you're building up-front. Tests can come out of experimentation, to validate the final build. Tests can be driven by autonomous directed planning.
I'm currently, in fact, working on a system where the LLM semi-independently builds up an understanding of a project and its goals from exploration, and then creates small, targeted improvement plans, including the acceptance criteria that then feed into building the test suites which the build will finally be measured against.
It still needs direction - if you have a large spec or a judge/fitness function, such as you would for a compiler for an existing language, you can achieve a lot just from using that and may not need much additional direction. But even for far more exploratory projects, you can have the LLM surface perceived goals and plans to meet those goals, and "teach it" on the way by giving it points on how to revise a given goal or plan, and have e.g. implementation successes and failures feed into future plans.
My current system has "learned" [1] quite quickly on fairly complex test projects, and I'm in fact right now testing it on a hobby compiler project. The first cycles are frustrating (and an area I'm refining), because it's dumped into a project whose real motivations it doesn't know, and it will start making some code changes you know are bad; letting go of obsessing over that is hard. But ultimately, using that as input to a feedback cycle where you add to its goals (e.g. making clear that one of the goals is code that meets your specific standards) is more useful than managing it in detail yourself.
I'm very close to putting this improvement agent in a cron job for a project I rely on for day-to-day use (yes, I'll make sure I can roll back), because it now very consistently implements improvements either entirely unilaterally or based on minor hints (it has access to some files on my desktop, including a "journal" of sorts, and if I put in a one-liner about an idea or frustration, I'll often come back to find a 300+ line implementation plan for a change to fix it, or lay the foundation for fixing it).
[1] "Learned" in this instance is in quotes for a reason. I'm not fine-tuning models - I have the agent do a retro of its own plan executions and update documents with "lessons learned" that get fed into the next planning stage.
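The retro mechanism can be sketched in a few lines (file name and prompt format are hypothetical, not the commenter's actual system): lessons are plain text appended to a file, and the next planning prompt simply carries them forward.

```python
import json
import pathlib

# Hypothetical store for retro output; no model weights are touched.
LESSONS = pathlib.Path("lessons.json")

def record_lesson(lesson: str) -> None:
    # Appended by the agent's retro step after each plan execution.
    lessons = json.loads(LESSONS.read_text()) if LESSONS.exists() else []
    lessons.append(lesson)
    LESSONS.write_text(json.dumps(lessons))

def next_plan_prompt(goal: str) -> str:
    # Earlier lessons are prepended to the next planning stage's prompt.
    lessons = json.loads(LESSONS.read_text()) if LESSONS.exists() else []
    notes = "\n".join(f"- {l}" for l in lessons)
    return f"Goal: {goal}\nLessons from previous executions:\n{notes}"
```

All the "learning" lives in the prompt context, which is why it transfers across sessions without any fine-tuning.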