This probably isn't just an HN problem (GitHub's model is broken now too). Software has become so cheap to make that the old model of tying a new release to a specific person is probably outdated. AI also knows what sounds impressive. So now we're drowning in software releases, and attributing software to a person is meaningless.
Does it not compile for you, or does it not compile at all? Honest question. It's a nice project on the face of it, but if it's all AI fever dreams, that would be disappointing. I don't have the cycles to try it out right now.
I suspect that if Palantir had the same exact business model but the CEO had supported Kamala Harris, this campaign would not exist. Or if Palantir had said it supported targeting "fascists" the same people denouncing it would be supporters. So if you're actually serious about sovereignty it probably needs a whole campaign around that cause.
I think security became part of compliance, so security recommendations got detached from actual security. A lot of security recommendations seem to be busy work that justifies having a huge compliance industry. An example of this is code security scanners whose output isn't even useful: running the tool, which mostly surfaces irrelevant findings, is still required for compliance even though it does basically nothing for security.
I haven't gone on in many years, approaching decades at this point. I feel like it was both a huge waste of time and also useful. I eventually lost interest in keeping up with it all. I am surprised anyone's mind can handle being there for long. Even m00t himself left, and I don't think he ever came back.
I didn't see it that way. It seems like NGOs and IGOs have been pushing for internet restrictions for a long time. There has suddenly been a push for age restrictions, allegedly because of abuse material. This happens annually: some international group claims there needs to be something draconian abolishing encryption, or some other privacy-invading measure, to stop child abuse and improve security. The laws are thousands of pages long, appear out of nowhere, and we're expected to believe it's all organic and that politicians are deeply concerned about the issue.
So it really wouldn't be hard for the same kind of age-restriction legal framework to appear in the US. It just takes compliance on our part. The UK is just one tentacle of the legal bureaucracy. It wouldn't surprise me if a bill called the Online Child Safety Act, or something like that, appears soon and happens to coincide with a bunch of the issues Ofcom raises in this lawsuit.
> It seems like NGOs and IGOs have been pushing for internet restrictions for a long time. There has suddenly been a push for age restrictions allegedly because of abuse material. This happens annually.
we’re seeing some good evidence the most recent pushes were secretly funded and directly written by meta, the corporation. [0][1]
according to the link in there,
> Rep. Kim Carver (R-Bossier City), the sponsor of Louisiana's HB-570, publicly confirmed that a Meta lobbyist brought the legislative language directly to her.
and they’ve put as much as 2 billion dollars into it. and yes, that’s billion, with a B.
corporations openai, meta, and google were absolutely backing the push for the age verification bill in california and ohio. [2][3][4]
Reading the original research and stripping away the motives implied by the bot, the data is consistent with another interpretation: namely, that Meta is going with the flow and using the opportunity to push for regulation that impacts its interests less while affecting its competitors more.
The original research is riddled with baked-in conclusions and has not been verified independently. It's also mostly LLM-generated.
> and they’ve put as much as 2 billion dollars into it. and yes, that’s billion, with a B.
The original report that cited the $2 billion number was AI-generated slop. The $2 billion number wasn't from Meta; it was from Arabella Advisors.
The AI-generated report showed only about $20-30 million per year across all of the lobbying efforts combined.
Even the Show HN post was full of AI slop, claiming things like "months of research" when the Claude-generated report showed it began a couple days prior.
So please stop repeating this AI generated junk. It dilutes any real story and the obvious falsehoods make it easy for critics to dismiss.
That’s across all lobbying efforts combined. It’s not out of line for a company of that scale that is trying to do things like build data centers.
There’s a motte-and-bailey fallacy happening with that “Meta spent $2 billion” report: the $2 billion number is used as a hook, then swapped out for a different argument if the other party is observant enough to see that it’s BS.
India is considering these bans. I suspect every country in the world is thinking of them.
I work in safety, and you are right in that this comes up every year. The pressures have been building up and it’s coming to a head. However:
0) Techlash is a thing, and HN regularly underestimates the vehemence and anger behind it.
1) There IS an organic component, driven by voters globally.
2) It is also meta and governments, taking advantage of a crisis to further their ends.
Governments globally are tending towards authoritarianism. Tech firms impact most of the world, but are barely responsive to even the American government.
Voters around the world are increasingly terrified of what tech is doing, while tech is entirely unresponsive to their concerns. Tech is very firmly the bad guy today, when it used to be the “good guy” in the 90s.
So governments are more than happy to be seen as putting tech in its place, while gaining more power for themselves.
A few anecdotes about how bad the safety side is: NDAs are so prevalent, and tech is so averse to customer support, that safety teams have no formal signal-sharing methods.
The number of requests to recover accounts, point out fraud, or even address CSAM that go through WhatsApp, Slack, Discord, etc. is heartbreaking.
To be blunt, it’s a Kafkaesque fuck up that the whole world is stuck in, and people are pissed.
I go back and forth on this. I relate it to software. I don't think AI can meaningfully write software autonomously. There are people who oversee it and prompt it, and even then it might write things badly, so there needs to be a person in the loop. But that person should probably have very deep knowledge of the software, especially for, say, low-level coding. And that person probably developed the knowledge by coding things by hand for a long time; coding things by hand is part of acquiring the knowledge. But people, especially students, rely heavily on AI to write code, so I assume their knowledge growth is stunted. I don't know if mathematical proofs will help here. The specs have to come from somewhere.
I can see AI making things more productive, but it requires humans to be very expert and do more work. That might mean fewer developers, but all of them more skilled. It will take a while for people to level up, so to speak. It's hard to predict, but I think there could be a rough transition period, because people haven't caught on that they can't rely on AI, so they will either have to get a new career or, ironically, study harder.
An AI’s ability to meaningfully write software autonomously has changed hugely even in the last 6 months. They might still require a human in the loop, but for how long?
Quantitative measures of this are very poor, and even those are mixed.
My subjective assessment is that agents like Copilot got better because of better harnesses and fine tuning of models to use those harnesses. But they are not improving in the direction of labor substitution, but rather in the direction of significant, but not earth-shaking, complementarity. That complementarity is stronger for more experienced developers.
This LLM ability is directly proportional to the quantity of encoded (i.e. documented) knowledge about software development. But not all of the practice has been clearly communicated in that form. Much of mastery resides in tacit knowledge: the silent, intuitive part of a craft that influences the decision-making process in ways that sometimes run counter to (possibly incomplete or misguided) written rules, and which is by definition very difficult to put into language, and thus difficult for a language model to access or mimic.
Of course, it could also be argued that some day we may decide that it's no longer necessary at all for code to be written for a human mind to understand. It's the optimistic scenario where you simply explain the misbehavior of the software and trust the AI to automatically fix everything, without breaking new stuff in the process. For some reason, I'm not that optimistic.
You don’t have to convince me of that; I’m feeling quite secure in my job for the minute. I’m just aware that we may look at the actual code less and less as confidence in it grows or outpaces the confidence you’d have in an equivalent human reviewer. There’ll always be jobs in handling the riff-raff of the machines at some level of abstraction.
I am not saying AI's abilities are the shortcoming here. The problem is that people need to trust that software has certain attributes, and for now that requires someone with knowledge to be part of the process. It's quite possible development becomes detached from human trust. As I said, that would reduce the number of developers, but the ones who are left would need deep knowledge to oversee it, and even that may eventually go away. Whatever happens in the future, for now I think people will have to level up their knowledge and skills or get a new career, and that's probably true for most professions.
It's probably an 80/20 or 90/10 problem. Tesla FSD also seems amazing to some percentage of the population, but the more widely it gets used, the more cracks appear.
And then you let them train themselves and no one notices when they "accidentally" remove the guardrail prompts from the next version. And another 10 years later, almost no one remembers how "The Guardian" learns new things or how to stop it from being evil.
I just meant having a similar level of confidence in the code as if it had been checked by an equally fallible human. Not a far-reaching philosophical point.