ACCount37's comments | Hacker News

Good. Video capture on second-rate Linux SoCs is hell - lots of blobs and weird custom vendor SDKs that only work with the vendor's own happy-path demos and nothing else.

I hope that the more SoCs get mainline V4L2 support, the more likely future SoCs are to use it instead of doing something non-standard and awful.


No one gives a shit about "global consensus". As demonstrated in the 2020s by multiple countries taking major unilateral actions unopposed.

If a nuclear power starts SAI (stratospheric aerosol injection), what is everyone else going to do? Shake their fists at the sky, realistically.


MS has mostly abandoned that approach now. But during the Windows 8 days? Yeah. There was a legitimate concern that MS would lock down Windows and try to funnel everything through the Microsoft Store, establishing an Apple-style walled garden.

The concern was serious enough that Valve took a defensive posture and started investing in Linux support. Which, at first, largely failed - but eventually resulted in the Steam Deck.


For sure, and I'm glad they backed off from it. I'm also glad they attempted it, because it pushed Valve into making SteamOS so good. But Microsoft really did want to go down the same path, and I do not trust them not to try it again.

I don't think there are major unresolved economic tensions between the US and Iran or the like. The US isn't, somehow, mad because Iran or Venezuela are suddenly very rich, prosperous, and independent - that simply isn't true.

The closest to your dynamic would be that between the US and China, and those two aren't at war as of yet. Iran is vaguely supported by China, but it's a low level of support, and it isn't China's proxy.


One theory is that control over Venezuelan and Iranian oil is a means of constricting Chinese economic competition.

It definitely is control over the currency in which oil is traded.

Yes, that's the "it actually makes sense" line that the more repugnant conservative pundits have been pushing, because those guest spots on the right-wing networks require you not to criticize the administration in any way.

Trump may be a violent moron, but this goes back further. US sanctions and intimidation of Iran and Venezuela have been supported by both parties when in power. It's a US regime thing, not a party/administration thing (that stuff is for the mugs who believe they have a democracy).

The US relationship with China is fascinating. My entire life it has both been an economic boogeyman, the nation nipping at our heels, and yet also the manufacturing engine powering everything our companies were creating.

Ignoring the one-sided benefits of that (even though you shouldn't), it kind of reminds me of the US and Britain's relations.

Not a 1:1, but: the continental separation, the "greed" of external companies trying to exploit the natural resources and workforce.

And yet we're allies today.

If you're interested in the topic, I'd highly advise checking out Sarah Paine and her lectures. An interesting viewpoint on Mao and the rise of China.


That's a property shared by any large-scale government spending.

The difference between pouring $80B into a war and pouring the same into infrastructure is that one gives you a more developed MIC and a lot of munitions and a lot of explosions (exported), and the other gives you... infrastructure, and a construction industry.


A big part of this is that apparently, any president can unilaterally decide to go to war and spend $1B per day destroying things, but building infrastructure for Americans requires the agreement of 60 US Senators.

Pre-emptive strikes are “national security”, but ensuring nutritious food for children in schools, safe bridges and potable water, and healthcare are not “national security”.

Look what Biden had to do to try and get Americans a piddling amount of paid sick leave and paid parental leave. And still 60 votes couldn’t be mustered. But if he wanted to bomb another country back to the Stone Age, that was well within his capacity.


US states are free to build infrastructure without any federal involvement or permission. California just spent $114M to build a wildlife crossing bridge over Highway 101.

https://smmc.ca.gov/liberty-canyon-wildlife-corridor/


That’s not the same thing though, is it?

How is it not the same? State governments can just build things without the approval of federal senators.

Starting with CodingJeebus' comment, the context of the discussion is what the US federal government can and cannot do, or does and does not do, at the behest of a single person (the US President). They have the power to direct destruction, but not the power to direct creation.

Not only are a state government's capabilities irrelevant, they are also incomparable to the might of the US federal government, given its unique ability to sell US Treasuries and issue US currency. State governments are also in competition with each other, unlike the federal government, which is in competition with other countries and has more power to restrict and negotiate trade agreements.


You're completely missing the point. States can build infrastructure on their own. They don't need to depend on the federal government for everything.

Are they? Not conscious?

If you list out every prominent theory of consciousness, you'd find that about a quarter rules out LLMs, a quarter tentatively rules LLMs in, and what remains is "uncertain about LLMs". And, of course, we don't know which theory of consciousness is correct - or if any of them is.

So, what is it that makes you so sure, oh so very certain, that LLMs just "feel" conscious but aren't?


Because they don't _understand_ things. If I teach an LLM that 3+5 is 8, it doesn't "get" that 4+5 is 9 (leave aside the details here, as I'm explaining for effect). It needs to be taught that as well, and so on. We understand exactly everything that goes into how LLMs generate answers.

The line of consciousness, as we understand it, is understanding. And as for what actually constitutes consciousness, we're not even close to understanding it. That doesn't mean that LLMs are conscious. It just means we're so far from the real answers to what makes us what we are that it's inconceivable to think we could replicate it.


> Because they don't _understand_ things. If I teach an LLM that 3+5 is 8, it doesn't "get" that 4+5 is 9 (leave aside the details here, as I'm explaining for effect). It needs to be taught that as well, and so on. We understand exactly everything that goes into how LLMs generate answers.

What you're saying just isn't true, even directionally. Deployed LLMs routinely generalize outside of their training set to apply patterns they learned within the training set. How else, for example, could LLMs be capable of summarizing new text they didn't see in training?


How is it not true? There's a world of difference between predicting the next word of a sentence in a summary and understanding the tenets of mathematics. You're mistaking general application of mathematical knowledge for memorization of mathematical outcomes.

Leave aside "the details" like you being obviously, provably wrong?

We've known for a long while that even basic toy-scale AIs can "grok" and attain perfect generalization of addition that extends to unseen samples.

Humans generalize faster than most AIs, but AIs generalize too.
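
For the skeptical, here's a minimal sketch of the kind of toy setup I mean, in the spirit of the grokking experiments (Power et al., 2022). Assumes PyTorch; the architecture and hyperparameters are illustrative, not tuned. Train a tiny network on modular addition with half of all input pairs held out - with weight decay and enough steps, accuracy on the never-seen pairs eventually climbs toward 100%:

    import torch
    import torch.nn as nn

    p = 97  # addition mod a prime; p*p possible input pairs
    pairs = torch.cartesian_prod(torch.arange(p), torch.arange(p))
    labels = (pairs[:, 0] + pairs[:, 1]) % p
    perm = torch.randperm(len(pairs))
    train_idx, test_idx = perm[: len(perm) // 2], perm[len(perm) // 2 :]

    model = nn.Sequential(
        nn.Embedding(p, 128),  # (N, 2) operand pairs -> (N, 2, 128)
        nn.Flatten(),          # -> (N, 256)
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, p),     # logits over the p possible sums
    )
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(50_000):
        opt.zero_grad()
        loss = loss_fn(model(pairs[train_idx]), labels[train_idx])
        loss.backward()
        opt.step()
        if step % 5_000 == 0:
            with torch.no_grad():
                preds = model(pairs[test_idx]).argmax(-1)
                acc = (preds == labels[test_idx]).float().mean().item()
            print(f"step {step}: train loss {loss.item():.3f}, held-out acc {acc:.2%}")

Half the input space is never seen in training, so the jump in held-out accuracy can't come from memorization.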


Then prove I'm wrong. Prove that an LLM can in fact solve completely novel arithmetic.

> The line of consciousness, as we understand it, is understanding.

Is it? I'm no expert, by any stretch, but where does this theory come from?

I don't think anyone knows what consciousness is, or why we appear to have it, or even if we do have it. I don't even know that you're conscious. I could be the only conscious being in the universe and the rest of you are just zombies, with all the right external outputs to fool me, but no actual consciousness.


Well, we're not. Theory of mind is _understanding_ that you're not.

This isn't meant to be an answer that would satisfy everyone, but in my opinion consciousness is a specific biological / evolutionary adaptation that has to do with managing status, relationships, and caring for young. It's about having an identity and an ego and building mental models of the egos / identities / etc of others.

I don't think there's any reason we couldn't in principle attach this sort of concept to an LLM, but it's not something we've actually done. (and no, prompting an LLM to act as if it has an identity does not count)


The fact that it's a box with a plug and a state that can be fully known. A conscious entity has a state that cannot be fully known. Far smarter people than me have made this argument, and in a much more eloquent way.

Turing aimed too low.


And the chatbots don't even pass the Turing test.

I've never had a normal conversation. It's always prompt => lengthy, cocksure and somewhat autistic response. They are very easily distinguishable.


They are distinguishable because they know too much. Their knowledge base has surpassed that of humans. We have also instructed them to interact with us in a certain manner. They certainly are able to understand and use human language. Which I think was Turing's point.

Purely rhetorical, but: would you be able to distinguish a chatbot from an autistic human?


> So, what is it that makes you so sure, oh so very certain, that LLMs just "feel" conscious but aren't?

Because we know what they actually are on the inside. You're talking as if they're an equivalent to the human brain, the functioning of which we're still figuring out. They're not. They're large language models. We know how they work. The way they work does not result in a functioning consciousness.


I think that the interior structure doesn't necessarily matter—the problem here is that we don't know what consciousness is, or how it interacts with the physical body. We understand decently well how the brain itself works, which suggests that consciousness is some other layer or abstraction beyond the mechanism.

That said, I think that LLMs are not conscious and are more like p-zombies. It can be argued that an LLM has no qualia and is thus not conscious, due to having no interaction with an outside world or anything "real" other than user input (mainly text). Another reason driving my opinion is because it is impossible to explain "what it is like" to be an LLM. See Nagel's "What Is It Like to Be a Bat?"

I do agree with the parent comment's pushback on any sort of certainty in this regard—with existing frameworks, it is not possible to prove anything is conscious other than oneself. The p-zombie will, obviously, always argue that it is a truly conscious being.


It's simple. It's because AI is the scariest technology ever made.

Human intelligence has proven itself capable of doing a lot of scary things. And AI research is keen on building ever more capable and more scalable intelligence.

By now, the list of advantages humans still have over machines is both finite and rapidly diminishing. If that doesn't make you uneasy, you're in denial.


AI, the way you are describing it, has not been invented yet. It is a fiction.

What is called "AI" today is an extremely vague marketing term being applied to various software technologies which are only dangerous because humans are dangerous. Nuclear & chemical weapons are also "very scary" but only because the humans who might decide to use them in fits of insanity are scary.

I'm not in the slightest bit uneasy about "AI" itself right now, because as I said, the AI of Sci-Fi has not yet been invented…and seems unlikely to be invented in any of our lifetimes. (Not throwing shade on clever researchers. We also don't have working FTL travel, though plenty of scientists speculate on how such an engine might be built.)


"It's just marketing" is just the "denial" stage wearing a flimsy disguise.

Even LLMs of today routinely do the kind of tasks that would have "required human intelligence" a few years prior. The gap between "what humans can do" and "what frontier AIs can do" is shrinking every month.

What makes you think that what remains of that gap can't be closed in a series of incremental upgrades? Just 4 years have passed since the first ChatGPT. There are a lot of incremental upgrades left in "any of our lifetimes".


You don't seem to be engaging seriously with respected experts in this field who have been reporting for years at this point that merely scaling LLMs and so-called "agentic systems" doesn't get us anywhere close to true AGI.

Also computers in the 1980s could perform many tasks that previously would have "required human intelligence". So? Are you saying computers in the 1980s were somehow intelligent?


And you don't seem to be engaging seriously with respected experts in this field who say "scaling still works, and will work for a good while longer".

If your only reference points are LeCun, or, worse, some living fossils from the "symbolic AI" era, then you'll be showered in "LLMs can't progress". Often backed by "insights" that are straight up wrong, and were proven wrong empirically some time in 2023.

If you track LLM capabilities over time, however, it's blindingly obvious that the potential of LLMs is yet to be exhausted. And whether it will be any time soon is unclear. No signs of that as of yet. If there's a wall, we are yet to hit it.


That aside.

Let's look at the facts.

Are LLMs displacing labour? In the aggregate - not from what one can see. The aggregate statistics tell a different story, e.g. the hiring of software engineers is still growing Y-o-Y.

The limits of LLMs will be put in place through financial constraints. People like you seem to think there's an infinite stream of money to fund this stuff. Not really. It's the same reason why Anthropic and OAI are now shifting focus to generating revenues and cash flows: they will not receive external funding forever.


LLMs are indeed displacing labour. Junior IT roles are drying up in places. Translation and art are also becoming harder to earn from.

I can’t speak for the states, but in AU I clearly see a massive displacement of undergrad and junior roles (only in AI exposed domains).

I say this as both someone who works with many execs, hearing their musings, and someone who can no longer justify hiring for junior roles myself.

Irrespective of that: if we take this strategy of only taking action once it is visible to the layman, the scope of actions available to us will be invariably and significantly diminished.

Even if you are not convinced it is guaranteed and do not believe what I and others see, I would ask you: is your probability of it happening now really that close to 0? If not then would it not be prudent to take the risk seriously?


> If not then would it not be prudent to take the risk seriously?

What does taking the risk seriously look like?


> What does taking the risk seriously look like?

Politics - proper guardrails, adapting the legal framework to accommodate AI, and making sure it doesn't benefit only a preselected few.

Something that can and should be done yesterday is to stop the capital drain out of the economy and into accelerated, war-motivated AI development - there's no need for war-AI per se but clearly it's the most likely reason for the capital drain and rush.

Once the rush and wars stop, and some capital is made available for the rest of the economy, the latter can adapt to the introduction of AI at a normal pace; that should include legislative safeguards to support competition and prevent monopolization of AI and information sources.


Oh, you again. In every thread. Are you a respected expert in the field of AI? What are your qualifications?

I'm not interested in reading the same arguments over and over again. AI is not scary anymore, it's fucking boring. Exits thread

Modern discourse happens on social media where fear and outrage drive engagement, which drives virality. We have become convinced in a short amount of time that AI is going to take all the jobs and eventually kill us all because that's what people click on.

Any voices or studies that present the case for "useful technology that will improve productivity and wages while not murdering us" don't get clicked on or read.


> Any voices or studies that present the case for "useful technology that will improve productivity and wages while not murdering us" don't get clicked on or read.

It's not an either/or thing though. Compare it to something like combustion. Sure, it definitely improved productivity, but it also led to countless violent deaths.


I don't know, I think nuclear weapons are scarier. And also probably a useful parallel: they're so dangerous that we coined the term "mutually assured destruction", and everyone recognized that it was so dangerous to use them that they've only ever been used in one war.

I see the flood of PR from AI firms as an attempt to make sure we don't build the appropriate safeguards this time around, because there's too much money to be made.


I remind you of why nuclear weapons exist.

They exist because human minds conceived them, and human hands made them.

One of the major dangers of advanced AI is being able to implement something not unlike the Manhattan Project with synthetic intelligence, in a single datacenter.


Yeah, the problem with AI is that they can become too good at performing general tasks, ranging from designing cancer treatments to designing bioweapons, and everything in between.

You can't create and enrich nuclear materials inside a datacenter.

Everyone recognized that it was so dangerous to use them after the first two mass casualty events. At the time and even into the 50s it was not universally obvious, and the arguments in favor of nuclear weapons use were quite similar to arguments I often see with regards to AI: bombing cities into rubble is not a new concept, traditional explosives well within the supply capacity of large militaries are capable of it, so what are we even talking about when we say that there's scary new capabilities?

> Everyone recognized that it was so dangerous to use them after the first two mass casualty events

I really don’t think that’s true. Those who actually knew about the nuclear weapons knew very well how dangerous they were. Truman was deeply conflicted about using them.


Truman changed after learning the real civilian death numbers that they caused. The military leaders absolutely knew the impact beforehand, and kept advocating for their use in later wars.

By any quantifiable measure, yes, and not by small numbers either.

Until someone can demonstrate a quantitative measure of intelligence - with the same stability of measurement as "meters" or "joules" - any discussion of "Super-AI" as "the most dangerous X" is at best qualitative/speculative risk narratology, at worst discursive distractions. The architecture of the "social web" amplifies discursion to a harmful degree in an open population of agents, something I think we could probably prove mathematically. I am more suspicious of this social principle than I am scared of Weakly Godlike Intelligence at this moment in history; I am more scared of nuclear weapons than literally anything else.

People think we are out of the woods with nuclear weapons, but I don't think we've even seen the forest yet. We are Homo Erectus, puffing on a flame left by a lightning strike, carrying this magic fire back to our cave.


Nuclear weapons have rarely been used kinetically. Their real force multiplier is the fear.

A.I. is being used by so many people for so many diabolical, hidden, unknown things that we may never fully understand its purpose. But that doesn't mean its purpose won't destroy us in the end.

The expression "drinking the Kool-Aid" is used in reference to the Jonestown mass suicide. It was an information hazard, i.e., a cult belief system, that created the end result: 900 people drinking poisoned Flavor Aid. That's just one example of a human-caused information hazard. What happens when someone with similar thinking applies that to A.I.? Will we even be able to sleuth out who did it?


The world we live in is a construct, not a natural outcome. Even if we take your premise at face value, that our success as a species is only because of advantages over others, what's to say that "intelligence" is that advantage? What's to say that we don't use our advantages to reconstruct a world that works in a way that doesn't advantage intelligence over all else?

And on intelligence specifically: even amongst the human race, we all know smart people who are abject failures, and idiots who are wildly successful. Intelligence is vastly overrated.


IQ is among the best predictors of life success, for humans. Being up by an extra SD in the g dimension covers a multitude of sins.

I'm not sure what level of delusion one has to run to look at human civilization and say "no, intelligence wasn't important for this". It's pretty obvious that the human world is a product of intelligence applied at scale - and machines can beat humans at both intelligence and scale.


>> I'm not sure what level of delusion one has to run to look at human civilization and say "no, intelligence wasn't important for this".

One has to only look at the current tech and political leaders.


> AI is the scariest technology ever made

Well, it's a good thing that all we managed so far is a large language model instead.


I think the Nuclear Bomb is still scarier. But AI is scary not for its destructive potential, but for its potential to disrupt our society fundamentally, and not just in a good way.

> I think the Nuclear Bomb is still scarier. But AI is scary not for its destructive potential

AI excels at both making weapons of all kinds and effectively targeting them, as the recent war has shown - AI is more dangerous, and can be more destructive, than all weapons taken together.


I meant more in a physical destructive sense.

Most humans can do more than plagiarize text. But let's hype up the clankers before the IPOs.

"It's all just PR" is a lame excuse not to think about the implications. Of things like: AI capabilities only ever going up over the course of the past 4 years.

Past performance is no guarantee of future results. It isn't just a financial cliche - if Moore's law had held, we would be working with subatomic-sized gates instead of fooling around with EUV gates.

https://xkcd.com/605/


Machines still need planetary-scale, complex production pipelines with human operators everywhere to achieve reproduction at scale. Even taking the paperclip-plant optimizer overlord as a serious scenario, it's still several orders of magnitude less likely than humans letting the most nefarious individuals create international conflicts and engage in genocides, not even talking about destroying vast swaths of the biosphere supporting humanity's possibility of existence.

That is, alien invasions and giant meteors are also plausible scenarios, but at some point one has to prioritize threats by likelihood, and generally speaking it makes more sense to put more weight on "ongoing advanced operation" than on "not excluded by currently known, scientifically realistic what-ifs".


Humans are dangerous and hilariously exploitable.

If politicians can get away with what they do? Imagine if those politicians were actually smart and diligent to a superhuman degree.

That's the kind of threat a rogue AI can pose.

Humans can easily act against their own self-interest. If other humans can and evidently do exploit that, what would stop something better than human from doing the same?


There was a lot of FUD in the mainframe era about computers being called "electronic brains" and fears of them taking people's jobs, because the ignorant public mistook their lightning-fast arithmetic skills for intelligence. Many did lose their jobs as digital record keeping, computerized accounting/ERP, and robotics on assembly lines became cost effective, but at no time did the "electronic brain" become intelligent.

There's a lot of FUD today about LLM's being sapient because the ignorant public mistakes their complex token prediction skills for intelligence. But it's just embarrassing to see people making that mistake on a forum ostensibly filled with hackers.


Is it me making the mistake, or is it you making that very mistake in the other direction?

Back in the "mainframe era", we had entire lists of tasks that even the most untrained humans would find trivial, but computers were impossibly bad at. Like following informal instructions, or telling a picture of a dog from that of a cat.

We're in the "AI era" now, and what remains of those lists? What are the areas of human advantage, the standing bastions of human specialness? Because with modern AI, the list has grown quite thin. Growing thinner as we speak.


The success rate of computers doing those tasks has gone from 0% to 70%-90%, but reaching 100% might take a very long time.

How many people are needed to make up the difference to 100% though?

They don't need to be sapient to be dangerous though

"Why be afraid of nukes it's not like they WANT to blow up"

Hmmm I would personally pick nuclear weapons as the #1 scary tech.

And a close (non-tech) second is the ruthlessness of sociopaths seeking power.


This is untrue. What is being diminished is the value of humans doing repetitive or uncreative tasks.

Many have built their careers from that kind of work in the past and yes they are threatened, but that kind of work is inherently not collaborative and more vocational.


The vast majority of people on this planet work repetitive, uncreative jobs.

There is no such job done by humans today that is 100% uncreative, but people will continue to insist there is.

The devaluing may come from AI pressure, but the harm is coming from humans foolishly not seeing the value in what's left behind. Most people have not and will not lose their jobs.


Oh? And what extensive knowledge and experience makes YOU qualified to determine what "the vast majority of people on this planet" are doing for work and if those tasks are creative or uncreative?

Not sure what you're insinuating. What do you think is the statistically average job on this planet? It's still going to be cultivating a smallholder farm in developing countries, or working in logistics, manufacturing, or the broader service sector in developed countries.

All of these average jobs are structurally repetitive. Yes, humans do constantly inject creativity, but it's a means to an end, to getting the job done.

You apparently mistook my descriptive comment for a value judgment, but it isn't.


Ah, the Torment Nexus approach to AI development.

Does it matter? Evolution is the brain's very own "pre-training". Hundreds of millions of years of priors hardwired.

We can do that for AIs too - pre-train on pure, low-Kolmogorov-complexity synthetics. The AI then "knows things" before it sees any real data. Advantageous sometimes. Hard to pick compute-efficient synthetics though.
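
To make that concrete: one hypothetical reading of "low-Kolmogorov-complexity synthetics" is data generated by very simple programs. A sketch, assuming elementary cellular automata as the generator (the rule numbers and corpus size are arbitrary illustrations, not a recipe):

    import random

    def ca_doc(rule: int, width: int = 64, steps: int = 32) -> str:
        # One synthetic "document": the evolution of a random initial row
        # under the elementary cellular automaton with this rule number.
        cells = [random.randint(0, 1) for _ in range(width)]
        rows = []
        for _ in range(steps):
            rows.append("".join(map(str, cells)))
            # New cell = bit of `rule` indexed by the (left, center, right)
            # neighborhood, with wraparound at the edges.
            cells = [
                (rule >> (cells[(i - 1) % width] * 4
                          + cells[i] * 2
                          + cells[(i + 1) % width])) & 1
                for i in range(width)
            ]
        return "\n".join(rows)

    # Tiny pre-training corpus mixing a few simple rules: highly structured,
    # near-zero cost to produce, and no real-world data involved.
    corpus = [ca_doc(rule) for rule in (30, 90, 110) for _ in range(100)]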


I think it matters for the question that I was responding to.

What about modern LLMs isn't "agentic" enough?

Doesn't matter if they're conscious for that. They're clearly capable of goal oriented behavior.

