If DoD systems are running on OpenAI infrastructure, you can't just pause them for 6 months during an acquisition. This gets far more complex than just "liquidation of assets".
Because their assets would have been vastly overvalued. The bailout is when the government buys those assets at as close to that fictional valuation as it can, and then likely sells them off later at their actual worth.
> Absolutely no reason for a bail out.
There's never been any reason for a bailout. It's just handing tax money to wealthy people who have made bad decisions.
At some point you reach a size where too many politicians, and the people who own them, have invested so much money that they're willing to take any size of political hit in order to save themselves from personal losses when you fail.
We need a law that says if you hold any data about a person, they must be notified when anyone accesses it, including law enforcement.
I used to work in criminal investigations. I understand how this might make investigation of real crime more difficult. But so does the fact that you need a warrant to enter someone's home, and yet we manage to investigate crime anyway.
Your data should be an extension of your home, even if it's held by another company. It should require a warrant and notification. You could even make the notification be 24 hours after the fact. But it should be required.
The entities holding the information here are literally police departments. The information itself is evidence, used in active criminal investigations. It's good to want things, though.
The information is not in any way restricted to use in active criminal investigations, and further, has been found to frequently be used for a variety of other purposes.
It's a bit like saying pornography is used in the study of human anatomy.
~jedberg is talking about a hypothetical law that would apply to ALPR data. In reply, you said "The information itself is evidence, used in active criminal investigations." ("The information" here referring to ALPR data.) (You also said, "The entities holding the information here are literally police departments.", but I don't see that that's relevant unless we choose to believe that police departments are more deserving of public trust by default than any other organization.)
I was replying to the "used in active criminal investigations" part. Yes, the ALPR data managed by Flock is sometimes used in active criminal investigations. However, it's also used for many other things.
The many other things that it's used for support ~jedberg's argument.
So we're clear, you believe there should be a law that, when a police department collects information about you during a criminal investigation, they should notify you directly that they've done so?
Specifically, I believe that if information held by a private 3rd party is accessed by anyone, law enforcement or otherwise, that third party should be required to tell you that it was requested and by whom. Just like they can't put a gag order on a search warrant for your home, this hypothetical search would be exempt from gag orders.
It does make sense. Police are absolutely not beyond reproach, and there are screwups all the time. They need to be held to a high standard.
It's also easy to imagine reasonable compromises, like a time delay where they only have to report after e.g. 48 hours, and allowing a system whereby a judge can issue a warrant to extend that delay.
> Your data should be an extension of your home, even if it's held by another company.
Nice idea, but at least in the U.S. (with the lone exception of LE obtaining cell phone location records), courts have consistently held that if you give your data to someone else, you are no longer entitled to an expectation of privacy in it. https://en.wikipedia.org/wiki/Third-party_doctrine
If you want your data to be considered an extension of your home, at least for now, keep it at home.
Nice idea (2), but many companies and government agencies force you to hand over lots of data or you won't receive services, sometimes very important services.
I think the notion that data would be a home is beyond weak, but the explanation you gave for why isn't solid either, since the subjects of the data don't need to consent, and in this case haven't.
That is, recordings of people in public settings are (in some jurisdictions) the property of the recorder, but a recording still isn't a home. Just imagine how that would work: someone takes a picture of you and it's trespassing? Would you be able to shoot them?
Is there not some concept that utilizes cryptography in a way such that information about people is accessible, but if it's accessed, then the access request is added to a ledger (akin to blockchain) such that who made the access, when, and about whom becomes provably public knowledge?
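What I have in mind is something like an append-only, hash-chained log: each access record commits to the hash of the previous one, so an entry can't be quietly edited or deleted later. A rough sketch in Python (everything here is hypothetical, just to illustrate the chaining):

    import hashlib
    import json
    import time

    class AccessLedger:
        """Append-only log where each entry commits to the previous one,
        so any later edit or deletion breaks the chain and is detectable."""

        def __init__(self):
            self.entries = []
            self.prev_hash = "0" * 64  # genesis value

        def record_access(self, accessor, subject, reason):
            entry = {
                "accessor": accessor,
                "subject": subject,
                "reason": reason,
                "timestamp": time.time(),
                "prev_hash": self.prev_hash,
            }
            # Hashing the canonical serialization (which includes prev_hash)
            # chains this entry to everything recorded before it.
            digest = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
            entry["hash"] = digest
            self.entries.append(entry)
            self.prev_hash = digest
            return digest  # publish this so anyone can audit the log later

        def verify(self):
            prev = "0" * 64
            for e in self.entries:
                body = {k: v for k, v in e.items() if k != "hash"}
                recomputed = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()
                ).hexdigest()
                if e["prev_hash"] != prev or recomputed != e["hash"]:
                    return False
                prev = e["hash"]
            return True

Real systems in this family (certificate-transparency-style logs) use Merkle trees so auditors can verify inclusion without reading every record, but the chaining above is the core idea: the fact of access becomes provable, even if the data itself stays private.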
And regular orders currently notify the service provider, but they don't necessarily notify the target; they just don't prohibit the service provider from notifying the target.
Finally, recordings of public areas actually aren't impacted by warrants at all, right? But you're saying not just that LEAs would need warrants to look at public recordings from a willingly cooperating camera owner, and that those warrants can't come with gag orders (unless specified), but also that the targets must be notified? Even if the subject of the search were someone else, would the fact that I'm included in a recording compel the LEA to notify me?
And how exactly would I be notified? Wouldn't that necessitate even more privacy-invading features, like facial recognition plus some face-to-contact-information lookup? Not an uncommon paradox.
Again, I just want to understand the position; my own position might leak through and make the questions sound leading, but I can't help it.
Much like you can't gag a search warrant on a home, you wouldn't be able to gag these orders either.
> And regular orders currently notify the service provider, but they don't necessarily notify the target; they just don't prohibit the service provider from notifying the target.
True, but my proposal would require that they notify you.
> Finally, recordings of public areas actually aren't impacted by warrants at all, right?
No, but I'm saying this should apply to any time a 3rd party releases information to anyone, including law enforcement. In this case the Flock cameras feed into a private database. They should disclose when someone looks something up.
> And how exactly would I be notified?
Presumably if they can identify you then there would be a way to notify you. But those details could be left to the author of the bill.
My main point is that your data, when housed with a 3rd party, should be considered an extension of your home and offer the same guarantees and protections as the items actually in your home.
You can kind of circumvent that law by keeping the recordings in house.
What we have in Argentina is a Habeas Data law: if someone has data about you, you can ask for it (or ask that it be amended or deleted). Pretty simple, right?
The home bit is a no-go though. Maybe an extension of the self? That's too flimsy too; there's a strong enough case for treating it as what it is: an image, a visual representation of a person. A home is a specific concept that means something else.
Alternatively, one could create serious civil damages for those capturing surveillance imagery when it causes various harms, including false prosecution, for any data they collected, even if it was unlawfully taken or misused after it was collected. ... then let the liability work out the problem by making it too risky to run a non-targeted mass surveillance apparatus.
This would avoid having to define what is and isn't a mass surveillance system. Any camera recording beyond your own property would carry legal risk for the operator, but if you're just recording locally and only using it to discourage or solve crimes you're suffering, the risk would be minimal and justified.
That will never happen in America without a focused political revolution because anarcho-libertarian techbro billionaires profit extensively from personal data trafficking and bought off the majority of politicians to keep it that way.
It sounds good, but the implementation would be harder than achieving single-payer healthcare better than what Medicare is now, degraded as it is by for-profit prior authorizations and fake coverage with lifetime limits.
We also (I worked there at the time) had software that basically said, "Joe watches all of his disks every weekend and drops them in the mail on Tuesdays, let's just assume he's going to do that and ship his new disks Monday morning". And other such predictions.
If you had a very regular viewing behavior you could have your new disks the same day as you shipped your old ones. To the customer, it was magical.
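As a toy illustration of that kind of heuristic (not the real system; the names and thresholds here are made up): if a customer's returns cluster heavily on one weekday, bet on it and ship the day before.

    from collections import Counter

    def predicted_return_weekday(return_weekdays, min_history=4, threshold=0.8):
        """Guess the weekday (0=Mon..6=Sun) a customer mails disks back,
        if their history is regular enough to bet on."""
        if len(return_weekdays) < min_history:
            return None
        day, count = Counter(return_weekdays).most_common(1)[0]
        # Only treat the pattern as real if it dominates the history.
        return day if count / len(return_weekdays) >= threshold else None

    # Joe drops his disks in the mail on Tuesdays (weekday 1), so ship
    # his next disks Monday morning instead of waiting for the returns.
    history = [1, 1, 1, 1, 1, 3, 1]
    day = predicted_return_weekday(history)
    if day is not None:
        ship_day = (day - 1) % 7  # the day before the predicted return
        print(f"Pre-ship on weekday {ship_day}")

Presumably the win is the asymmetry: a wrong guess costs one early shipment, while a right one shaves days off the perceived turnaround.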
The thing that AI is best at is summarizing vast quantities of information. That means the most natural thing for an AI to do is be "the one tool to rule them all".
The more information it has access to, the more useful the answer can be. But that also means that it can answer all the questions.
>> The thing that AI is best at is summarizing vast quantities of information
By definition a summary is the best at nothing, though, and the mentality that the best way to rule is from a single summarized interpretation is both flawed and scary. It's not answering all questions; it's attempting to provide a single summation dramatically influenced by training. Go ahead and incorporate this into your balanced, multi-perspective decision-making process, but "one tool to rule them all" is not the same thing and definitely not what we're getting.
> the mentality that the best way to rule is from a single summarized interpretation is both flawed and scary.
Very much agree. This reminded me of Project Cybersyn [1], an attempt by socialist Chile to build a heavily computerized central control room that would summarize the entire economy for a few men literally pushing the buttons. Complete with 70s aesthetics and a Star Trek TOS feel.
Not until its context window and attention are infinite.
It's best at summarizing/processing a modest amount of information quickly. But given more, its usefulness drastically decreases. This demands tooling that divides up the information and the flow.
This has exceedingly obvious limits. The primary one is the context pollution that happens when you give it too much context.
Elon and the rest of the AI crew claim LLMs can just grow forever, but that's not realistic or borne out by real-world testing.
It can do "everything", but in practice it'll still be fine-tuned and harnessed and agentified, which isn't really the same as the model itself being able to do everything.
I make holiday light shows with an open source program called XLights[0]. I'm sure you've seen the videos[1] of what people[2] can do. Usually the top comment is "man that is cool but I wouldn't want to be their neighbor!" followed by "my neighbors love my light shows".
Creating the sequences is time consuming, and a lot of people end up buying them or sharing them, but those are rarely as good as the ones you make for yourself.
Some folks have dabbled with using AI to create the sequences. I think the biggest issues are the lack of training data and the fact that it's a very visual art, so there needs to be a tighter feedback loop between the text representation and the visual manifestation.
So if you're into using AI to make physical world things better, that would be a good place to look!
I wonder if you could break the sequences down into segments (parts); then the AI wouldn't have to know how to control LEDs directly, but could instead put segments together in accordance with the music.
When we do it as humans, that's basically how we do it. We may have an overall idea for a theme across the song, but usually you're zoomed into a few seconds of music and adding light effects to it.
I had not, thanks! Interestingly, using FFT for this has been around for a long time, but combining it with transformers could have interesting new results.
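For a flavor of the FFT side, here's a bare-bones sketch (numpy only; this is not anything XLights ships, and the thresholding is deliberately crude): compute spectral flux per frame and drop a light cue wherever the flux spikes, which roughly lines effects up with the beat.

    import numpy as np

    def light_cues(samples, rate, frame=2048, hop=512, k=1.5):
        """Return timestamps (seconds) of likely onsets in mono audio,
        usable as trigger points for effects in a sequencer."""
        window = np.hanning(frame)
        flux = []
        prev = None
        for start in range(0, len(samples) - frame, hop):
            spectrum = np.abs(np.fft.rfft(samples[start:start + frame] * window))
            if prev is not None:
                # Spectral flux: total increase in energy vs the previous frame.
                flux.append(np.maximum(spectrum - prev, 0).sum())
            prev = spectrum
        flux = np.array(flux)
        # Cue an effect wherever flux jumps well above the overall mean.
        threshold = flux.mean() + k * flux.std()
        return [(i + 1) * hop / rate for i in np.flatnonzero(flux > threshold)]

    # e.g. cues = light_cues(mono_audio, 44100) -> place strobes/bursts there

A transformer could then work at the level those cues define: picking which effect goes on which segment, instead of generating raw channel data.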
It's interesting to me that you can have something like this that is "hard to build" but "easy to verify" - humans are really good at telling if something is "off" about the visualization.
If you're talking about the school in Iran, that wasn't OpenAI. That was a Palantir system that pre-dates OAI by a few years, and it was due to a bad entry in a spreadsheet that showed the building as military housing, which it was a few years ago.
180 people lost their lives because of bad data in a spreadsheet, not AI.
180 children lost their lives because of decisions by people in the US military (and ultimately the US government / the POTUS).
Let's not fall into the trap of adopting narratives created to evade accountability. The spreadsheet didn't launch a missile, the spreadsheet didn't authorize the strike and the spreadsheet didn't select the target.
Not to mention that "outdated spreadsheet" is also a hilariously anachronistic excuse for a war crime if you consider what kind of satellite technology the US has publicly acknowledged to have access to, let alone what kind of technology it is likely to have access to.
The difference between intentional premeditated murder and reckless endangerment resulting in a killing is not guilt and innocence but merely the severity and nature of a crime. Both demonstrate a callous disregard for the sanctity of human life, one just specifically seeks to extinguish it, the other merely accepts death and suffering as an acceptable outcome.
Many years ago. Not "a few years ago".
Also, you could equally say that 180 people lost their lives because of an evil war, in which the USA and Israel are the aggressors. And we definitely don't talk enough about that part.
The Dodgers could have so easily turned this into a huge win. After 50 years they could have just awarded him a paper lifetime pass. Scan this and get in for any game! It would have been so easy.
Or if they really wanted him to go digital, just buy him a smart phone and install the app for him!