> Actually I don't think that's true, consider manufacturing robots.
We're a long way from making fully AI-driven robots, but much closer to other uses.
Our current world is very data-centric: companies have been collecting, selling, sharing and stashing metadata with websites, social media and smartphones for the last decade. Now, language models will help them process it.
I have one question: for what cause? Surely, for the good of humanity. That's what the companies exist for, right?
It's not like we need laws to force companies not to screw us over, or, in America, lobbyists to force politicians to force companies not to screw people over. Currently, AI is an absolutely unregulated, lawless Wild West, and that's frightening. Not because of bad actors, but because of the ones 'pretending to be good'.
> Wasn't it always true? Trivially all people advocate their beliefs..
You would be surprised, but people need constant reminders of that. Otherwise they will start treating mega-corporations as privacy-respecting, consumer-oriented, eco-friendly, or whatever other bullshit their PR team is trying to push today or will try tomorrow.
Someone else asked that question in this same thread [0].
It's a good question.
The purposelessness of much technology is a problem Neil Postman addressed very robustly in "Technopoly" [1]. To date I have not heard any sane responses to his "Six Questions Regarding Technology" [2].
> not because of bad actors, but because of the 'pretending to be good' ones.
When you don't have any reason at all for doing something and you're challenged, you tend to use your imagination to dream up a plausible "good" reason.
Companies aren't obligated to disclose the training process.
So, we can't tell. Until, in a matter of months, it is available, accessible, and almost impossible to regulate. And every company will start using it so as not to fall behind.
There's a _long_ list of unlawful things that are technically possible, sometimes trivial even. Only _because_ they are possible in the first place, are there laws against them. This process seems slow compared to a human lifespan, but lots of terrible things people used to do got outlawed eventually.
- once it's widespread, it's a standard (targeted ads, cookie tracking, SEO)
- we'd need a lot of infrastructure to track all the data recorded, where it goes, how it is processed, etc. And if a big company is caught, they'll pay fines for being caught and be let off to continue the same shady practices, as we have seen with the privacy scandals at Facebook.
> ...but lots of terrible things people used to do got outlawed eventually.
That's the third reason: surveillance is evil and bad, but it will never be advertised as such. It will be something good and convenient with surveillance as a side effect, similar to social media and smart appliances.
Maybe I am just too pessimistic and you're right: 'This process seems slow compared to a human lifespan'.
> we'd need a lot of infrastructure to track all the data recorded, where it goes, how it is processed, etc. etc..
You just need to punish the people using the data. If you can then track down where they got the data from, that's a bonus. And if you can track down where those second ones got their data from, that's an extra bonus. But everything after the people using it is optional.
Should it be regulated? If businesses want known shoplifters flagged by their AI camera so they can refuse service, shouldn't they have a right to do it?
I know in America this would be painted as an attack on the Holy Minority and therefore no one will be allowed to discuss it without being branded a blasphemer, so I'm curious what people on here think.
I feel like businesses should have this right, but I worry about slippery slopes. It might start with shoplifters, and 20 years later when 99% of stores are owned by the same 3 megacorps, people could get banned for wrongthink and for shaping the political views of the masses at large. Kinda like what they do now on social media.
Not sure what you mean by modal keyboard, but there is a fully customizable row of keys in Termux.
I disabled it, because I use a special Android keyboard app [0] with a custom layout made from scratch [1]: all common symbols, arrows, Esc, Ctrl, Alt, forward delete, paste and enter gestures, etc.
[0] - https://jbak2.ucoz.net/ (yes, it's a horrible website, but the app is free and easy to maintain for the handicapped developer)
Well, the modal command system is sort of hacked onto the qwerty keyboard. Surely it'd be more natural to just "insert" directly rather than hitting "i".
Another challenge is that toki pona requires a lot of context, not only from previous sentences but also visual and communal context. For example, I can say 'soweli lili' and point to a cat; then in all further conversation 'soweli lili' will mean exactly 'cat' until specified otherwise.
I could use a sentence that could mean literally a hundred thousand different things, but if I explain it properly once, you are expected to keep it as context.
ChatGPT-4 will struggle to keep all that context, as it will surely accumulate.
Nim's default JSON library has terrible performance, but there are much faster drop-in replacements like jsony [1]. I'm not sure that's the main reason for the low ranking, but it's definitely one of them.
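For illustration, jsony serializes plain object types directly via toJson/fromJson (the Point type here is made up):

```nim
import jsony

type Point = object
  x, y: int

# Serialize an object to a compact JSON string:
echo Point(x: 1, y: 2).toJson()

# Parse a JSON string back into a typed object:
let p = """{"x":3,"y":4}""".fromJson(Point)
echo p.x  # 3
```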
I would not call std/json "terrible in performance"; it's probably still way faster than what you get in many other languages (like Python). But yes, the JSON lib I wrote is faster due to avoiding branches and allocations.
Your programs could benefit from small dependency-free executables and compile-time code generation and execution.
Nim code can also be called directly from Python, or vice versa; check out nimpy [1].
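As a minimal sketch of that direction (the module and proc names here are made up for illustration), a Nim proc is exposed to Python with nimpy's {.exportpy.} pragma and the module is compiled as a shared library:

```nim
# mathext.nim
# Compile as a Python extension module, e.g.:
#   nim c --app:lib --out:mathext.so mathext.nim
import nimpy

proc fastAdd(a, b: int): int {.exportpy.} =
  ## Visible from Python as mathext.fastAdd
  a + b
```

From Python it then imports like any extension module: `import mathext; mathext.fastAdd(2, 3)`.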
1. Nim uses the 'var' modifier to pass by reference, e.g. "proc f(n: var int)"; the default behaviour is pass by value. And there are also raw pointers and references (safe pointers).
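A minimal sketch of the difference (proc names are illustrative):

```nim
proc addOneByValue(n: int): int =
  n + 1            # 'n' is a copy; the caller's variable is untouched

proc addOneInPlace(n: var int) =
  n += 1           # 'n' refers to the caller's variable and is mutated in place

var x = 41
discard addOneByValue(x)   # x is still 41
addOneInPlace(x)           # x is now 42
```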
> is a no-gc mode available?
You can disable the GC, but most of the standard library depends on it. In Nim 2.0, though, there's finally proper support for ARC and ORC (ARC plus a cycle collector).
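For reference, a sketch of the relevant compiler invocations, assuming a Nim 1.6+ toolchain where the memory-management strategy is chosen per compile with --mm:

```shell
nim c --mm:orc app.nim    # ARC plus a cycle collector (the default in Nim 2.0)
nim c --mm:arc app.nim    # deterministic reference counting, no cycle collection
nim c --mm:none app.nim   # no GC at all; most of the stdlib stops working
```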
(… it also recurses down the dependencies; presumably you can extend your shell script to do that too, but at some point why not just use the existing tools?)
Now I'm wondering: if there were two monkeys hitting random keys on a keyboard for an infinite amount of time, one typing into a GPT-4 prompt and the other typing 0s and 1s directly, which would produce Doom code faster?
> To view this this content, you'll need to update your privacy settings.
What a joke this site is. Ever heard of screenshots? Why does text-based content have to be embedded anyway? Oh sure, it is so hard to press a couple of buttons, paste a picture, edit it, align it, etc.
Then just give me a link.