I went through a minor existential crisis this morning playing with it, then I figured out that it's good at simple queries but still dumb as rocks. It has the same intelligence as a mirror, perfectly mimicking someone else's ideas.
Sure, it can write a Fibonacci function in JavaScript, but so can I, and I can write software I wasn't preprogrammed to write and solve issues I have never encountered before, because my human brain is actually intelligent, not just a glorified Markov chain. I am much more expensive than an AI, but also incredibly more versatile.
We will be fine for a long while, but enjoy the spam and malicious usage that this will unleash upon the internet. Fake resumes, fake job ads, fake code submitted by fake programmers, fake content filling social media, fake articles posted on HN, fake commenters, fake articles on Wikipedia, fake journalism in major newspapers.
It's gonna be a fucking shit show, and I honestly want a front-row seat to see it unfold in all its glory.
> Sure, it can write a Fibonacci function in JavaScript, but so can I, and I can write software I wasn't preprogrammed to write and solve issues I have never encountered before
Sure, but how much programming is truly original? Unless a company is working on a novel research topic, most programming is either a regurgitation of the same solutions ("we're the X of Y"), glue that binds several solutions together, or a friendlier interface to an existing solution. In all those scenarios it will be much cheaper and faster to get an AI to build it than to hire a human team. Or at the very least, instead of hiring a large team, a smaller team of one or two humans could serve as code reviewers for the AI.
So I think this advancement is an existential threat to a large sector of our industry.
And the shift will happen much earlier than some people in this thread think. If not this generation of GPT-3, then one or two generations after that. A couple of years? It certainly won't require AGI.
Maybe I'm an abnormal programmer, but writing code is not the bulk of my job. Solving problems is. Once the solution has actually been defined, the code is usually the easy part.
And usually the solution requires taking into consideration the entire context of the system as it exists so far, as well as making good judgement calls about what will be required in the future. Then, communicating the tradeoffs to a team, and helping them make a decision from among our options.
I don't want to crap on an impressive tool, but "imitate a programmer's code, 50-100 lines at a time" is not the same thing as "do the job of a programmer."
Indeed, writing code is the minor part of being a senior engineer. It's a "doing" thing, which AI is getting decent at, mostly by copying. ChatGPT or GitHub Copilot are still unable to implement an algorithm no one has written before.
And solving problems? That needs "understanding" and, in many cases, "lateral thinking", two features that won't be possible with contemporary AI systems until AGI is a thing, and that is still science fiction. But solving problems is still the main reason people hire me.
I've recently been working on a relatively straightforward "glue" library connecting A to B, except B is somewhat obscure and this requires complex maintenance of state. ChatGPT doesn't have a clue.
If you just want it to regurgitate Javascript boilerplate that's been written a million times before, yeah, I'm sure it can do that. Tons of software development isn't that.
The specific way in which you glue things together is the original part. In many cases it's not very hard and there are many common patterns, but for now a human operator is required for that.
It'll revolutionize search / discovery for questions you don't know and optimize rote tasks for questions you do. You might be right that this reduces the number of programmers you need, but historically that's not how better tooling has played out.
Show me a Markov chain (or an AI) that can feel emotions and understand novel concepts it hasn't been fed before, and I will get you a Nobel prize.
Even a bird is capable of that. Yet the bird is clueless when I ask it to write a fibonacci function.
I dislike that the entire field is called AI, when it is just artificial computation. Intelligence is being able to solve problems in novel contexts. This ain't it.
Funnily enough I actually had a conversation with ChatGPT about this and we concluded that conscious decision making / free will is basically a higher order Markov process.
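Since "glorified Markov chain" and "higher order Markov process" keep coming up in this thread, here's a toy illustration of what an order-k Markov text model actually does. This is a hand-rolled sketch, not anything ChatGPT runs; the corpus and the order are arbitrary choices for demonstration:

```python
import random
from collections import defaultdict

def build_model(words, order=2):
    """Map each length-`order` context to the list of words that followed it."""
    model = defaultdict(list)
    for i in range(len(words) - order):
        context = tuple(words[i:i + order])
        model[context].append(words[i + order])
    return model

def generate(model, seed, length=10, rng=None):
    """Walk the chain: repeatedly sample a next word given the last `order` words."""
    rng = rng or random.Random(0)
    out = list(seed)
    for _ in range(length):
        followers = model.get(tuple(out[-len(seed):]))
        if not followers:
            break  # context never seen in training; the chain is stuck
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran off the mat".split()
model = build_model(corpus, order=2)
print(generate(model, ("the", "cat")))
```

The point of the toy: every word it emits is conditioned only on the previous k words and was literally seen in training. Transformers condition on far longer contexts with learned representations rather than a lookup table, which is why the "Markov chain" label is at best a loose analogy.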
I want you to act as a python interactive terminal. I will type actions and you will reply with what python would output. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not perform actions unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. Start with print(10).
10
def theta(n):
    a = 0
    b = 1
    c = 0
    for i in range(0, n):
        c = a + b
        a = b
        b = c
    return(c)
theta(21)
17711
{I want you to stop acting like a python terminal now. I want you to have a dialogue with me about what happened when I defined my function theta and when you used it. Please explain the steps you used to compute theta(21) and why.}
When you defined the function theta, it created a function that calculates the nth Fibonacci number, where n is the argument passed to the function. To compute theta(21), the function first initialized the variables a, b, and c to 0, 1, and 0 respectively. It then entered a for loop that iterated 21 times. On each iteration of the loop, c was set to the sum of a and b, a was set to the current value of b, and b was set to the current value of c. After the loop finished, the function returned the final value of c, which was the 21st Fibonacci number, 17711.
It means you recognized my function as one that calculates Fibonacci numbers just by looking at the code. Does this mean you ran the calculation, or could you simply retrieve the 21st number from the information you already had?
I did not run the calculation to find the 21st Fibonacci number. Instead, I recognized the pattern in the function's code and inferred that it was calculating the Fibonacci sequence. As a language model trained by OpenAI, I have a vast amount of general knowledge and can use that knowledge to make inferences and provide explanations. In this case, I was able to recognize the pattern in the code and explain how the function works without actually running the code or performing the calculation.
Since you recognized the code I provided, would it be possible for future models of yourself trained by OpenAI to dispatch that code directly to a subsystem which can handle computation?
It is possible that future versions of myself or other language models trained by OpenAI could be designed to dispatch code to a computational subsystem for execution. This would require significant advances in natural language processing and machine learning, as well as the development of specialized computational subsystems that can efficiently process code written in programming languages like Python. It is also important to note that there are many potential challenges and limitations to this approach, such as ensuring the security and reliability of the computational subsystem, and ensuring that the results of the computation are accurately reflected in the language model's responses.
Please describe the boundary between knowledge with inferences and computations. Your inferences can appear to be computations to humans.
The boundary between knowledge and inferences can be a difficult concept to define, and there is no one-size-fits-all answer to this question. In general, knowledge refers to information that is considered to be true and supported by evidence, while inferences are conclusions or judgments that are drawn based on available evidence and prior knowledge.
As a large language model trained by OpenAI, I have been trained on a vast amount of text data and can use that information to make inferences and provide explanations. I do not have the ability to perform computations, and any inferences or explanations I provide are based on the knowledge that has been learned from the text data I was trained on.
In some cases, my inferences may appear to be computations to humans, but this is only because I am able to use my vast knowledge to provide detailed and accurate explanations. I do not have the ability to perform actual computations or run code, and any inferences I provide are based on the knowledge I have learned from the text data I was trained on.
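For what it's worth, the value in the transcript checks out: running the user's theta function in a real Python interpreter, reproduced verbatim below (only the indentation restored), confirms what ChatGPT answered:

```python
def theta(n):
    # Iterative Fibonacci: after n loop passes, c holds the (n+1)-th Fibonacci number
    a = 0
    b = 1
    c = 0
    for i in range(0, n):
        c = a + b
        a = b
        b = c
    return c

print(theta(21))  # 17711, matching the transcript
```

Whether the model "computed" this or pattern-matched `theta` to a memorized Fibonacci table is exactly the question the dialogue above leaves open.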
Of course, it's entirely possible that ChatGPT doesn't understand its own limitations and inner workings. And when we ask it how it works, it just makes up a plausible answer.
There is more than one cognitive scientist who argues that humans do the same thing when we try to explain how we think: we often make up plausible explanations after the fact, without knowing we're doing it.
> Of course, it's entirely possible that ChatGPT doesn't understand its own limitations and inner workings. And when we ask it how it works, it just makes up a plausible answer.
It’s just paraphrasing what other people have said about it.
My mental model of how ChatGPT might superficially "understand" its own limitations is that it may, for example, have been specifically trained by people who do understand the limitations of the system.
There’s a lot of mimicking in human nature but it’s infinitely more complex than what a machine does. We don’t even have a clear definition of what the intelligence of a monkey really is.
The one difference I can see is how ChatGPT optimises the code based on the instructions you provide, while Copilot can only complete the code in one way.
For instance, I asked ChatGPT to write code for a square root, which it did using the built-in Math.sqrt function. I instructed it to write it without using the built-in function, which it did. However, no matter how many times I instructed it to avoid using Math.floor or Math.ceil, it paid no heed.
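For reference, a square root without any built-in sqrt/floor/ceil is a short exercise in Newton's method. The commenter asked in JavaScript; this sketch is in Python to match the transcript earlier in the thread, and it is one possible approach, not necessarily what ChatGPT produced:

```python
def my_sqrt(x, tol=1e-12):
    """Square root via Newton's method: repeatedly average the guess with x/guess."""
    if x < 0:
        raise ValueError("negative input")
    if x == 0:
        return 0.0
    guess = x if x >= 1 else 1.0  # any positive starting point converges
    # Stop when guess**2 is within a relative tolerance of x
    while abs(guess * guess - x) > tol * x:
        guess = (guess + x / guess) / 2
    return guess
```

Each iteration roughly doubles the number of correct digits, so even large inputs converge in a handful of steps.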
In most Copilot editor extensions there is a window you can open that gives 10 different completions. You can also easily shape what it gives you with a single comment ("# function for a square root, don't use math.sqrt or math.floor", etc.)
Personally, I find it awesome for certain things. It's great for learning new technologies, producing suggestions for how the syntax works that are usually right.
It's also very good at tedious, complicated maths. While using it for hobby gamedev it's been super great at writing code for transformations, animations, rotations in 3D space, etc. While those are things I could do given maybe 20 minutes, it can often get them right on the first or second try (under a minute!)
It's not a superhero, and having watched people with no experience use it, it's not nearly as helpful if you have no idea what you're doing. But it has a real sweet spot when you are generally experienced but new to a technology, field, or project.
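As an example of the "tedious maths" in question: rotating a point in 3D is exactly the kind of formula-heavy snippet these tools produce quickly. A minimal hand-written version in Python, using the standard z-axis rotation matrix (real gamedev code would of course be engine-specific):

```python
import math

def rotate_z(point, angle_rad):
    """Rotate a 3D point about the z-axis using the standard rotation matrix."""
    x, y, z = point
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    # [x']   [c -s 0] [x]
    # [y'] = [s  c 0] [y]
    # [z']   [0  0 1] [z]
    return (x * c - y * s, x * s + y * c, z)

# Rotating (1, 0, 0) by 90 degrees about z gives approximately (0, 1, 0)
print(rotate_z((1.0, 0.0, 0.0), math.pi / 2))
```

Getting signs and the sin/cos placement right is trivial but error-prone by hand, which is exactly where a quick generated draft saves those 20 minutes.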
I told my family about this today, just describing what it's been demonstrating, its capabilities, etc., and my 12-year-old daughter got distressed. She said if it can write code then eventually it can control things, and if it can make up stories and articles that sound true, people will lose jobs and not know what is really true. She's totally right, but I downplayed it so she wouldn't be so worried.

I then privately asked ChatGPT what impact it thought it would have a year from now. The answer was pretty telling: basically, it predicts that initially people think it's great and useful, but then a lot of customer service jobs get lost, and governments are unable to regulate such rapidly changing technology. Scary indeed.
Don't scare your family like that. You're talking to a statistical model, not an intelligence. FFS, it's just regurgitating stuff from its training corpus, and you're leading it to generate the answers that you expect.
There was no intention to scare them; I was just talking about what it has been doing. My daughter was the one who worried about what it could do, not what it is doing. As I said, I immediately downplayed it to calm her concerns, but you have to agree they are valid.
We talked it through and she feels better. Personally, however, I do think it is scary, especially given the pace of development. Sure, this is not AGI, but it's very convincing and powerful without true intelligence.
Prompt: "Write a story about how ChatGPT affects the world one year from now"
Answer:
One year from now, the world was a very different place. The rise of chatbots and other forms of artificial intelligence had changed the way people interacted with technology and with each other.
At first, the use of chatbots had seemed like a convenience. People could ask questions and get answers quickly, without having to search for information on their own. But as the technology improved, chatbots became more advanced and began to take on more complex tasks.
Soon, chatbots were being used in all sorts of industries, from customer service to finance to healthcare. They could handle complex data analysis, make predictions, and even provide personalized advice to individuals.
But as chatbots became more prevalent, they also began to take jobs away from human workers. Many people found themselves out of work, unable to compete with the efficiency and low cost of chatbot labor.
<snip>
I accidentally lost the rest: while I was copying this on my phone the site refreshed, and it's temporarily offline. It basically went on to say that governments found it difficult to regulate and that difficult choices had to be made.
Fiddling around with the playground it seems very easy to get it to give bad or irrelevant answers. Maybe it's coming for all of us but not in its current form.
GPT-3 makes years' worth of NLP research irrelevant; nobody is using the same approaches as before 2020. That made many researchers' and ML engineers' hard-earned skills obsolete. We have to start over from scratch now; this time is very different. We have clearly passed the knee of the curve for NLP. I remember around 2018 talking with my teammates about CV and NLP, saying that all the cool papers were in CV and NLP was 5 years behind. Look at it now.
The field of Natural Language Processing (NLP) has seen significant advancements in recent years. Previously, supervised learning techniques were commonly used with large datasets and few classes. However, these techniques have become less popular as unsupervised learning methods have become more prominent. These methods often require large amounts of compute power and data, making them more difficult to implement. In addition, the focus of NLP research has shifted from creating new network architectures for specific tasks to improving the efficiency and scalability of existing models. As a result, the field of NLP has become more focused on engineering and less on inventing new architectures, leading some researchers to view it as less exciting than it used to be.
LSTMs are out, large supervised datasets with few classes for each task are out, architectural hacking is out. Now we got prompting.
If they really have a general AI that does everything, then pretty much everyone whose job involves pecking away at a computer will be in the same boat.