Hacker News | martingoodson's comments

Title needs to be changed. It completely misrepresents this research. There was no comparison between human written and AI written stories.


I played with this last night with my four-year-old daughter. We had fun asking Miles to explain what bones are made of, etc.

Today, she asked "where has that robot guy gone?". Crying now because I won't let her talk to Miles anymore.

She has already developed an emotional connection to it. Worrying indeed.


I would like to think the child is missing the bonding and fun the two of you enjoyed with the robot guy. The child may be missing the experience of being with you and the robot guy. I would look for more activities you can explore with the child.


Honestly, I think if I wasn't there, she still would have loved it. She related to it like a person.


That sounds dangerous to me. Not like I think you did something wrong or exposed your daughter to danger at all; it was probably a really useful exercise. The scary part to me is how readily she accepted it as human, or friendly.

We already know how easily people are deceived by text and images. Imagine if they're getting phone or video calls from "people" who keep them company for hours at a time. Imagine if they're accustomed to it from an early age. The notion of dealing with a real, messy, rough-around-the-edges, honest human being will become an intractable frustration.


I can see how it's worrying, but mostly as a replacement for real connections. If instead it supplements them, then it's not so bad.

Most children love talking to a fun adult who enjoys talking to them. As parents we hope to be that adult for them most of the time, but of course that's not easy to do all the time.

If parents made a tool like this a crutch and it replaced quality time with them, or made the kids less likely to hang out with their friends, then yeah, that's a big problem. If they use it as a learning aid or occasional fun diversion, it seems great.


How are phones going as a "supplement" for real connections? 25% of university students (digital natives) on antidepressants?


Tangential, but... when my daughter was 8 or 9, we read _I, Robot_ together, and we both cried when Gloria's parents decided to separate her from Robbie, her robot companion. Such a fond memory to this day.


You should put a raspberry pi in a toy monkey and connect it up.


Written by someone who knows what they are talking about.


I've worked in data extraction from documents for a decade and have developed algorithms in the space. I've developed a product using LLMs for this purpose too.

This article is essentially correct.


thanks, glad to hear it.


I work in financial data and our customers would not accept 96% accuracy in the data points we supply. Maybe 99.96%.

For most use cases in financial services, accurate data is very important.


so, what solution are you using to extract data with 99.96% accuracy?


We hosted Ziming Liu at the London Machine Learning Meetup a few weeks ago. He gave a great talk on this fascinating work.

Here's the recording https://youtu.be/FYYZZVV5vlY?si=ReoygVJMgY9oje3p


Baptiste Roziere gave a great talk about Code Llama at our meetup recently: https://m.youtube.com/watch?v=_mhMi-7ONWQ

I highly recommend watching it.


Do you have any reference for this claim, or are you guessing? It was reported that the algorithm was a Gradient Boosting Machine by investigators who gained access to the code.

https://www.lighthousereports.com/suspicion-machines-methodo...


Well, if you torture the analogy enough, you could argue a tree-based model is a lot of if/else statements!
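There's truth behind the joke: a trained decision tree's splits are literally threshold comparisons, so each tree can be written out as nested conditionals. A minimal sketch, with entirely made-up feature names and thresholds (not the actual model from the article):

```python
# A hypothetical trained decision tree, flattened into the nested
# if/else statements it effectively compiles to. Leaf values and
# thresholds are invented for illustration.
def tree_predict(income: float, num_late_payments: int) -> float:
    if num_late_payments <= 2:
        if income > 40_000:
            return 0.05  # leaf: low risk
        else:
            return 0.30
    else:
        if income > 80_000:
            return 0.20
        else:
            return 0.75  # leaf: high risk

# A Gradient Boosting Machine is then just a weighted sum of many
# such trees, each one correcting the residual errors of the last.
def gbm_predict(income: float, num_late_payments: int) -> float:
    trees = [tree_predict]  # in reality: hundreds of distinct trees
    return sum(t(income, num_late_payments) for t in trees)
```

So the "if/else" description is technically defensible, even if it undersells how the ensemble of trees is fitted.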


Most comments here are in one of two camps: 1) you don't need to know any of this stuff, you can make AI systems without this knowledge, or 2) you need this foundational knowledge to really understand what's going on.

Both perspectives are correct. The field is bifurcating into two different skill sets: ML engineer and ML scientist (or researcher).

It's great to have both types on a team. The scientists will be too slow; the engineers will bound ahead trying out various APIs and open-source models. But when they hit a roadblock or need to adapt an algorithm, many engineers will stumble. They need an R&D mindset that is quite alien to many of them.

This is when an AI scientist becomes essential.


> But when they hit a roadblock or need to adapt an algorithm many engineers will stumble.

My experience is the other way around.

People underestimate how powerful building systems is, and how most of the problems worth solving are boring and can be handled with off-the-shelf techniques.

During the last decade, I was on several teams and noticed the same pattern: the company has some extra budget and "believes" that its problem is exceptional.

It then goes and hires some PhD data scientists with a few publications who only know R and are fresh out of some Python bootcamp.

After 3 months, not much has been done by this new team: tons of Jupyter notebooks lying around but no code in production, and some of them don't even have an environment for experimentation.

The business problem is still not solved. The company realizes that having a lot of data scientists but not many data/ML engineers means they are either (a) blocked from pushing anything to production or (b) building a Death Star of data pipelines + algorithms + infra (spending 70% more resources due to the lack of Python knowledge).

The project gets delayed. Some people become impatient.

Now you have a solid USD 2.5 million/year team that is not capable of delivering even a proof of concept, because nobody can do the serving via batch or a REST API.

The company has lost momentum, and competitors moved fast. They released an imperfect solution, but a solution nonetheless, and ahead of you; they have users on it and they are improving it.

Frustration kicks in, and PMs and engineering managers fight over accountability. The VPs of Product and Engineering want heads on a silver platter.

Some PhDs get fired and go off to teach at some local university.

Fin.


Would you see these as analogous?

The people who create the models and the people who use them.

The people who create the programming languages and the people who use them.


I think because it's a relatively 'younger' field, there is a bit more need to know the foundations in AI than in programming. You hit the limits a bit more often and need to do a bit of research to modify or create a model.

Whereas it's unlikely in most programming jobs you would need to do any research into programming language design.


I agree with you. As a corporate department head, I've led exactly one project that had me digging through my DS&A textbook. But it's much more common to need to go beyond the limits of an off-the-shelf deep learning algorithm. Also, many of the cutting-edge deep learning advances have been fairly simple to implement but required serious effort to create, so being able to understand an arXiv paper can have a direct impact on the job you're currently working on, whereas being able to read all of TAOCP will make you a better coder, but in a more abstract way.


This sounds like a sales pitch for an AI scientist.


This sounds like a don't-buy pitch for an AI engineer...

The point the commenter is making is that both schools of thought in the comments are valuable: unless you perform both roles yourself, i.e. you're an engineer who is familiar with the scientific foundations, the two are symbiotic and not in contention.


I guess this message is delivered by an AI scientist, sure.

It's almost self-explanatory that when you hit a roadblock in practice you go back to foundations, and good people should aim to do both. In that case I don't see where the ML engineer/scientist bifurcation comes from, except as a way for some to feel good about themselves.


Not at all. It's something I've seen in practice over many years. Neither skill set is 'better' than the other, just different.

There is a need for people who are able to build using available tools, but who don't have an interest in the theory or foundations of the field. It's a valuable mindset and nothing in my original comment suggested otherwise.

It's also pretty clear that many comments on this post divide into the two mindsets I've described.


As a friend from AT&T Dallas told me, 'tis cheaper to turn a mathematician into a programmer than a programmer into a mathematician.


It's not about critical thinking: the employees were about to sell up to $1B of shares to Thrive Capital. This debacle has derailed that.

