Hacker News

Apple's neural engine has been around in phones and laptops for years. It's not even close to what's required for an offline ChatGPT. What's your point? Which of us is fooling themselves?

Average consumer hardware is not showing any signs of being capable of this in the near term. Hype is hype, but use your own head.

Don't forget that these homegrown models also require training data, scraped and cleaned; an individual or nonprofit can't do that.



> Apple's neural engine has been around in phones and laptops for years. It's not even close to what's required for an offline ChatGPT.

Well, yeah. It's been around for years and ChatGPT is new. My years-old GPU doesn't run the latest games either, and ChatGPT is novel technology, so the hardware will of course lag behind a bit. But it will come.


>an individual or nonprofit can't do that.

https://laion.ai/

https://www.mosaicml.com/blog/mpt-7b

There's dozens if not hundreds more individuals and nonprofits doing these things.


Yes, these are used by mainstream commercial AI thingies. But they are not their only sources. Plus they have armies of people preparing and cleaning this data; are you ready to employ one?


People are literally doing this for free. You said that no individual and no nonprofit can do this, which is laughable. They're doing it right now: https://arxiv.org/abs/2305.11206

It may even be the case that all of that RLHF training that OpenAI does simply lessens the quality of generations, as suggested by one of their own papers and the paper above.


65 billion parameters, and what hardware does that require to run fast enough to be usable?
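Back-of-the-envelope arithmetic makes the question concrete: just holding the weights of a 65B-parameter model in memory is expensive, though quantization shrinks it a lot. A rough sketch (weights only; activations and KV cache need extra memory on top, and the precision labels are just common conventions, not tied to any specific model release):

```python
# Approximate memory needed just to store the weights of a
# 65-billion-parameter model at common precisions.
PARAMS = 65e9

def weight_gib(bytes_per_param):
    """GiB required for the weights alone at a given precision."""
    return PARAMS * bytes_per_param / 2**30

for name, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{name}: ~{weight_gib(bpp):.0f} GiB")
# fp16 needs ~121 GiB, far beyond any consumer GPU;
# int4 gets it down to ~30 GiB, still more than a typical gaming card.
```

Even the 4-bit figure exceeds the VRAM of a mid-range consumer GPU, which is why these models are usually run partly or wholly on CPU RAM, at a large speed penalty.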



Stop wasting time. Yes, you can run this on any hardware. The problem is that it's either too slow, requires outrageous hardware, or is obviously bad (no tool required to see that it's generated).


>requires outrageous hardware

Who is going to pay $20/mo other than professionals? You'd assume professionals have professional hardware. A mid-range GPU or a video editing laptop is not exactly breaking the bank.

>no tool required to see that it is generated

But again, that also applies to commercial generative images. They're easily discernible, if you just look. Midjourney is Stable Diffusion with a bunch of LoRAs stacked on top, and it can't shed the "Midjourney look" because of that. Nobody disputes that.


> They're easily discernible, if you just look.

100%, although DALL-E is getting better with hands specifically.

Anyway, Microsoft can keep throwing compute at it until the point where it becomes impossible to distinguish fakes by sight, and then a mechanism like the one suggested will make sense.

But with homegrown ones I don't see it happening soon. Only those who spend a lot of money on top-of-the-line GPUs and keep desktop PCs may get to that point. Those GPUs will jump in price like they did at the crypto mining peak, or become impossible to buy if Microsoft gets the government to require an "AI license" for them.

> Who is going to pay $20/mo other than professionals

Apple Music costs $10 and people easily spend ten times that on Patreon...



