
>With a laptop we're back to waiting an hour to see a result.

Any laptop from the last five years with decent memory can run Stable Diffusion on the CPU in around 12 minutes. My MacBook Pro runs a batch of four on Metal in around 30 seconds.
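For reference, here is a minimal sketch of what that looks like with Hugging Face's diffusers library; the checkpoint name and device strings are my assumptions ("mps" is PyTorch's backend for Metal on Apple Silicon), not a claim about any particular setup:

    # Minimal sketch: run Stable Diffusion locally via Hugging Face diffusers.
    # The checkpoint ID and devices are assumptions; swap in what you have.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float32,  # fp32 for CPU; float16 works on GPU/Metal
    )
    pipe = pipe.to("cpu")           # use "mps" for the Metal path on Apple Silicon

    result = pipe(
        "a photo of an astronaut riding a horse",
        num_images_per_prompt=4,    # the batch of four mentioned above
        num_inference_steps=25,
    )
    for i, img in enumerate(result.images):
        img.save(f"out_{i}.png")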

>We can revisit this if consumer hardware gets good enough...

I mean, I just showed you a quantized LLaMA running on a Pixel 5 and 6. And with all of this hype, I wouldn't discount most of the next generation of hardware shipping ML coprocessors the way MacBooks, iPhones, and Pixels already do.
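To unpack "quantized": the model's weights are stored at much lower precision, typically 4-bit integers plus a per-block scale instead of 16/32-bit floats, which is what lets a 7B LLaMA fit in a phone's memory. A toy numpy sketch of the idea (the block size and rounding scheme here are my assumptions; real formats like llama.cpp's q4_0 differ in detail):

    import numpy as np

    def quantize_4bit_blockwise(w, block=32):
        # Toy 4-bit quantization: each block of 32 floats becomes
        # one float scale plus 32 signed 4-bit integers in [-8, 7].
        w = w.reshape(-1, block)
        scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
        q = np.clip(np.round(w / np.maximum(scale, 1e-12)), -8, 7).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return (q.astype(np.float32) * scale).reshape(-1)

    w = np.random.randn(4096 * 32).astype(np.float32)
    q, s = quantize_4bit_blockwise(w)
    print("max abs error:", np.abs(dequantize(q, s) - w).max())
    # Packed two values per byte, this is roughly 7x smaller than fp32,
    # at the cost of a small reconstruction error.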



> Any laptop from the last five years with decent memory can run Stable Diffusion on the CPU in around 12 minutes.

The majority of the output is bad, so you need dozens of attempts to get a result that is reasonably realistic. Multiply that 12 minutes accordingly: two dozen attempts at 12 minutes each is already close to five hours.

> quantized LLaMA

I don't know what that means, but if it's better than ChatGPT/GPT-4, then sure.



