Hacker News | storystarling's comments

StoryStarling. You describe a story idea and it generates a fully illustrated children's book, then we print and ship it.

Not templates with names swapped in. Every story and illustration is made from scratch. You can start from something as terse as "dinosaurs soccer" or write out a whole storyline. Pick an art style, optionally upload reference photos of your kid, and it builds a 28-page book in a few minutes.

Bilingual in 38 languages. We handle RTL (Arabic, Hebrew), CJK, and less common languages like Estonian, Maltese, and Irish, where there isn't much available for kids.

Tech side for the curious: LangGraph orchestrates the pipeline, Celery workers do image generation and text rendering in parallel, and LLMs critique the illustrations for consistency mistakes and can trigger regenerations automatically.
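
The parallel fan-out can be sketched in plain Python. This is an illustrative stand-in for the Celery workers described above, not the actual pipeline; `render_page`, `render_book`, and `PAGE_COUNT` are hypothetical names.

```python
# Hypothetical sketch: fan page-rendering jobs out to parallel workers,
# then collect them back in page order (mimicking a Celery fan-out).
from concurrent.futures import ThreadPoolExecutor

PAGE_COUNT = 28

def render_page(page_no: int) -> dict:
    # Placeholder for an image-generation + text-rendering task.
    return {"page": page_no, "status": "rendered"}

def render_book() -> list:
    with ThreadPoolExecutor(max_workers=8) as pool:
        pages = list(pool.map(render_page, range(1, PAGE_COUNT + 1)))
    # Workers may finish out of order; sort before assembling the book.
    return sorted(pages, key=lambda p: p["page"])
```

In a real deployment each task would be a distributed Celery job rather than a thread, but the collect-and-reorder shape is the same.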

Printed in Germany, booklet around 20 EUR, hardcover around 40 EUR.

https://storystarling.com


Similar experience. I posted a Show HN two days ago for a children's book generator - type a story idea, get a fully illustrated printed book shipped to you. Offered a free printed book including shipping to the HN community via voucher code. Got 7 points, 2 comments, and zero voucher redemptions. Nobody even ordered the free book.

One of those comments was genuinely useful feedback from Argentina about localization. That alone made it worth posting. But the post was gone from page 1 in what felt like minutes.

What's interesting is this isn't a weekend vibe-coded project - it involves actual physical production, printing, and shipping. But from the outside it probably looks like "another AI wrapper," which I think is the core problem: the flood of low-effort AI projects has made people reflexively skeptical of anything that mentions generation, even when there's real infrastructure behind it.


If you don't mind some unsolicited and blunt feedback: I suspect the reason this didn't get a lot of traction is that it is unclear why customers would want this. Sorry, that's probably too harsh, but I find it difficult to imagine anyone buying this:

- Children's books, at least the well-reviewed ones, are pretty good

- This is AI generated, so I expect the quality to be significantly lower than a children's book. Flipping through the examples, I am not convinced that this will be higher quality than a children's book.

- At 20 euros for a paperback, this is also more expensive than most children's books

- Your value prop, as I take it, is that your product is better because it is a book generated for just one child, but I am not convinced that's a solid value prop. I mean, it is kind of an interesting gimmick, but the book being fully AI generated is a large negative, and the book being uniquely created for my kid is a relatively smaller positive.

Those are definitely the highest-order bits you need to prove to me in order to get traction. A couple of smaller things you should fix as well:

- As an English speaker, I found that almost all the examples are not in English. You should take a reasonable guess at my language and then show me examples in it

- It's difficult to get started: "Create your own book" leads to a signup page and I don't want to go through that friction when I am already skeptical


Thanks for the blunt feedback - genuinely appreciate it.

You're right that children's books can be excellent, and for generic topics a well-reviewed book from a skilled author and illustrator will beat what we generate. No argument there.

Where we see real value is in the gaps the publishing industry doesn't serve. Bilingual families who can't find books in Maltese/English or Estonian/German. A child with an insulin pump who wants to see a superhero like them. A kid processing their parents' divorce. A child with two dads, or being adopted, or starting at a new school in a country where they don't speak the language yet. No publisher will print a run of one for these families - but these are exactly the stories that matter most to them.

On the UX points - you're right on both. We should localize the showcase to your language, and the signup wall before trying is too much friction. Working on both.


As a father of two boys, I can give you some feedback. The AI stories you generate will probably be crap and not worth paying for. What my kids love is when I put them into a scene (I take a picture of them, then generate them in a jungle or whatever setting it is with gemini banana). They want me to print those out. I know it's temporary, but it's fun for us all. So you could combine those two things.


Yes - we do combine both: you can upload photos to get your kids into the book.


I have to be honest with you: I am the biggest AI booster alive, and even I would never buy a book fully written by AI. Maybe ask me again in 5 years, but I just don't think the quality is there.

That being said, I think you might be on to something with the bilingual book idea - it might be worth exploring there. But if you want to keep the AI angle, you'd have to come up with a pretty clever twist.


Appreciate the honesty. On bilingual - that's actually our strongest use case already. A surprising number of our orders are from multilingual families who simply can't find children's books in their language combination. That's a gap no publisher will fill because the print runs are too small.


> I posted a Show HN two days ago for a children's book generator - type a story idea, get a fully illustrated printed book shipped to you. Offered a free printed book including shipping to the HN community via voucher code. Got 7 points, 2 comments, and zero voucher redemptions. Nobody even ordered the free book.

As someone who taught their kid to read using the DISTAR alphabet, and then moved on from there, your idea just doesn't sound like it has any value.

I wrote (without LLMs) about 2 dozen "books" for 3.5yo to 5yo; none of them had any pictures in them, so having them professionally printed is a waste of time and money.

When it comes to teaching kids to read, what you need is volume, not prettiness. Your idea appeals only to people who teach reading the wrong way (schools, mostly). Kids learning to read from books with pictures in them learn slower.


Personally, I find AI-generated text disengaging. I also heard some professional writers immediately notice disorganized storytelling patterns. Have you found a way to fix this? Is there any soul in generated texts?


Not OP, but Opus 4.6 is crazy good at writing. I generated a 500-page book, and it took until around page 275 for things to... go off the rails. My strategy was to leverage ralph and a bunch of personas; it got my act structure and major points and was able to just GO. For the first part, it was, by far, the best sci-fi book I've read. The problem is that I can tell in the writing where context collapse happened. It's fixable, but I'm now realizing I have to rethink my entire approach to it.


This matches our experience. For a 28-page children's book there's no context collapse - the entire story fits comfortably in context. The format actually plays to LLMs' strengths: short, structured, clear emotional arc. It's a very different problem than generating 500 pages.


I believe it. I'm going to finish the book, but it was so good until the details got confused.

I will say that a skill is developing, but it requires oversight and... honestly, I should probably read the book before buying it on Amazon.


Honestly, for children's books specifically - yes. A children's book is 28 pages, simple language, short sentences, clear emotional arc. That's a very different challenge than writing a novel. We've put a lot of work into the prompting and story structure, and the results are genuinely good for this format. Parents can also edit every page of text before approving for print if they want to tweak anything.

The soul comes less from the prose and more from the fact that this story exists for this child. A book about their specific fear, their favorite thing, their family situation — that's what makes a kid ask to read it again at bedtime.


AI art is massively downvoted here and on Reddit, but boomers on facebook seem happy to share it. So I think you'll do better on other platforms. The opinion of AI generated creative work is just very low here. I personally agree, I've never seen an AI generated story that was interesting and I don't want to expose my children to it. I'd rather they get real stories written by real people.


Fair perspective. But the parent isn't passive here — they're the creative director. They decide what the story is about, who the hero is, what happens. The AI does a lot of the writing, yes, but the parent is the editor: they review every page, rewrite lines, regenerate illustrations they don't like. It's closer to working with a ghostwriter than pushing a button.

Most AI content feels empty because it's made for nobody in particular. A StoryStarling book is the opposite - a parent shaping a story around their specific child's world. That's a real story. They just had help telling it.


I'm no boomer, but I'm building a pipeline to produce books in a variety of genres.

This is one chapter of a book: https://nexivibe.com/writing/chapter_01.html

People who take it seriously are going to focus on the architecture, universe building, characters, and arc flow, and then let the writing be done that way. The power tools of the cognitive era are arriving.

I'm reading a 500-page sci-fi book and evaluating it; the first 275 pages are fantastic, until I can feel the context collapse and it craps the bed.


Are you sure it's not just the delivery? If I want a storybook I don't want to wait 2 business days for it to print, ship, and deliver, and I'm sure most parents/guardians don't think that far ahead. The physical delivery factor turns your product into a mental burden.

If you (or Disney or Hasbro) shipped ebooks that unfolded when you tell it a prompt (in the format of "bibbidi bobbity boop tell me a story about an elephant"), then I'm sure it would fly off the shelves this Christmas. It doesn't even have to be expensive hardware; I'm sure you can build it with a Pi Zero with 2 cheap screens paired to an app on the parent's phone. But perhaps that's not the business you had in mind.


That's a fair point about friction, but we're intentionally focused on physical books - the whole idea is to get away from screens. Most parents we talk to have the opposite problem: too much digital content, not enough tangible things. A printed book that lives on the shelf, gets read at bedtime, gets dog-eared and carried around - that's the product. You're ordering it for a birthday, a holiday, a first day of school. It's not impulse content, it's a keepsake.


I'd be a little surprised that parents who want physical storybooks would actually buy AI generated ones, instead of hoping that you dig up lost manuscripts from Hans Christian Andersen or the Disney golden age. That just seems like a contradictory audience.


Hi from Germany! Thanks for the feedback.

You're right - the showcase sorting prioritizes "Real Books" (from customers who opted in to share theirs) over "Inspiration" (demo books we generated), which pushes the Spanish example further down. We'll fix the sorting so your language appears first regardless of type.

Good catch on the German book with the English idea - that's the customer's original input, which we didn't translate for the showcase. Will fix.

On "samples" vs "ideas": agreed it's confusing. We'll either make the distinction clearer or merge them into one gallery.

Thanks for the thoughtful feedback!


Yes, I had the same experience. As good as LLMs are now at coding, it seems they are still far from being useful in vision-dominated engineering tasks like CAD/design. I guess it is a training-data problem. Maybe world models / artificial data can help here?


Nice work! We create personalized children's books - parents share their idea and photos, and AI brings their custom story to life with their child as the protagonist. We do hybrid fulfillment depending on the country. The PDF formatting challenges you mentioned are very real!


StoryStarling - Turn your story idea into a printed children's book

https://storystarling.com

Working on a platform where you describe a story concept and it becomes a real, illustrated picture book - professionally printed and shipped to your door.

The key difference from "personalized" book companies: this isn't template stories with a name swapped in. You bring an idea - maybe a book about a kid with a cochlear implant going to their first day of school, or a bilingual German-Turkish story about visiting grandma's village - and it generates a complete original narrative with consistent illustrations throughout.

You can upload reference photos so characters actually look like your child. Supports 30+ languages including bilingual editions on the same page.

Currently refining the showcase features and adding RTL language support.


I too was thinking about something like this a few months ago. There were a couple of reasons I didn't pursue the idea. One, the image-generation AI wasn't reliable enough: I couldn't get it to generate 2 images where the characters looked consistent, let alone a book's worth of images. Two, the margins were quite small, so it didn't seem like a viable business.

Wondering if you've thought about these things, and what your perspective is.


Character consistency was the hardest problem, and honestly what took the longest to get right. We use reference images as style anchors, run multiple generation passes, and have an LLM "critic" that checks for visual inconsistencies and triggers regeneration when needed. It's not perfect but it's gotten to the point where parents are happy with the results.
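
The critique-and-regenerate pattern can be sketched as a bounded retry loop. This is a hypothetical illustration under made-up names and scores, not the production pipeline: `generate` and `critique` stand in for the image model and the LLM critic.

```python
# Illustrative sketch: regenerate an illustration until a critic score
# clears a threshold, or a retry budget runs out; keep the best attempt.
def generate_with_critic(generate, critique, threshold=0.8, max_attempts=3):
    best = None
    best_score = -1.0
    for attempt in range(max_attempts):
        image = generate(attempt)        # one generation pass
        score = critique(image)          # LLM critic scores consistency
        if score > best_score:
            best, best_score = image, score
        if score >= threshold:           # good enough, stop early
            break
    return best, best_score
```

Capping `max_attempts` matters because each regeneration costs real money; returning the best attempt (rather than the last) keeps a failed final pass from discarding a good earlier one.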

On margins - tight but workable.


What do you mean by RTL? All I can come up with is Verilog or VHDL, and I'm certain that's not your meaning. I'll try it out. I have a children's book story I've been trying to image-generate for 3 years now and it hasn't yet worked out. I think the primary reason it fails is that the scenery I request is lifelike yet extremely rare to actually see. I did see it, though, and that's what inspires the story.


RTL = Right-to-Left languages - Arabic, Hebrew, Farsi, Urdu. The text rendering and page layout needs to flip for these, and it gets especially tricky with bilingual books where one language is RTL and the other is LTR.
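
The basic layout decision can be sketched with the Python standard library: classify a text run as RTL or LTR from the first strong-direction character's Unicode bidi category. This is a minimal sketch of the idea, not a full implementation of the Unicode bidirectional algorithm.

```python
# Minimal sketch: decide paragraph direction from the first strong
# directional character, using Unicode bidi categories (stdlib only).
import unicodedata

def is_rtl(text: str) -> bool:
    for ch in text:
        bidi = unicodedata.bidirectional(ch)
        if bidi in ("R", "AL"):   # Hebrew letters, Arabic letters
            return True
        if bidi == "L":           # first strong LTR character wins
            return False
    return False                  # no strong character: default to LTR
```

For a bilingual page you would run this per language block and flip alignment and page order accordingly; full mixed-direction rendering within a line needs a proper bidi implementation (UAX #9), which is where it gets tricky.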

What's the scenery? Happy to try it on our system if you want to share.


This is really cool. I wish the example stories let me see the entire book and purchase them if I like them.

I’m skeptical about the stories being good quality so seeing the full stories might mitigate that.


The story synopsis next to the preview gives you the full narrative arc before you commit. But fair point on wanting to see more.

You can edit or regenerate pages if something isn't working - it's iterative, not one-shot. Happy to help you try it out without payment - drop me an email.


The WASM constraints make sense given the resource limits, especially for mobile. If you are moving that compute server-side though I am curious about the unit economics. LaTeX pipelines are surprisingly heavy and I wonder how you manage the margins on that infrastructure at scale.


I assume the use case here is mostly for backend infrastructure rather than consumer devices. You want to verify that a machine has booted a specific signed image before you release secrets like database keys to it. If you can't attest to the boot state remotely, you don't really know if the node is safe to process sensitive data.


I'm confused. People talking about remote attestation which I thought was used for stuff like SGX. A system in an otherwise untrusted state loads a blob of software into an enclave and attests to that fact.

Whereas the state of the system as a whole immediately after it boots can be attested with secure boot and a TPM sealed secret. No manufacturer keys involved (at least AFAIK).

I'm not actually clear which this is. Are they doing something special for runtime integrity? How are you even supposed to confirm that a system hasn't been compromised? I thought the only realistic way to have any confidence was to reboot it.


I've found they are actually quite good at semantic geometry even if they struggle with visual or pixel-based reasoning. Since this is parametric the agent just needs to understand the API and constraints rather than visualize the final output. It seems like a code-first interface is exactly what you want for this.


The killer app here is likely LLM inference loops. Currently you pay a PCIe latency penalty for every single token generated because the CPU has to handle the sampling and control logic. Moving that logic to the GPU and keeping the whole generation loop local avoids that round trip, which turns out to be a major bottleneck for interactive latency.


I don't know what the pros are doing but I'd be a bit shocked if it isn't already done this way in real production systems. And it doesn't feel like porting the standard library is necessary for this, it's just some logic.


Raw CUDA works for the heavy lifting but I suspect it gets messy once you implement things like grammar constraints or beam search. You end up with complex state machines during inference and having standard library abstractions seems pretty important to keep that logic from becoming unmaintainable.


I was thinking mainly about the standard AR loop, yes I can see that grammars would make it a bit more complicated especially when considering batching.


Turns out how? Where are the numbers?


It is less about the raw transfer speed and more about the synchronization and kernel launch overheads. If you profile a standard inference loop with a batch size of 1 you see the GPU spending a lot of time idle waiting for the CPU to dispatch the next command. That is why optimizations like CUDA graphs exist, but moving the control flow entirely to the device is the cleaner solution.
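
A back-of-the-envelope model makes the point concrete. The numbers below are made up for illustration, not measurements: if per-token dispatch overhead is comparable to the GPU compute time itself, shrinking it dominates the throughput at batch size 1.

```python
# Toy latency model: tokens/sec when each token costs GPU compute time
# plus host-side dispatch overhead. All microsecond figures are
# hypothetical, chosen only to illustrate the shape of the trade-off.
def tokens_per_second(gpu_compute_us: float, dispatch_overhead_us: float) -> float:
    return 1e6 / (gpu_compute_us + dispatch_overhead_us)

# Per-kernel host launches vs. a device-resident generation loop.
naive = tokens_per_second(gpu_compute_us=200, dispatch_overhead_us=100)
device_loop = tokens_per_second(gpu_compute_us=200, dispatch_overhead_us=5)
```

Under these assumed numbers the device-resident loop is roughly 1.5x faster even though the GPU work per token is unchanged, which is the idle-time-between-launches effect a profile would show.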


I'm not convinced. (A bit of advice: if you wish to make a statement about performance, always start by measuring things. Then when somebody asks you for proof/data, you would already have it.) If what you're saying were true, it would be a big deal, except unfortunately it isn't.

Dispatch has overheads, but it's largely insignificant. Where it otherwise would be significant:

1. Fused kernels exist

2. CUDA graphs (and other forms of work-submission pipelining) exist


CUDA graphs are pretty slow at synchronizing things.

