
PlantLab (https://plantlab.ai) - AI plant health diagnosis for cannabis. It's an API, not an app [1]. Photo in, structured JSON out - condition, confidence, growth stage, nutrient lockout analysis. The response is for machines. Light burn at 0.92 confidence? Your controller dims the light. Calcium deficiency with excess potassium flagged as the lockout cause? Dosing pump adjusts.
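The "structured JSON out, machine acts on it" flow above could look roughly like this - a minimal sketch, assuming a hypothetical response schema (the field names, thresholds, and action labels here are illustrative, not PlantLab's actual API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Diagnosis mirrors the kind of structured response described above.
// Field names are hypothetical - the real PlantLab schema may differ.
type Diagnosis struct {
	Condition   string  `json:"condition"`
	Confidence  float64 `json:"confidence"`
	GrowthStage string  `json:"growth_stage"`
	Lockout     string  `json:"lockout_cause,omitempty"`
}

// decideAction maps a diagnosis to a controller action, e.g. dimming
// the light on a confident light-burn call.
func decideAction(d Diagnosis) string {
	if d.Confidence < 0.85 {
		return "no_action" // not confident enough to actuate hardware
	}
	switch d.Condition {
	case "light_burn":
		return "dim_light"
	case "calcium_deficiency":
		if d.Lockout == "excess_potassium" {
			return "adjust_dosing"
		}
		return "add_calcium"
	default:
		return "notify_grower"
	}
}

func main() {
	raw := []byte(`{"condition":"light_burn","confidence":0.92,"growth_stage":"flowering"}`)
	var d Diagnosis
	if err := json.Unmarshal(raw, &d); err != nil {
		panic(err)
	}
	fmt.Println(decideAction(d)) // dim_light
}
```

The confidence gate matters: below a threshold you want the controller to do nothing rather than actuate on a shaky call.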

I'm a software dev/data nerd, not a grower. I got interested because cannabis grow rooms are already full of automation - VPD controllers, pH/EC monitoring, dosing pumps, dimmable lights. But nothing was looking at the plant. Every sensor in the room measures the environment, not whether the plant is actually doing well. I wanted to add the eyes. And this seems to be a bounded-domain problem (i.e. a limited number of issues/conditions/pests, versus all plants everywhere).

It's a ViT-based multi-stage pipeline: verify the photo is cannabis, classify the condition or pest, then run nutrient subclassification if needed. 30 classes, 18 ms inference, Go API, ONNX Runtime. Trained on a little over a million images from grower friends - classification was 80% of the lift. I also shipped a Home Assistant integration: the camera takes a scheduled snapshot, PlantLab diagnoses, and HA acts on the result. No human involved.

Recently the part that's been the most fun is the autoresearch loop. Between training runs the system looks at its own confusion matrix, finds which classes it's mixing up, audits those training images for bad labels, and tells me what to fix. It's not fully autonomous yet but it's getting there - the model is increasingly debugging its own training data.
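The "look at your own confusion matrix, find what you're mixing up" step can be sketched simply - this is a hypothetical illustration of the idea, not the actual autoresearch code (class names and matrix values are made up):

```go
package main

import (
	"fmt"
	"sort"
)

// confusedPair is an off-diagonal cell of the confusion matrix:
// true class -> predicted class, with how often it happened.
type confusedPair struct {
	trueClass, predClass string
	count                int
}

// topConfusions scans a confusion matrix (rows = true class,
// cols = predicted class) and returns the k worst off-diagonal
// cells - the class pairs most worth auditing for label errors.
func topConfusions(m [][]int, classes []string, k int) []confusedPair {
	var pairs []confusedPair
	for i := range m {
		for j := range m[i] {
			if i != j && m[i][j] > 0 {
				pairs = append(pairs, confusedPair{classes[i], classes[j], m[i][j]})
			}
		}
	}
	sort.Slice(pairs, func(a, b int) bool { return pairs[a].count > pairs[b].count })
	if len(pairs) > k {
		pairs = pairs[:k]
	}
	return pairs
}

func main() {
	classes := []string{"cal_def", "mag_def", "light_burn"}
	m := [][]int{
		{90, 8, 2},
		{12, 85, 3},
		{1, 0, 99},
	}
	// The worst pairs point straight at which training labels to audit.
	for _, p := range topConfusions(m, classes, 2) {
		fmt.Printf("%s misread as %s: %d times\n", p.trueClass, p.predClass, p.count)
	}
}
```

From there, the loop pulls the training images behind the worst pairs and flags likely mislabels for review.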

Solo project, <100 users, free tier is 3/day.

[1] I built a simple Android app for those who just want to try it out; it's on Google Play. I'll probably make one for iOS too as time allows. https://play.google.com/store/apps/details?id=com.plantlab.p...


Such a great idea. It's nice that cannabis, despite having so many cultivars, is such a large industry built around essentially one plant. And while some varieties can look quite different, I think your API should generally be effective.

I've been thinking about similar systems for tissue cultures, but I can't seem to find a way to generalize and still get good training data or effective results. Once you lose control of white balance, species, optical clarity, distortion from the vessel, etc., results decline quite a bit in my experience. It ends up a neat yet fairly useless system outside of itself.

Granted, I have no idea what I'm doing and these could be solvable problems. Certainly much easier to solve by focusing on a single species.

I'm impressed with how well it classifies based on the image examples. A little over a million images is probably what makes it possible. My experiments have been much smaller. Maybe with more material I could overcome those limitations I mentioned, but I have a feeling the multi-species pipeline really drags it down.

Have you found that light temperature no longer skews results after so much training data? For me it really matters: classification confuses light sources with actual plant condition (hence the colour card for white balance helping so much).


Thanks! Yeah, the single-species focus does a lot of the work. Under the hood it's not one big model - there's a cannabis verification gate, then routing into disease vs pest vs deficiency, then narrower classifiers from there. Each one has a simpler job so accuracy stays high.

Early on, photography was a real problem. The training data was mostly decent shots, but inference requests would come in as blurry phone photos under purple LEDs - producing confident misclassifications. The fix wasn't clever: just more data that looks like how people actually photograph their plants. Messy, badly lit, half the leaf out of frame. Once there was enough of that in the training set, the models stopped caring about white balance. About 1.1 million augmented images now, and light temperature just isn't a factor. No color card needed.

For tissue culture - I'd bet the multi-species part is what's killing you. I'd pick the single highest-value species, collect a probably-uncomfortable amount of well-labeled data for just that one, and see if things change. Right now you might not be able to tell what's a data problem vs a fundamental limitation, because the generalization overhead masks both.


> there's a cannabis verification gate, then routing into disease vs pest vs deficiency, then narrower classifiers from there. Each one has a simpler job so accuracy stays high.

That never occurred to me. That's a great insight.

> I'd pick the single highest-value species, collect a probably-uncomfortable amount of well-labeled data for just that one

I think you're right. If I want to move forward with it I think it's the only feasible way to validate a proof of concept. Generalizing can't produce a useful tool at my scale.

Thank you! I think this was a helpful nudge. Narrow classifiers could make some things a lot easier. Do you know of any reading materials about routing like this? Is it just programmatic decision tree stuff, or is there something more clever I'm unaware of?


Glad it helps. As for narrow classifiers, it's decision tree logic as you say, and better done via trial and error than through over-engineering and theory. Cleverness comes from your own experience :)
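To make "decision tree logic" concrete, the routing really can be this plain - a hypothetical sketch with stub classifiers standing in for the real models (all names and thresholds here are made up):

```go
package main

import "fmt"

// Classifier stands in for any narrow model: it returns a label
// and a confidence. Each stage has a deliberately small job.
type Classifier func(image []byte) (label string, conf float64)

// diagnose chains the gates: verify the subject, route to a coarse
// category, then hand off to a narrower classifier. Plain decision
// tree logic - the cleverness is in how the stages are split.
func diagnose(img []byte, isCannabis, coarse Classifier, fine map[string]Classifier) string {
	if label, conf := isCannabis(img); label != "cannabis" || conf < 0.9 {
		return "rejected: not a confident cannabis photo"
	}
	category, _ := coarse(img) // e.g. "disease", "pest", "deficiency"
	if sub, ok := fine[category]; ok {
		label, _ := sub(img)
		return category + "/" + label
	}
	return category
}

func main() {
	// Stub classifiers so the sketch runs end to end.
	gate := func([]byte) (string, float64) { return "cannabis", 0.97 }
	coarse := func([]byte) (string, float64) { return "deficiency", 0.88 }
	fine := map[string]Classifier{
		"deficiency": func([]byte) (string, float64) { return "calcium", 0.91 },
	}
	fmt.Println(diagnose(nil, gate, coarse, fine)) // deficiency/calcium
}
```

Each stage can be trained, evaluated, and swapped independently, which is what keeps the individual jobs simple.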

I'd love to use this for not cannabis things. I'm looking at building a greenhouse soon, and having this kind of automation for tomatoes or carrots would be dream-like.


That's the idea - hence PlantLab, not CannaLab. Cannabis makes sense as the entry point because it's a cash crop with a big hobbyist scene, so there's enough interest to get real usage data early. But the goal is broader - tomatoes, grapes, whatever grows.

One crop at a time though. A so-so classifier across 50 species is way less useful than a really good one for the thing you're actually growing.


Various AI services (e.g. Perplexity) are down as well


I don't like how they phrased it. From the Verge:

“Perplexity is down right now,” Perplexity CEO Aravind Srinivas said on X. “The root cause is an AWS issue. We’re working on resolving it.”

What he should have said, IMHO, is "The root cause is that Perplexity fully depends on AWS."

I wonder if they're actually working on resolving that, or if they're just waiting for AWS to come back up.


Just tried Perplexity and it has no answer.

Damn, this is really bad.

Looking forward to the postmortem.


They sent out emails to existing customers yesterday, showing if you are above/below/at average usage. I'm above (no surprise), and I wonder if anyone on higher plans will find themselves under-utilizing their subscription - probably not.


I agree!


Pawel,

This looks promising! Is it for text-based models only at this time (i.e. no vision/image training)?


I wrote a tool that may be just the thing for you:

https://github.com/bikemazzell/skald-go/

Just speech to text, CLI only, and it can paste into whatever app you have open.


Oh, this does sound cool. Couple of questions that aren't clear from the readme (to me).

What exactly does the silence detection mean? Does it wait until a pause, then send the audio off to Whisper, return the output, and stop the process? Same question for continuous - does that just mean it keeps going until Ctrl+C?

Never mind, answered my own question - looks like yes for both [0][1]. Cool, this seems pretty great actually.

[0] https://github.com/bikemazzell/skald-go/blob/main/pkg/skald/...

[1] https://github.com/bikemazzell/skald-go/blob/main/pkg/skald/...
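The silence-detection behavior described above usually comes down to an energy threshold over recent audio frames. A sketch of that idea (this is an illustration, not skald-go's actual implementation):

```go
package main

import "fmt"

// silenceDetected reports whether the last `window` frames of audio
// all fall below an energy threshold - the usual trick behind
// "stop recording after a pause, then send the buffer to Whisper".
func silenceDetected(frameEnergies []float64, threshold float64, window int) bool {
	if len(frameEnergies) < window {
		return false // not enough audio yet to call it a pause
	}
	for _, e := range frameEnergies[len(frameEnergies)-window:] {
		if e >= threshold {
			return false // still hearing speech
		}
	}
	return true
}

func main() {
	// Speech followed by quiet frames: with window=3, silence fires.
	energies := []float64{0.8, 0.7, 0.6, 0.01, 0.02, 0.01, 0.01}
	fmt.Println(silenceDetected(energies, 0.05, 3)) // true
}
```

Continuous mode would just reset the buffer after each transcription instead of exiting.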


I wonder how well Augment's system will play with both of these. I recall that for some time, Cursor worked really well with Claude models and less so with OpenAI's offerings like the GPT and o-series. So far, my own testing had a few timeouts on GPT-5 and slower results. Nothing substantially different - I need to experiment with different languages and projects to pick out the use cases for GPT-5.


I know it's been mentioned a few times, but worth repeating: these LLMs tend to do noticeably better in their own native environments. Claude (Opus or Sonnet) in Copilot != Claude in Claude Code. Same applies to Cursor, Windsurf, Augment, etc. This likely has a lot to do with context manipulation (and compression), which affects the resulting output. I imagine that GPT-5 likewise will do better in Codex vs 3rd party plugin/VS Code fork.


The system prompts aren't shared either, and that probably accounts for quite a bit of the difference as well.


"Qwen3-Coder ... is the first open-source model I’ve been able to accept patches from. It isn’t by any means a Claude killer, but it feels like Claude 3.7 Sonnet, maybe even better."

Has anyone been able to set up Qwen3-Coder to run locally in agentic mode (via LM Studio or similar)? So far, I have only seen it work as chat via the Continue plugin. It gives reasonable suggestions, and it is supposed to be able to call tools - I just haven't figured out how to make that happen yet.


I had some time for in-depth experiments with it this summer and was disappointed. It gets the surface-level details alright, but falls apart on any detailed work.

Examples that failed:

- opening hours for POIs (restaurants, tourist attractions, etc.) - mostly made up
- GPS coordinates - nearly 100% inaccurate
- contact info (e.g. phone, email) for specific government or public bodies - nearly 100% inaccurate

The issue with the above was mainly not a lack of results but fabricated ones: coordinates that don't correspond to actual locations, or phone numbers for departments that don't exist. That creates more work to discover they're nonsense than a simple "no results found" would.


WhatSignal, WhatsApp <-> Signal relay, written in Go

https://github.com/bikemazzell/whatsignal

I'm working on a WhatsApp-to-Signal relay: whenever someone sends you a WA message, it appears in your Signal. You can reply, and the reply goes back to the original sender.

Why? I'm privacy conscious and don't fancy using a Meta product, but some of my friends/associates/family still insist on WhatsApp only. Running this WhatSignal service on my micro server behind a VPN lets me communicate without having WhatsApp on my mobile.

Behind the scenes, it connects WAHA (https://github.com/devlikeapro/waha ) and Signal CLI (https://github.com/AsamK/signal-cli). Still early stages, but getting closer to a workable state.
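The core of a relay like this is a bidirectional mapping between WA contacts and Signal threads, so replies route back correctly. A hypothetical sketch of that idea - the real WhatSignal wiring (WAHA webhooks, signal-cli) is more involved, and every name here is made up:

```go
package main

import "fmt"

// Message is a minimal normalized chat message.
type Message struct {
	From, Text string
}

// Relay keeps the WhatsApp-contact <-> Signal-thread mapping so that
// replies in Signal can be routed back to the original WA sender.
type Relay struct {
	waToSignal map[string]string // WA contact -> Signal thread id
	signalToWA map[string]string // reverse mapping for replies
	sendSignal func(thread, text string)
	sendWA     func(contact, text string)
}

// OnWhatsApp handles an inbound WA message: remember the sender,
// then forward the text into the paired Signal thread.
func (r *Relay) OnWhatsApp(m Message) {
	thread, ok := r.waToSignal[m.From]
	if !ok {
		thread = "thread-" + m.From // hypothetical thread naming
		r.waToSignal[m.From] = thread
		r.signalToWA[thread] = m.From
	}
	r.sendSignal(thread, fmt.Sprintf("[%s] %s", m.From, m.Text))
}

// OnSignalReply routes a reply from a Signal thread back to WhatsApp.
func (r *Relay) OnSignalReply(thread, text string) {
	if contact, ok := r.signalToWA[thread]; ok {
		r.sendWA(contact, text)
	}
}

func main() {
	r := &Relay{
		waToSignal: map[string]string{},
		signalToWA: map[string]string{},
		sendSignal: func(t, s string) { fmt.Println("-> signal", t+":", s) },
		sendWA:     func(c, s string) { fmt.Println("-> whatsapp", c+":", s) },
	}
	r.OnWhatsApp(Message{From: "+15550100", Text: "hey"})
	r.OnSignalReply("thread-+15550100", "hi back")
}
```

The send functions are injected so the same routing logic works against WAHA's HTTP API on one side and signal-cli on the other.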


Very cool project. I've wanted something like this for ages but never had the patience to glue it all together. Do you plan to support group chats too? That was a huge headache in my case. Excited to see how this evolves, even if setup's a bit fiddly.



