Hacker News | sxp's comments

On a similar note, you can use https://tarotpunk.app/deck-1 when planning your next startup.

Nice game.

But I think you need to work on better "opposites". E.g., "as of now" and "long ago" don't really seem to be opposites. Instead, maybe they're complex conjugates of each other, i.e., they have similarities along one axis (time frame) and differences along another. But I wouldn't consider those two to be opposite one another. Word2vec with a cosine similarity closer to -1 might be better than what you're using now.


The older generation of vectors, like word2vec, compresses to one sense of a word, but even if we ignore polysemy, opposites have a lot in common. So for any antonym pair, the similarity would not be -1; it would actually be pretty close to +1.

The big challenge is when we go beyond antonyms with clear scales like heat, speed, or size. To me, "as of now" is recent, and therefore opposite to "old". I would like a word other than "opposite" or "antonym" or "contrast" that captures a wider range.
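To make the similarity discussion concrete, here's a minimal sketch of cosine similarity with made-up toy vectors (hypothetical numbers, not real word2vec output). It illustrates the point above: antonyms share most of their context, so their vectors are nowhere near -1, and an unrelated word can actually score lower than the antonym pair.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: +1 = same direction, -1 = opposite."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-d "embeddings" (hypothetical, for illustration only).
# Antonyms like hot/cold appear in similar contexts (temperature, weather, food),
# so their vectors point in broadly similar directions.
hot  = np.array([0.9, 0.8,  0.7])
cold = np.array([0.9, 0.8, -0.7])   # differs only along one "polarity" axis
car  = np.array([-0.5, 0.1, 0.0])   # unrelated word

print(cosine_similarity(hot, cold))  # positive, nowhere near -1
print(cosine_similarity(hot, car))   # negative: the unrelated word scores lower
```

With real embeddings (e.g. via gensim's `KeyedVectors`), the same effect shows up: antonym pairs typically land well above 0, which is why "cosine similarity near -1" is not a workable antonym detector.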


At a minimum, the app needs updates to handle breakages caused by OS updates. It needs moderators and other staff for legal reasons since Meta is large enough that there's always a significant liability risk for even a few users. It needs to interact with the main non-VR app unless they want to fully isolate it. Etc.

When I was at Google, I had many of these discussions about cost tradeoffs for products that were https://killedbygoogle.com/


How does it work? Is it https://people.csail.mit.edu/mrub/evm/? I see the FAQ about VitalLens, but I couldn't find technical details.

It's super cool. Thanks for sharing. I want to build a biofeedback app for meditation and this looks like a good platform to use.


I found more details at https://www.rouast.com/blog/articles/vitallens-take-a-vital-... and https://arxiv.org/pdf/2312.06892.

It's still vague beyond saying they trained an AI: "VitalLens is trained on the PROSIT and Vital Videos Africa dataset...The time-synchronized sensor array used for PROSIT consists of a video camera, electrocardiogram (ECG), pulse oximetry, blood pressure monitor, and an ambient light sensor. "


Please consider checking out comments: https://news.ycombinator.com/item?id=47293662


What do you mean? Eulerian-Video-Magnification could be (and probably is) the underlying algorithm in the VitalLens API.
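To sketch the core idea behind EVM-style pulse extraction (this is my assumption about the general approach, not VitalLens's actual pipeline): average the skin-pixel intensity per frame, restrict to the physiological frequency band, and read off the dominant frequency. A minimal self-contained demo on a synthetic signal:

```python
import numpy as np

fps = 30.0
t = np.arange(0, 10, 1 / fps)        # 10 s of "video" at 30 fps
heart_hz = 1.2                       # synthetic 72 bpm pulse

# Stand-in for the per-frame mean skin-pixel intensity: a tiny periodic
# brightness change buried in noise.
rng = np.random.default_rng(0)
signal = 0.01 * np.sin(2 * np.pi * heart_hz * t) + rng.normal(0, 0.005, t.size)

# Spectrum of the zero-mean signal.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fps)

# Restrict to plausible heart rates (0.7-4 Hz, i.e. ~42-240 bpm).
band = (freqs >= 0.7) & (freqs <= 4.0)
peak_hz = freqs[band][np.argmax(spectrum[band])]
print(f"Estimated heart rate: {peak_hz * 60:.0f} bpm")
```

The real systems are much more involved (motion compensation, skin segmentation, and in VitalLens's case a trained model), but this is the basic "vital signs from pixel intensities over time" trick.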


Ironically, Gemini says that it’s OK to use the Ultra plan for OpenClaw via gemini-cli because the Ultra plan has some API & Cloud credits baked into it. I think $100/mo, but I can never figure out how Google billing works. I’ve pasted the response I got when I was asking it about OpenClaw. There is legal precedent for an AI hallucination being used to uphold a contract (e.g., an AI customer-support bot for an airline made a false claim and the customer bought a ticket based on that claim), so it will be interesting to see if Google reverts the bans since Gemini hallucinated that OpenClaw was OK:

  This is a critical question because the answer is different for Google vs. Anthropic, and getting it wrong with Anthropic can actually get your account banned.
  Here is the reality of the situation based on current Terms of Service and recent community reports.
  1. Google (Gemini Ultra + gemini-cli)
  Verdict: Safe (Authorized Feature)
  Google explicitly built the gemini-cli bridge to allow Ultra subscribers to use their plan programmatically. This is not a "hack" or a gray-area wrapper; it is an official feature.
  • Why it's okay: You are authenticating via gcloud or the official CLI login flow. Google tracks this usage against your specific "Agent" quotas (currently ~200 agent requests/day for Ultra users).
  • The Limit: As long as you are using the official gemini-cli as the bridge, you are compliant.
  • The Risk: If you use a different unofficial script that scrapes the gemini.google.com web interface (simulating a browser) rather than using the official CLI, you risk a ban for "scraping." But since you are using gemini-cli, you are in the clear.


Claude says it was safe too. At a bare minimum, the flagship models of these companies should understand their own ToS. Sheesh.


AFAICT, OpenClaw uses gemini-cli for OAuth, then bypasses gemini-cli and makes the calls directly.


Gemini didn't hallucinate anything. You just failed in basic reading comprehension.

In some sense, hallucinations as a problem have been solved already - their rate of occurrence seems much lower than that of people failing to read what is written instead of what they hoped it would be.


To add some math to the discussion:

- A human uses between 100W (naked human eating 2000kcal/day) to 10kW (first-world per capita energy consumption).

- Frontier models need something like 1-10 MW-years to train.

- Inference requires 0.1-1 kW of compute.

So it takes thousands of human-years' worth of energy to train a single model, but they run at around the same wall-clock power consumption as a human. Depending on your personal opinion, they are also 0.1-1000x as productive as the median human in how much useful work (or slop) they can produce per unit time.
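The arithmetic behind the "thousands of human-years" claim can be sanity-checked in a few lines (using the comment's rough figures, not measured values):

```python
HUMAN_W = 100  # metabolic power of a human, in watts (~2000 kcal/day)

def human_years(train_mw_years):
    """Express a training run's energy as years of human metabolism."""
    return train_mw_years * 1e6 / HUMAN_W   # MW-years -> W-years, divided by W

for mw_years in (1, 10):  # the assumed training-energy range above
    print(f"{mw_years} MW-year(s) of training ~ {human_years(mw_years):,.0f} human-years")
```

So the 1-10 MW-year range works out to 10,000-100,000 human-years at metabolic power, or 100-1,000 human-years if you instead use the 10 kW first-world per-capita figure.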


The math is simpler, 1 human is irreplaceable by AI.

Therefore its value is infinite, and therefore Altman's hypothesis is toilet-paper thin.


I remember when toilet paper was like DDR5.


The human brain is also a product of billions of years of evolution. We branched off from our common ancestor 7-9 million years ago. We encode quite a lot of structure and information that is essential for intelligence, so counting just our lifetime of training as the starting point is incomplete.

If you calculate 100 W × 7 million years, that's roughly 700 MW-years (255,500 MW-days) of energy to "train".
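In the same units as the training figures above, the single-lineage evolutionary "pretraining" budget comes out to (assuming 100 W per individual, and ignoring population size, which would multiply this enormously):

```python
HUMAN_W = 100        # watts per individual, as in the parent comment
SPLIT_YEARS = 7e6    # years since the split from our common ancestor

# One continuous 100 W lineage for 7 million years, in MW-years.
evolution_mw_years = HUMAN_W * SPLIT_YEARS / 1e6
print(f"{evolution_mw_years:,.0f} MW-years")  # vs. 1-10 MW-years for a frontier run
```

Even this single-lineage lower bound is about two orders of magnitude more than a frontier training run under the figures quoted above.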


If you really want to go down that path, then AIs are the product of human ingenuity and labor, so you have to amortize all of that into AI training. The numbers become pretty meaningless very quickly. That sand didn't up and start thinking on its own, you know.


That's the NRE (non-recurring engineering) cost of getting to where we are and having these LLMs.


The article is forgetting about Anthropic, which currently has the best agentic programmer and was the backbone for the recent OpenClaw assistants.


True; we focused on hardware-embodied AI assistants (smart speakers, smart glasses, etc.) as those are the ones we believe will soon start leaving wake words behind and moving towards an always-on interaction design. The privacy implications of an always-listening smart speaker are orders of magnitude higher than those of OpenClaw, which you intentionally interact with.


Both are Pandora's boxes. OpenClaw has access to your credit cards, social media accounts, etc. by default (i.e., if you have them saved in the browser on the account that OpenClaw runs on, which most people do).


This. Kids already have tons of those gadgets on. Previously, I only really had to worry about a cell phone, so even if someone was visiting, it was a simple case of "plop all electronics here." But now, with glasses, I'm not even sure how to reasonably approach this short of not allowing it, period. Eh, brave new world.


Also Mistral, which is definitely building AI assistants even if they aren't quite as successful so far.


> "White House launches direct-to-consumer drug site..."

> "The site is not selling drugs directly to American patients..."

Just another layer of middlemen. They should go with the proper free market option and allow Americans to buy medication from other countries.


How could the Trump family then directly benefit from it?


https://arstechnica.com/health/2026/01/trumprx-delayed-as-se...

    There’s already reason to be suspicious of conflicts of interest with TrumpRx, the senators note. There’s a “potential relationship between TrumpRx and an online dispensing company, BlinkRx, on whose Board the President’s son, Donald Trump, Jr., has sat since February 2025,” the senators write.


Ego. Brand. But there is likely some financial angle buried in the plans somewhere.


Reading some of the other articles on that site, it's unclear how scientifically sound the original article is. A quick Google search gives a different radiocarbon date for the landslide: https://www.sciencedirect.com/science/article/abs/pii/S01695...

I don't know enough about the event to figure out the likelihood of either hypothesis, but this other data point is something to keep in mind.


How does one reliably carbon-date a site that got so much extraterrestrial matter mixed into it? With probably different carbon-isotope ratios, because it's from "not around here"?


Claude didn't follow your "Every line must earn its keep. Prefer readability over cleverness. We believe that if carefully designed, 10 lines can have the impact of 1000." from https://github.com/quantbagel/gtinygrad/blob/master/AGENTS.m... given how bloated this demo is.

https://blog.evjang.com/2019/11/jaxpt.html is a better demo of how to render the Cornell Box on a TPU using differentiable path tracing.


The agents.md is from the upstream tinygrad repo: https://github.com/tinygrad/tinygrad/blob/master/AGENTS.md


> Never mix functionality changes with whitespace changes.

Whoa... the Cursor rule I didn't know I needed!

