

Thanks! Added to the toptext now.



> and people strongly prefer the taste and texture.

...and _people in the USA_ strongly prefer...

Although, I don't know how solid the evidence for even that statement is.


If you want solid evidence you can read a book on the history of animal husbandry. Roman sources including Cato the Elder, Columella, and Varro describe how they used supplemental grains to get cows through the winter and to give oxen enough energy to work (and to feed cavalry, which would have been completely impossible without them). Humanity has been feeding grains to cattle for thousands of years, likely since prehistory.

Then in the first half of the 1800s, a bunch of American farmers with an abundance of corn independently discovered that they could grow bigger cows for slaughter in half the time if they fed them grains instead of roughage like hay or grass. That idea quickly spread to Europe, and by the time the green revolution and globalization rolled around in the second half of the 20th century, almost everybody started doing it.

This isn’t some new phenomenon. It predates the globalization of agriculture, and if you were to ask a random farmer anywhere in the world whether they feed their cows a ton of grain, they’d look at you like you were asking a very stupid question.

It’d be like asking “do plants need fertilizer?” Yes. If you want to feed the world, yes they do.


You've argued that grain is fed to cattle, which was not in question.

The parent questioned whether the use of grain for finishing was down to a demand based on consumer taste preference.

You've done nothing that would move them from their position of questioning the evidence here.

The detail you do provide shows that grain feeding increases yield for farmers, which would be an indicator that it is the financial benefit to herd owners that drives the use of grain, potentially undercutting your assertion.

Angus beef is very popular in the UK; I'm relatively sure it's grass fed?


That is not at all what the GP was asking because this:

> ...and _people in the USA_ strongly prefer...

> Although, I don't know how solid the evidence for even that statement is.

Is completely incoherent in the context of the thread, and I just did my best to answer the two words “solid evidence.”

However, you make a good point. There is a chicken-and-egg problem here between consumer taste and farmers optimizing their yield. I don’t have an answer, but I invite you to compare them yourself if you ever get the chance: grass-finished beef versus a high-end ribeye, or something like wagyu/kobe, where they’re fed almost exclusively rice mash or grains.

As for “Angus beef”, no, that doesn’t mean anything. The US/UK/EU don’t have any meaningful regulations about those marketing terms.


>Is completely incoherent in the context of the thread

Ah, well, it seemed cogent and straightforward to me: the OP suggested that your claim that grain feeding is driven by consumer taste preference seemed to lack evidence.

It seems like something that will have been tested (certainly for low-n values); it also seems likely to vary substantially by culture/region.

One of my "if I were in charge" ideas is for origin marks that provide all information about the inputs into any product made available for sale. Under such a system one could look up whether the farmer bought grain feed.


Where is the mythical land of people who prefer gamey, metallic beef?


Wherever it is they're finding the people in charge of canteen menus.


I am puzzled that the media is widely reporting that the issue was caused by solar radiation, and that the immediate fix is to revert to the previous version of software for the ELAC (Elevator Aileron Computer). The only explanation that seems to fit with the narrative is that the newer version of software has weaker memory integrity checks than the older version - which seems unlikely.

e.g. "For the majority of affected jets, Airbus prescribed a software rollback to a previous, stable version" - from https://www.wionews.com/photos/-pitched-downward-on-its-own-...


If you're in the vicinity of the road called London Wall (where the car park referenced in the article is) then it's only a short walk to London's Roman amphitheatre [1]. It doesn't seem to be very well known but is quite impressive. It's one of very many bits of Roman history entombed in basements of London buildings.

The Merrill Lynch Financial Centre also has a big chunk of Roman stuff in the basement - but there's no public access and no access to the walkway around the ruins even if you're an employee.

[1] https://www.thecityofldn.com/directory/londons-roman-amphith...


You don't need a removable wheel to sharpen a pizza cutter on a stone. That's actually pretty clear from the video showing the sharpening of the removed wheel - the fact that the wheel is not in the handle isn't material to the sharpening action.

Pizza cutters wear out by deformation of the hole in the middle of the wheel (in my experience). I have thought of hacking one to fit bearings, just for the joy of having something that is properly engineered.


This is just a dedicated RF emitter combined with a dedicated receiver. The fact that it uses WiFi hardware is probably just because that's the cheapest and most available hardware for the researcher to work with. There is no indication in the article that the WiFi can actually be used for transmitting real data at the same time; that a non-dedicated WiFi source can be used; or that it works when there are many people between transmitter and receiver.

Therefore the idea that this might apply to real-world situations and use existing WiFi infrastructure is a stretch given the information that's been shared.

It basically doesn't seem like a big deal to demonstrate what has been demonstrated.


Research doesn't always seem like a big deal. In this case, using CSI extracted from standard WiFi packets (beacons, data frames, etc.) from commodity hardware is the core of the "big deal."

In principle, any packet that carries data can also be used for sensing, though, as you mentioned, this isn't what the researchers demonstrated. However, for years this kind of thing was studied using special multi-antenna Intel cards to get a clean signal. Getting this level of accuracy from such a low-amplitude signal, from a single antenna, on commodity hardware like an ESP32 is the actual breakthrough. It proves the concept is sound before tackling the much harder problem of using a standard home router amidst other traffic or isolating multiple targets in a room.


mmWave heartbeat sensors are like $2 at retail pricing. This is commodity stuff; I fail to see how adapting it to a new radio is any kind of big deal.


Using channel state information is not about "a new radio." CSI is a byproduct of existing WiFi standards/infrastructure.

CSI does require a supported chipset, like an ESP-32. However, if an IoT device is already using an ESP-32, for example, one would not need to add dedicated hardware (like an mmWave MR60BHAX) to be able to do things like presence, breathing, heart rate, and location detection.
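
For a sense of what that looks like in practice, here's a rough sketch of enabling CSI capture on an ESP-32 (untested, and written from my recollection of the ESP-IDF esp_wifi CSI API; treat the exact config fields and signatures as assumptions to verify against your IDF version's headers):

    #include "esp_wifi.h"

    // Invoked per received frame once CSI is enabled. info->buf holds raw
    // CSI as interleaved int8 imaginary/real pairs per subcarrier; a sensing
    // pipeline would derive amplitude/phase from these and feed a model.
    static void csi_rx_cb(void *ctx, wifi_csi_info_t *info)
    {
        // e.g. copy info->buf / info->len to a queue for processing
    }

    // Assumes esp_wifi_init()/esp_wifi_start() have already run.
    void enable_csi(void)
    {
        wifi_csi_config_t cfg = {
            .lltf_en = true,           // legacy long training field
            .htltf_en = true,          // HT long training field
            .stbc_htltf2_en = true,
            .ltf_merge_en = true,
            .channel_filter_en = false,
            .manu_scale = false,
        };
        ESP_ERROR_CHECK(esp_wifi_set_csi_config(&cfg));
        ESP_ERROR_CHECK(esp_wifi_set_csi_rx_cb(csi_rx_cb, NULL));
        ESP_ERROR_CHECK(esp_wifi_set_csi(true));
    }

The same radio keeps handling normal WiFi traffic; the CSI arrives as a per-packet byproduct, which is the whole point.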

As a hobbyist/ESPHome user, I have lots of ESP-32s and not lots of mmWave sensors. As a business, I'm already shipping with an ESP-32 and I don't want to increase my BOM.

Besides this, I find this research to be a big deal as it has implications for privacy and security. Your biometrics can be collected using existing widely-deployed hardware using existing internet standards. Your smart toaster can indeed be spying on you in more ways than you think.

But anyway, using CSI for sensing will soon be old hat. IEEE has granted approval to the 802.11bf WLAN Sensing working group to define standards for exactly these types of applications, taking what's currently an artifact of an implementation detail and turning it into a first-class feature.

Edit:

I want to point out another thing: "clinical-level heart rate monitoring with ultra low-cost WiFi devices" can be lifesaving in situations where clinical-level heart rate monitoring is otherwise unattainable.


Human vital signs detection can be useful in earthquake situations. Measuring through walls of concrete is difficult, however, so a new radio is needed (lower frequency and/or more sensitivity).


> This is just a dedicated RF emitter combined with a dedicated receiver. The fact that it uses WiFi hardware is probably just because that's the cheapest and most available hardware for the researcher to work with.

Ok.

> There is no indication in the article that the WiFi can actually be used for transmitting real data at the same time

So? No one said it was.

> Therefore the idea that this might apply to real-world situations and use existing WiFi infrastructure is a stretch given the information that's been shared.

What? First you say it's trivial/obvious, and now it's impossible? Decide on your critique.


The dominant themes in the thread relate to using existing WiFi infrastructure in real world environments. I thought it would be obvious that I was critiquing this line of thinking. Obviously not.


Real-world applications benefit from recent on-device hardware like NPUs or the Apple Neural Engine.

Intel demo on commercial laptop (2022), https://news.ycombinator.com/item?id=45130061

Qualcomm human-in-home positioning demo (2021), https://www.youtube.com/watch?v=xNmnqCsvMTU


> You'd be surprised by how many amateur cyclists ride more than that each year.

You wouldn't.

As an 8k km/yr cyclist with a lot of cycling friends, I can tell you that 12.5k/yr is extremely high for an amateur. Sure, there are some, but a truly tiny proportion.

8k/year eats bikes, BTW. I used to wear out rims regularly before I switched to discs, and chains/sprockets didn't even last a year (on a fixed-gear bike).


First, this is what OP said:

> Even avid cyclists could never hit the kilometres travelled by your average car user in a year.

And you say it yourself:

> Sure, there are some

So, if OP really thinks that no cyclist can ride more than what an average motorist drives a year, then even "but a truly tiny proportion" would come as a surprise to them.

Also, just looking at my Strava right now, amongst the 30 friends that I follow (I'm picky about my follows), more than a third are on their way to ride more than 25k this year. The most advanced is going to reach 23k by the end of the day, based on his current numbers and habits.

How, where and when you ride your bike will be a huge factor in how much wear it gets. For instance, my commuter's chain usually gets less than half the mileage that my road bike's chain gets, because the city is dirty, I ride no matter the weather, I don't clean the chain after each ride, and I keep putting down strong torque since I constantly have to stop and start. Same goes for brake pads: when I commute I hardly do 200m without having to brake, whereas on my road bike I can go 20km without having to touch my brakes.


> So, if OP really thinks that no cyclist can ride more than what an average motorist drives a year, then even "but a truly tiny proportion" would come as a surprise to them.

I said an avid cyclist, which is admittedly quite undefined, so fair enough. What I meant was still an enthusiast, not a sport rider or someone you could consider an amateur athlete (as many road riders are).

Road riding gets you a lot of KMs and hours in the saddle too, and like you said, in quite a specific wear pattern. I ride for hours on my MTB and my commuter but would never come close to the hours and KMs of road riding, and I will be replacing my MTB sprocket and brake pads much sooner than my commuter's.

I think we're more or less on the same page though, and since all cities and cultures are a bit different, we could be talking past each other without specifics, at which point my general comments go out the window anyway.


Yeah, if you do mainly MTB, I can definitely see why we had a hard time understanding each other!

That's partly why, when people ask me "how many km do you ride each year?", I respond in hours in the saddle.


Yes.

Are you confusing "adventurer" and "explorer"? There are plenty of contemporary adventurers (motivated by ego, fame, personal achievement) but explorers? Not so much.


The idea that you can apply lower-quality engineering practices in some systems is totally wrong, I think. Good engineering practices will help you retain good developers, will allow you to maintain a sustainable development pace and will prevent you annoying your users so much that they stop using your system. I can't imagine any commercial project where this doesn't matter.

Isn't this just another representation of the fallacy that it's possible to deliver faster by cutting quality? That's the kind of thing I expect to hear from ignorant stakeholders who know nothing about developing software, not on HN.


> The idea that you can apply lower-quality engineering practices in some systems is totally wrong, I think.

> Isn't this just another representation of the fallacy that it's possible to deliver faster by cutting quality?

If we use a real-world analogy: you don't use the same engineering quality practices to build a sandwich as you do to build a space probe.

It would be a mistake to x-ray verify connections between parts in a sandwich, but it would most likely be a mistake not to for a space probe.

The engineering practice that needs to be done correctly is judging the error risk, predicting the consequences of errors, and mitigating errors where the cost of mitigation does not outweigh an estimate of the expected cost of damage from the error, including work to make things right.

If you ship a sandwich where the spread was unevenly applied, inspection likely could have determined this before shipping, but the cost of having another person carefully review each sandwich as it is being made outweighs the cost of the occasional dry spot on the bread... which may not be noticed or may not be bothersome... Worst case, a sandwich can be remade.

OTOH, if an assembly comes apart in space on the way to another planet, it can't be fixed and the mission might be lost.

If it takes weeks, months, or years for changes to be deployed, it makes a lot of sense to invest in procedures that improve quality before shipping, as responding to issues is expensive.

If it takes minutes or seconds to deploy changes, measuring quality in production becomes more reasonable. Some changes are such that errors are likely to result in expensive cleanup, and those need extensive testing before release... but things where the effects of errors will be mild can be pushed, and if errors appear, a rapid response is often fine. It takes experience/wisdom to know which changes need more qualification and which are safe to try... and experience hopefully reduces errors too.


> The idea that you can apply lower-quality engineering practices in some systems is totally wrong, I think.

But we obviously do this - practices are significantly different between beta-release webapps and software on rovers we send to Mars. And that’s a good thing; the tradeoffs are wildly different.

> Isn't this just another representation of the fallacy that it's possible to deliver faster by cutting quality?

Do you think there’s anything that would increase quality on a project that you are on but also slow it down?

It’s not always true - more haste, less speed - but it’s not always false either.

