They must mean creating a composite image with multiple in-focus areas? Otherwise I agree, I can't see any way that multiple exposures would help, at least from some light reading on Wikipedia - https://en.wikipedia.org/wiki/Multiple_exposure
Hmmmm. Which direction are you driving where you can hardly understand them? I don’t think there’s a regional accent in the whole of the UK that’s “hardly understandable” when spoken by anyone under 80 years old, let alone an hour from London. Especially when the conversation isn’t “in-group”.
I don’t think that’s an argument from authority. “Experts have been discussing X for a long time without reaching a conclusion” is a premise from which a reasonable argument can be made for the unlikelihood that an off-hand comment on HN has solved X. Argument from authority doesn't take that form, though the two do have invoking authority in common.
I was talking to my parents the other day and surprised myself getting pretty choked up remembering how my dad had shown me how to program an ASCII animation on his 386, and how the wonder I felt at that in many ways led me to where I am today, so many years later. These things matter.
A video camera shooting at standard shutter speeds (i.e. if being used by a professional) would likely not show the bullet. If shooting 60 fps, e.g. with a 1/120 shutter, I'd guess the bullet wouldn’t show up. A quick Google suggests a typical 3000 km/h out of the muzzle, which would leave roughly a 7 m motion-blur trail. Not sure how fast, and to what speed, a bullet slows in air.
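The arithmetic behind that estimate can be sketched quickly (the 3000 km/h and 1/120 s figures are the comment's ballpark numbers, not measured values):

```python
# Rough motion-blur estimate: distance the bullet travels while the
# shutter is open. Assumed numbers: ~3000 km/h muzzle velocity and a
# 1/120 s shutter, a common choice when filming at 60 fps.

def blur_trail_m(speed_kmh: float, shutter_s: float) -> float:
    """Distance the subject moves during the exposure, in metres."""
    speed_ms = speed_kmh / 3.6  # convert km/h to m/s
    return speed_ms * shutter_s

print(round(blur_trail_m(3000, 1 / 120), 1))  # ~6.9 m
```

So even at video shutter speeds the bullet smears across several metres of the frame, which is why it effectively disappears.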
Would plugging in your fastest external SSD and then using a hard drive read/write tester achieve some of the same ends? I've done that before with Blackmagic's Disk Speed Test app and found it useful.
At 1/2000th both a running cheetah and a running squirrel are completely frozen. I haven’t yet found anything that isn’t frozen with that setting. I suspect at that point you’re in the domain of bullets, very outstretched springs and the like.
Stabilization doesn't help with subject movement, it only helps with camera shake.
So with this level of stabilization, a picture of a running cheetah at 1/25 behaves like 1/2000 only as far as the stability of the camera is concerned. If you're not tracking the cheetah you'll get a sharp background, because the shaking of your hands has been nullified, but the cheetah is still moving within the frame and still blurry.
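A quick back-of-the-envelope illustration of why the subject stays blurry: subject blur depends only on subject speed times shutter time, and stabilization doesn't enter into it. The ~25 m/s cheetah speed is an assumed round number for illustration:

```python
# Sketch: how far a subject moves during the exposure, independent of
# any camera stabilization. Assumed figure: a cheetah running ~25 m/s.

def subject_blur_m(speed_ms: float, shutter_s: float) -> float:
    """Metres the subject travels while the shutter is open."""
    return speed_ms * shutter_s

print(subject_blur_m(25, 1 / 25))    # 1.0 m of movement at 1/25
print(subject_blur_m(25, 1 / 2000))  # 0.0125 m at 1/2000
```

A metre of movement during the exposure is a hopeless smear, while 12.5 mm at 1/2000 is small enough to look frozen at typical magnifications.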
I hope there are more models trained on more precise inputs going forward. I understand that natural language feels the most futuristic, but while it has the lowest barrier to entry, it’s not only imprecise but also slow. Visual approaches (for example ControlNets in Stable Diffusion, or image as input in ChatGPT, though both of these are somewhat bolted on) and 2D semi-natural languages all merit further inquiry.
Another (and perhaps the ultimate) possibility is to have some way, perhaps through simulations, to directly expose the model to the problem, rather than having a human/natural-language intermediary.
Smaller sensor, tighter aperture. So yes, more light or a more sensitive sensor.