Hacker News | new | past | comments | ask | show | jobs | submit | nimbleal's comments

Not sure how multiple exposures help?

Smaller sensor, tighter aperture. So yes, more light or a more sensitive sensor.


They must mean creating a composite image with multiple in-focus areas? Otherwise I agree, I can't see any way that multiple exposures would help, at least from some light reading on Wikipedia - https://en.wikipedia.org/wiki/Multiple_exposure



Hmmmm. Which direction are you driving in where you can hardly understand them? I don’t think there’s a regional accent in the whole of the UK that’s “hardly understandable” spoken by anyone under 80 years old, let alone an hour from London, especially where the conversation isn’t “in group”.


I don’t think that’s an argument from authority. “Experts have been discussing X without reaching a conclusion for a long time” is a premise from which a reasonable argument can be made for the unlikelihood that an off-hand comment on HN has solved X. An argument from authority doesn't take that form, though the two do have invoking authorities in common.


Zeiss cinema lenses (in particular master primes) have the least distortion I’ve come across


I was talking to my parents the other day and surprised myself getting pretty choked up remembering how my dad had shown me how to program an ASCII animation on his 386, and how the wonder I felt at that in many ways led me to where I am today, so many years later. These things matter.


Do you still have the animation?


It was basically (no pun intended) this, though obviously not in bash:

  #!/bin/bash

  legs_out=(
  "                     "
  "        .'''.        "
  "       -     -       " 
  "       |  C  ^       "
  "        \    7       "
  "          | |        "
  "        /     \\     "
  "        |  \\ \\     "
  "        |   \\ \\    " 
  "      / |    \\ \\   "
  "     /  |     \\ \\  "
  "    /   |      \\ \\ "
  "   /    |      |\\  \\     "
  "        \\    \\          "
  "               \\         "
  "       /        \\        "
  "      /          \\       "
  "     /    / \\    \\      "
  "    /    /   \\    \\     "
  "   /    /     \\    \\    "
  "  /    \       \\    \\   "
  "   \\   \\       \\    \\...  "
  "     ____]         [    ]"
  )

  legs_cross=(
  "        .'''.        "
  "       -     -       "
  "       |  C  ^       "
  "        \    7       "
  "          | |        "
  "        /     \\     "
  "        |  |  |      "
  "        |  |  |      "
  "        |  |  |      "
  "        |  |  |      "
  "        |  |  |       "
  "      ( |  |  |       "
  "        |  |   )    "
  "        |  |  |      "
  "        |  |  |      "
  "        |  |  |      "
  "        |  |   )      "
  "        |  |  |       "
  "        |  |  |       "
  "        |  |  |       "
  "        /  |          "
  "       /   |  |       "
  "      \\..]  /    /    " 
  )

  print_man() {
    # Print one frame of the figure, indented by $1 spaces.
    # $2 is the name of a frame array, passed as e.g. legs_out[@].
    local spaces=$1
    local man=("${!2}")
    for line in "${man[@]}"; do
      printf "%*s%s\n" "$spaces" "" "$line"
    done
  }

  spaces=0
  state=0

  # Walk the figure across the terminal, swapping frames every two ticks.
  while true; do
    clear

    if (( state % 4 < 2 )); then
      print_man "$spaces" legs_out[@]
    else
      print_man "$spaces" legs_cross[@]
    fi

    ((spaces++))
    ((state++))

    # Wrap back to the left edge once we run off the right of the terminal.
    if (( spaces > $(tput cols) )); then
      spaces=0
    fi

    sleep 0.1
  done


This is really fun, thanks for sharing. May try to do something similar with my niece.


I’m afraid not. It was incredibly simple, but I just remember being amazed that such a thing was even possible.


A video camera shooting at standard shutter speeds (i.e. if being used by a professional) would likely not show the bullet. If shooting 60fps with, say, a 1/120 shutter, I'd guess the bullet wouldn't show up. A quick Google suggests a typical 3000km/h out of the muzzle, which would leave a ~7m motion blur trail? Not sure how fast, and to what speed, a bullet slows in air.
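The back-of-envelope figure above checks out in one line (the 3000km/h muzzle velocity and 1/120s exposure are just the guesses from the comment, not measured numbers):

```shell
# Blur trail length = velocity x exposure time.
# 3000 km/h -> m/s (divide by 3.6), exposed for 1/120 s.
awk 'BEGIN { printf "%.1f m\n", 3000 / 3.6 / 120 }'
```

That prints 6.9 m, i.e. the ~7m trail mentioned.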


Would plugging in your fastest external SSD and then using a drive read/write tester achieve some of the same ends? I've done that before with Blackmagic's Disk Speed Test app and found it useful.


Yes, or considered another way: a 1/25th shutter vs. almost 1/2000th, i.e. a lot of motion blur vs. virtually nothing able to provoke blurring.
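Since blur length scales linearly with exposure time, the gap between those two shutters is easy to quantify:

```shell
# Motion blur is proportional to exposure time, so the ratio of the
# two shutter speeds gives the relative blur: 1/25 s vs 1/2000 s.
echo $(( 2000 / 25 ))
```

That prints 80: the slower shutter smears a moving subject across roughly 80x the distance.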


Except a moving subject, of course.


At 1/2000th both a running cheetah and a running squirrel are completely frozen. I haven’t yet found anything that isn’t frozen with that setting. I suspect at that point you’re in the domain of bullets, very outstretched springs and the like.

Edit: yeah, a speeding bullet caught at 1/5000th: <https://flickr.com/photos/hoohaaphotos/5587502201/>


Stabilization doesn't help with subject movement; it only helps with camera shake.

So with this level of stabilization, you'll take a picture of a running cheetah at 1/25 as if it were 1/2000 only as far as the stability of the camera is concerned. If you're not tracking the cheetah, you'll get a sharp background because the shaking of your hands has been nullified, but the cheetah is still moving within the frame and still blurry.


I could see that being possible with a human language, but a non-human language? Nowhere near enough context, I'd think.


I hope there are more models trained on more precise inputs going forward. I understand that natural language feels the most futuristic, but while it has the lowest barrier to entry it’s not only imprecise but also slow. Visual approaches (for example ControlNets in Stable Diffusion, or image input in ChatGPT, though both of these are somewhat bolted on) and 2D semi-natural languages all merit further inquiry.

Another (and perhaps the ultimate) possibility is to have some way, perhaps through simulations, to directly expose the model to the problem, rather than having a human/natural-language intermediary.


