I'm also not a fan of how the only part of the original (a) that is actually readable, part of the title on the book in front, becomes completely whitewashed in (c). Where the model actually had the most information, it completely removed it in the result...
But wouldn't that level of glare be what would actually happen if you took the original image, and shot it in the amount of light required to make it look like the output image?
It's not trying to make things readable; it's trying to make things look as if there had been more light in the room when they were shot. In brightly lit rooms, some objects have glare. That's "correct"—it's what would appear in the training data.
I guess I might have misinterpreted the goal. If the goal was to make the image look like it was shot in daylight, then maybe whitewashing that light reflection was the correct choice. If the goal was to "see in the dark", then it seems like a very bad choice.
EDIT: Finally got the paper to load via the helpful wayback machine link provided in another thread. It looks like the goal was to simulate a long exposure with a short exposure. So whitewashing of "bright" areas in the original might be expected.
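To see why "simulate a long exposure" naturally whitewashes bright areas: a common baseline (not necessarily the paper's exact method) is to multiply the short-exposure pixel values by the exposure ratio, and anything near the top of the sensor's range clips to full scale. A minimal sketch, with a hypothetical `brighten` helper and an assumed 8-bit full scale:

```python
def brighten(pixel, ratio, full_scale=255):
    """Amplify a short-exposure pixel value by the exposure ratio,
    clipping at full scale. Detail above the clip point is lost."""
    return min(pixel * ratio, full_scale)

# A dark region survives amplification with detail intact:
dark_a, dark_b = brighten(3, 10), brighten(4, 10)    # still distinguishable

# Two different bright-ish regions both clip to the same value,
# i.e. they get "whitewashed" and their detail disappears:
glare_a, glare_b = brighten(30, 10), brighten(40, 10)
```

Once two distinct input values map to the same clipped output, no post-processing can recover the difference, which matches what happens to the book title in (c).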
Regardless of the goal, there's no way to get a more readable result if the data just isn't there. Whitewashing might simply be a result of that absence.
My point was that it whitewashed exactly those areas which had the most information. However, this is in line with their stated goal of mimicking a long exposure. It's not in line with "seeing in the dark", but that's not their goal.
If you insist on the same point, you didn't understand my initial reply. Again, I'm assuming the input is (a), not (b). But maybe you mean the lighter areas have the most information. If so, why? Just because there's more light? More light != more information. It could just be a bunch of noise.
> More light != more information. It could just be a bunch of noise.
You do realize that, in a camera sensor, light is the signal, right? So more light means a higher signal-to-noise ratio, which means that yes, there is more information available to extract.
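The SNR claim can be checked numerically. Photon arrival is Poisson-distributed, so shot noise grows only as the square root of the signal; with a fixed read-noise floor added, SNR rises as the photon count rises. A minimal simulation sketch (Gaussian approximation of Poisson shot noise; the pixel model, read-noise value, and sample count are all assumptions for illustration):

```python
import math
import random
import statistics

def pixel_snr(photon_mean, read_noise=5.0, n=20000, seed=0):
    """Empirical SNR (mean / stddev) of a simulated sensor pixel:
    shot noise ~ Gaussian(0, sqrt(signal)) plus constant read noise."""
    rng = random.Random(seed)
    samples = [
        rng.gauss(photon_mean, math.sqrt(photon_mean))  # shot noise
        + rng.gauss(0.0, read_noise)                    # read noise
        for _ in range(n)
    ]
    return statistics.mean(samples) / statistics.stdev(samples)

snr_dim = pixel_snr(10)      # dark region: few photons
snr_bright = pixel_snr(1000) # bright region: many photons
```

The bright pixel comes out with a far higher SNR than the dim one, which is exactly why the brighter spot on the book cover carries the most recoverable detail.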
And yes, I'm quite aware that the input is (a). I'm guessing that on your display you're not seeing the brighter spot in the middle of (a) that corresponds to the whitewashed area in (c). Try maxing out your brightness if you're on a phone or laptop and you should see it. I can even make out letter shapes in (a) within that bright spot.