
Discussion (25 comments), originally on HackerNews
But the aggressiveness of the de-noising in the native JPG/HEIF images otherwise is really unfortunate if you want to look at the images on a screen larger than the phone's screen. The amount of detail lost (other than in areas like people's faces where the phone knows to specialise) can be very considerable.
I'd really like a way to dial that aggressiveness down a fair bit, even at the cost of more noise/grain and larger file size (through less compression due to the extra noise).
Another issue is the amount of lens flare you can get when shooting into the sun for sunsets/sunrises, or at other large, bright light sources. With very small lens elements, it's understandable from a physics perspective that suppressing reflections and inter-reflections is very difficult on such a small surface area, even with special coatings to reduce the Fresnel reflection ratios. But if you care about image quality and want to view images on a screen larger than the phone that took them, larger-format cameras still have some benefit, despite the inconvenience of their larger size and weight (looks at 5D Mk IV on shelf).
It's mostly because, in the VFX/CG space, de-noisers for ray tracing/path tracing almost always rely on extra outputs/AOVs, things like 'albedo' (diffuse reflectance), normals, world position, etc., to help guide them.
So they can often 'cheat' a bit and know where the edges of things are (because, say, the object ID AOV changes; minus pixel filtering, which complicates things a bit).
They can also 'cheat' in other ways, by mixing back in some of the diffuse texture detail that the denoiser might have removed from the 'albedo' AOV channel.
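The albedo trick described above is often done by demodulation: divide the texture detail out before denoising, then multiply it back in afterwards. A minimal sketch of that idea, assuming numpy arrays for the passes and a caller-supplied denoiser (the function name and parameters here are illustrative, not any renderer's actual API):

```python
import numpy as np

def albedo_guided_denoise(beauty, albedo, denoise_fn, eps=1e-4):
    """Demodulate the noisy beauty pass by the (clean) albedo AOV,
    denoise the remaining illumination signal, then re-multiply the
    albedo so texture detail passes through the denoiser untouched."""
    illum = beauty / np.maximum(albedo, eps)  # mostly-smooth lighting term
    illum_dn = denoise_fn(illum)              # any denoiser works here
    return illum_dn * albedo                  # texture detail restored
```

Because the illumination term is usually much smoother than the textured beauty pass, even an aggressive denoiser removes far less visible detail this way.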
Cameras don't really have anything to guide them, so they have to guess. And often they seem to use fairly primitive methods like bilateral filters (or at least things which look very similar), but it doesn't work very well.
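For reference, a naive bilateral filter weights each neighbour by both spatial distance and intensity difference, which is what makes it edge-preserving but also prone to the plasticky look mentioned here. A minimal grayscale sketch (parameter names and values are illustrative):

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Naive bilateral filter on a 2D grayscale image in [0, 1].
    Each output pixel is a weighted average of its neighbours, with
    weights falling off with both spatial distance (sigma_s) and
    intensity difference (sigma_r), so strong edges are preserved."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            acc = wsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        ws = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        wr = np.exp(-((img[ny, nx] - img[y, x]) ** 2) / (2 * sigma_r ** 2))
                        acc += ws * wr * img[ny, nx]
                        wsum += ws * wr
            out[y, x] = acc / wsum
    return out
```

Flat regions get averaged (noise removed) while samples across a strong edge receive near-zero weight, which is exactly why fine low-contrast texture tends to get wiped out along with the noise.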
Portrait modes on phones can use depth sensors to help a bit, if the camera has them, but for things like hair strands it doesn't really work; the depth data is mostly useful for fake depth-of-field blurring.
https://techcrunch.com/2018/10/22/the-future-of-photography-...
Not to say there is no movement on other fronts. Glass was pushing for a crazy anamorphic lens and a far larger sensor that would have been a serious improvement, but I don't know if it went anywhere.
https://techcrunch.com/2022/03/22/glass-rethinks-the-smartph...
From what I remember, the core thesis is “take a lot of pictures and take the best parts”, which works for a surprising number of cases.
Hopefully the camera doesn't upscale and then downscale again if told to save at its actual native-ish resolution.
The alternative, which many smartphone cameras do now, is to capture a burst of many photos at a short shutter speed and then combine them in software. For static scenes, this is equivalent to a longer shutter speed (with the additional advantage of not blowing out the highlights), and for moving things, software can filter them out to avoid smearing.
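The burst-merge idea above can be sketched very simply: average already-aligned frames, and drop any pixel that differs too much from a reference frame, treating it as motion. This is a toy stand-in for the real pipelines (which do tile-based alignment and much smarter rejection); the threshold and frame layout here are assumptions:

```python
import numpy as np

def merge_burst(frames, reject_thresh=0.15):
    """Average a burst of aligned frames (values in [0, 1]).
    Pixels in a frame that differ from the reference (first frame)
    by more than reject_thresh are treated as motion and excluded,
    so moving objects don't smear into the average."""
    ref = frames[0].astype(np.float64)
    acc = np.copy(ref)
    count = np.ones_like(ref)
    for f in frames[1:]:
        f = f.astype(np.float64)
        keep = np.abs(f - ref) < reject_thresh  # crude motion mask
        acc += np.where(keep, f, 0.0)
        count += keep
    return acc / count
```

Averaging N accepted frames reduces read/shot noise roughly by a factor of sqrt(N), which is the software equivalent of the longer exposure mentioned above.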
To counter the unnatural look of noise reduction I often add a film grain effect.
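That grain-overlay trick is simple to do yourself; a minimal sketch, assuming an image normalized to [0, 1] (the strength value is purely illustrative):

```python
import numpy as np

def add_film_grain(img, strength=0.03, seed=None):
    """Overlay Gaussian grain on an image in [0, 1]. strength is the
    standard deviation of the noise; clip keeps values in range."""
    rng = np.random.default_rng(seed)
    grain = rng.normal(0.0, strength, img.shape)
    return np.clip(img + grain, 0.0, 1.0)
```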
4:2:0/4:1:1/4:2:2 chroma subsampling is the least of the problems. If it were just that, you'd mostly just get compression artifacts around red/blue things, like you often see on streaming / TV news text banners at the bottom of the screen, i.e. things like stop signs, etc...
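For context, 4:2:0 (the most common of the schemes mentioned) keeps luma at full resolution but stores one chroma sample per 2x2 block, which is why saturated red/blue edges like stop signs and text banners degrade first. A toy sketch, assuming even image dimensions and pre-separated Y/Cb/Cr planes:

```python
import numpy as np

def subsample_420(y, cb, cr):
    """4:2:0 chroma subsampling: luma (y) stays full resolution; each
    chroma plane is averaged down so one sample covers a 2x2 block.
    Assumes even image dimensions. Sharp red/blue detail lives almost
    entirely in cb/cr, so it loses the most resolution here."""
    def down2(c):
        return (c[0::2, 0::2] + c[1::2, 0::2]
                + c[0::2, 1::2] + c[1::2, 1::2]) / 4.0
    return y, down2(cb), down2(cr)
```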