The dark side of the moon: are smartphone photos real or fake?


“What are you doing, moon, in the sky? Tell me, what are you doing, / Silent moon?” One wonders whether the wandering shepherd of Asia imagined by Leopardi would have a smartphone today; if he did, it is likely that, between one reflection on the meaning of life and another, he would take a few photos of the moon. It is less likely that he would have read a Reddit thread about the moon, photos and a Samsung phone. The accusation: the fantastic satellite images taken by the Korean company’s latest smartphone are fake.


Proof

User u/ibreakphotos ran a simple test: starting from a high-resolution photo of the moon, he downscaled it to a blurry 170-pixel image and enlarged it to full screen on his monitor. In a dark room, he then used a Samsung smartphone (he does not say which model) to photograph the display. Result: the phone inserted details into the photo that were absent from the original; hence the suspicion that the real moon – which in this case was fake – had been replaced by an archive photo of the moon.
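The key point of the test is that downscaling destroys fine detail irreversibly, so anything sharp in the resulting phone photo cannot have come from the screen. The degradation step can be approximated in a few lines of NumPy; this is an illustrative sketch, not u/ibreakphotos’s actual script, and the synthetic “moon” and the 4× factor are assumptions:

```python
import numpy as np

def box_downsample(img: np.ndarray, factor: int) -> np.ndarray:
    """Average non-overlapping factor x factor blocks: fine detail is lost."""
    h, w = img.shape
    h2, w2 = h // factor, w // factor
    img = img[:h2 * factor, :w2 * factor]
    return img.reshape(h2, factor, w2, factor).mean(axis=(1, 3))

def nearest_upsample(img: np.ndarray, factor: int) -> np.ndarray:
    """Blow the tiny image back up for the monitor; no new detail appears."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# a synthetic "moon": one sharp bright square on a black background
moon = np.zeros((680, 680))
moon[302:338, 302:338] = 1.0

tiny = box_downsample(moon, 4)    # 170x170 px, as in the Reddit test
big = nearest_upsample(tiny, 4)   # back to 680x680 for full-screen display
# the sharp edge is now smeared across 4-pixel blocks: any crisp "crater"
# in a photo of this screen must have been added by the camera software
```

Because the block averages are exact, the overall brightness is preserved while the edge information is not, which is precisely why the test isolates what the phone adds.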

Here is the comparison (on the left, the original photo; on the right, the one taken with the Samsung phone):

Explanation

“Samsung is committed to providing the best photography experience under any conditions. When a user takes a photo of the moon, AI-based scene-optimization technology recognizes the moon as the main subject and shoots multiple frames to compose a single image, after which the AI enhances the details and colors. It does not apply any image overlay to the photo.” So reads the Korean company’s official comment. In reality, the issue is not new: a search of Samsung’s official blog turns up an article from a few months ago that explains in detail the technique the company adopts. In essence, no new details would be added; rather, the artificial intelligence of the camera system would extract all the details present in several images taken at very short intervals, compare them, and merge them into a single photo.
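The principle Samsung describes – combining many short-interval frames so that detail common to all of them survives while random sensor noise cancels out – is standard multi-frame stacking. Samsung’s actual pipeline is proprietary; the following is a toy NumPy sketch of the general idea, with a synthetic scene and noise level chosen for illustration (frame alignment is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# a tiny "ground truth" scene: a bright disc on black, like the moon
y, x = np.mgrid[:64, :64]
scene = ((x - 32) ** 2 + (y - 32) ** 2 < 20 ** 2).astype(float)

# ten handheld frames: the same scene, independent sensor noise each time
frames = [scene + rng.normal(0, 0.5, scene.shape) for _ in range(10)]

single_err = np.abs(frames[0] - scene).mean()
stacked = np.mean(frames, axis=0)        # average the aligned frames
stacked_err = np.abs(stacked - scene).mean()

# averaging 10 frames cuts the noise by roughly sqrt(10), about 3x,
# so detail genuinely present in the frames becomes visible again
assert stacked_err < single_err / 2
```

Crucially, this kind of stacking can only recover detail that was actually captured in the burst; it cannot conjure craters out of a blurred square of pixels, which is why the Reddit test is so pointed.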

Photo taken with Samsung Galaxy S23 Ultra

Previous

It is not the first time that cases of this kind have been discussed and analysed. A very similar case, in 2019, had Huawei as its protagonist: the P30, the Chinese company’s flagship smartphone, created in collaboration with Leica, produced images too beautiful to be true.

At the launch, the Chinese company explained that the camera software had been trained to recognize beautiful photos from hundreds of thousands of examples, and was therefore able to set exposure, shutter speed, focal length and so on to always guarantee the best result – that is, the one most suited to the chosen subject, whether flowers, green plants, landscapes, faces of people or animals, or sunrises, sunsets and blue skies.

But there would never have been room on the device to store enough professional photos to replace users’ snaps, so the images were real, the company argued in a denial that used much the same arguments Samsung would later employ. In Shenzhen they soon faced other problems, such as the Trump administration’s ban on Huawei products in the US, and the rest is history.

The year before, however, Google had introduced the Pixel 3, which took photos of moments that never existed. Once again it recorded many images one after another at very short intervals, and from each it kept only a few details. It could tell, for example, who in the photo had their eyes open, who was smiling, who was looking at the lens, and then combine the details from the individual frames into a photo that is perfect but never existed in reality: no one actually laughed together with the others, someone certainly had their eyes closed, someone was looking away.
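The selection logic behind this kind of feature can be thought of as a per-face argmax over per-frame quality scores. Google has not published its exact pipeline, so the following is a deliberately simplified stand-in: the scores and the compositing step are invented for illustration:

```python
import numpy as np

# hypothetical classifier scores (eyes open, smiling, facing the lens...)
# rows = burst frames, columns = the three people in the group photo
scores = np.array([
    [0.9, 0.2, 0.7],   # frame 0: person A great, B blinking, C so-so
    [0.3, 0.8, 0.6],   # frame 1: B at their best
    [0.5, 0.7, 0.9],   # frame 2: C at their best
])

# for each person, pick the frame where they look best
best_frame_per_person = scores.argmax(axis=0)   # -> [0, 1, 2]

# the final composite then takes each face from its own best frame,
# producing a group photo of an instant that never actually occurred
```

Each face in the output is real, but the combination is not: exactly the "perfect but non-existent" photo the article describes.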


Computational photography

Google’s pledge was ambitious: “You’ll never use flash again.” And indeed, today the flash is rarely used, always for the same reason: the smartphone’s artificial intelligence collects as many details as possible from multiple images and combines them to produce good-quality photos and videos even in the dark. A smartphone’s camera is more than the sum of its sensors and software, which are conceived and designed together to obtain results otherwise impossible from the individual components. This is computational photography, where the data coming from the sensors are processed with artificial intelligence: it does not equal a 3,000 or 4,000 euro professional camera, but it makes it increasingly hard to justify a device dedicated exclusively to photography when a smartphone delivers high-quality results. Granted that neither a fancy phone nor the most expensive of cameras will turn an amateur into Helmut Newton, AI can still do a lot. On the iPhone, for example, a technology called Deep Fusion can make a backlit portrait acceptable, lighting the foreground so that you get a face rather than just a shadow. Of course, one must ask: what if one wanted a shadow instead?
