Apple has bought a startup that improves photos with AI

If DxOMark, the lab that rates smartphone cameras, had a nomination for "most unusual camera," the Google Pixel would be guaranteed to win it. After all, getting portrait shots, image stabilization and brightened low-light photos that still look as natural as possible out of a single camera, using nothing but software algorithms, is no small feat. And although this approach initially looked rather unusual, many companies, Apple included, gradually came to the conclusion that there is nothing wrong with improving photos in software.

The iPhone camera could become even better thanks to Spectral Edge's technology

See also: How to change the resolution and frame rate in the Camera app on iOS

To improve the quality of photos taken with the rear camera, Apple has acquired the startup Spectral Edge, which develops software techniques to increase image detail and improve color reproduction. It is a combination of patented image fusion and deep learning technologies that work together to take photos to a more advanced level without requiring better optics or other hardware components, which picture quality normally depends on.
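Spectral Edge has not published its pipeline, but the general idea of image fusion can be illustrated with a minimal sketch: take fine detail from a second frame (for example, an infrared or longer exposure) and blend it into the regular RGB photo. The function below is a hypothetical toy, not the startup's actual method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_detail(rgb, guide, strength=0.6, sigma=3.0):
    """Toy image-fusion sketch: transfer fine detail from a 'guide'
    frame (e.g. an infrared or second exposure) into an RGB photo.
    rgb:   HxWx3 float array in [0, 1]
    guide: HxW   float array in [0, 1]
    """
    # High-frequency detail of the guide frame = original minus its blur.
    detail = guide - gaussian_filter(guide, sigma)
    # Add that detail to every RGB channel, scaled by 'strength'.
    fused = rgb + strength * detail[..., None]
    return np.clip(fused, 0.0, 1.0)

# Hypothetical usage with random arrays standing in for real frames.
rgb = np.random.rand(480, 640, 3)
guide = np.random.rand(480, 640)
out = fuse_detail(rgb, guide)
```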

How neural networks make pictures better

Left: without Spectral Edge processing; right: with it

Apparently, Spectral Edge simply has access to a large database of photos taken in different scenarios. Like any neural network, the startup's technology constantly trains on them: it analyzes a photo taken by the user, compares it with the images it has studied, and then brightens or adds contrast to areas that do not seem bright enough. The same likely happens with detail. Since AI has famously learned to fill in missing details in photos, it is not surprising that Spectral Edge also increases image sharpness.
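To make the idea concrete, here is a very simplified sketch of what "brighten the shadows and sharpen the detail" can look like in code. It uses plain gamma correction and an unsharp mask; the real product is a learned model, so treat this only as an illustration of the effect, with made-up parameter values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance(img, gamma=0.8, amount=0.5, sigma=2.0):
    """Toy enhancement pass: lift dark areas and sharpen edges.
    img: HxWx3 float array in [0, 1].
    """
    # Gamma < 1 brightens shadows more than highlights.
    lifted = np.power(img, gamma)
    # Unsharp mask: add back the difference between the image and its blur.
    blurred = gaussian_filter(lifted, sigma=(sigma, sigma, 0))
    sharpened = lifted + amount * (lifted - blurred)
    return np.clip(sharpened, 0.0, 1.0)
```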


Apple already has its own work in this area. Deep Fusion, the technology the company introduced this year, does roughly the same thing as Spectral Edge. It analyzes a photo and, on its own, increases detail and corrects colors where necessary. The output is even higher quality than could be achieved without Deep Fusion, and in most cases the results are so good that you would not even suspect any post-processing took place.
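Apple describes Deep Fusion as merging several exposures pixel by pixel to keep the best detail, but has not published the internals. The sketch below shows one plausible, deliberately simplified way to merge frames by weighting sharper regions more heavily; the weighting scheme is an assumption, not Apple's actual algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def merge_frames(frames, sigma=2.0):
    """Rough multi-frame merge: weight each frame per pixel by its local
    high-frequency energy, so sharper frames dominate where they have detail.
    frames: list of HxWx3 float arrays in [0, 1].
    """
    stack = np.stack(frames)                       # N x H x W x 3
    luma = stack.mean(axis=-1)                     # N x H x W
    # Local detail energy: squared difference from a blurred copy.
    detail = (luma - gaussian_filter(luma, sigma=(0, sigma, sigma))) ** 2
    weights = detail + 1e-6                        # avoid division by zero
    weights /= weights.sum(axis=0, keepdims=True)
    return (stack * weights[..., None]).sum(axis=0)
```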

See also: photos taken on the iPhone 11 using Deep Fusion. What is it?

Using software algorithms and neural networks undoubtedly makes photos better, but according to photographers it amounts to a deception. After all, if a picture has been processed so that its saturation, contrast and detail are boosted, it is not a real shot. It becomes impossible to judge the real skill of the person who took it, and, in fact, it leaves less room for creativity. And when a photo is made not by a person but by artificial intelligence, its credibility suffers and its value decreases.

Photos with AI: cheating or the norm?

Personally, I find it rather strange to hear such judgments, because a smartphone is not a device for professional shooting. As a rule, people who buy an iPhone just want beautiful pictures they are not ashamed to share on social networks and post to Instagram. So there is nothing surprising in manufacturers deliberately resorting to this kind of trick, using artificial intelligence and post-processing so that the result pleases ordinary users rather than geeks and professionals eager to process a frame themselves or convert it to black and white.
