Adobe's new AI can convert 2D photos into 3D scenes
A taste of "Beyond the Seen".
Today at Adobe MAX, the company's annual creative conference, Adobe will preview a new technology called "Beyond the Seen" that uses artificial intelligence to push the boundaries of two-dimensional images and even transform them into immersive three-dimensional scenes. Although this is only a demonstration, it shows how AI image generators designed for specific purposes can have compelling commercial and technical applications.

The generator works by taking a landscape or interior photo and expanding it into a 360-degree panorama that surrounds the camera. It can't know what was actually behind the camera, of course, so it uses machine learning to invent a smooth, coherent environment, whether the input image is a mountain vista or a music-room interior. Adobe's algorithm can also estimate the 3D geometry of the new environment, making it possible to change the perspective and even make the camera appear to move through the scene.
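To make the setup concrete, here is a minimal sketch of the kind of preprocessing such a system implies. This is not Adobe's code: the function name, the 2:1 equirectangular layout, and the 90-degree field-of-view figure are all assumptions for illustration. The source photo is pinned to a 360-degree canvas, and a mask marks everything a generative model would be asked to invent.

```python
# Illustrative sketch only (not Adobe's code): place a source photo on an
# equirectangular 360-degree canvas and build a mask over the unknown region
# that a generative model would be asked to fill in.
import numpy as np

def make_outpainting_canvas(photo: np.ndarray, fov_fraction: float):
    """Center `photo` on an equirectangular canvas, assuming the photo spans
    `fov_fraction` of the full 360-degree horizontal sweep."""
    h, w, c = photo.shape
    pano_w = int(w / fov_fraction)       # width of the full 360-degree sweep
    pano_h = pano_w // 2                 # standard 2:1 equirectangular ratio
    canvas = np.zeros((pano_h, pano_w, c), dtype=photo.dtype)
    mask = np.ones((pano_h, pano_w), dtype=bool)   # True = to be generated

    top, left = (pano_h - h) // 2, (pano_w - w) // 2
    canvas[top:top + h, left:left + w] = photo
    mask[top:top + h, left:left + w] = False       # known pixels stay fixed
    return canvas, mask

# Example: a 1024x768 photo assumed to cover about 90 degrees of the scene.
photo = np.zeros((768, 1024, 3), dtype=np.uint8)
canvas, mask = make_outpainting_canvas(photo, fov_fraction=90 / 360)
print(canvas.shape, round(mask.mean(), 2))  # panorama size, fraction to invent
```

Note how little of the panorama the original photo covers; nearly everything surrounding the camera has to be synthesized, which is why training on full panoramas matters.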
While image extension, or outpainting, is nothing new, Adobe's AI generator is the first built solely around it. DALL-E 2, for example, lets users extend their images outward in small blocks, while Stable Diffusion requires manual configuration to achieve the same effect.
Adobe's image generator differs from general-purpose generators such as DALL-E 2 and Stable Diffusion in several respects. First, it was trained on a much narrower dataset, one assembled with a specific goal in mind. DALL-E 2 and Stable Diffusion were trained on billions of image-text pairs covering everything from avocados and Avril Lavigne to zebras and Zendaya. The Adobe generator was trained exclusively on a database of approximately 250,000 high-resolution 360-degree panoramas. That means it's good at creating realistic environments from seed images, but it has no text-to-image capability (in other words, you can't simply type in a prompt and get a surprising result) and no general-purpose function. It is a tool that does one thing. The images it produces, however, are larger.
Adobe's tool also uses a different kind of artificial intelligence system: a generative adversarial network, or GAN, rather than a diffusion model. GANs work by pitting two neural networks against each other. A generator is responsible for creating new images, while a discriminator judges whether each image shown to it came from the generator or is a real image from the training data. As the generator gets better at producing realistic images that fool the discriminator, the overall image-making algorithm improves.
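For readers unfamiliar with the architecture, the snippet below is a generic, textbook-style GAN training step, not Adobe's model; the tiny fully connected networks, sizes, and learning rates are invented for the example.

```python
# Generic GAN training step (illustrative): the discriminator learns to tell
# real images from generated ones, while the generator learns to produce
# images the discriminator accepts as real.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 784  # illustrative sizes (e.g. 28x28 flattened)
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, image_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real: torch.Tensor):
    batch = real.size(0)

    # 1) Discriminator: push real images toward label 1, fakes toward 0.
    fake = G(torch.randn(batch, latent_dim)).detach()
    d_loss = (loss_fn(D(real), torch.ones(batch, 1)) +
              loss_fn(D(fake), torch.zeros(batch, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator: produce fakes the discriminator labels as real.
    fake = G(torch.randn(batch, latent_dim))
    g_loss = loss_fn(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# One step on a dummy batch of flattened images scaled to [-1, 1].
print(train_step(torch.rand(16, image_dim) * 2 - 1))
```

Training alternates these two updates; neither network "wins" outright, and the generator's outputs grow more realistic as the contest continues.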
Although there is no word on when this technology will be available to the public, its unveiling today is "part of a broader program for other technologies" that Adobe is pursuing. It has long been possible to create 360-degree panoramas with dedicated hardware, but soon it may be possible to create convincing panoramas using software alone. That could really change things, and yes, it could let smaller creators build immersive experiences.