Adobe Photoshop is getting a new AI tool that allows users to quickly expand images and add or remove objects using text prompts. The feature is called Generative Fill, and it makes Photoshop one of the first Creative Cloud apps to use Adobe Firefly, the company’s AI image generator, which was released as a web-only beta in March. Generative Fill is launching today in beta, but Adobe says it will see a full release in Photoshop later this year.
As a regular Photoshop tool, Generative Fill works within individual layers in a Photoshop image file. If you’re using it to extend the borders of an image (a technique also known as outpainting) or to create new objects, it gives you three options to choose from. When used for outpainting, you can leave the prompt blank and the system will try to expand the image on its own, but it works better if you give it some direction. Think of it as similar to Photoshop’s existing Content-Aware Fill feature, but with more control handed to the user.
I haven’t been able to try out Generative Fill myself, but I did get a live demo. It’s impressive, though far from perfect. Some of the generated objects, such as cars and puddles, don’t look like a natural part of the image, but I was surprised at how well it handles backgrounds and fills in blank spaces. In some examples, it was even able to carry over features from the image being edited, such as simulating light sources and “reflecting” existing parts of the image in generated water.
Results like these won’t come as a huge surprise to creators already familiar with AI image-generation tools, but as always, it’s the incorporation of this technology into mainstream applications like Photoshop that brings it to a much broader audience.
Aside from functionality, another important component of Firefly is its training data. Adobe claims the model is trained only on content the company is authorized to use, such as Adobe Stock images, openly licensed content, and content without copyright restrictions. In theory, this means anything created with Generative Fill should be safe for commercial use, compared with AI models that are less transparent about their training data. That will likely come as a consolation to creators and agencies who have been wary of using AI tools for fear of potential legal repercussions.
Generative Fill also supports Content Credentials, a labeling system that attaches attribution data to images before they’re shared online, letting viewers know whether content has been created or edited using AI. You can inspect an image’s Content Credentials via the verification site, where you’ll find an overview of that information.
“By integrating Firefly directly into workflows as a creative assistant, Adobe is accelerating the thought, discovery, and production process for all of our customers,” said Ashley Still, senior vice president of digital media at Adobe. “Generative Fill combines the speed and ease of generative AI with the power and precision of Photoshop, enabling customers to bring their visions to life at the speed of their imagination.”
Generative Fill isn’t available in the full version of Photoshop yet, but you can try it out today by downloading the desktop beta app or as a module within the Firefly beta app. Adobe says we can expect a full release in the public Photoshop app in the “second half of 2023.”
Adobe has been injecting AI-powered tools into its products for some time now. At Adobe Max last year, the company introduced new Photoshop features like high-quality object selections powered by Sensei, another Adobe AI model. Firefly is already being used in Adobe Illustrator to recolor vector-based images, and Adobe has also said it plans to integrate Firefly with Adobe Express, a cloud-based design platform that rivals services like Canva, though there’s no confirmation yet of when that will be released.