Adobe, the software giant known for its creative apps such as Photoshop and Illustrator, has released its own AI image generation tool called Adobe Firefly.
The company describes Firefly as a “family of Creative Generative AI models” and has released two tools built on it. One generates an image from a text prompt entered by the user, similar to DALL-E and Midjourney, while the other applies AI-generated styles and effects to text, producing results reminiscent of “word art”.
The release of Firefly is a significant announcement for Adobe as it continues to expand its creative app ecosystem with powerful, low-cost tools for creating images, videos, and sounds. Firefly is currently in beta and available only through Adobe’s website, but the company plans to integrate its generative AI tools into creative apps such as Photoshop, Illustrator, and Premiere.
One feature that sets Adobe’s AI image generation tool apart is a setting that lets users opt out of having work they upload to services such as Creative Cloud used to train Adobe’s models. This addresses concerns about user-generated content being used for AI training and gives users greater control over their work.
Furthermore, Adobe has revealed that Firefly will be part of the new generation of its artificial intelligence service, Adobe Sensei. The company plans to train and use its own large language models alongside OpenAI’s AI technology, a move it says will let users quickly build applications for tasks such as marketing data analysis.
Overall, the release of Adobe Firefly marks a notable addition to Adobe’s creative ecosystem, giving users powerful new tools for creating content with generative AI.
With generative AI integrated into its suite of apps, Adobe is poised to make it easier than ever for creatives to produce high-quality images, videos, and sounds with minimal effort. It will be interesting to see how users receive the tool and how it evolves from here.