Adobe Gingerbread Makes It Easy To Control AI Image Generation


Adobe is once again pushing the boundaries of AI image generation with its latest project, Gingerbread. The new tool aims to change how designers create images by giving them more accurate and flexible control over what the model produces.

Gingerbread builds on the success of Adobe’s previous AI image tool, Firefly, and takes it a step further by letting users generate images from user-created 3D scene presets. Instead of outputting essentially random compositions, Gingerbread layers the generated image on top of these presets, producing more controlled and precise results.

One common issue with AI-generated images is that the main subject tends to land in the middle of the frame, with little compositional variety. Gingerbread addresses this by combining 3D modelling techniques with text prompts. Take a simple example: a whiskey glass on a wooden table. The user starts by roughly blocking out the scene in 3D, creating a shape resembling the glass and positioning it where the glass should appear in the final image.

When Gingerbread then processes the text prompt “Whiskey glass on a wooden table”, it maps the generated image onto that 3D layout: the whiskey glass appears in roughly the same position as the placeholder shape, matching the user’s intention far more closely and greatly reducing the randomness usually associated with AI image generation.
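Adobe has not published how Gingerbread works under the hood, but a similar layout-to-image workflow can be sketched with open-source tools: render the rough 3D scene to a depth map, then use that map to condition a diffusion model so the subject lands where the placeholder is. Below is a minimal sketch using Hugging Face diffusers with a depth ControlNet; the checkpoints and the depth-map file name are illustrative assumptions, not part of Adobe’s tooling.

```python
# Sketch of layout-conditioned generation, analogous to Gingerbread's
# 3D-preset workflow (NOT Adobe's actual implementation).
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# A depth ControlNet steers composition: the subject appears where the
# rendered 3D placeholder is, instead of defaulting to dead centre.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Depth map rendered from the rough 3D scene (hypothetical file).
depth_map = load_image("glass_scene_depth.png")

image = pipe(
    "Whiskey glass on a wooden table",
    image=depth_map,
    num_inference_steps=30,
).images[0]
image.save("whiskey_glass.png")
```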

Furthermore, Gingerbread lets users upload custom 3D objects that closely resemble what they want to visualize, giving them even finer control over where and how the subject appears in the generated image.
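One plausible way a custom object could feed such a pipeline is to render the uploaded mesh to a depth map that the generation step above consumes. Here is a rough sketch with trimesh and pyrender; the mesh file name, camera placement, and output path are all assumptions for illustration.

```python
# Render a user-supplied 3D object to a depth map for conditioning
# (an illustrative stand-in for Gingerbread's custom-object upload).
import numpy as np
import trimesh
import pyrender
from PIL import Image

mesh = trimesh.load("whiskey_glass.obj", force="mesh")  # hypothetical file
scene = pyrender.Scene()
scene.add(pyrender.Mesh.from_trimesh(mesh))

# Place a camera two units back along +z, looking at the object.
camera_pose = np.eye(4)
camera_pose[2, 3] = 2.0
scene.add(pyrender.PerspectiveCamera(yfov=np.pi / 3.0), pose=camera_pose)

renderer = pyrender.OffscreenRenderer(512, 512)
_, depth = renderer.render(scene)  # we only need the depth buffer

# Depth ControlNets expect near = bright, so normalize over the
# foreground and invert; background (no geometry) stays black.
fg = depth > 0
if fg.any():
    near, far = depth[fg].min(), depth[fg].max()
    norm = (depth - near) / max(far - near, 1e-6)
    disparity = np.where(fg, 1.0 - np.clip(norm, 0.0, 1.0), 0.0)
else:
    disparity = np.zeros_like(depth)
Image.fromarray((disparity * 255).astype(np.uint8)).save("glass_scene_depth.png")
```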

While image AI systems like Midjourney currently offer limited control, relying solely on text prompts, research projects like DragGAN have demonstrated that fine-grained control over generated images is technically feasible. Meta’s Make-A-Scene and GLIGEN go further, using 2D sketches or bounding-box layouts as templates for the AI to fill in, giving users more control than traditional text-only systems.
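GLIGEN-style grounding is publicly usable today, for instance through the diffusers library, where a text phrase is pinned to a bounding box so the subject appears in a chosen region of the canvas. A minimal sketch follows; the checkpoint name comes from the diffusers documentation, and the box coordinates are just an example.

```python
# GLIGEN-style grounded generation: a phrase is bound to a bounding
# box, so the subject appears where the box is placed.
import torch
from diffusers import StableDiffusionGLIGENPipeline

pipe = StableDiffusionGLIGENPipeline.from_pretrained(
    "masterful/gligen-1-4-generation-text-box", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a whiskey glass on a wooden table",
    gligen_phrases=["a whiskey glass"],
    # Normalized [x0, y0, x1, y1]: lower-left third of the canvas.
    gligen_boxes=[[0.1, 0.5, 0.4, 0.9]],
    gligen_scheduled_sampling_beta=1.0,
    num_inference_steps=50,
).images[0]
image.save("gligen_whiskey.png")
```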

Adobe’s goal of making AI generation more controllable is not entirely new, as these research projects have explored similar concepts. However, by integrating this functionality into widely used tools like Photoshop, Adobe is taking a significant step towards bringing the technology out of the lab and into the hands of working designers.

Vishak
Vishak is Editor-in-chief at Code and Hack, with a passion for AI and coding. He has a deep understanding of the latest trends and advancements in both fields, and he creates engaging, informative content on topics such as machine learning, natural language processing, and programming. He stays up to date with the latest news and breakthroughs and delivers insightful articles that keep his readers informed and engaged.
