Gen-2: Runway Unveils Second Generation Text-to-Video AI Tool for Short Clips

AI-generated art, which produces visuals from word prompts, has surged in popularity over the past year. Users supply a text prompt describing a scene, and the tool builds an image from the visual concepts and creative styles it learned during training. Text-to-video generation is widely viewed as the next stage in this field.
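
To make the text-to-image workflow concrete, here is a minimal sketch using the open-source Stable Diffusion model through the Hugging Face diffusers library. The checkpoint name and parameters are illustrative assumptions; commercial tools such as Runway expose the same idea through a web interface rather than code.

```python
# Minimal text-to-image sketch using the open-source Stable Diffusion model
# via Hugging Face diffusers. Checkpoint name and parameters are illustrative.
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained Stable Diffusion checkpoint (downloaded from the
# Hugging Face Hub on first use).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# The text prompt describes the scene; the model renders an image from the
# visual concepts and styles it learned during training.
prompt = "a lighthouse on a rocky cliff at sunset, oil painting style"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("lighthouse.png")
```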

Although Google, Meta, and OpenAI have revealed work in this area, it is still hard to call these models realistic. Many of them produce animations by generating and stitching together similar scenes; others build a 3D representation of an image so the camera can zoom in and out of it.

Runway AI, the startup that co-created Stable Diffusion and offers AI-powered video tools such as inpainting, AI-powered bokeh, transcripts, and subtitles, takes a different approach. Its first-generation text-to-video capabilities let users capture or mock up a real-world scene and overlay a text-generated style onto it. In practice, though, the result was often processed as an image, yielding a single frame rather than a full video.

Runway's second-generation text-to-video technology, Gen-2, goes a step further: users can apply the composition and style of existing images or text prompts to the structure of a source video to create a new one, or generate short video clips from a text prompt alone. The output clips, however, are often short, blurry, and suffer from low frame rates.
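
Runway has not published a code-level API for Gen-2 here, so as a rough illustration of the general text-to-video workflow the sketch below uses the open-source ModelScope text-to-video model through the same diffusers library. The model name, frame count, and helper functions are assumptions about that library, not Runway's interface.

```python
# Rough text-to-video sketch using the open-source ModelScope model via
# Hugging Face diffusers. This is NOT Runway's Gen-2 API; model name and
# parameters are assumptions for illustration only.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# A text prompt alone produces a short, low-resolution clip, mirroring the
# short, low-frame-rate outputs described above.
prompt = "a sailboat crossing a stormy sea, cinematic lighting"
frames = pipe(prompt, num_inference_steps=25, num_frames=16).frames[0]
export_to_video(frames, output_video_path="sailboat.mp4")
```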

Runway plans to make Gen-2 available first through early access and later to the broader public. The more input you give the model, the better your chances of a decent result; the smoothest, highest-quality videos come from applying generated "overlays" to existing objects or scenes.

Runway already offers a $12/month "basic" plan with unlimited video projects. Some features, however, such as training a custom generator on your own photos or a pet, carry an additional $10 fee.

Finally, Runway's new text-to-video approach represents a significant advance in AI-generated art. While there is still room for improvement, it opens up interesting possibilities for creating new videos from text prompts.

Alan Mathai
Alan Mathai is a passionate Flutter developer with a knack for exploring the intricacies of coding tools and open-source technologies. With a deep understanding of the Linux ecosystem, Alan delves into the world of programming, sharing his expertise and insights through his engaging and informative articles. His enthusiasm for all things tech and his commitment to staying up-to-date with the latest developments make him a valuable resource for fellow developers and technology enthusiasts.
