Google Bard has received an update that lets users incorporate images into their interactions with the chatbot. Users can now request visual information with prompts like “Show me an egg” or more specific queries like “Show me an ostrich egg,” and Bard will display images corresponding to the request.
Currently, image support is available only for English requests, while Bard itself supports English, Japanese, and Korean. If a displayed image catches the user’s interest, its source is provided, so users can follow the attached link for more details.
Google implemented image support in Bard to enhance the communication of ideas, as visuals are often effective in conveying information. The company recognizes that images can help flesh out concepts, create persuasive proposals, and provide better answers when visual information is required.
Bard has been receiving regular updates since the Google I/O event earlier this month. It has been upgraded to the PaLM 2 language model and has gained features such as export options, a dark mode, and improved summary and source information.
Although the latest update focuses on displaying relevant images from search results, Google has hinted at what comes next: the current version doesn’t support image input or the generation of user-specific images, but Google plans to introduce these capabilities in the coming months.
The integration of image support in Bard demonstrates Google’s commitment to continually enhancing its AI language models and expanding their capabilities. By leveraging the power of visuals, Bard aims to provide users with a more immersive and informative chatbot experience. As Google continues to refine Bard and introduce new features, users can look forward to even more dynamic interactions with this AI-powered language model.