Google Bard, originally tested only in the United States, is now available in 180 countries with no waitlist registration required. Bard also now supports Japanese and Korean, with support for 40 more languages planned soon.
PaLM 2 is a large-scale language model that, according to Google, enables “advanced math and reasoning skills.” It replaces the lightweight version of LaMDA, Google’s earlier AI model, which previously powered Bard and was criticized for poor performance compared with rivals such as ChatGPT and Bing Chat.
The integration of Google Lens into Bard will also let users include photos and images in prompts. For example, a user can upload a picture of two dogs and ask Bard to “write a funny caption about these two.” Bard can analyze the photo, detect the dog breeds, and generate a fitting caption.
Another new feature is the inclusion of citations when Bard provides code. Users can click an annotation, and Bard will underline the cited parts of the response and link to the source. Bard also now offers a code export button and a dark theme.
Google also plans to add extensions to Bard that will allow the chatbot to access external resources, partnering with other service providers such as Kayak, OpenTable, ZipRecruiter, Instacart, Wolfram, and Khan Academy. This will greatly expand the chatbot’s capabilities.
Over time, Google will integrate functionality from existing applications such as Docs, Drive, Gmail, and Maps into the Bard user experience. Users will be able to export Bard’s responses directly to Gmail or Docs, saving copy-paste time and giving them a head start on drafts of emails and documents. During the I/O presentation, Google also demonstrated an extension for Adobe Firefly, an AI image synthesis model, generating a unicorn image for a birthday party within Bard.