Google announced the launch of PaLM 2, its latest large language model (LLM). PaLM 2 will be available as part of the Google Bard chatbot and will also serve as the base model for most of the new AI features across Google's products. It is now available to developers through the Google PaLM API, Firebase, and Colab.
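For developers, access starts with the PaLM API. The snippet below is a minimal sketch of a text-generation call using the google.generativeai Python client as documented for the PaLM API at the time; the API key placeholder, prompt, and generation parameters are illustrative assumptions, not taken from Google's announcement.

```python
import google.generativeai as palm

# Illustrative only: replace with a real PaLM API key obtained from Google.
palm.configure(api_key="YOUR_API_KEY")

# text-bison-001 is the PaLM 2 text model exposed through the public PaLM API.
completion = palm.generate_text(
    model="models/text-bison-001",
    prompt="Summarize what PaLM 2 is in two sentences.",
    temperature=0.2,        # low temperature for a focused, deterministic answer
    max_output_tokens=128,
)

print(completion.result)
```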
PaLM 2 is the successor to PaLM, which was introduced in 2022. The model comes in several sizes under different names: Gecko, Otter, Bison, and Unicorn. Notably, Gecko is compact enough to run on a smartphone.
The creation of PaLM 2 was a collaborative effort between the Google Brain and DeepMind teams. The two teams have since merged under the name Google DeepMind and are currently working on Gemini, their next-generation foundation model.
Like OpenAI, Google has shared little information about how PaLM 2 was trained and has not even disclosed the number of parameters. Google representatives have said, however, that training used the latest version of Google's JAX infrastructure and TPU v4 hardware.
According to Google, the new model is particularly strong at reasoning, mathematics, and logic. It was trained on a vast amount of mathematical and scientific text, including material containing mathematical formulas, and Google claims PaLM 2 can solve math problems and even create diagrams.
Google currently uses its new LLM to power 25 of its products, including Bard. PaLM 2 also supports more than 100 languages and has improved support for writing and debugging code. The model was trained on 20 programming languages, including popular ones such as JavaScript and Python as well as more specialized ones like Prolog, Verilog, and Fortran.
Google has used PaLM 2 as the foundation for Codey, its AI model designed for code writing and debugging. Codey was released as part of Google’s code completion and code generation services at the Google I/O event.
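To illustrate how Codey-style code generation is exposed to developers, here is a minimal sketch using the Vertex AI Python SDK's CodeGenerationModel with the code-bison model; the project settings, model version, prompt, and parameters are assumptions for the example rather than details from Google's announcement.

```python
import vertexai
from vertexai.language_models import CodeGenerationModel

# Illustrative project settings; replace with your own GCP project and region.
vertexai.init(project="my-project", location="us-central1")

# code-bison is the Codey model for code generation on Vertex AI.
model = CodeGenerationModel.from_pretrained("code-bison@001")

response = model.predict(
    prefix="Write a Python function that checks whether a string is a palindrome.",
    temperature=0.2,
    max_output_tokens=256,
)

print(response.text)
```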
Google refers to PaLM as a family of models. Among them are Med-PaLM 2, a model focused on medical knowledge; Sec-PaLM, a security-focused version; and a scaled-down version of PaLM 2 that can run on smartphones, potentially opening PaLM up to privacy-focused use cases where the AI runs on the user's device.
PaLM 2 is available in preview and promises significant developments in the generative AI race over the coming months. Google also presented several examples of interesting projects, both commercial and experimental, that are already being built with the PaLM API.