NVIDIA has announced the release of NeMo Guardrails, an open-source toolkit designed to keep large language models (LLMs) safe and secure. NeMo Guardrails is a significant addition to NVIDIA's AI Foundations service, which allows companies to train LLMs on their own data.
NeMo Guardrails gives software engineers three kinds of guardrails to impose on internal LLMs. Topical guardrails keep apps from answering questions outside their intended scope, such as a customer-service assistant fielding weather-related inquiries. Safety guardrails help an app provide accurate, reliable information by filtering out unwanted language and ensuring responses reference only trusted sources. Finally, security guardrails restrict apps to connecting only with third-party applications known to be safe.
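Guardrails are written in Colang, the toolkit's own modeling language. As a rough, hypothetical sketch (the example utterances, bot message, and flow name below are illustrative, not taken from NVIDIA's documentation), a topical guardrail that deflects weather questions might look like this:

```
define user ask about weather
  "What's the weather like today?"
  "Will it rain this weekend?"

define bot deflect off topic
  "I can only help with questions about our products and services."

define flow weather deflection
  user ask about weather
  bot deflect off topic
```

Roughly speaking, the runtime matches an incoming message against the example utterances and, on a match, steers the conversation through the defined flow instead of passing the question to the model unchecked.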
NVIDIA claims that the toolkit is compatible with all LLMs, including the popular ChatGPT, and can be used by almost any software developer, regardless of their expertise in machine learning or data science. Furthermore, since it is open source, it works alongside the tools and software that enterprise developers already use.
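In practice, wiring the toolkit into an application takes only a few lines of Python. A minimal sketch, assuming a guardrails configuration directory like the one above (the "./config" path is an assumption for illustration):

```python
from nemoguardrails import LLMRails, RailsConfig

# Load the guardrails configuration: Colang flows plus a config.yml
# naming the underlying LLM (the directory path here is hypothetical).
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# Messages use the familiar chat format; the guardrails runtime
# checks the exchange before and after the model is invoked.
response = rails.generate(messages=[
    {"role": "user", "content": "What's the weather like today?"}
])
print(response["content"])
```

Because the model itself is named in the configuration rather than in code, the same application code can sit in front of different LLMs, which is what makes the "works with any LLM" claim practical.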
NeMo Guardrails is part of the existing NeMo framework for building generative AI models, which business customers can access through the NVIDIA AI Enterprise software platform or the AI Foundations service.
The release of NeMo Guardrails comes after some of the most popular generative AI chatbots, such as Microsoft's Bing Chat and Google's Bard, drew negative press for producing incorrect or "hallucinated" information. For instance, Bard confidently gave a wrong answer about the James Webb Space Telescope in its first public demo, an error that significantly dented Google's reputation.
The introduction of the new LLM safety toolkit demonstrates NVIDIA's commitment to providing safe and secure AI solutions to its customers. As AI becomes increasingly prevalent across industries, companies must prioritize the safety and reliability of their AI models. NeMo Guardrails is a significant step toward that goal, allowing companies to build and deploy AI models with greater confidence and security.