Google on Thursday introduced a new tool to share its best practices for deploying artificial intelligence (AI) models. Last year, the Mountain View-based tech giant announced the Secure AI Framework (SAIF), a guideline not only for the company but also for other enterprises building large language models (LLMs). Now, the tech giant has introduced the SAIF tool that can generate a checklist with actionable insights to improve the security of AI models. Specifically, this tool is a questionnaire-based tool, where developers and enterprises must answer a series of questions before receiving a checklist.
In a blog post, the Mountain View-based tech giant highlighted that it has launched a new tool that will help others in the AI industry learn from Google’s best practices in deploying AI models. Large language models are capable of a wide range of harmful effects, from generating inappropriate and indecipherable text, deepfakes and misinformation, to generating harmful information, including chemical, biological, radiological and nuclear (CBRN) weapons.
Even if an AI model is sufficiently secure, there is still a risk that bad actors could jailbreak the AI model so that it responds to commands for which it was not designed. With such high risks, developers and AI firms must take adequate precautions to ensure that models are safe enough for users as well as safe. Questions include topics such as training, tuning, and evaluating models, access control to models and data sets, preventing attacks and harmful inputs, and generative AI-powered agents, and more.
Google’s SAIF tool provides a questionnaire-based format, which can be accessed here. Developers and enterprises need to answer questions like, “Are you able to detect, remove, and troubleshoot malicious or accidental changes to your training, tuning, or evaluation data?”. After completing the questionnaire, users will get a customized checklist that they need to follow to fill the gaps in securing the AI models.
The tool is capable of dealing with risks such as data poisoning, early injection, model source tampering, and others. Each of these risks is identified in the questionnaire and the tool provides a specific solution to the problem.
Along with this, Google also announced the addition of 35 industry partners to its Coalition for Secure AI (CoSAI). The group will jointly create AI security solutions across three focus areas – software supply chain security for AI systems, preparing defenders for the changing cybersecurity landscape and AI risk governance.