Last year, we introduced our Secure AI Framework (SAIF) to help others safely and responsibly deploy AI models. It not only shares our best practices, but also provides a framework for the industry, frontline developers and security professionals to ensure that when AI models are implemented, they are secure by design. To drive the adoption of critical AI security measures, we used SAIF principles to help form the Coalition for Secure AI (CoSAI) with industry partners. Today, we're sharing a new tool that can help others assess their security posture, apply these best practices and put SAIF principles into action.
The SAIF Risk Assessment, available to use today on our new website SAIF.Google, is a questionnaire-based tool that generates an instant, tailored checklist to guide practitioners in securing their AI systems. We believe this easily accessible tool fills a critical gap in moving the AI ecosystem toward a more secure future.
New SAIF Risk Assessment
The SAIF Risk Assessment helps turn SAIF from a conceptual framework into an actionable checklist for practitioners responsible for securing their AI systems. Practitioners can find the tool in the menu bar of the new SAIF.Google homepage.