To make sure its AI systems are safe and ethical, Microsoft has created a framework, the "Responsible AI Standards", built on the following six core principles:
Principle 1: Fairness
All AI systems should treat all individuals and groups fairly and must not discriminate on the basis of characteristics such as race, gender, age, or disability.
To make sure this happens: All models undergo fairness assessments during development, and teams use tools like Fairlearn to identify and mitigate bias.
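As a loose illustration of what such a fairness assessment checks, the sketch below computes a demographic parity difference (the gap in selection rates between demographic groups) in plain Python; Fairlearn provides metrics of this kind out of the box. The predictions and group labels here are invented for illustration.

```python
# Minimal sketch of a disparity check of the kind Fairlearn automates:
# compare a model's selection rate (fraction of positive predictions)
# across demographic groups. Data below is made up.

def selection_rate(preds):
    """Fraction of positive (1) predictions."""
    return sum(preds) / len(preds)

def demographic_parity_difference(preds, groups):
    """Largest gap in selection rate between any two groups.
    0.0 means every group is selected at the same rate."""
    by_group = {}
    for p, g in zip(preds, groups):
        by_group.setdefault(g, []).append(p)
    rates = [selection_rate(ps) for ps in by_group.values()]
    return max(rates) - min(rates)

# Toy predictions (1 = approved) with a sensitive attribute per row.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"demographic parity difference: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

A gap near 0 suggests the model selects each group at a similar rate; a large gap is a signal to investigate and apply mitigation.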
Principle 2: Reliability and Safety
All AI technologies must function safely under various conditions—even unforeseen circumstances—without harming users.
To make sure this happens: Microsoft incorporates safety considerations from the design phase, rigorously tests systems across a range of scenarios during development, and monitors performance after deployment so issues are detected and resolved as quickly as possible.
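One simple form such post-deployment monitoring can take is a windowed error-rate alarm. The sketch below is a hypothetical illustration, not Microsoft's actual tooling; the window size, threshold, and failure stream are invented.

```python
from collections import deque

# Hypothetical sketch of post-deployment monitoring: raise an alert
# when the failure rate over a rolling window crosses a threshold.

class ErrorRateMonitor:
    def __init__(self, window=100, threshold=0.05):
        self.outcomes = deque(maxlen=window)  # True = request failed
        self.threshold = threshold

    def record(self, failed):
        """Record one request outcome; old outcomes fall out of the window."""
        self.outcomes.append(failed)

    def alert(self):
        """True once the windowed failure rate exceeds the threshold."""
        if not self.outcomes:
            return False
        return sum(self.outcomes) / len(self.outcomes) > self.threshold

monitor = ErrorRateMonitor(window=10, threshold=0.2)
for failed in [False] * 7 + [True] * 3:  # 30% failures in the window
    monitor.record(failed)
print(monitor.alert())  # True: 0.30 > 0.20
```

In practice the alert would feed an on-call or rollback process so problems are resolved quickly rather than merely logged.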
Principle 3: Privacy and Security
All AI systems should protect user data, ensuring it is handled in compliance with strict data privacy and security standards and safeguarded against unauthorized access and breaches.
To make sure this happens: All systems follow Microsoft's Privacy Principles, which align with global regulations such as GDPR, and security measures such as encryption, secure data storage, and access controls are implemented, monitored, and maintained.
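To make one of those measures concrete, here is a minimal sketch of a role-based access-control check, the kind of gate that keeps sensitive fields away from unauthorized callers. The roles, permissions, and users are invented for illustration and are not Microsoft's actual model.

```python
# Hypothetical role-based access control: a permission is granted only
# if the caller's role explicitly holds it (deny by default).

ROLE_PERMISSIONS = {
    "data_scientist": {"read:features"},
    "admin": {"read:features", "read:raw_pii", "delete:records"},
}

def is_allowed(role, permission):
    """Grant access only if the role explicitly holds the permission;
    unknown roles get no permissions at all."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("data_scientist", "read:raw_pii"))  # False: PII stays guarded
print(is_allowed("admin", "read:raw_pii"))           # True
```

Deny-by-default is the important design choice: a role missing from the table, or a permission missing from a role, fails closed rather than open.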
Principle 4: Inclusiveness
AI systems should engage people and communities from diverse backgrounds and benefit a wide range of users, including those with unique needs or disabilities, fostering innovation that benefits everyone.
To make sure this happens: Microsoft follows inclusive design practices during the design phase and conducts user research across diverse populations to understand needs, challenges, and potential barriers.
Principle 5: Transparency
So that users understand how its AI systems operate and how their data is used, Microsoft aims to provide clear, understandable explanations of the features, functionality, and decision-making processes behind its AI systems.
To make sure this happens: Microsoft has built explainability tools, such as InterpretML, to help users and developers understand AI models, and clear disclosures about an AI system's purpose and limitations are provided whenever one is released.
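To give a flavor of what model explanation looks like, the sketch below implements permutation importance, a model-agnostic technique in the same spirit as the blackbox explainers InterpretML offers: shuffle one feature's values and see how much the model's error grows. The toy "model" and data are invented for illustration.

```python
import random

# Permutation importance, sketched in plain Python: if shuffling a
# feature makes the model much worse, the model relies on that feature.

def model(row):
    """Toy 'black box': a fixed linear scorer over two features."""
    return 3.0 * row[0] + 0.5 * row[1]

def mean_abs_error(rows, targets):
    return sum(abs(model(r) - t) for r, t in zip(rows, targets)) / len(rows)

def permutation_importance(rows, targets, feature_idx, seed=0):
    """Growth in error after shuffling one feature's column.
    Larger growth = the model relies more on that feature."""
    rng = random.Random(seed)
    baseline = mean_abs_error(rows, targets)
    shuffled_col = [r[feature_idx] for r in rows]
    rng.shuffle(shuffled_col)
    permuted = []
    for r, v in zip(rows, shuffled_col):
        new_row = list(r)
        new_row[feature_idx] = v
        permuted.append(new_row)
    return mean_abs_error(permuted, targets) - baseline

rows = [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0], [4.0, 40.0]]
targets = [model(r) for r in rows]  # targets match the model: baseline error 0

for i in range(2):
    print(f"feature {i}: importance {permutation_importance(rows, targets, i):.2f}")
```

Explanations like these let a developer check whether a model's decisions lean on the features they expect, which is a prerequisite for the clear disclosures the principle calls for.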
Principle 6: Accountability
Microsoft expects all developers, individuals, and organizations involved in designing, developing, launching, communicating about, and maintaining AI systems to take responsibility for the overall impact and outcomes of those systems, and to adhere to these six ethical principles throughout the AI lifecycle.
To make sure this happens: Microsoft's Office of Responsible AI (ORA) oversees AI governance and ensures compliance with the Responsible AI Standards, and an escalation process is in place for addressing concerns and making accountability decisions.