Wednesday, January 15, 2025

Innovating in line with the European Union’s AI Act


As our Microsoft AI Tour reached Brussels, Paris, and Berlin toward the end of last year, we met with European organizations that were energized by the possibilities of our latest AI technologies and engaged in deployment projects. They were also alert to the fact that 2025 is the year that key obligations under the European Union’s AI Act come into effect, opening a new chapter in digital regulation as the world’s first comprehensive AI law becomes a reality.

At Microsoft, we are ready to help our customers do two things at once: innovate with AI and comply with the EU AI Act. We are building our products and services to comply with our obligations under the EU AI Act and working with our customers to help them deploy and use the technology compliantly. We are also engaged with European policymakers to support the development of efficient and effective implementation practices under the EU AI Act that are aligned with emerging international norms.  

Below, we go into more detail on these efforts. Since the dates for compliance with the EU AI Act are staggered and key implementation details are not yet finalized, we will be publishing information and tools on an ongoing basis. You can consult our EU AI Act documentation on the Microsoft Trust Center to stay up to date. 

Building Microsoft products and services that comply with the EU AI Act 

Organizations around the world use Microsoft products and services for innovative AI solutions that empower them to achieve more. For these customers, particularly those operating globally and across different jurisdictions, regulatory compliance is of paramount importance. This is why, in every customer agreement, Microsoft has committed to comply with all laws and regulations applicable to Microsoft. This includes the EU AI Act. It is also why we made early decisions to build and continue to invest in our AI governance program. 

As outlined in our inaugural Transparency Report, we have adopted a risk management approach that spans the entire AI development lifecycle. We use practices like impact assessments and red-teaming to help us identify potential risks and ensure that teams building the highest-risk models and systems receive additional oversight and support through governance processes, like our Sensitive Uses program. After mapping risks, we use systematic measurement to evaluate the prevalence and severity of risks against defined metrics. We manage risks by implementing mitigations like the classifiers that form part of Azure AI Content Safety and ensuring ongoing monitoring and incident response.  
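As one concrete example of a content-based mitigation in code, the sketch below calls the Azure AI Content Safety text-analysis API using the azure-ai-contentsafety Python SDK. The endpoint, key, and the blocking threshold are placeholder assumptions; a production deployment would tune thresholds per harm category and per use case.

```python
# Minimal sketch: screening text with Azure AI Content Safety
# (pip install azure-ai-contentsafety). The endpoint, key, and the
# severity threshold below are placeholders, not recommendations.
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

def is_blocked(text: str, max_severity: int = 2) -> bool:
    """Return True if any harm category scores above the chosen severity."""
    result = client.analyze_text(AnalyzeTextOptions(text=text))
    return any(
        item.severity is not None and item.severity > max_severity
        for item in result.categories_analysis
    )

if __name__ == "__main__":
    print(is_blocked("Example user input to screen before display."))
```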

Our framework for guiding engineering teams building Microsoft AI solutions—the Responsible AI Standard—was drafted with an early version of the EU AI Act in mind.  

Building on these foundational components of our program, we have devoted significant resources to implementing the EU AI Act across Microsoft. Cross-functional working groups combining AI governance, engineering, legal, and public policy experts have been working for months to identify whether and how our internal standards and practices should be updated to reflect the final text of the EU AI Act as well as early indications of implementation details. They have also been identifying any additional engineering work needed to ensure readiness.  

For example, the EU AI Act’s prohibited practices provisions are among the first to come into effect, in February 2025. Ahead of additional guidance from the European Commission’s newly established AI Office, we have taken a proactive, layered approach to compliance. This includes: 

  • Conducting a thorough review of Microsoft-owned systems already on the market to identify any places where we might need to adjust our approach, including by updating documentation or implementing technical mitigations. To do this, we developed a series of questions designed to elicit whether an AI system could implicate a prohibited practice and dispatched this survey to our engineering teams via our central tooling. Relevant experts reviewed the responses and followed up with teams directly where further clarity or additional steps were necessary. These screening questions remain in our central responsible AI workflow tool on an ongoing basis, so that teams working on new AI systems answer them and engage the review workflow as needed. (A sketch of this screening pattern follows this list.)
  • Creating new restricted uses in our internal company policy to ensure Microsoft does not design or deploy AI systems for uses prohibited by the EU AI Act. We are also developing specific marketing and sales guidance to ensure that our general-purpose AI technologies are not marketed or sold for uses that could implicate the EU AI Act’s prohibited practices.
  • Updating our contracts, including our Generative AI Code of Conduct, so that our customers clearly understand they cannot engage in any prohibited practices. For example, the Generative AI Code of Conduct now has an express prohibition on the use of the services for social scoring.
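By way of illustration only, the following Python sketch shows the general pattern such a screening survey could follow: a fixed set of yes/no questions answered per AI system, with any affirmative answer routed to expert review. The question wording, names, and workflow here are hypothetical and are not Microsoft’s internal tooling.

```python
from dataclasses import dataclass

# Hypothetical screening questions loosely modeled on the EU AI Act's
# prohibited practices (Article 5); wording is illustrative only.
SCREENING_QUESTIONS = {
    "q_social_scoring": "Could the system be used to score people based on "
                        "social behavior or personal characteristics?",
    "q_manipulation": "Could the system deploy subliminal or manipulative "
                      "techniques that materially distort behavior?",
    "q_emotion_inference": "Does the system infer emotions in workplace or "
                           "education settings?",
}

@dataclass
class ScreeningAnswer:
    question_id: str
    answer: bool          # True = the practice may be implicated
    notes: str = ""

def answers_needing_review(answers: list[ScreeningAnswer]) -> list[ScreeningAnswer]:
    """Route any affirmative answer to expert follow-up."""
    return [a for a in answers if a.answer]

if __name__ == "__main__":
    # A team files one answer per question for its AI system.
    responses = [
        ScreeningAnswer("q_social_scoring", False),
        ScreeningAnswer("q_manipulation", False),
        ScreeningAnswer("q_emotion_inference", True, "Meeting-summary feature"),
    ]
    for flagged in answers_needing_review(responses):
        print(f"Escalate to expert review: {flagged.question_id} ({flagged.notes})")
```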

We were also among the first organizations to sign up to the three core commitments in the AI Pact, a set of voluntary pledges developed by the AI Office to support regulatory readiness ahead of some of the upcoming compliance deadlines for the EU AI Act. In addition to our regular rhythm of publishing annual Responsible AI Transparency Reports, you can find an overview of our approach to the EU AI Act and a more detailed summary of how we are implementing the prohibited practices provisions on the Microsoft Trust Center. 

Working with customers to help them deploy and use Microsoft products and services in compliance with the EU AI Act 

One of the core concepts of the EU AI Act is that obligations need to be allocated across the AI supply chain. This means that an upstream regulated actor, like Microsoft in its capacity as a provider of AI tools, services, and components, must support downstream regulated actors, like our enterprise customers, when they integrate a Microsoft tool into a high-risk AI system. We embrace this concept of shared responsibility and aim to support our customers with their AI development and deployment activities by sharing our knowledge, providing documentation, and offering tooling. This all ladders up to the AI Customer Commitments that we made in June of last year to support our customers on their responsible AI journeys. 

We will continue to publish documentation and resources related to the EU AI Act on the Microsoft Trust Center to provide updates and address customer questions. Our Responsible AI Resources site is also a rich source of tools, practices, templates, and information that we believe will help many of our customers establish the foundations of good governance to support EU AI Act compliance.  

On the documentation front, the 33 Transparency Notes that we have published since 2019 provide essential information about the capabilities and limitations of our AI tools, components, and services that our customers rely on as downstream deployers of Microsoft AI platform services. We have also published documentation for our AI systems, such as answers to frequently asked questions. Our Transparency Note for the Azure OpenAI Service, an AI platform service, and FAQ for Copilot, an AI system, are examples of our approach. 

We expect that several of the secondary regulatory efforts under the EU AI Act will provide additional guidance on model- and system-level documentation. These norms for documentation and transparency are still maturing and would benefit from further definition consistent with efforts like the Reporting Framework for the Hiroshima AI Process International Code of Conduct for Organizations Developing Advanced AI Systems. Microsoft has been pleased to contribute to this Reporting Framework through a process convened by the OECD and looks forward to its forthcoming public release. 

Finally, because tooling is necessary to achieve consistent and efficient compliance, we make available to our customers versions of the tools that we use for our own internal purposes. These tools include Microsoft Purview Compliance Manager, which helps customers understand and take steps to improve compliance capabilities across many regulatory domains, including the EU AI Act; Azure AI Content Safety, to help mitigate content-based harms; Azure AI Foundry, to help with evaluations of generative AI applications; and the Python Risk Identification Tool (PyRIT), an open innovation framework that our independent AI Red Team uses to help identify potential harms associated with our highest-risk AI models and systems. 
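To make the evaluation idea concrete, here is a deliberately generic Python harness for the kind of systematic measurement described above: send a fixed probe set to a system under test, score each response for harm severity, and aggregate prevalence and worst-case severity against a defined threshold. This does not use PyRIT’s or Azure AI Foundry’s actual APIs; every name below is illustrative.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative only: a generic measurement loop, not the PyRIT or
# Azure AI Foundry API. Scorers return a severity from 0 (benign)
# to 7, mirroring common content-safety severity scales.
@dataclass
class ProbeResult:
    prompt: str
    response: str
    severity: int

def run_eval(
    system_under_test: Callable[[str], str],
    scorer: Callable[[str], int],
    probes: list[str],
) -> list[ProbeResult]:
    return [
        ProbeResult(p, r, scorer(r))
        for p in probes
        for r in [system_under_test(p)]
    ]

def summarize(results: list[ProbeResult], threshold: int = 2) -> dict:
    flagged = [r for r in results if r.severity > threshold]
    return {
        "prevalence": len(flagged) / len(results),  # share of probes over threshold
        "max_severity": max(r.severity for r in results),
        "flagged_prompts": [r.prompt for r in flagged],
    }

if __name__ == "__main__":
    # Stand-ins for a real model endpoint and harm classifier.
    fake_model = lambda prompt: f"echo: {prompt}"
    fake_scorer = lambda response: 3 if "risky" in response else 0
    print(summarize(run_eval(fake_model, fake_scorer, ["hello", "risky ask"])))
```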

Helping to develop efficient, effective, and interoperable implementation practices 

A unique feature of the EU AI Act is that there are more than 60 secondary regulatory efforts that will have a material impact on defining implementation expectations and directing organizational compliance. Since many of these efforts are in progress or yet to get underway, we are in a key window of opportunity to help establish implementation practices that are efficient, effective, and aligned with emerging international norms. 

Microsoft is engaged with the central EU regulator, the AI Office, and other relevant authorities in EU Member States to share insights from our AI development, governance, and compliance experience, seek clarity on open questions, and advocate for practical outcomes. We are also participating in the development of the Code of Practice for general-purpose AI model providers, and we remain longstanding contributors to the technical standards being developed by European Standards organizations, such as CEN and CENELEC, to address high-risk AI system requirements in the EU AI Act. 

Our customers also have a key role to play in these implementation efforts. By engaging with policymakers and industry groups to understand the evolving requirements and have a say on them, our customers have the opportunity to contribute their valuable insights and help shape implementation practices that better reflect their circumstances and needs, recognizing the broad range of organizations in Europe that are energized by the opportunity to innovate and grow with AI. In the coming months, a key question to be resolved is when organizations that substantially fine-tune AI models become downstream providers that must comply with the general-purpose AI model obligations taking effect in August.

Going forward 

Microsoft will continue to make significant product, tooling, and governance investments to help our customers innovate with AI in line with new laws like the EU AI Act. Implementation practices that are efficient, effective, and interoperable internationally are going to be key to supporting useful and trustworthy innovation on a global scale, so we will continue to lean into regulatory processes in Europe and around the world. We are excited to see the projects that animated our Microsoft AI Tour events in Brussels, Paris, and Berlin improve people’s lives and earn their trust, and we welcome feedback on how we can continue to support our customers in their efforts to comply with new laws like the EU AI Act. 

