
Governance Risk & Compliance: Essential Strategies



Governance, risk and compliance key to reaping AI rewards

The AI revolution is underway, and enterprises are keen to explore how the latest AI advancements can benefit them, especially the high-profile capabilities of GenAI. With multitudes of real-life applications — from increasing efficiency and productivity to creating superior customer experiences and fostering innovation — AI promises to have a huge impact across industries in the business world.

While organizations understandably don’t want to get left behind in reaping the rewards of AI, there are risks involved. These range from privacy considerations to IP protection, reliability and accuracy, cybersecurity, transparency, accountability, ethics, bias and fairness, and workforce concerns.

Enterprises need to approach AI deliberately, with a clear awareness of the dangers and a thoughtful plan on how to safely make the most of AI capabilities. AI is also increasingly subject to government regulations and restrictions and legal action in the United States and worldwide.

AI governance, risk and compliance programs are crucial for staying ahead of the rapidly evolving AI landscape. AI governance consists of the structures, policies and procedures that oversee the development and use of AI within an organization.

Just as leading companies are embracing AI, they’re also embracing AI governance, with direct involvement at the highest leadership levels. Organizations that achieve the highest AI returns have comprehensive AI governance frameworks, according to McKinsey, and Forrester reports that one in four tech executives will be reporting to their board on AI governance.

There’s good reason for this. Effective AI governance ensures that companies can realize the potential of AI while using it safely, responsibly and ethically, in compliance with legal and regulatory requirements. A strong governance framework helps organizations reduce risks, ensure transparency and accountability and build trust internally, with customers and the public.

AI governance, risk and compliance best practices

To build protections against AI risks, companies must deliberately develop a comprehensive AI governance, risk and compliance plan before they implement AI. Here’s how to get started.

Create an AI strategy
An AI strategy outlines the organization’s overall AI objectives, expectations and business case. It should include potential risks and rewards as well as the company’s ethical stance on AI. This strategy should act as a guiding star for the organization’s AI systems and initiatives.

Build an AI governance structure
Creating an AI governance structure starts with appointing the people who will make decisions about AI governance. Often, this takes the form of an AI governance committee, group or board, ideally made up of high-level leaders and AI experts as well as members representing various business units, such as IT, human resources and legal departments. This committee is responsible for creating AI governance processes and policies as well as assigning responsibilities for various facets of AI implementation and governance.

Once the structure is there to support AI implementation, the committee is responsible for making any needed changes to the company’s AI governance framework, assessing new AI proposals, monitoring the impact and outcomes of AI and ensuring that AI systems comply with ethical, legal and regulatory standards and support the company’s AI strategy.

In developing AI governance, organizations can draw guidance from voluntary frameworks such as the U.S. NIST AI Risk Management Framework, the UK AI Safety Institute’s open-source Inspect platform for AI safety testing, the European Commission’s Ethics Guidelines for Trustworthy AI and the OECD AI Principles.

Key policies for AI governance, risk and compliance

Once an organization has thoroughly assessed governance risks, AI leaders can begin to set policies to mitigate them. These policies create clear rules and processes to follow for anyone working with AI within the organization. They should be detailed enough to cover as many scenarios as possible to start — but will need to evolve along with AI advancements. Key policy areas include:

Privacy
In our digital world, personal privacy risks are already paramount, but AI ups the stakes. With the huge amount of personal data used by AI, security breaches could pose an even greater threat than they do now, and AI could potentially have the power to gather personal information — even without individual consent — and expose it or use it to do harm. For example, AI could create detailed profiles of individuals by aggregating personal information or use personal data to aid in surveillance.

Privacy policies ensure that AI systems handle data responsibly and securely, especially sensitive personal data. In this arena, policies could include such safeguards as:

  • Collecting and using the minimum amount of data required for a specific purpose
  • Anonymizing personal data
  • Making sure users give their informed consent for data collection
  • Implementing advanced security systems to protect against breaches
  • Continually monitoring data
  • Understanding privacy laws and regulations and ensuring adherence
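
Two of the safeguards above, data minimization and handling of identifiers, can be sketched in a few lines. This is a minimal illustration, not a complete privacy solution; the record fields, salt and allowed-field list are assumptions for the example, and salted hashing is pseudonymization rather than true anonymization:

```python
import hashlib

# Hypothetical customer record; field names are illustrative.
record = {
    "email": "jane@example.com",
    "age": 34,
    "purchase_total": 120.50,
    "ssn": "123-45-6789",
}

# Data minimization: only the fields needed for the declared purpose.
ALLOWED_FIELDS = {"email", "age", "purchase_total"}

def pseudonymize(value, salt="per-dataset-salt"):
    """Replace a direct identifier with a salted hash.
    Note: pseudonymization, not full anonymization -- some
    re-identification risk remains."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

def minimize(rec):
    """Drop fields outside the declared purpose and pseudonymize
    the remaining direct identifier."""
    out = {k: v for k, v in rec.items() if k in ALLOWED_FIELDS}
    out["email"] = pseudonymize(out["email"])
    return out

clean = minimize(record)
assert "ssn" not in clean  # unneeded sensitive field never leaves intake
```

In practice, which fields count as "needed" should be decided per use case and documented, so the policy is auditable rather than implicit in code.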

IP protection
Protection of IP and proprietary company data is a major concern for enterprises adopting AI. Cyberattacks represent one type of threat to valuable organizational data. But commercial AI solutions also create concerns. When companies input their data into public LLM-based services such as ChatGPT, that data can be exposed, allowing other entities to derive value from it.

One solution is for enterprises to ban the use of third-party GenAI platforms, a step that companies such as Samsung, JP Morgan Chase, Amazon and Verizon have taken. However, this limits enterprises’ ability to take advantage of the benefits of large commercial models. And only an elite few companies have the resources to create their own large-scale models.

However, smaller models, customized with a company’s own data, can provide an answer. While these may not draw on the breadth of data behind commercial LLMs, they can offer high-quality, tailored results without the irrelevant and potentially false information found in larger models.

Transparency and explainability
AI algorithms and models can be complex and opaque, making it difficult to determine how their results are produced. This can affect trust and creates challenges in taking proactive measures against risk.

Organizations can institute policies to increase transparency, such as:

  • Following frameworks that build accountability into AI from the start
  • Requiring audit trails and logs of an AI system’s behaviors and decisions
  • Keeping records of the decisions made by humans at every stage, from design to deployment
  • Adopting explainable AI techniques
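
The audit-trail requirement above can be made concrete with a small sketch. The schema here is an assumption for illustration, not a specific product’s API; the key idea is that every model decision becomes a structured, timestamped record that can be reviewed and reproduced later:

```python
import json
import time
import uuid

# In-memory log for the example; production systems would write to
# append-only, access-controlled storage.
audit_log = []

def log_decision(model_id, model_version, inputs, output, reviewer=None):
    """Append one structured record per model decision, including the
    model version (for reproducibility) and any human sign-off."""
    entry = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "human_reviewer": reviewer,  # records human-in-the-loop review
    }
    audit_log.append(entry)
    return entry

# Hypothetical usage: a credit-scoring decision with a human reviewer.
entry = log_decision(
    "credit_scorer", "1.4.2",
    {"income": 52000, "tenure_months": 18},
    {"decision": "approve", "score": 0.81},
    reviewer="analyst_042",
)
print(json.dumps(entry, indent=2))
```

Recording the model version alongside each decision is what makes later audits meaningful: reviewers can ask which model produced a given outcome, not just what the outcome was.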

Being able to reproduce the results of machine learning models also allows for auditing and review, building trust in model performance and compliance. Algorithm selection matters as well: simpler, more interpretable algorithms make it easier to explain how a system arrives at its results and to assess its impact.

Reliability
AI is only as good as the data it’s given and the people training it. Inaccurate information is unavoidable for LLMs trained on vast amounts of online data. GenAI platforms such as ChatGPT are notorious for sometimes producing inaccurate results, ranging from minor factual errors to hallucinations that are completely fabricated. Policies and programs that can increase reliability and accuracy include:

  • Strong quality assurance processes for data
  • Educating users on how to identify and defend against false information
  • Rigorous model testing, evaluation and continuous improvement

Companies can also increase reliability by training their own models with high-quality, vetted data rather than using large commercial models.

Using agentic systems is another way to enhance reliability. Agentic AI consists of “agents” that can perform tasks for another entity autonomously. While traditional AI systems rely on inputs and programming, agentic AI models are designed to act more like a human employee, understanding context and instructions, setting goals and independently acting to achieve those goals while adapting as necessary, with minimal human intervention. These models can learn from user behavior and other sources beyond the system’s initial training data and are capable of complex reasoning over enterprise data.

Synthetic data capabilities can assist in increasing agent quality by generating evaluation datasets, the GenAI equivalent of software test suites, in minutes. This significantly accelerates the process of improving AI agent response quality, speeds time to production and reduces development costs.
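
The idea of an evaluation dataset as a test suite can be sketched in a few lines. The template, slot values and grading rule below are illustrative assumptions, not any particular product’s tooling; real systems typically generate cases with an LLM and grade with richer criteria:

```python
import itertools
import random

random.seed(7)  # reproducible sample for the example

# Illustrative question template and slot values.
TEMPLATE = "What is the refund policy for {product} purchased in {region}?"
PRODUCTS = ["laptops", "phones", "accessories"]
REGIONS = ["the US", "the EU", "Canada"]

def build_eval_set(n=5):
    """Expand the template into question/criteria pairs that an
    agent's answers can later be scored against."""
    combos = list(itertools.product(PRODUCTS, REGIONS))
    sample = random.sample(combos, n)
    return [
        {
            "question": TEMPLATE.format(product=p, region=r),
            "must_mention": [p, r],  # grading criterion for the answer
        }
        for p, r in sample
    ]

def score(answer, case):
    """Crude grader: the answer must mention each required term."""
    return all(term in answer for term in case["must_mention"])

eval_set = build_eval_set()
print(len(eval_set), "cases generated")
```

Like a software test suite, such a set is rerun after every change to the agent, so quality regressions surface before deployment rather than in production.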

Bias and fairness
Societal bias making its way into AI systems is another risk. AI systems can perpetuate societal biases, producing unfair results based on factors such as race, gender or ethnicity. This can result in discrimination and is particularly problematic in areas such as hiring, lending and healthcare. Organizations can mitigate these risks and promote fairness with policies and practices such as:

  • Creating fairness metrics
  • Using representative training data sets
  • Forming diverse development teams
  • Ensuring human oversight and review
  • Monitoring results for bias and fairness
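
One widely used fairness metric, the demographic parity gap (the difference in positive-outcome rates between groups), can be computed in a few lines. The records, group labels and tolerance threshold below are illustrative assumptions; real policies choose metrics and thresholds deliberately per use case:

```python
# Hypothetical model decisions labeled with a protected group.
predictions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(group):
    """Share of positive outcomes within one group."""
    rows = [p for p in predictions if p["group"] == group]
    return sum(p["approved"] for p in rows) / len(rows)

# Demographic parity gap: difference in approval rates between groups.
parity_gap = abs(approval_rate("A") - approval_rate("B"))

THRESHOLD = 0.2  # illustrative tolerance, set by policy in practice
print(f"parity gap = {parity_gap:.2f}")
if parity_gap > THRESHOLD:
    print("flag for bias review")
```

Demographic parity is only one lens; complementary metrics such as equalized odds compare error rates rather than outcome rates, and monitoring typically tracks several together.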

Workforce
The automation capabilities of AI are going to have an impact on the human workforce. According to Accenture, 40% of working hours across industries could be automated or augmented by generative AI, with banking, insurance, capital markets and software showing the highest potential. This will affect up to two-thirds of U.S. occupations, according to Goldman Sachs, but the firm concludes that AI is more likely to complement current workers rather than lead to widespread job loss. Human experts will remain essential, ideally taking on higher-value work while automation helps with low-value, tedious tasks. Business leaders largely see AI as a copilot rather than a rival to human employees.

Regardless, some employees may be more nervous about AI than excited about how it can help them. Enterprises can take proactive steps to help the workforce embrace AI initiatives rather than fear them, including:

  • Educating workers on AI basics, ethical considerations and company AI policies
  • Focusing on the value that employees can get from AI tools
  • Reskilling employees as needs evolve
  • Democratizing access to technical capabilities to empower business users

Unifying data and AI governance

AI presents unique governance challenges but is deeply entwined with data governance. Enterprises struggle with fragmented governance across databases, warehouses and lakes. This complicates data management, security and sharing and has a direct impact on AI. Unified governance is key for success across the board, promoting interoperability, simplifying regulatory compliance and accelerating data and AI initiatives.

Unified governance improves performance and safety for both data and AI, creates transparency and builds trust. It ensures seamless access to high-quality, up-to-date data, resulting in more accurate results and improved decision-making. A unified approach that eliminates data silos increases efficiency and productivity while reducing costs. This framework also strengthens security with clear and consistent data workflows aligned with regulatory requirements and AI best practices.

Databricks Unity Catalog is the industry’s only unified and open governance solution for data and AI, built into the Databricks Data Intelligence Platform. With Unity Catalog, organizations can seamlessly govern all types of data as well as AI components. This empowers organizations to securely discover, access and collaborate on trusted data and AI assets across platforms, helping them unlock the full potential of their data and AI.

For a deep dive into AI governance, see our ebook, A Comprehensive Guide to Data and AI Governance.
