
Navigating the Future of National Tech Independence with Sovereign AI


Sovereign AI refers to a national or regional effort to develop and control artificial intelligence (AI) systems, independent of the large non-EU foreign private tech platforms that currently dominate the field. It represents a strategic push by countries or regions to ensure they retain control over their AI capabilities, align them with national values, and mitigate dependence on foreign organizations. There are two main considerations associated with the fundamentals of sovereign AI: 1) Control of the algorithms and the data on the basis of which the AI is trained and developed; and 2) the sovereignty of the infrastructure on which the AI resides and operates.

Strategic Autonomy and Security

Countries, whether individually or collectively, want to develop AI systems that are not controlled by foreign entities, especially for critical infrastructure, national security, and economic stability. This is essential for strategic autonomy and for avoiding reliance on potentially biased or insecure AI models developed elsewhere.

Cultural Relevance and Inclusivity

Governments aim to develop AI systems that reflect local cultural norms, languages, and ethical frameworks. This ensures AI decisions align with local social values, reducing the risk of bias, discrimination, or misinterpretation of data.

Data Sovereignty and Privacy

With data being a key driver of AI development, countries prioritize keeping their data within their borders. This ensures data privacy, security, and compliance with national laws, particularly concerning sensitive information. It is also a way to protect against the extra-jurisdictional application of foreign laws.

Economic Growth and Innovation

Sovereign AI offers the opportunity to boost domestic AI innovation, improve competitiveness, and protect intellectual property from foreign control. This allows countries to maintain leadership in emerging technologies and create economic opportunities.

Ethics and Governance

Governments are concerned about the ethical implications of AI, particularly in areas such as privacy, human rights, economic dislocation, and fairness. Ensuring that AI systems are transparent, accountable, and aligned with national laws is a key priority.

EU AI Act (Artificial Intelligence Act)

The EU AI Act, whose obligations begin to apply in phases from 2025, is one of the first comprehensive regulatory frameworks for AI at the global level. It is a risk-based regulatory framework that aims to ensure that AI is used safely, ethically, and in a way that respects fundamental rights. The AI Act establishes a classification system for AI systems based on their risk level, ranging from low-risk applications to high-risk AI systems used in critical areas such as healthcare, transportation, and law enforcement.

The EU AI Act will shape how AI algorithmic systems are built and used within national borders, particularly for countries that plan to deploy high-risk AI systems such as facial recognition or AI in healthcare. Compliance with the AI Act ensures that AI systems adhere to safety, transparency, accountability, and fairness principles.

  • High-risk AI systems must undergo rigorous testing and certification before deployment.
  • Transparency requirements mandate that users understand how AI models make decisions.
  • Accountability means clear chains of responsibility in the event of harms caused by AI systems.
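
To make the tiered structure concrete, here is a simplified, illustrative sketch (in Python) of how an internal compliance team might record the Act's risk tiers and the obligations attached to them. The tier names follow the Act's risk-based approach, but the example systems and obligation summaries below are paraphrased assumptions, not a restatement of the legal text:

```python
# Simplified, illustrative view of the EU AI Act's risk-based structure.
# Example systems and obligation summaries are paraphrased for internal
# triage only; they are not a substitute for the legal text.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring by public authorities"],
        "obligations": "prohibited",
    },
    "high": {
        "examples": ["AI in medical devices", "biometric identification systems"],
        "obligations": "conformity assessment, risk management, human oversight",
    },
    "limited": {
        "examples": ["chatbots"],
        "obligations": "transparency: disclose that users are interacting with AI",
    },
    "minimal": {
        "examples": ["spam filters"],
        "obligations": "no specific obligations; voluntary codes of conduct",
    },
}

def triage(system: str, tier: str) -> str:
    """Return the obligation bucket recorded for a system's assessed risk tier."""
    return f"{system}: {tier} risk -> {RISK_TIERS[tier]['obligations']}"

print(triage("hospital triage assistant", "high"))
```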

EU Data Act

The EU Data Act, adopted in 2023, plays a key role in shaping the framework for Sovereign AI by improving data accessibility and governance across Europe. The Data Act aims to create new markets by making device data available not just to manufacturers but also to users and third parties; among other things, it also regulates fair contract terms for data sharing and sets specific requirements to enable switching between cloud providers. The Data Act framework creates new possibilities to access data that could be used for AI training and development. It also regulates the terms under which organisations can avoid lock-in with a particular cloud provider, which is a key consideration when developing AI capabilities.

By focusing on data sharing and access, the Data Act helps organizations and governments unlock the potential of data-driven innovations, including AI. This is crucial both for reducing risks related to AI training and for building competitive AI solutions, because it makes relevant data available within a particular jurisdiction for training that jurisdiction's AI systems.

NIS2 Directive (Network and Information Security Directive)

The NIS2 Directive focuses on improving the overall cybersecurity resilience of the EU by setting common cybersecurity standards across member states. The directive applies to critical infrastructure sectors, including energy, transport, and digital services, and mandates that entities adopt stronger cybersecurity measures and report major incidents to authorities.

As nations develop Sovereign AI systems, the NIS2 Directive will enforce robust cybersecurity standards for AI technologies, particularly those deployed in critical infrastructure. Ensuring that AI systems are protected against cyber threats is crucial, especially for those involved in national security, health, or transportation.

Digital Operational Resilience Act (DORA)

DORA significantly impacts Sovereign AI by establishing robust requirements for operational resilience, cybersecurity, and risk management across the financial industry's digital infrastructure and its supply chain. As Sovereign AI systems become integral to critical sectors like finance, DORA ensures that these systems are resilient to cyber threats and operational disruptions, and it places a number of requirements on the financial industry's downstream supply chain, ranging from operational resilience (including testing of that resilience) to transparency, performance monitoring, and detailed contractual terms.

By enforcing standardized risk management protocols and incident reporting requirements, DORA complements the EU AI Act, safeguarding the stability and security of critical AI-driven services delivered to a broadly defined financial industry and across its different service providers.

Resource Constraints

Developing and maintaining sovereign AI systems requires significant investments in infrastructure, including hardware (e.g., high-performance computing GPUs), data centers, and energy. Many countries face challenges in acquiring or developing the necessary resources, particularly the hardware and energy needed to support AI capabilities.

Talent Shortages

AI development requires specialized knowledge in machine learning, data science, and engineering. Countries must invest in education and workforce development to ensure they have the skills needed to compete in the global AI race.

Global Interdependence and Cooperation

AI development relies on global data and collaboration. Countries must balance the desire for sovereignty with the reality that many AI technologies and infrastructures are interdependent, requiring cooperation across borders to be truly effective.

Technological Leadership and Competitiveness

To compete on the global stage, sovereign AI systems need to be technologically advanced. Developing state-of-the-art AI models comparable to those of major tech companies is a significant challenge due to the massive scale of investment and infrastructure required. That said, most organizations do not find value in building their own foundation models from scratch. Instead, they leverage open source models fine-tuned with their custom data, which can often be run on a very small number of GPUs.
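
As a minimal sketch of that approach (assuming the open source Hugging Face transformers, peft, and datasets libraries; the model name, dataset path, and hyperparameters below are placeholders), parameter-efficient fine-tuning attaches small trainable adapters to an open model so that adapting it to local data fits on a modest GPU footprint:

```python
# Minimal LoRA fine-tuning sketch; model name, data path, and hyperparameters
# are placeholders. Assumes transformers, peft, and datasets plus local GPUs.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base = "mistralai/Mistral-7B-v0.1"          # any locally mirrored open model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Attach small trainable adapters instead of updating all base weights.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

# Local, in-jurisdiction training data (placeholder path).
data = load_dataset("json", data_files="local_corpus.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="adapter-out",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           num_train_epochs=1, logging_steps=10),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```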


Resource Management and Scalability

Private AI Infrastructure

VMware Private AI Foundation with NVIDIA provides on-premises, secure AI infrastructure, which can be crucial for EU Member States that wish to keep their AI data processing and model training within their borders. VMware Private AI Foundation brings together industry-leading, scalable NVIDIA and ecosystem applications for AI, and can be customized to meet local demands. This infrastructure is virtualized to intelligently pool and share AI capacity, which in turn reduces overall power requirements and the total cost of ownership (TCO). Further, this infrastructure can be accessed and managed using open source APIs, such as Kubernetes, and integrates with popular open source AI model repositories, such as Hugging Face, using its native command line interface (CLI).
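
As a small sketch of that workflow (this uses the huggingface_hub Python client rather than the CLI, and the model name and destination path are placeholder assumptions), a team might stage an approved open model onto its own storage before serving it from the virtualized GPU pool:

```python
# Stage an open model into local storage before serving it on private
# infrastructure. The repo id and destination path are placeholders.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="mistralai/Mistral-7B-Instruct-v0.2",   # any approved open model
    local_dir="/mnt/models/mistral-7b-instruct",    # on-premises datastore
)
print(f"Model files staged at {local_path}; serve them from the private GPU pool.")
```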

Ultimately, this affords a flexible AI infrastructure that can be embraced without fear of lock-in, whether from proprietary applications or proprietary hardware architectures. It is tempting to invest in an end-to-end AI solution that ultimately creates dependencies on a particular hyperscaler's model and AI infrastructure. However, the more pragmatic approach to achieving sovereignty objectives is to invest in a private AI infrastructure platform that is future-proofed against changes in the AI ecosystem, allowing organizations to onboard new models and services at the speed of software.

Optimized Hardware

Broadcom's role in delivering specialized accelerated networking and storage, combined with performance-optimized and production-supported NVIDIA GPUs, ensures that sovereign AI systems can be powered by cutting-edge hardware capable of processing large datasets and complex AI models without relying on foreign providers. Equally, Broadcom switches and Network Interface Cards (NICs) with high-bandwidth, ultra-low-latency connections efficiently route traffic for training and execution without bottlenecks. Because of Broadcom's commitment to open standards, these switches are Ethernet-based, and Ethernet's large ecosystem promotes choice and negates lock-in.

Data Sovereignty and Security

On-premises AI Deployment

By utilizing VMware Private AI Foundation with NVIDIA in either hosted or on-premises environments, organizations can ensure that their data remains within their borders, minimizing the risk of data leaks or unauthorized access by foreign entities. The technology is self-contained, and the customer or its sovereign cloud provider of choice has control of the workloads, security, features, and updates, as well as the applications running on the environment. Broadcom high-performance storage solutions include Fibre Channel host bus adapters and NVMe solutions that provide fast, scalable storage optimized for AI workloads. Equally, Broadcom security features at the hardware level, with Root of Trust and Hardware Security Module (HSM) capabilities integrated into processors and storage solutions, ensure that sensitive AI workloads and customer data are protected from the ground up. This addresses concerns around data sovereignty and privacy. Keeping data on-premises in the organization's preferred data repositories ensures that AI can be embraced without any loss of control over data.

Secure Data Handling

Advanced services for VMware Cloud Foundation, such as VMware vDefend for zero-trust security, are designed with advanced security protocols that help prevent data breaches, ensuring compliance with local privacy laws. This is particularly important for sensitive government and citizen data, especially when organizations can easily try out new open source AI models such as Mistral or DeepSeek and protect model output with existing enterprise security controls.
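
A minimal sketch of that pattern, assuming a locally hosted, OpenAI-compatible inference endpoint (the endpoint URL, model name, and the redaction rule below are illustrative placeholders, not part of any specific product):

```python
# Query a locally hosted open model and apply an enterprise output control
# before the response leaves the environment. Endpoint, model name, and the
# redaction pattern are placeholders for illustration.
import re
from openai import OpenAI  # works against any OpenAI-compatible local endpoint

client = OpenAI(base_url="http://models.internal:8000/v1", api_key="not-used")

def ask(prompt: str) -> str:
    reply = client.chat.completions.create(
        model="mistral-7b-instruct",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    # Example control: redact anything that looks like a national ID number.
    return re.sub(r"\b\d{9,12}\b", "[REDACTED]", reply)

print(ask("Summarize our data-retention policy for citizen records."))
```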

Supporting Local AI Talent and Innovation

Local Development and Customization

VMware Private AI Foundation with NVIDIA can enable organizations to build and deploy AI models that are tailored to local needs and specific sectors (e.g., healthcare, agriculture). It fosters innovation by providing a platform where local developers can experiment with and iterate on AI models.
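
As one small illustration of local customization (the file name, fields, and examples below are placeholders), a sector team could curate domain-specific instruction data in a simple JSONL format that a fine-tuning run like the earlier sketch can consume:

```python
# Curate a small, sector-specific instruction dataset.
# The file name, fields, and example content are placeholders.
import json

examples = [
    {"text": "Instruction: Summarize the clinical field report in the local language.\n"
             "Report: ...\nSummary: ..."},
    {"text": "Instruction: Classify the crop-disease symptom described below.\n"
             "Description: ...\nLabel: ..."},
]

with open("local_corpus.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```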

AI Ecosystem Support

VMware Private AI Foundation with NVIDIA can support AI research and development, utilizing NVIDIA and open source models and applications, all within national borders. This enables local universities, startups, and research institutions to thrive, and it can also help address the talent shortage by providing the necessary resources for training and development.

Ethics and Governance Compliance

Transparency and Control

Governments can ensure their AI systems are developed with transparency and accountability by utilizing VMware Private AI Foundation with NVIDIA in a secure VMware Sovereign Cloud Service Provider data center, where they can monitor and govern AI development. The user has full control of the inputs, the outputs, and the data on which the model is trained. This supports ethical AI deployments that are aligned with national policies and values.
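
As a minimal sketch of what such monitoring could look like in practice (the log destination and record fields are illustrative assumptions, not a prescribed mechanism), every prompt, response, and model version can be recorded so that AI-assisted decisions remain auditable:

```python
# Illustrative audit trail for AI interactions: record who asked what, which
# model version answered, and when, so decisions can be reviewed later.
# The log destination and fields are assumptions for illustration.
import json, time, uuid

def audit_log(user: str, model_version: str, prompt: str, response: str,
              path: str = "ai_audit.jsonl") -> None:
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user": user,
        "model_version": model_version,
        "prompt": prompt,
        "response": response,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

audit_log("analyst-17", "mistral-7b-instruct-ft-2025-03",
          "Assess flood risk for the pilot region", "Low risk based on ...")
```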

Compliance with Regulations

By using an AI infrastructure that is fully controlled by the end user, compliance becomes a question of controls and of the jurisdiction in which the solution is deployed. Organizations can ensure that their AI systems meet country-specific ethical standards, such as fairness, privacy protection, and transparency, as well as local security controls and certification requirements.

Private AI and Sovereign Cloud Providers

VMware Private AI Foundation with NVIDIA can be delivered on premises, under the control of a specific organisation. However, it can also be hosted as a PaaS/IaaS cloud solution delivered with one of our Sovereign Cloud Service Provider partners. Enabling organisations to rely on a Sovereign Cloud Provider they trust can play a critical role in addressing several of these challenges when pursuing Sovereign AI. It provides a secure, scalable, resilient, and flexible AI infrastructure that can be controlled within national borders, ensuring that data and models remain private and compliant with local regulations. It also supports the objectives of developing local AI capacity and strengthening the local industrial base with proven capabilities that are key to underpinning strategic autonomy.

Your Next Move

Want to learn more about VMware Private AI Foundation with NVIDIA?

  1. Check out the VMware Private AI Foundation with NVIDIA webpage for more resources.
  2. Contact us using this Interest Request form

Co-Author:

Ilias Chantzos, Global Privacy Officer and Head of EMEA Government Affairs, Broadcom Inc.
