sexta-feira, março 21, 2025
HomeCloud ComputingCisco Introduces the State of AI Security Report for 2025

Cisco Introduces the State of AI Security Report for 2025


As one of the defining technologies of this century, artificial intelligence (AI) seems to witness daily advancements with new entrants to the field, technological breakthroughs, and creative and innovative applications. The landscape for AI security shares the same breakneck pace with streams of newly proposed legislation, novel vulnerability discoveries, and emerging threat vectors.

While the speed of change is exciting, it creates practical barriers for enterprise AI adoption. As our Cisco 2024 AI Readiness Index points out, concerns about AI security are frequently cited by business leaders as a primary roadblock to embracing the full potential of AI in their organizations.

That’s why we’re excited to introduce our inaugural State of AI Security report. It provides a succinct, straightforward overview of some of the most important developments in AI security from the past year, along with trends and predictions for the year ahead. The report also shares clear recommendations for organizations looking to improve their own AI security strategies, and highlights some of the ways Cisco is investing in a safer future for AI.

Here’s an overview of what you’ll find in our first State of AI Security report: 

Evolution of the AI Threat Landscape

The rapid proliferation of AI and AI-enabled technologies has introduced a massive new attack surface that security leaders are only beginning to contend with. 

Risk exists at virtually every step across the entire AI development lifecycle; AI assets can be directly compromised by an adversary or discreetly compromised though a vulnerability in the AI supply chain. The State of AI Security report examines several AI-specific attack vectors including prompt injection attacks, data poisoning, and data extraction attacks. It also reflects on the use of AI by adversaries to improve cyber operations like social engineering, supported by research from Cisco Talos.

Looking at the year ahead, cutting-edge advancements in AI will undoubtedly introduce new risks for security leaders to be aware of. For example, the rise of agentic AI which can act autonomously without constant human supervision seems ripe for exploitation. On the other hand, the scale of social engineering threatens to grow tremendously, exacerbated by powerful multimodal AI tools in the wrong hands. 

Key Developments in AI Policy 

The past year has seen significant advancements in AI policy, both domestically and internationally. 

In the United States, a fragmented state-by-state approach has emerged in the absence of federal regulations with over 700 AI-related bills introduced in 2024 alone. Meanwhile, international efforts have led to key developments, such as the UK and Canada’s collaboration on AI safety and the European Union’s AI Act, which came into force in August 2024 to set a precedent for global AI governance. 

Early actions in 2025 suggest greater focus towards effectively balancing the need for AI security with accelerating the speed of innovation. Recent examples include President Trump’s executive order and growing support for a pro-innovation environment, which aligns well with themes from the AI Action Summit held in Paris in February and the U.K.’s recent AI Opportunities Action Plan.

Original AI Security Research 

The Cisco AI security research team has led and contributed to several pieces of groundbreaking research which are highlighted in the State of AI Security report. 

Research into algorithmic jailbreaking of large language models (LLMs) demonstrates how adversaries can bypass model protections with zero human supervision. This technique can be used to exfiltrate sensitive data and disrupt AI services.  More recently, the team explored automated jailbreaking of advanced reasoning models like DeepSeek R1, to demonstrate that even reasoning models can still fall victim to traditional jailbreaking techniques. 

The team also explores the safety and security risks of fine-tuning models. While fine-tuning is a popular method for improving the contextual relevance of AI, many are unaware of the inadvertent consequences like model misalignment. 

Finally, the report reviews two pieces of original research into poisoning public datasets and extracting training data from LLMs. These studies shed light on how easily—and cost-effectively—a bad actor can tamper with or exfiltrate data from enterprise AI applications. 

Recommendations for AI Security 

Securing AI systems requires a proactive and comprehensive approach.  

The State of AI Security report outlines several actionable recommendations, including managing security risks throughout the AI lifecycle, implementing strong access controls, and adopting AI security standards such as the NIST AI Risk Management Framework and MITRE ATLAS matrix. We also look at how Cisco AI Defense can help businesses adhere to these best practices and mitigate AI risk from development to deployment. 

Read the State of AI Security 2025

Ready to read the full report? You can find it here. 


We’d love to hear what you think. Ask a Question, Comment Below, and Stay Connected with Cisco Secure on social!

Cisco Security Social Channels

Instagram
Facebook
Twitter
LinkedIn

Share:



RELATED ARTICLES

LEAVE A REPLY

Please enter your comment!
Please enter your name here

- Advertisment -
Google search engine

Most Popular

Recent Comments