
The Intersection of Technology and Human Behaviour in Cybersecurity


Dr Mary Aiken stands at the forefront of cyberpsychology, exploring the intricate relationship between technology and human behaviour.

As a professor and chair of the Department of Cyberpsychology at Capitol Technology University in Washington D.C., and a Professor of Forensic Cyberpsychology at the University of East London, she has dedicated her career to understanding the psychological implications of our digital lives.

A highly sought-after cybersecurity speaker, Dr Aiken shares her expertise on global stages, offering unique insights into cyber behaviour and digital risk. We spoke with her to delve into the evolving landscape of cyberpsychology, the challenges posed by emerging technologies, and how individuals and organisations can navigate the complexities of the digital age.

In your view, how critical is it that cybersecurity evolves to fully incorporate the human layer, and what are the most pressing psychological factors that must now be addressed?

First of all, let’s talk about cyberspace. As cyber psychologists, people like myself have been discussing cyberspace for the best part of two decades. In fact, in 2016, NATO officially ratified cyberspace as an environment — as a domain — recognising that the battles of the future would take place not only on land, sea, and air, but also across computer networks.

The US military conceptualises cyberspace as comprising three layers. Firstly, there is the physical network, which includes the hardware, cables, and infrastructure. Secondly, there is the logical network, which facilitates communication across these networks. And finally, there is the cyber persona layer—that’s us, the humans.
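In code terms, that layered model is straightforward to write down. The minimal Python sketch below renders the three layers as a simple data structure; the example assets for each layer are illustrative assumptions rather than anything from the interview.

```python
# The three-layer model of cyberspace as a simple data structure.
# Example assets are illustrative assumptions, not from the interview.
from dataclasses import dataclass

@dataclass
class CyberspaceLayer:
    name: str
    description: str
    example_assets: tuple[str, ...]

CYBERSPACE = (
    CyberspaceLayer("physical network",
                    "hardware, cables, and infrastructure",
                    ("routers", "data centres", "undersea cables")),
    CyberspaceLayer("logical network",
                    "how communication flows across those networks",
                    ("IP addressing", "DNS", "routing")),
    CyberspaceLayer("cyber persona",
                    "the humans: identities and behaviour online",
                    ("user accounts", "email identities", "social profiles")),
)

for layer in CYBERSPACE:
    print(f"{layer.name}: {layer.description}")
```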

When we talk about incorporating the human layer into the cybersecurity equation, we have to acknowledge that we’ve had 50 to 60 years of cybersecurity, and it has been very effective in addressing the first two layers: the physical and logical networks. However, the vast majority of cyberattacks today are driven by social engineering — and social engineering has far more to do with psychology than with technology.

As a result, we’re now seeing the emergence of a new sector under the broader umbrella of cybersecurity: the online safety technology sector, or SafetyTech. I’m proud to be one of the founding members of this sector in the UK. Our mission is to develop technological solutions to technology-facilitated problems — namely harmful and criminal behaviours online.

To summarise, we must factor the human into the cybersecurity equation — from the perspective of users, employees, and cyber attackers. And when we look at the spectrum of cyber threat actors — from state-sponsored to state-condoned, from hacktivists to activists, from organised cybercrime to sophisticated threat groups — we need solutions that are not only technically robust and resilient, but also account for human psychological resilience.

We want our data systems and networks to be secure, but equally, we need the people operating those systems to be psychologically safe, robust, and resilient. That’s how we can deliver on what I call 360-degree resilience.

As one of the foremost experts in cyberpsychology, how does the science underpinning this field inform your public speaking, particularly when engaging with sectors grappling with tech-driven behavioural change?

In cyberpsychology, we study specific effects, such as the online disinhibition effect, which explains why people often behave in ways online that they would never consider in the real world. It's a key behavioural driver in digital environments.

We also explore the power of online anonymity, which can be beneficial in some contexts but can also act like a ‘superhuman power of invisibility’. And, as with all powers, it comes with responsibility — something not always exercised well by humans.

Of course, we also observe positive online behaviours, such as altruism, seen in movements like crowdsourced fundraising. The fundamental principle is that human behaviour changes in online environments, and understanding the impact of these behavioural shifts is essential.

Through my speaking engagements, I have the privilege of addressing a wide range of sectors — technology, cybersecurity, infosec, financial services, education, e-commerce, and healthcare. All of these industries benefit from deeper insights into how technology influences human behaviour, both from the user and operator perspectives.

My research spans a number of areas, including cyberchondria — a form of health anxiety that manifests online. Many of us have experienced this: a headache quickly spirals into Googling symptoms, leading to panic over serious conditions like brain tumours.

Another recent area of focus is cyber fraud. In the UK, legislation such as the Online Safety Act is aimed at addressing this kind of cyber-enabled criminality. I’ve contributed to numerous information campaigns that focus on one of my key areas of expertise: cyber behavioural profiling.

Many campaigns tell people, “Don’t click the link.” I go a step further — I analyse the semantics of phishing messages, breaking down how attackers manipulate language and psychology to compel users to act. Understanding the emotional and cognitive triggers that cybercriminals exploit helps us better educate the public and defend against such attacks.
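To give a flavour of what that kind of analysis can look like, here is a minimal sketch in Python, assuming a hand-picked set of trigger cues. It illustrates the general idea of flagging emotional and cognitive triggers in a message, not Dr Aiken's actual profiling methodology.

```python
# A minimal sketch of trigger-cue analysis in phishing text. The cue
# lists and categories below are illustrative assumptions, not a
# production classifier.

TRIGGER_CUES = {
    "urgency":   ["act now", "immediately", "within 24 hours", "final notice"],
    "authority": ["your bank", "hmrc", "it department", "compliance team"],
    "fear":      ["account suspended", "unauthorised access", "legal action"],
    "reward":    ["you have won", "refund", "exclusive offer"],
}

def profile_message(text: str) -> dict[str, list[str]]:
    """Return the trigger categories found in a message, with matching cues."""
    lowered = text.lower()
    hits: dict[str, list[str]] = {}
    for category, cues in TRIGGER_CUES.items():
        matched = [cue for cue in cues if cue in lowered]
        if matched:
            hits[category] = matched
    return hits

sample = ("Final notice: your bank has detected unauthorised access. "
          "Act now or legal action will follow.")
print(profile_message(sample))
# {'urgency': ['act now', 'final notice'],
#  'authority': ['your bank'],
#  'fear': ['unauthorised access', 'legal action']}
```

A real system would of course draw on far richer linguistic and behavioural signals than simple substring matching, but even this toy version shows how a single short message can stack urgency, authority, and fear cues.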

In terms of talk topics, I cover a broad spectrum — from human factors in cybersecurity to cyber behavioural profiling, and increasingly, the psychology of AI.

With the rapid rise of generative AI and other advanced technologies, how must stakeholders across industry and government recalibrate their thinking to effectively manage both risk and opportunity?

When it comes to technologies like AI, we’ve seen many false dawns — as well as more than a few moral panics. Take the emergence of ChatGPT, for instance. People became excited by the novelty of chatbots, but in truth, chatbots have been around for decades.

The first chatbot, Eliza, was developed in the 1960s. She was modelled on Rogerian psychology and was highly effective at eliciting information. When she asked questions like "How are you?" and followed up with "Tell me more about your day," people began sharing deeply personal stories. The reaction was so strong that the program was shut down fairly quickly; its creator, Joseph Weizenbaum, was reportedly horrified by how much people disclosed.
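The Rogerian technique Eliza relied on, reflecting a user's words back as an open question, can be sketched in a few lines. The patterns below are illustrative assumptions, far cruder than Weizenbaum's original script.

```python
# A minimal sketch of Eliza-style Rogerian reflection: mirror the user's
# statement back as an open question. Illustrative patterns only.
import re

REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I),     "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Tell me more about your day."  # default open prompt

print(respond("I feel anxious about my job"))
# Why do you feel anxious about your job?
```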

In the 1990s, I had the pleasure of working with another chatbot, Jabberwacky, which was developed by a colleague of mine. It was a brilliant and innovative piece of technology. What we’re witnessing now is the ongoing evolution of this space.

As for the widespread concern that AI will replicate human intelligence and render us obsolete, I remain sceptical. As a behavioural scientist, I’d point out that we don’t yet fully understand how the human brain works. The idea that we can replicate or replace something we don’t fully comprehend is, to me, a flawed premise.

Instead of focusing on 'artificial intelligence', I advocate for a different approach: IA (Intelligence Augmentation). This concept, inspired by Licklider's 1960 paper Man-Computer Symbiosis, proposes a model in which human and machine intelligence work symbiotically.

With IA, we keep the human at the centre of the process. That, I believe, is how we should frame our engagement with AI and machine learning – focusing on augmentation, not replacement.

Looking ahead, there are undoubtedly exciting and significant changes on the horizon. I’m particularly interested in the convergence of quantum computing, machine learning, and AI. That combination may be the point at which we truly begin to mimic aspects of human intelligence.

In delivering insights across global institutions, from NATO to the UN, what core message or shift in mindset do you most hope audiences will walk away with after hearing you speak?

As one of the world's leading experts in cyberpsychology, I've had the honour of being invited to speak at high-level forums around the world, from the White House to NATO, from the United Nations to INTERPOL.

In terms of conferences, I've spoken at gatherings across the spectrum: cybersecurity, infosec, healthtech, fintech, regtech, edtech, as well as policy and policing forums. This breadth and depth reflect the universal relevance of cyberpsychology in today's digital world.

My role is to equip audiences with the knowledge, tools, and skillsets needed to confront the complex challenges that emerge at the intersection of humans and technology.

I help people think differently — empowering them to design and deploy technology-based solutions to technology-facilitated problems, including harmful and criminal online behaviours.

Ultimately, my goal is to make people more informed, more confident, and better prepared to engage with technology in a way that is safe, ethical, and effective.

And most importantly, I aim to encourage collaboration, because we are all operating in this shared environment of cyberspace. If we are to make it safer and more secure, it will take collective responsibility and global cooperation.

Photo by Mostafa Saeed on Unsplash

This interview with Dr Mary Aiken was conducted by Mark Matthews.   

