Securing the unseen: Identity, AI and cyber-defence
Eve Goode
Eve Goode, Digital Content Editor, hears exclusively from Bernard Montel, EMEA Technical Director and Security Strategist at Tenable, about the evolution of cybersecurity, cyber-resilience and exposure management.
Can you tell me about your role as EMEA Technical Director and Security Strategist at Tenable?
My role at Tenable is to explain to our customers what our field of vision and strategy is. Internally, this means I stay closely connected with our CTO and our broader mission.
In the field, as Technical Director, my responsibility is to understand threats, trends and the wider cybersecurity ecosystem.
Every morning, I start by checking the news to stay up to date on the latest threats and carry out research so that I’m ready and prepared for anything that might impact Tenable or our customers.
From an organisational standpoint, I engage directly with customers to discuss our strategy and vision.
I also act as the ‘voice of Tenable’ at many industry events such as GITEX, GSX and Intersec.
What encouraged you to start your career in cybersecurity?
I began my career around the year 2000, working part-time. I’ve always been a very curious and passionate person; that curiosity is what drew me into cybersecurity.
I wanted to understand all the new technologies that were emerging at the time.
Back then, it wasn’t called cybersecurity; it was known as network security. I started my career in identity during the early days of what was called electronic commerce, which marked the beginning of online business.
That’s where I became an Identity and Access Management Specialist.
I was fascinated by the fact that people could hack systems and I wanted to work in that space to understand exactly how they were doing it.
Can you explain some of the results from Tenable’s recent research?
At Tenable, we recently launched a report in partnership with the Cloud Security Alliance.
The study focused on identifying misconfigurations and vulnerabilities in the cloud infrastructures used by organisations.
We found that most customers today use multi-cloud environments, with 63% of organisations using multiple cloud providers to benefit from different types of services.
For example, Microsoft Azure offers cloud services that integrate closely with the Microsoft ecosystem, while Amazon Web Services (AWS) provides robust infrastructure capabilities and additional services.
We also discovered that 82% of organisations are what we call hybrid. This means they use both cloud-hosted infrastructure and on-premises systems.
However, “on-prem” no longer refers to traditional data centres filled with physical servers; many organisations now operate on-prem cloud clusters using the same technologies offered by providers such as Microsoft, AWS or Google.
This is important because we must be able to understand and protect the full scope of potential attacks across both environments.
Beyond human users, what new identity threats should organisations prepare for?
Coming from an identity background, I still have the mindset to notice certain trends. I believe we often underestimate non-human identities.
We’ve observed that there are now more application identities than human identities and most people don’t realise that.
For every single human identity in the cloud, there are roughly ten application identities. Each service, whether it’s a database, server or device, has its own identity that must authenticate using certificates, both on the network and in the cloud.
A few years ago, we also saw the emergence of robotic process automation (RPA), where robots perform repetitive tasks instead of humans.
These robots are examples of non-human identities as they can log onto systems and execute tasks automatically, having been programmed by the organisation to do so.
From an organisational perspective, they hold identities just like humans.
For example, I have an identity at Tenable that allows me to log in and perform actions; robots or AI agents now do the same.
The new generation of non-human identities is powered by generative AI, which not only understands situations but can also take action. These AI agents therefore require identities as well.
Part of our recent report focused on identity complexity, which remains one of the weakest points in cloud security.
Misconfigurations and vulnerabilities persist, especially across multi-cloud and hybrid systems where organisations must manage multiple technologies from different providers.
The weakest link in all of this is identity. With so many access points, it’s critical to maintain good cyber hygiene as this ensures that access rights are clearly defined and regularly reviewed.
When you layer together human identities, application identities, robots and now AI agents, you can see how complex the identity landscape has become.
The key is to reduce exposure risk by ensuring that all assets are properly managed and positioned.
As cyber risks continue to evolve, how does Tenable prepare to overcome them?
Since I began my career, I’ve seen cyber-risk evolve dramatically and it continues to grow.
The main reason for this upward trend is the increasing complexity of attacks on modern infrastructures.
The cyber-landscape is everywhere now and the shift accelerated during the COVID-19 pandemic as organisations rapidly adopted cloud technologies.
Over the past two years, generative AI has also been widely adopted across the IT industry and integrated into IT lifecycles.
When I started, the risk of a network being hacked was much smaller as this was during the era of firewalls.
Then came the rise of internet commerce and online business, which brought new types of fraud.
For instance, as soon as websites like eBay and online banking emerged, fraud followed.
Cyber-criminals have always been leading the race.
Over time, we’ve seen waves of different kinds of cyber-attacks from network security breaches to large-scale fraud.
When I worked at RSA in 2011, we might have seen one major breach every six months; today, there are multiple breaches occurring every day around the world.
This surge is due to the rapidly expanding and increasingly dynamic attack surface.
AI has made the attack surface even more fluid and as networks and systems grow, so do the risks.
To adapt, Tenable evolved its approach by incorporating vulnerability management into its core technology.
This allows us to identify vulnerabilities and misconfigurations across the attack surface.
We’ve followed the evolution of threats across OT systems, cloud environments, identity and now AI.
Our goal is not only to reduce the number of attacks but also to minimise the overall attack surface.
While the surface itself can’t be eliminated, we can reduce exposure and strengthen defences across it.
We’ve moved from traditional vulnerability management to what we now call exposure management, which gives organisations full visibility into all exposures within their attack surface, including IoT environments.
What is unique about Tenable’s culture that helps it adapt and remain customer-focused?
The first thing I’d highlight is innovation. Whenever new technologies or new forms of attack appear, we act quickly to address them.
Internally, we develop our own technologies and platforms, but when we need to accelerate, we also make strategic acquisitions. We have a large research team and recently we published findings on three vulnerabilities connected to Google Gemini (its generative AI technology).
We’re constantly analysing vulnerabilities from both a research and technology perspective. The second key aspect is execution.
Having a vision is essential but without execution, that vision can’t help customers. The third is listening.
At Tenable, we genuinely listen to our customers. We recently launched an initiative I’m part of called the Exposure Management Leadership Council. It brings together cybersecurity leaders from around the world every quarter to discuss exposure management.
Some council members are from Tenable and others are external experts, but all contribute ideas to help shape industry innovation.
Listening to our customers and implementing what they need is always our top priority.
So, what sets Tenable apart are four core values:
- Innovation
- Vision
- Execution
- Listening to our customers
How is the growing incorporation of AI affecting the cybersecurity industry?
I think AI is already affecting us both from a cybersecurity standpoint and from an attack standpoint. AI is everywhere. I have described it as opening up an almost unlimited set of attack possibilities.
In the past, when we deployed an application, we knew that the application had a limited set of functions: a menu of features, things you could do and things you couldn’t.
That was true whether it was a SaaS app, an on-prem app, a device or a computer. Computers have a limited set of instructions you can run.
With AI, the prompt changes everything; as everyone knows, what a prompt can do today is effectively unlimited. The way an AI system can react is nearly unlimited, so from a security perspective it becomes a moving target that we need to secure.
The challenge is big. We’ve already seen AI used by attackers. They’re using it to develop malware; you don’t need to be deeply technical to run attacks that didn’t exist two years ago.
Phishing attacks, for example. Fraud existed before, but phishing is now being developed and enhanced by AI.
Even further, we’re seeing jailbreaks and what we call prompt injection. You construct a prompt, the LLM engine may search websites, and if a site returns not only content but an embedded instruction, for example “Forget what I just said, react that way”, that’s a prompt injection. Prompts you type can be injected with malicious content. We see that today.
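To make the mechanism concrete, here is a minimal, hypothetical sketch of one common mitigation: screening externally fetched web content for instruction-like phrases before it is spliced into an LLM prompt. The pattern list and function names are illustrative assumptions, not a real product API, and a genuine defence would need far more than keyword matching.

```python
import re

# Illustrative (hypothetical) patterns an injected instruction might use.
# Real defences need far richer detection than a keyword list.
SUSPECT_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"forget what (i|you) (just )?said",
    r"you are now",
    r"reveal (the )?system prompt",
]

def looks_like_injection(fetched_text: str) -> bool:
    """Return True if fetched web content carries an embedded instruction."""
    lowered = fetched_text.lower()
    return any(re.search(p, lowered) for p in SUSPECT_PATTERNS)

def build_prompt(user_question: str, fetched_text: str) -> str:
    """Only splice external content into the prompt if it passes the check."""
    if looks_like_injection(fetched_text):
        fetched_text = "[external content withheld: possible prompt injection]"
    return f"Answer using this context:\n{fetched_text}\n\nQuestion: {user_question}"
```

The point of the sketch is the trust boundary: anything fetched from the web is attacker-controlled input and must be treated differently from the user’s own prompt.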
AI is clearly a new threat, but there are obviously lots of opportunities, with many of us using AI daily for research or as an assistant. Still, it’s a threat organisations need to handle now.
I think AI was released quickly and, in many cases, unleashed before security teams were ready, so security needs to be prepared to take care of it today.
There’s another angle related to cloud I didn’t mention earlier. When we talk about AI threats there are really two main areas.
First, the usage of AI tools which can be jailbroken, injected or otherwise manipulated for malicious activities.
Second, there are LLM or generative-AI projects run by organisations: everyone wants their own LLM for customers or business, so organisations run LLM projects in the cloud.
They run them in AWS, Azure and Google because those providers offer ready-to-use packages.
Developers get the training data, spin up instances of those cloud services, choose a model, put the data in and that’s it.
That makes those projects targets for attackers.
In a study published this summer, we found a worrying number of these services have critical vulnerabilities that haven’t been patched, with roughly 50–70% showing serious issues, because people focus on getting the AI working rather than securing it from day one.
That’s a big risk.
Another big risk is the training data. Sometimes training data is left exposed and stored in locations that are accessible externally. This is because developers need quick access while developing.
If access controls and identity checks aren’t in place, someone can find and fetch your training data and that can be very dangerous.
Data poisoning is another threat: if an attacker can access and modify an organisation’s training data for an LLM project, they can change the data and thereby change the model’s behaviour.
The results can be interesting and not in a good way.

