Europe’s cybersecurity skills gap: can AI & certification move the needle?

Europe is scrambling to close a cybersecurity talent gap that threatens not just business resilience but sovereignty itself. In 2024, the EU faced a shortage of approximately 299,000 cybersecurity professionals, a 9% rise from 2023. The gap is expected to widen as organizations seek staff who can help them comply with recent EU cybersecurity legislation, such as the NIS2 Directive, the Cyber Resilience Act, and the Digital Operational Resilience Act (DORA).

Jacob Evans, CTO of test delivery platform Kryterion, says the gap presents an opportunity: “Companies prioritising structured training and certification can strengthen their defences while boosting credibility in a security-conscious market.”

Where policy, training & ecosystem currently stand

The EU Cybersecurity Act, updated national digital skills strategies, and the Digital Europe Programme have earmarked substantial funding for cybersecurity skill-building: under the Digital Europe budget, for instance, nearly €2 billion is allocated to strengthening cyber resilience, including training and certification. The European Skills Agenda aims to ensure 80% of European adults have basic digital skills, but deep technical skills lag far behind.

Universities, private credentialing bodies, and startups are building new pipelines. European-born credentialing firms and non-profit standards bodies are scaling exam programs, and ecosystem players such as ENISA (the European Union Agency for Cybersecurity) are advocating for trusted, European-run certification pipelines. Many credential platforms, by contrast, still rely on external or global providers, underscoring questions of trust, data jurisdiction, and adherence to EU values.

At the InCyber Forum Europe 2025, held in Lille, France, under the theme “Beyond Zero Trust, trust for all,” discussions ranged across trust, digital sovereignty, AI, remote security, and cybersecurity skills.

AI in credentialing: potential & pitfalls

As Europe races to scale its cybersecurity talent, AI is increasingly being woven into credentialing, promising faster, fairer testing at scale, but also raising thorny questions about bias, privacy, and trust. 

“AI has the potential to transform test security, but its deployment must prioritize privacy, fairness, and compliance. By embracing transparency, implementing secure data handling practices, and fostering human oversight, organizations can build AI tools that earn trust while achieving their security objectives,” says Evans.

AI can be a genuine enabler here: analyzing video feeds, detecting unusual behavior during exams, powering dual-camera monitoring, and enforcing browser lockdowns. Trained on large datasets, it can learn fraud patterns and identify cheating signatures such as copy-paste activity, voice reuse, or off-screen behavior. Personalized certification also becomes possible: adaptive testing can vary question difficulty based on a candidate’s responses, reducing dropouts and improving feedback loops.
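
To make the fraud-signature idea concrete, here is a minimal sketch of rule-based risk scoring over proctoring events. The event names, weights, and threshold are hypothetical, not any vendor’s actual system, and the final decision deliberately stays with a human reviewer:

```python
from collections import Counter

# Illustrative weights for the cheating signatures mentioned above.
# Every event name, weight, and threshold here is hypothetical.
SIGNATURE_WEIGHTS = {
    "copy_paste": 3.0,       # clipboard activity during a locked-down exam
    "voice_reuse": 4.0,      # same voiceprint appearing across candidates
    "off_screen_gaze": 1.5,  # candidate repeatedly looking away from screen
    "second_face": 5.0,      # extra person detected on the camera feed
}

FLAG_THRESHOLD = 8.0  # sessions scoring above this go to a human reviewer

def fraud_score(events: list[str]) -> float:
    """Aggregate a session risk score from its detected events."""
    counts = Counter(events)
    return sum(SIGNATURE_WEIGHTS.get(event, 0.0) * n for event, n in counts.items())

def review_decision(events: list[str]) -> str:
    # Human oversight by design: the score prioritises sessions, it never auto-fails.
    return "escalate_to_human" if fraud_score(events) >= FLAG_THRESHOLD else "no_action"

session = ["off_screen_gaze", "off_screen_gaze", "copy_paste", "copy_paste"]
print(fraud_score(session), review_decision(session))  # 9.0 escalate_to_human
```

In production, weights like these would be learned from labeled sessions rather than hand-set, but the human-review gate is the part that regulators and candidates care most about.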

“As AI adoption gains momentum, stakeholders are focusing on addressing the technical debt of legacy systems, strengthening security, and enabling new capabilities,” says Ranjit Tinaikar, CEO of Ness Digital Engineering. “Modernizing outdated platforms is the only way to fully tap into AI-enabled productivity gains,” he adds.

These adaptive capabilities now shape the cybersecurity frontline. Verizon’s 2025 Data Breach Investigations Report (DBIR) found that stolen credentials led to 88% of web attacks. “AI helps detect compromises, monitor digital footprints, and mitigate risks from stolen login data, especially regarding info-stealer malware and session hijacking. But, while AI brings significant automation and speed, organizations must maintain contextual understanding and human oversight to manage exposures effectively,” notes Norman Menz, CEO of threat exposure management (TEM) firm Flare.
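
On the detection side, one widely used building block is checking credentials against known breach corpora without exposing them in transit. The sketch below uses the public Have I Been Pwned range API’s k-anonymity model, in which only the first five characters of a password’s SHA-1 hash ever leave the client; it illustrates the general technique, not Flare’s product:

```python
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """How many times a password appears in known breach corpora.

    k-anonymity: only the first 5 hex characters of the SHA-1 hash
    are sent to the API; matching suffixes are compared locally.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as response:
        body = response.read().decode("utf-8")
    # Each response line looks like "<hash-suffix>:<count>".
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count.strip())
    return 0

print(breach_count("password123"))  # a large count: widely breached
```

Monitoring digital footprints for info-stealer dumps or hijacked sessions is far more involved, but the same principle holds: automate the lookup, keep humans in the loop for the response.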

Risks & regulatory tensions

The rise of AI in credentialing also brings significant policy friction. Under GDPR, tools that rely on biometric data, such as facial or voice recognition, face strict requirements around lawful basis, data minimization, and purpose limitation, yet many proctoring platforms depend on exactly these sensitive inputs.

Bias and fairness are also major concerns. AI tools trained on limited data can mistakenly flag honest test-takers, especially those from underrepresented groups.
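
One way teams quantify this concern, sketched below with made-up audit numbers, is to compare false-positive flag rates across demographic groups; a large gap signals disparate impact even when overall accuracy looks fine:

```python
from collections import defaultdict

def false_positive_rates(records):
    """records: (group, was_flagged, actually_cheated) tuples.

    Returns each group's share of honest test-takers wrongly flagged.
    """
    flagged = defaultdict(int)
    honest = defaultdict(int)
    for group, was_flagged, cheated in records:
        if not cheated:  # only honest candidates count toward the FPR
            honest[group] += 1
            flagged[group] += int(was_flagged)
    return {group: flagged[group] / honest[group] for group in honest}

# Hypothetical audit data: group B's honest candidates are flagged four
# times as often, a gap an aggregate accuracy figure would hide.
audit = ([("A", False, False)] * 95 + [("A", True, False)] * 5
         + [("B", False, False)] * 80 + [("B", True, False)] * 20)
print(false_positive_rates(audit))  # {'A': 0.05, 'B': 0.2}
```

Audits like this only work if platforms log enough ground truth to compute them, which is itself a policy question.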

As Evans stresses, transparency, secure data handling, and human oversight are what allow AI tools to earn trust while still meeting their security objectives.

There’s also the broader issue of digital sovereignty: if certification platforms or their data are hosted outside the EU, trust in Europe’s entire cybersecurity talent system could be weakened.

How startups & policymakers are collaborating

To bridge Europe’s cybersecurity skills gap responsibly, both startups and policymakers must take proactive steps. First, AI tools used in proctoring and certification should be subject to independent audits and certifications themselves, ensuring fairness, privacy safeguards, and algorithmic transparency. 

Public credentialing pipelines also need stronger backing, with EU and member states funding regional programs, labs, and partnerships between universities and trusted industry bodies to reduce reliance on global platforms. Clear regulatory guidance is essential, particularly around the lawful use of biometric data in exam security, including conditions for collection, retention, and deletion. 

For a truly mobile cyber workforce, cross-border mutual recognition of certifications is critical, so a credential earned in one country is trusted across the bloc. 

“Certified professionals increase adoption rates, streamline onboarding, and enhance customer confidence. Meanwhile, investors and VCs evaluating cybersecurity companies benefit from a workforce with verifiable, industry-recognised qualifications—de-risking their investments and ensuring long-term resilience,” states Evans.

Finally, investment in infrastructure and accessibility must not be overlooked; without reliable internet, affordable devices, and adequate support for candidates, AI-powered credentialing risks widening rather than narrowing digital inequalities.

What this means for the next-gen cyber professional

For young cyber professionals, this is a moment of opportunity, provided training is credible, certifications are recognized, and digital rights are protected. Startups in cybersecurity credentialing have a clear mission: to build trusted systems, embed AI ethically, and work hand in hand with policymakers so that credentialing becomes a foundation, not a loophole.

Europe’s cybersecurity skills gap is significant but not insurmountable. Certification programs, enhanced with AI, offer a route to scaling talent faster. Yet success depends on embedding trust, protecting privacy, and asserting European sovereignty in credential pipelines. If policymakers, educators, and tech innovators get the alignment right, Europe could emerge not just well protected but as a leader in ethical, high-trust cybersecurity credentials.

Featured image: FlyD via Unsplash