Enoki Launches AI Security Platform For Enterprise AI Agents

Growth · AI · Cybersecurity

Anna Lebedeva

Chief Editor & Co-founder at The Top Voices

May 11, 2026 · 1 min read

Image credit: Enoki

Key Takeaways:

• Enoki developed an autonomous AI hacker agent for testing AI applications
• Platform combines attack simulation, runtime monitoring, and compliance mapping
• Startup targets enterprise AI agents used in customer service, finance, HR, and coding workflows

Enoki tests AI applications with personalised attacks, monitors them in production and maps regulatory requirements

AI apps like chatbots and AI agents often carry serious security risks, including data leaks. Classic code scanning techniques are necessary, but not sufficient: language models are unpredictable by nature. To close that gap, Ghent entrepreneurs are launching Enoki, a security startup that uses an autonomous AI hacker agent to test every AI application.

Belgium is one of Europe's frontrunners in AI adoption. At the enterprise level, three in four companies now actively deploy AI in the workplace (Acerta, 2026). Two years ago, chatbots and basic AI features were still the norm. Today, autonomous AI agents are running in production: systems that connect language models to databases, code and external services. That rapid shift is creating a security problem. These applications often turn out to be remarkably vulnerable, and they can't be protected the traditional way.

Unpredictable

The UK AI Security Institute already registered over 700 AI incidents in 2026, five times more than the year before (The Guardian, 2026). Gartner predicts that by 2028, one in four enterprise GenAI applications will experience at least five security incidents per year (Gartner, 2026). In March this year, an internal AI chatbot at McKinsey leaked client data following a targeted attack. Around the same time, a Meta AI agent shared internal data on a developer forum.

Conversations with developers, CISOs and AI leads at around a hundred scale-ups keep surfacing the same bottleneck: there is no scalable way to thoroughly test AI agents for security and robustness. "Static code scans find vulnerabilities in code. The problem is that in AI applications, the risk isn't so much in the code itself but in the behaviour of the language model, and that behaviour is unpredictable," says Sander Noels, co-founder of Enoki. "With a single well-crafted instruction, an AI agent can leak data, misuse tools or take actions that were never programmed into it. Yet the EU AI Act requires high-risk AI systems to demonstrably withstand manipulation and attacks."

Autonomous hacker agent

From that observation, Enoki built a platform that tests the model itself, not just the code. AI applications are subjected to hundreds of autonomously generated attacks, ranging from prompt injection to data theft.

The Enoki platform works through three modules. The first (pre-deployment testing) tests AI apps before they go live. The second (runtime protection) monitors the application in production: every input and output is scanned for vulnerabilities. The third (compliance & inventory) maps all AI solutions within a company and charts the risks against frameworks like the EU AI Act, NIST AI RMF or ISO 42001.

"A customer service bot can at most give wrong answers. An AI agent that executes payments, reads contracts or writes code on behalf of a customer can cause real damage," Noels continues. "Enoki is built for the new generation: AI agents that directly control tools, databases and external services through standardised interfaces. That's exactly where the attack surface grows exponentially."

An €86 billion market

Enoki is led by Audrey Stampaert, formerly co-founder of HR scale-up Mbrella, Joachim Roelants, previously active at Fortino Capital and startup studio StarApps, and Sander Noels, postdoctoral researcher in AI and data analysis at Ghent University. The startup is currently bringing its first design partners on board and preparing a funding round.

"The AI cybersecurity market is expected to reach €86 billion by 2030. Companies are experimenting heavily with AI applications, but are often insufficiently aware of the accompanying risks. We focus on European B2B SaaS companies and large enterprises that build their own LLM-based AI agents: customer service bots, financial assistants, HR tools, code assistants. For those companies, speed to market is everything. One security incident, however, can cause reputational damage and operational chaos that instantly wipes out all the gains from a new feature. Enoki is building the security infrastructure for the age of AI agents," concludes Stampaert.

