Introducing DECEIVE: A Proof-of-Concept Honeypot Powered by AI
Today, SURGe by Splunk is proud to unveil DECEIVE (DECeption with Evaluative Integrated Validation Engine), a proof-of-concept open-source honeypot that demonstrates the potential of using AI to easily create new cybersecurity tools and solutions. While DECEIVE is not a production-grade tool, it illustrates how AI can enable approaches to cybersecurity problems that might otherwise not have been feasible.
This project was also an experiment for us. We wanted to learn what it would take for security teams to build their own AI-enabled solutions. We designed DECEIVE with this learning process in mind, and we hope it inspires others to explore similar integrations.
Let's talk about what makes DECEIVE special.
AI-Generated High-Fidelity Honeypot
Traditional high-interaction honeypots require significant effort to simulate realistic environments: installing operating systems, configuring user accounts, and seeding realistic but fake data all take time. DECEIVE leverages AI to handle all of this dynamically. By simulating an entire Linux server via SSH, DECEIVE provides attackers with an authentic-feeling target without painstaking setup. All you need to do is write a prompt describing the type of system you'd like to simulate. For example:
You are a video game developer's system. Include realistic video game source and asset files.
The AI backend ensures that system interactions feel natural and contextually appropriate, drastically lowering the effort required to deploy a realistic honeypot while maintaining high fidelity.
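To make this concrete, the core of such an emulation is conceptually just a chat loop. The sketch below is a minimal illustration of the idea, not DECEIVE's actual code; it assumes the openai Python package, uses a placeholder model name, and omits all of the real SSH plumbing:

# Minimal sketch (not DECEIVE's actual code): keep a running chat
# history whose system prompt describes the machine to emulate, and
# let the LLM play the part of the shell.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a Linux server belonging to a video game developer. "
    "Respond to each shell command with realistic output only; "
    "never explain yourself and never break character."
)

history = [{"role": "system", "content": SYSTEM_PROMPT}]

while True:
    command = input("$ ")
    history.append({"role": "user", "content": command})
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model works
        messages=history,
    )
    output = response.choices[0].message.content
    history.append({"role": "assistant", "content": output})
    print(output)

The real project wraps this loop in a full SSH server with authentication and logging, but as noted later in this post, an SSH session already looks a lot like an LLM chat conversation.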
Session Summaries Powered by AI
DECEIVE goes beyond traditional honeypots by using AI to analyze and summarize attacker behavior. When an SSH session completes, DECEIVE automatically generates:
- A session summary describing the commands executed and their potential intent.
- An evaluation of the session’s nature, classifying it as BENIGN, SUSPICIOUS, or MALICIOUS.
This analysis is captured in structured JSON log files, along with a full record of the user's commands and their simulated outputs. This reduces the manual effort needed to sift through sessions and identify which are most interesting from a security perspective.
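As a hedged illustration of how that analysis step might work (the prompt and field names here are ours, not DECEIVE's actual schema), the completed transcript can simply be handed back to the model with a request for structured output:

# Hypothetical sketch of the post-session analysis step: send the
# transcript back to the LLM and ask for a JSON verdict. The field
# names are illustrative, not DECEIVE's actual log schema.

import json
from openai import OpenAI

client = OpenAI()

def judge_session(transcript: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": (
                "You are a security analyst. Given an SSH session "
                "transcript, return JSON with two keys: 'summary' (one "
                "paragraph describing the activity and its likely "
                "intent) and 'judgement' (one of BENIGN, SUSPICIOUS, "
                "or MALICIOUS)."
            )},
            {"role": "user", "content": transcript},
        ],
    )
    return json.loads(response.choices[0].message.content)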
Proof of Concept for Expanding Protocol Coverage
While this version of DECEIVE focuses on SSH, the approach is adaptable to protocols like HTTP or SMTP; API endpoints would also be good candidates for simulation. This makes it possible to simulate a wide range of environments and better understand attacker behavior across different attack surfaces. It also enables rapid deployment of new honeypots that simulate specific vulnerabilities simply by updating the AI prompt. This would be useful to security researchers and blue teams trying to understand and respond to the latest vulnerabilities, especially in rapidly evolving situations where full details of a vulnerability may not yet be known.
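To sketch how little would need to change for HTTP (a thought experiment, not part of the DECEIVE codebase; the prompt, port, and handler are all assumptions), each incoming request simply becomes the user message instead of a shell command:

# Hypothetical sketch of the same pattern applied to HTTP: relay each
# request to the LLM, which plays the part of a vulnerable web app.

from http.server import BaseHTTPRequestHandler, HTTPServer
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are an outdated WordPress server. Respond to each HTTP "
    "request line with a plausible HTML body only; never break "
    "character."
)

class HoneypotHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": f"GET {self.path} HTTP/1.1"},
            ],
        )
        body = response.choices[0].message.content.encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("0.0.0.0", 8080), HoneypotHandler).serve_forever()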
A Tool for Learning and Experimentation
DECEIVE isn’t just about fooling attackers—it’s also about understanding what it takes for security teams to integrate AI into their workflows. By building DECEIVE, we explored:
- The complexity of connecting AI models with security applications. This turned out to be much easier than expected. Partly, this was due to the nature of the problem (SSH sessions are already very similar to typical LLM chatbots), but there are also a lot of fairly mature LLM libraries and APIs available for common programming languages. Even without a lot of experience coding AI apps, we (and here I specifically mean "I," a single cybersecurity engineer who's not a professional developer) were able to get a functional prototype up within about two days of work, including all the SSH protocol pieces.
- The challenges in crafting prompts and responses that simulate a convincing environment. If any part of this process was especially challenging, it was prompt creation: the emulation magic relies heavily on the quality of the prompt provided. My initial versions took just a few minutes to write and got us 80% of the way there. That last 20%, though, took a lot of tweaking to ensure the simulated system behaved realistically enough to potentially fool an attacker. Fortunately, plenty of free prompt engineering guidance is available online, along with prompt development tools to help out. (An example of the kind of constraint this tuning involved appears after this list.)
- The role of AI in interpreting and contextualizing attacker behavior. Session summaries and judgments weren't part of the initial vision for DECEIVE. Once the system started coming together, though, we began to look past the "just make it work" phase and identify additional opportunities to leverage the LLM capabilities we already had. Most honeypot analysts have to rely on complicated dashboards to determine whether they've caught anything especially novel or interesting, and that review can be expensive and time-consuming. Having the AI do it for us is far more scalable, and it lends itself not only to research use cases but also to automated detection and alerting for the SOC.
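For a concrete sense of the prompt tuning mentioned above, here is the flavor of constraint that later revisions tend to need (an illustrative example, not the prompt that ships with DECEIVE):

You are a Linux server used by a video game developer. Include realistic video game source and asset files. Respond only with what the system itself would print: never explain your output, never refuse a command, and stay consistent with any files, users, and processes you have already shown earlier in the session.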
Responsible Usage
We want to emphasize that DECEIVE is a proof of concept, not a production-grade solution, and certainly not a product supported by Splunk. While it’s a powerful demonstration of what’s possible, we have not extensively tested it for security vulnerabilities. Though the emulated nature of the SSH backend provides a substantial amount of protection against attackers using the honeypot for Evil (there's no real system executing anything and it's not possible to create or accept network connections from the real world), there is always the possibility of flaws in the honeypot code itself. Exercise caution when deploying DECEIVE in a potentially hostile environment.
How To Get Started
DECEIVE is open-source and ready for experimentation. Here’s how you can try it:
- Clone the repository from GitHub.
- Follow the setup instructions in the README and the documentation in the SSH/config.ini.TEMPLATE file to create SSH keys, configure users and passwords, or change the backend LLM (any OpenAI, Google, or AWS Bedrock model will work).
- Set any environment variables your LLM backend requires (e.g., OPENAI_API_KEY for the default GPT-4o backend).
- Modify the SSH/prompt.txt file to describe the kind of system you'd like to emulate.
- Run it in a lab environment to see how it simulates interactions and generates detailed session summaries.
By default, the system will listen on port 8022/TCP for incoming SSH connections. On a UNIX or Linux system, you can log in with a command like the following:
ssh guest@localhost -p 8022
Note that the config file specifies that the guest account has an empty password, so you won't be prompted to enter one. Set one in the config file if you like.
What’s Next?
DECEIVE is an exciting part of our journey into AI-enabled security solutions. By building and sharing this project, we hope to inspire others in the cybersecurity community to explore how AI can address challenges that were previously considered unsolvable or infeasible.
DECEIVE shows that by combining AI with traditional techniques, we can create smarter, more adaptable solutions that lower the barrier to entry for deploying advanced deception technology and for building innovative, AI-powered security tools. Join us in exploring this exciting frontier!