

Build or Buy? Deciding the Best Path for Your Next AI Cybersecurity Tool

How to weigh the true costs of building or buying your AI solution

Why wait around for the perfect AI tool that protects your data, reduces alert fatigue, or speeds up incident response? With accessible AI frameworks and low-code options, cybersecurity teams can now take matters into their own hands and build custom in-house tools tailored to their unique environments and threat landscapes.

 

According to Splunk’s State of Security 2024, leading security teams, many of whom are already putting AI to work, detect threats 13 days faster and see a 38% decrease in mean time to detect (MTTD) compared to their peers. But before your team dives in, here are three critical insights every CISO should know about building and deploying an AI solution.

 


 

Unlocking the doors with simplified beginnings and sophisticated solutions

Creating AI-enabled solutions no longer requires you to be a software development whiz, thanks to open source frameworks. These tools have transformed implementation from a complex specialty into something akin to assembling building blocks. According to Splunk’s State of Security 2024 report, 91% of respondents are already using public generative AI tools. These solutions remove much of the underlying complexity and offer powerful components that can be combined to create sophisticated security tools in scripting languages your team already knows, like Python or JavaScript. A security analyst with moderate programming experience can use LLMs and AI coding assistants to build capabilities that once required a specialized development team.
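
To make that concrete, here's a minimal sketch of the kind of building-block assembly described above: a few lines of Python that hand a raw alert to an LLM and get back an analyst-friendly triage summary. The OpenAI client, model name, and prompt are illustrative assumptions; any hosted or local model your team already uses would slot in the same way.

```python
# Minimal alert-triage helper: hand a raw alert to an LLM, get back a
# short, analyst-friendly summary. Model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_alert(raw_alert: str) -> str:
    """Return a brief triage summary and a suggested next step."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; use whatever your team runs
        messages=[
            {"role": "system",
             "content": "You are a SOC assistant. Summarize this alert in "
                        "three sentences and suggest one next step."},
            {"role": "user", "content": raw_alert},
        ],
    )
    return response.choices[0].message.content

print(summarize_alert('{"rule": "brute_force_ssh", "src": "203.0.113.7", "count": 412}'))
```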

 

While analysts don’t need to be AI experts, they will need strong security and programming skills to review and test the tools they create. Has the team verified that the tool handles data securely and in compliance with the latest regulations, tested its performance, and assessed it thoroughly to ensure it doesn’t introduce new vulnerabilities? AI can dramatically augment human intelligence, but keeping humans in the loop ensures the best outcomes.

 

 

For a more reliable tool, dial down the project scope

The speed and cost advantages of AI-assisted development are compelling, but the scale of a project will also influence its odds of success. 

 

Software development speeds up considerably when AI enters the picture. What might have taken weeks of coding can often be accomplished in days or even hours with effective AI assistance. This acceleration means security teams can rapidly prototype solutions to emerging challenges without waiting for procurement cycles or vendor updates.

 

Here's a critical insight: the smaller and more focused your potential solution or tool, the more you can rely on AI to help you build it, and the more likely it is to be effective. As projects grow in complexity, AI tools become increasingly prone to hallucinations (when a generative AI model produces incorrect, misleading, or fabricated information), missed instructions, and planning failures. Small, targeted projects with clear objectives are more likely to yield results with minimal correction. A few examples illustrate how different project types tend to yield different levels of reliability:

 

  • High reliability: A tool that consumes CVE reports and recommends prioritized response actions based on the details of your specific environment (a sketch of this idea follows the list). Reports from this tool might be good enough to automatically email recommendations to security leadership.

  • Moderate reliability: Analyzing an input binary for unique indicators and automatically creating detection rules for your platform(s) of choice. Rather than production-ready detection content, these rules should be considered "first drafts" to be reviewed and sharpened by humans.

  • Lower reliability: A comprehensive incident management platform with dozens of integrations and complex decision logic. Although pieces of this could very well fit into the "high" or "moderate" reliability categories, the overall system is complex enough that we'd hesitate to recommend a project like this without an actual development team behind it. 
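
To ground that "high reliability" example, here's a hedged sketch of the CVE-prioritization tool. The prompt, model name, and hard-coded environment summary are all illustrative placeholders, not a finished design; a real version would pull environment details from your asset inventory or CMDB.

```python
# Sketch of the CVE-prioritization idea above. Prompt, model name, and
# environment summary are illustrative placeholders, not a finished design.
from openai import OpenAI

client = OpenAI()

def prioritize_cve(cve_text: str, environment: str) -> str:
    """Ask an LLM to rank response actions for a CVE against our environment."""
    prompt = (
        "Given this CVE report and our environment, list the top three "
        "response actions in priority order with one-line justifications.\n\n"
        f"CVE report:\n{cve_text}\n\nEnvironment:\n{environment}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# In practice, cve_text would come from an advisory feed and the
# environment description from your CMDB export.
print(prioritize_cve(
    cve_text="<paste CVE advisory text here>",
    environment="400 Ubuntu 22.04 servers; OpenSSH 9.3; one internet-facing bastion",
))
```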

 

The key to success is finding the optimal scope that balances ambition, effort, and reliability.

 

 

Build it your way…

Perhaps the most compelling reason to explore custom AI security tools is the opportunity to experiment with minimal risk. With reduced development time, lower costs, and simplified implementation, security teams can now try out solutions that weren’t practical just a few years ago. If the solution succeeds, you’ve created a custom tool for your specific security needs, one that will be more cost-effective in the long run.

 

The journey from experimentation to production often follows this progression:

 

  1. Identify a specific security challenge amenable to AI assistance.

  2. Create a minimal proof-of-concept (perhaps using AI development tools). This includes defining product requirements, validating compatibility with the rest of the organization’s infrastructure, and verifying compliance with regulations.  

  3. Test and iterate in a controlled sandbox-like environment. 

  4. Implement appropriate safeguards and validation checks, including penetration tests, vulnerability scans, and code reviews (a simple example follows this list).

  5. Deploy into production with appropriate monitoring.

  6. Gradually expand capabilities. After creating a minimum viable product, you can add additional features.
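
As a taste of the safeguards in step 4, here's a small Python validation gate: the model is asked to respond in JSON, and nothing downstream consumes the answer until it parses cleanly and passes basic sanity checks. The field names are illustrative assumptions.

```python
# Step 4 in miniature: never act on raw LLM output. Require JSON, parse it,
# and sanity-check the fields before anything downstream consumes the result.
# Field names ("verdict", "summary") are illustrative.
import json

ALLOWED_VERDICTS = {"malicious", "suspicious", "benign"}

def validate_verdict(llm_output: str) -> dict:
    """Parse an LLM verdict and reject anything malformed or unexpected."""
    data = json.loads(llm_output)  # raises JSONDecodeError on malformed output
    if data.get("verdict") not in ALLOWED_VERDICTS:
        raise ValueError(f"unexpected verdict: {data.get('verdict')!r}")
    summary = data.get("summary")
    if not isinstance(summary, str) or not summary.strip():
        raise ValueError("missing or empty summary")
    return data

# Anything that fails validation should be routed to a human review queue,
# not silently dropped or passed along.
```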

 

A good example of a tool created this way is DECEIVE, an SSH honeypot that SURGe, Splunk’s strategic security research team, developed for their own use. Creating high-fidelity honeypots can be resource-intensive and time-consuming, and sorting through the data command-by-command to find sessions of interest can be a slow, difficult process. DECEIVE addresses these common pain points: it uses an LLM to emulate a Linux command shell, summarize the activity it observes, speculate about the attacker’s intent, and judge the activity as malicious, suspicious, or benign. From conception to first working demo took only about three days of a single security analyst’s time.
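
To show how little code the core trick requires, here's a stripped-down sketch of an LLM role-playing a shell, with the conversation history keeping the illusion consistent across commands. This isn't DECEIVE's actual implementation (the real tool is built on LangChain and wraps the model in a genuine SSH network service), and the model name and prompt below are placeholders.

```python
# The core of an LLM honeypot, reduced to a loop: the model role-plays a
# Linux host, and the accumulated history keeps its answers consistent.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system",
            "content": "You are an Ubuntu server named web01. Reply ONLY "
                       "with realistic output for each shell command."}]

while True:
    command = input("$ ")
    history.append({"role": "user", "content": command})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print(reply)
```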

 

DECEIVE illustrates the power of AI experimentation in three ways:

 

  1. Rapid prototyping and development: Using the LangChain framework removed the need for most AI-specific expertise; the initial capability took just three days to code by hand, most of which was spent creating the SSH network service rather than the AI backend. Once we adopted Copilot's code generation, development sped up even further: significant improvements took less than an hour, including testing.

  2. Feature parity with existing solutions, but easier implementation: Offloading all the simulated "Linux" pieces to AI makes it light work to set up a honeypot emulating almost any kind of Unix or Linux system, something that requires significantly more effort with traditional solutions. 

  3. AI-enabled innovation: Using an LLM also allowed us to expand the boundaries of what was possible with traditional deception solutions. Building on AI's strengths in summarization and instruction following, DECEIVE's session summaries and judgments are valuable features that simply would not have been possible without an LLM.


… Or just buy something ready-made?

Of course, organizations can simply purchase a tool, especially for a relatively common use case like incident response or digital forensics. A ready-made tool’s performance is usually already tested and proven, and its costs (subscription and licensing fees) are more predictable than those of building your own.

 

When you buy from a vendor, you also get the benefit of their full ecosystem: field teams, professional services, and technical support — whereas creating your own tool means you’d have to allocate personnel to monitor and maintain it. This can be critical if your internal resources are already stretched thin or you’re under pressure to show progress quickly.

 

But if you need a solution that doesn’t exist just yet, or the only available solutions are out of your budget, your team might as well try the DIY path. AI will have the most benefit when it’s tailored to your organization’s specific needs and core workloads, and most commercial solutions out there can’t be customized to that degree. 




The democratization of AI development, including the advent of AI code creation tools, provides a prime opportunity for security teams to create tailored solutions to defend the organization. While this still requires a foundation of programming skills and security expertise, specialized AI knowledge and extensive development resources are no longer necessary in many cases.

 

By focusing on well-scoped projects, tapping into existing programming skills, and embracing an experimental mindset, security teams can quickly develop AI-powered tools that provide unique capabilities to their security operations. The barrier to entry has never been lower.

 

 

 

Stay ahead of the evolving landscape of AI by subscribing to the Perspectives newsletter.
