Feature Overview
Splunk’s AI Assistant in Security leverages generative AI to accelerate detection, triage, investigation, response, and automation, driving faster MTTR (mean time to respond) and safeguarding your organization's digital assets. Key capabilities include summarizing findings, generating contextual SPL and reports, enabling conversational interactions, creating skeletons and blocks for playbook automation, and providing workflow validations and recommendations on which blocks to use.
Splunk’s AI Assistant in Security operates on the AI Platform Service, a multi-tenant cloud service hosted within Splunk Cloud Platform. This service provides GPU (graphics processing unit) resources to process and generate responses to your prompts. All AI computations are handled by Splunk’s AI Platform Service, with no AI workloads running on your own search head.
Model Choices
Splunk provides the following options so that customers can balance quality, speed, and cost. Note that these options do not currently apply to Splunk SOAR or Splunk Attack Analyzer.
- Let Splunk choose the best model for your use cases - In this option, Splunk harnesses a combination of Splunk-hosted models and frontier models hosted by a trusted cloud provider to deliver the highest-quality AI response.
- Limit to Splunk-hosted models only - In this option, all AI requests are routed to a Splunk-hosted model within Splunk Cloud Platform, ensuring that your data (such as telemetry and logs) remains entirely within Splunk's environment. This option is intended for customers with strict compliance requirements or concerns about sharing data with third-party frontier models. However, note that this configuration might result in reduced output quality for certain AI capabilities.
Model evaluation and performance
AI Assistant in Security is evaluated with an end-to-end protocol: the assistant is given a diverse test set of questions representative of those our users are likely to ask, and its answers are compared against the correct answers. The AI Assistant in Security consistently improves its performance through iterative refinements to prompt engineering, tool descriptions, and workflow design, so it can drive immediate value in real-world scenarios.
Data sources for model training
Splunk might leverage customer prompts, responses, underlying logs, and feedback to train Splunk-hosted models, in accordance with the Specific Terms for Splunk Offerings and Documentation, to improve our offerings, unless customers opt out of this data sharing in the Splunk Enterprise Security settings. Note that participation in the alpha release/private preview program requires you to opt in to sharing the data described above.
Data privacy and security
AI applications use data, and sometimes this data contains personal information. When AI processes personal information, privacy controls must be designed into the supporting technology to ensure there is a proper legal basis for processing and that the use is purpose-aligned, proportional, and fair. These controls must be maintained throughout the lifecycle of both the data and the solution.
When processing personal information, Splunk is committed to following the principles set forth here: Trust Portal - Cisco, which align with applicable international privacy laws and standards.
AI systems must be resilient and protected from malicious actors through secure development lifecycle controls. Protection against security threats includes testing the resilience of AI systems against cyberattacks, sharing information about vulnerabilities and cyberattacks, and protecting the privacy, integrity, and confidentiality of data.
Splunk leverages our existing expertise by applying security controls that improve attack resiliency, data protection, privacy, threat modeling, monitoring, and third-party compliance. See more about our Product Security here: Splunk’s Security Addenda | Splunk.
Fairness
Results from the AI Assistant in Security are unique to each customer and should be reviewed by a human for accuracy and fairness prior to use. Note that there are risks associated with using alpha and beta versions in your production environment.