Hello, everyone! Welcome to the Splunk staff picks blog. Each month, Splunk security experts curate a list of presentations, whitepapers, and customer case studies that we feel are worth a read. You can check out our previous staff picks here. We hope you enjoy.
AI applications open new security vulnerabilities by Taimur Ijlal for The Overflow Blog
"With the increasing prevalence of AI technologies in applications across multiple industries, the attack surface is growing at the same velocity. This article highlights Inference, Poisoning, and Evasion attacks with current, real-world examples. These attack methods, and others that have not yet been discovered, will only increase with time, and cybersecurity teams need to maintain awareness."
SVB’s collapse is a scammer’s dream: Don’t get caught out by Phil Muncaster for WeLiveSecurity
"It’s no secret that the U.S. banking industry experienced a major crisis this month and, as you can imagine, attackers are trying to seize this opportunity for their own financial gain. This blog by Phil Muncaster briefly discusses the collapse of Silicon Valley Bank (SVB) but, more importantly, explores the various scams that have (so far) been identified related to SVB. Furthermore, there is a special note on why, though not unusual, these particular scams are more lucrative than normal. Reasons include SVB’s huge, global reach, as well as the seemingly nonexistent communication customers have had with their failed lender. This likely leaves many individuals desperate to speak with someone they assume will play a supporting role in this crisis, and unfortunately, that desperation makes them more vulnerable. The article goes on to include deceptive domains and other behavioral indicators relating to the SVB scams that could be extremely useful both in preventing these attacks and in proactively hunting for any signs of them. If you’re an analyst at a startup or are in any way SVB-adjacent (I just made that term up), this would certainly be something to keep an eye out for. All in all, while scams related to major news events are certainly not new, they are still impactful."
William Van Duynhoven
Ransomware group posts Minneapolis Public Schools data to dark web by Jonathan Greig for The Record
"Tactics continue to evolve when it comes to ransomware. In this recent example, threat actors targeted the largest school system in Minnesota, which serves approximately 34,500 students. Tactics included a 51-minute ransom video that shared screenshots of the stolen data. Aside from home addresses, payroll, and contact information, the reported data leak covers sensitive information such as student grades, health records, disciplinary records, civil rights investigations, special education, district financial information, and more. As our collective data footprints continue to grow, so does the value of this data and the importance of its safeguarding."
AI-Generated Voice Deepfakes Aren’t Scary Good—Yet by Lily Hay Newman for WIRED
"AI is so hot right now. This WIRED article does an excellent job of discussing why AI voice deepfakes are not as big of a threat… right now. WIRED spoke with researchers and industry leaders about the current state of AI voice deepfakes and what we might expect in the future."
Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears by Robert Lemos for Dark Reading
"It turns out that ChatGPT and other Large Language Models (LLMs) have a data security problem. By asking carefully crafted questions, an attacker may be able to extract the data the LLM has ingested. These are called 'training data extraction attacks' or 'exfiltration via machine learning inference.' We need to have some way to make sure that data stays safe. For example, a doctor might paste in a patient's data, a lawyer might use ChatGPT to help create a brief with sensitive material, a banker might ... you get the idea. Oh, oh! We have another attack surface to defend!"
CISA Alerts on Critical Security Vulnerabilities in Industrial Control Systems by Ravie Lakshmanan for The Hacker News
"With Operational Technology (OT) environments becoming more connected, and therefore more vulnerable, over the last few years, we need to bridge the gap between OT, IT, and security departments to mitigate the risks posed by critical vulnerabilities on our most critical systems."
Sour Grapes: stomping on a Cambodia-based “pig butchering” scam by Sean Gallagher for Sophos X-Ops
"Taking a break from photographing the pithy koans of ornithological threat actors, Sean Gallagher is back with an incredible update to his earlier blog on 'pig-butchering.' Known in China as sha zhu pan (杀猪盘), pig butchering is a confidence scheme that leverages fake mobile apps and personal messages to lure victims into investment schemes. One group detailed in the post boasted of netting '$3 million U.S. in cryptocurrency over a five-month period.' With such an ephemeral infrastructure and an increasing global reach, I'm hoping more folks take notice of this threat."
Atomics on a Friday - Purple March Madness Ep 3 by Michael Haag, Paul Michaud, and Anton Ovrutsky
"Atomics on a Friday is a livestream series on security hosted by Paul Michaud and Splunk's Michael Haag. Their latest episodes have focused on how different security practitioners take a threat report and make it actionable, with this episode covering the Purple Team perspective. The guest, Anton Ovrutsky, walked through pulling the various TTPs out of the report and demoed how atomic tests can be used to execute a Purple Team exercise very easily. It was great to hear a technical discussion by practitioners doing the work in the field. Check it out!"
What Is ChatGPT Doing … and Why Does It Work? by Stephen Wolfram
"Professor Feynman once said, 'If you cannot explain something in simple terms, you don't understand it.' Stephen Wolfram, one of Dr. Feynman's students, has shown that he truly understands ChatGPT, and shares that knowledge with the world, in simple terms, in this 96-page explanation. The security implications of ChatGPT are just beginning to be discovered, so knowledge of how and why this model works is necessary for further exploration. Not only is this an explanation of ChatGPT, but it is also a great primer for machine learning models in general."
CISA and NSA Enhance Security Framework With New IAM Guide by Alessandro Mascellino for Infosecurity Magazine
"There is new guidance from the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the National Security Agency (NSA) on how organizations should address identity and access management (IAM). This article specifically calls out multiple manufacturing and critical infrastructure entities within the U.S. and their recent breaches. In the guide, CISA and NSA highlight several attacks in recent years that leveraged vulnerabilities in IAM products and implementations to target critical infrastructure."
(@audrastreetman / @email@example.com)
The FBI’s BreachForums bust is causing ‘chaos in the cybercrime underground’ by AJ Vincens for CyberScoop
"The FBI recently arrested a 20-year-old man in New York believed to be 'pompompurin,' the administrator of BreachForums, an infamous cybercrime forum. In a statement, the DOJ said that the FBI and Dept. of Health and Human Services conducted a 'disruption operation' that caused BreachForums to go offline. Investigators linked the suspect to the pompompurin persona based on apparent operational security failures detailed in the FBI affidavit. This article examines how the takedown could impact threat researchers and the larger cybercrime ecosystem."