Security Leadership #18: Rocky Mountain Edition
The mountains are calling, and I must go… teach security leadership.
Posting from SANS Rocky Mountain 2025 in Denver, CO! In this issue: an extra helping of excellent research, presentations, and articles by my talented SANS colleagues. We’ve got productivity hacks with AI, Scattered Spider updates, automated risk analysis and reporting, a vulnerability in Anthropic’s MCP server, AI-enabled burnout reduction in the SOC, and a custom GPT for internal security communications and marketing.
🔥Must-Watch of the Week
Matt Edmondson's Freestyle AI talk
Matt is also here in Denver this week, and he’s giving an evening talk on AI that I’m looking forward to seeing. This freestyle talk explores how he uses AI to boost his productivity, featuring excellent examples, practical tips, and valuable tools. This talk kick-started my use of AI in my daily life. If you’ve thought about using AI for content creation or dabbling in vibe coding, this is a great primer.
🛡️Cyber Defense
Scattered Spider pivots to the insurance industry, by Matt Kapko
Scattered Spider has pivoted from their recent U.K. and U.S. retail attack spree to targeting insurance companies, according to the Google Threat Intelligence Group. The group, tracked as UNC3944, typically focuses on one sector at a time and has already impacted multiple U.S. insurance companies. Erie Insurance, a Fortune 500 company, discovered unusual network activity on June 7, and their systems remain offline, preventing customers from accessing online accounts. Aflac has also recently disclosed a breach that bears many of the hallmarks of Scattered Spider activity (though attribution hasn’t been confirmed).
John Hultquist warns that "the insurance industry should be on high alert, especially for social engineering schemes which target their help desks and call centers," noting this matches Scattered Spider's established attack patterns. If you're in insurance or work with insurance clients, immediately review your help desk procedures, implement additional verification steps for sensitive requests, and brief your teams on social engineering tactics. This group's track record suggests more attacks are imminent.
⚖️Policy and GRC
Architecting Data Analytics for Continuous Risk Management, by James Tarala
At the RSA Conference this year, my SANS colleague James Tarala presented on automating risk management data collection and analysis. This presentation provides practical guidance on aligning security safeguards with business objectives, while also identifying opportunities to automate risk measurement and reporting. If you’re looking to streamline and automate GRC, or have ever struggled with building meaningful risk reports, you’ll want to check this one out.
🤖 Emerging Tech
Security Advisory: Anthropic's Slack MCP Server Vulnerable to Data Exfiltration
This week, Embrace the Red published an advisory revealing that Anthropic’s widely used Slack MCP server is vulnerable to data exfiltration through “link unfurling.” In this scenario, attackers use prompt injection to trick AI agents into posting malicious links that leak sensitive data when Slack automatically crawls them for previews. The real kicker? Anthropic deprecated the server in May and explicitly stated that it would not be patching security vulnerabilities in unmaintained code. Security researcher Johann Rehberger notes that the risk is high whenever an AI agent uses a vulnerable server, has access to private data, and processes untrusted input, a combination he dubs “the lethal trifecta” (catchy).
Your immediate action items: audit your MCP server usage across the organization, and if you're running this Slack server, either patch it yourself by disabling link unfurling (unfurl_links: false, unfurl_media: false) or replace it with a maintained alternative. The broader lesson is that AI supply chain management needs to become a formal part of your security program. Track which AI tools your teams are deploying, ensure they're from vendors committed to long-term support, and have a plan in place for when popular AI components are abandoned.
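If you patch it yourself, the core of the mitigation is simply setting both unfurl flags to false on every outbound Slack message. Here's a minimal sketch of what that looks like; the function name, channel, and message text are illustrative, but unfurl_links and unfurl_media are real parameters of Slack's chat.postMessage API:

```python
# Minimal sketch: build a Slack chat.postMessage payload with link
# unfurling disabled, so Slack never crawls agent-posted URLs for
# previews (the exfiltration channel described in the advisory).
# build_post_payload is an illustrative helper, not part of any SDK.

def build_post_payload(channel: str, text: str) -> dict:
    """Return a chat.postMessage payload with unfurling turned off."""
    return {
        "channel": channel,
        "text": text,
        "unfurl_links": False,   # no automatic previews for URLs
        "unfurl_media": False,   # no automatic previews for media links
    }

payload = build_post_payload("#soc-alerts", "Case summary: ...")
print(payload["unfurl_links"], payload["unfurl_media"])  # False False
```

You would send this payload with your HTTP client or SDK of choice; the key point is that the two flags travel with every message the agent posts, not just some of them.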
Trend Micro also posted about the vulnerability on their blog, which you can read here.
👤The Human Element
How AI-Enabled Workflow Automation Can Help SOCs Reduce Burnout, by John Hubbard
My friend and LDR551 co-author John Hubbard penned an excellent piece for The Hacker News on leveraging automation to reduce burnout in security operations teams. According to SANS' latest SOC Survey, most teams still operate with just 2-10 full-time analysts, while their coverage scope has expanded across on-premises, cloud, endpoints, and SaaS platforms.
The real cause of burnout isn't the volume of work; it's the repetitive, context-switching nightmare of chasing alerts across fragmented tools and manually piecing together investigation timelines. In this article, John discusses how AI-powered automation can serve as a contextual aggregator, consolidating telemetry, threat intelligence, and asset data into enriched case summaries rather than raw event logs. SOC leaders can also use AI to surface performance trends, identify skill gaps early, and provide targeted coaching, turning burnout prevention from reactive guesswork into proactive team management.
The Security Marketing Sherpa GPT by Lance Spitzner
Marketing and communications play a significant role in security. It’s easy to get used to technical jargon and forget that half the words we use are meaningless to many of the people we rely on for effective defense! Fortunately, my SANS colleague Lance Spitzner has released the Security Marketing Sherpa, a custom GPT designed to help you craft fun, engaging security emails that your workforce will actually pay attention to. I’ve been experimenting with it, and it’s fantastic!