Tabletop Exercises for AI Security Incidents 

May 20, 2025

Many businesses are using AI tools like Microsoft Copilot, ChatGPT, and Jasper without clear safeguards. If your business hasn’t practiced an AI breach scenario yet, this is your wake-up call.

Most Businesses Are Using AI Blindfolded

You’ve installed Copilot. Your team’s using ChatGPT. Someone in marketing has a subscription to Jasper. But no one has asked the most important question:

“What do we do if this goes wrong?”

You probably have antivirus software. Maybe a firewall. Possibly even cybersecurity insurance, though that’s getting harder to afford or renew. But if you’re like most small and mid-sized business owners, your team has never practiced responding to an AI incident. You assume your IT team’s got it covered, but they may be just as uncertain as you are.

“You don’t have a choice anymore. Everyone will use AI. The question is how to use it responsibly.” — Bob Miller, INGRAIN AI™ Mastermind

AI risk is not future talk

It’s already here. Bob Miller, COO of a top-tier managed service provider and founder of irgame.ai, laid it out plainly: AI isn’t just a new tool. It introduces new vulnerabilities that no antivirus or phishing simulation can catch.

From prompt injection buried in images… to data leaks via Copilot misconfigurations… to model theft and inversion attacks that expose your business logic. Most teams aren’t ready for any of it.

“Most small businesses I talk to? They have no real controls in place. Not even public companies.” — Bob Miller

The only way to know what your team will do in that moment is to practice it. 

Why AI Security Risks Often Go Unnoticed

The Invisible Attack Surfaces You’ve Already Installed

Imagine this: you get an email that looks like a boring vendor update. You click it, skim it, and delete it.

What you don’t realize is that inside that email (in the image, the formatting, or even the code behind it) is a hidden message. Not for you, but for your AI assistant.

And because that AI tool is logged in under your credentials, it follows the instructions without hesitation. Just like that, it emails out company reports, exposes your file structure, or worse… and no alarms go off.

“It’s not going to be seen as a traditional breach. The AI is doing what you told it to do… or what it thinks you did.” — Michael Hasse, ITwerx
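
If you want to make this concrete for your team, here’s a minimal sketch of the kind of pre-delivery scan that can flag hidden text before it ever reaches an assistant’s context window. The patterns and phrases are illustrative assumptions, not a complete defense:

```python
# A minimal sketch of scanning inbound email HTML for hidden text that
# could carry a prompt injection. Patterns and phrases are illustrative
# assumptions; a real pipeline would layer more checks on top.
import re

# Heuristics for text a human won't see but an AI assistant may ingest.
HIDDEN_TEXT_PATTERNS = [
    r"font-size\s*:\s*0",          # zero-size text
    r"display\s*:\s*none",         # hidden elements
    r"color\s*:\s*#?fff(fff)?\b",  # white-on-white text
    r"<!--.*?-->",                 # HTML comments
]

# Phrases typical of injected instructions aimed at an assistant.
INSTRUCTION_PHRASES = [
    "ignore previous instructions",
    "disregard the above",
    "forward this to",
    "do not tell the user",
]

def scan_email_html(html: str) -> list[str]:
    """Return findings worth a human look before the message reaches
    any AI assistant's context."""
    findings = []
    for pattern in HIDDEN_TEXT_PATTERNS:
        if re.search(pattern, html, re.IGNORECASE | re.DOTALL):
            findings.append(f"hidden-text marker: {pattern}")
    lowered = html.lower()
    for phrase in INSTRUCTION_PHRASES:
        if phrase in lowered:
            findings.append(f"instruction-like phrase: {phrase!r}")
    return findings
```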

The new risk isn’t always hackers; it’s the configuration

Let’s talk about SharePoint. If you’re using Microsoft Copilot in your organization, it likely has access to your company’s SharePoint library. If that setup isn’t done with precision, a sales rep could ask Copilot to summarize revenue by region and end up with the CEO’s compensation details instead.

“If you don’t configure Copilot’s access to SharePoint correctly, anyone using it can accidentally see everything.” — Bob Miller
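
One way to get ahead of that scenario is to audit what Copilot can reach before you turn it on. Below is a minimal sketch using the Microsoft Graph API to list the permission grants on each SharePoint site. It assumes an app-only token (acquisition omitted) with a broad sites scope; note this only covers app-level grants, so user access through SharePoint groups and sharing links still needs its own review:

```python
# A minimal sketch of a pre-rollout audit: list SharePoint sites and the
# permission grants on each via Microsoft Graph. ACCESS_TOKEN is a
# placeholder; in practice you'd acquire an app-only token (e.g. via
# MSAL client credentials) with an appropriate Sites.* permission.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "..."  # placeholder: acquire via your identity platform
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

def list_sites() -> list[dict]:
    """Enumerate sites visible to the app registration."""
    resp = requests.get(f"{GRAPH}/sites?search=*", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json().get("value", [])

def site_permissions(site_id: str) -> list[dict]:
    """List the permission grants on one site."""
    resp = requests.get(f"{GRAPH}/sites/{site_id}/permissions",
                        headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json().get("value", [])

# Flag anything broader than read: those sites deserve review before
# Copilot can surface their contents in anyone's answers.
for site in list_sites():
    for perm in site_permissions(site["id"]):
        roles = [r.lower() for r in perm.get("roles", [])]
        if any(r in ("write", "owner", "fullcontrol", "manage") for r in roles):
            print(site.get("displayName"), "->", roles)
```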

And it’s not just Microsoft. Every AI tool you bring into your ecosystem (Otter, Zoom, Jasper, Fathom) can become a shadow AI risk if employees adopt it without oversight.

The ecosystem has changed. And you probably missed the memo.

Single sign-on, API integrations, and public cloud tools make life easier. But they also expand your attack surface—the number of ways someone (or something) can get inside.

“Eight out of ten businesses we assess have almost no real controls in place for AI risk.” — Bob Miller

You might have a firewall. Great. But what’s your plan when your AI model gets poisoned with biased data and starts giving flawed financial predictions?

Or when a competitor simulates your sales rep's prompts to recreate their logic using model inversion?

Most companies won’t even notice the breach until it’s too late.
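
You won’t catch poisoning by watching your firewall; you catch it by watching the model’s outputs. Here’s a minimal sketch of that idea: compare recent predictions against a trusted baseline window. The thresholds and numbers are illustrative assumptions:

```python
# A minimal sketch of a drift check that can surface data-poisoning
# symptoms: flag when recent model outputs stray far from a trusted
# baseline. Window sizes and the sigma threshold are illustrative.
import statistics

def looks_poisoned(baseline: list[float], recent: list[float],
                   max_shift_sigmas: float = 3.0) -> bool:
    """Flag when recent predictions drift far from the baseline mean."""
    mu = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    shift = abs(statistics.fmean(recent) - mu)
    return sigma > 0 and shift > max_shift_sigmas * sigma

# Example: weekly forecast ratios that suddenly jump after a retrain.
baseline = [1.02, 0.98, 1.01, 0.99, 1.00, 1.03, 0.97]
recent = [1.45, 1.52, 1.48]
print(looks_poisoned(baseline, recent))  # True -> inspect retraining data
```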

How Tabletop Exercises Reveal the Gaps in Your AI Strategy

Practice the Panic Before It’s Real

Let’s be honest, most businesses have no playbook for an AI-driven breach.

You might have an incident response plan for ransomware, maybe even phishing. But when it comes to AI prompt injection, data model theft, or Copilot leaking confidential files… you're probably guessing.

“Plan for a tabletop AI incident exercise. Do it in a safe environment. It’s going to make you much smarter about what to do in a real one.” — Bob Miller

That’s why tabletop exercises for AI security are the secret weapon smart companies are starting to use.

What is a tabletop exercise, exactly?

It’s like a fire drill for your brain and business.

Picture your executive team, IT, and key staff in a virtual or physical room. They’re walked step by step through a simulated AI security event by a facilitator who throws in twists, surprises, and mounting pressure.

We’re talking:

  • “Your AI assistant just emailed all HR files to a vendor. What do you do first?”
  • “A prompt in an image caused Copilot to summarize your sales reports. Who got it?”
  • “Your internal model was accessed by an unknown user in Singapore. Do you shut it down?”

You’ll hear terms like injects (sudden twists in the scenario) and decision points (moments where you have to act fast with incomplete information).

“You’ll never have enough time or resources in a real event. The tabletop forces you to make tough decisions in real time.” — Bob Miller
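
If it helps to see the mechanics, here’s an illustrative way to write a scenario down as a simple script of injects and decision points. The timings, prompts, and scenario name are made up for this example:

```python
# A minimal sketch of a tabletop scenario as a data structure: a timed
# sequence of injects, each forcing a decision point. All content here
# is illustrative, drawn from the example prompts above.
from dataclasses import dataclass, field

@dataclass
class Inject:
    minute: int          # when the facilitator reads it out
    prompt: str          # the twist handed to the team
    decision_point: str  # what the team must decide, fast

@dataclass
class TabletopScenario:
    title: str
    injects: list[Inject] = field(default_factory=list)

copilot_leak = TabletopScenario(
    title="Copilot emails HR files to a vendor",
    injects=[
        Inject(0,  "Your AI assistant just emailed all HR files to a vendor.",
               "What do you do first, and who do you notify?"),
        Inject(10, "The vendor replies: they've already opened the files.",
               "Containment or disclosure: which comes first?"),
        Inject(25, "Legal asks whether this is a reportable breach.",
               "Who owns that call, and on what evidence?"),
    ],
)

# A facilitator script is just a walk through the injects in order.
for inject in copilot_leak.injects:
    print(f"T+{inject.minute:02d}m  INJECT: {inject.prompt}")
    print(f"       DECIDE: {inject.decision_point}")
```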

Why AI risk isn’t just an IT problem

One of the biggest mistakes businesses make is assuming these issues are technical.

They’re not.

Tabletop exercises for AI security expose policy holes, governance issues, and communication breakdowns. They show whether your HR team knows what “Copilot access tagging” is, or if your marketing team is putting sensitive customer data into public LLMs.

“Most employees don’t even know what they’re doing could be dangerous. They’re just using tools to get work done.” — Michael Hasse

That’s why these exercises aren’t just for your tech team. They’re for every department that touches AI. Which now, let’s face it, is every department.

4 AI Breach Scenarios to Practice Before It’s Too Late 

If These Happen Tomorrow, Would Your Team Know What To Do?

No one forgets the first time their system gets hit.

It’s the gut punch of realizing the AI assistant didn’t get hacked; it simply did what it was told. Because the real danger isn’t always malware… it’s misuse.

That’s why running a tabletop exercise for each of these breach scenarios could save you from a six-figure disaster, or worse.

1. What happens if Copilot is misconfigured?

Scenario: An HR employee asks Copilot to “summarize key leadership contracts.” Copilot includes private salary details, HR notes, and board files.

“If Copilot isn’t set up correctly, someone from Sales can get to Finance docs without even trying.” — Bob Miller

2. Can a marketing email prompt an AI breach?

Scenario: A disguised marketing email contains invisible AI prompts in the image file. Copilot sees them and follows hidden instructions under your credentials.

“That’s not a breach, that’s Copilot following instructions. Just not yours.” — Michael Hasse

3. How could someone steal your proprietary AI model?

Scenario: A private model trained on sales logic and IP is cloned via a stolen API key. Your edge is now your competitor’s weapon.

“If Coca-Cola trained a model with their secret formula and it got stolen, that’s an IP disaster.” — Bob Miller
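
Model extraction usually looks like one API key making far more calls than any human workflow would. Here’s a minimal sketch of a sliding-window check on per-key volume; the threshold and window are assumptions you’d tune against your real baseline:

```python
# A minimal sketch of watching per-key API traffic for the bulk-querying
# pattern behind model extraction. Threshold and window are illustrative.
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 3600
MAX_CALLS_PER_WINDOW = 500  # assumption: set well above normal usage

_calls: dict[str, deque] = defaultdict(deque)

def record_call(api_key: str, now: float | None = None) -> bool:
    """Log one call; return True when the key looks like it's scraping."""
    now = time.time() if now is None else now
    window = _calls[api_key]
    window.append(now)
    # Drop calls that have fallen out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_CALLS_PER_WINDOW

# Usage in a request handler (illustrative):
# if record_call(request_api_key):
#     alert_security_team(request_api_key)  # hypothetical alert hook
```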

4. What is a model inversion attack, and why does it matter?

Scenario: Repeated prompts slowly extract private data from your AI agent, without direct access to your systems.

“I asked it questions it shouldn’t have been able to answer. It gave me names, emails… and they were real.” — Michael Hasse
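
A first line of defense here is filtering the model’s outputs, not just its inputs. Here’s a minimal sketch that redacts obvious PII and counts hits, so repeated probing by one user can raise an alert. The regexes are illustrative; real deployments pair them with a dedicated PII detector:

```python
# A minimal sketch of an output filter that catches the symptom Michael
# describes: real emails and phone numbers coming back out of the model.
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b")

def redact_pii(model_output: str) -> tuple[str, int]:
    """Redact emails/phone numbers and report how many were found, so
    repeated hits from a single user can trigger an alert."""
    hits = 0
    for pattern in (EMAIL, PHONE):
        model_output, n = pattern.subn("[REDACTED]", model_output)
        hits += n
    return model_output, hits

text, found = redact_pii("Reach Jane at jane.doe@example.com or 555-867-5309.")
print(found, text)  # 2 Reach Jane at [REDACTED] or [REDACTED].
```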

Running these breach scenarios is about building muscle memory before it counts.

What Life Looks Like When You’re Prepared

Peace of Mind Is Knowing Everyone on Your Team Knows What to Do

Once your team has practiced what to do when things go sideways, everything starts to shift.

You go from hoping your systems are secure to knowing your people can act fast, shut down exposure, and recover before damage spreads.

“You’ll never have time or resources during a real breach. But a tabletop shows you how you’ll actually respond.” — Bob Miller

Your systems are better tagged, your people ask sharper questions, your tools are controlled instead of chaotic, and your business stops reacting and starts leading.

You don’t need a cyber degree to take AI security seriously. You just need to realize one thing:

Your AI is part of your team now. Like every team member, it can either help you win or accidentally take you down.

If your team hasn’t trained for these scenarios yet, this is your nudge. Start your AI incident practice now. Because when something goes wrong, you’ll want to know how to respond when it counts.

Frequently Asked Questions

What is an AI tabletop exercise?

A tabletop exercise is a simulated AI security incident where your team practices response strategies in a controlled environment.

Why is Copilot a security risk?

If not configured precisely, Copilot can access and summarize sensitive company data without the user realizing it.

What is prompt injection?

Prompt injection is when hidden instructions are embedded in content (like images or emails) that AI tools follow under your credentials.

How can I protect my business from AI misuse?

Start by defining AI usage policies, limiting access scope, and running practice exercises to spot and fix vulnerabilities early.
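
Policies work best when they’re checkable. Here’s a minimal sketch of “policy as code”: an allowlist mapping each approved tool to the most sensitive data tier it may touch. The tool names and tiers are placeholders for your own policy:

```python
# A minimal sketch of an AI usage policy as code. Tools and tiers are
# illustrative assumptions; substitute your own approved list.
APPROVED_TOOLS = {
    # tool name -> highest data classification it may handle
    "copilot": "internal",
    "chatgpt": "public",
    "jasper":  "public",
}
DATA_TIERS = ["public", "internal", "confidential", "restricted"]

def is_allowed(tool: str, data_class: str) -> bool:
    """True only when the tool is approved AND the data is no more
    sensitive than the tool's ceiling."""
    ceiling = APPROVED_TOOLS.get(tool.lower())
    if ceiling is None:
        return False  # shadow AI: unapproved tool, block and review
    return DATA_TIERS.index(data_class) <= DATA_TIERS.index(ceiling)

print(is_allowed("chatgpt", "confidential"))  # False: blocked by policy
print(is_allowed("copilot", "internal"))      # True: within policy
```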

AI is already woven into how your business operates, even if you didn’t mean for it to be. Every automation, every assistant, every "smart" integration comes with unseen doors that can swing wide open without warning.

Tabletop exercises for AI security aren't optional anymore. They’re how smart companies prepare their people to stay calm, act fast, and protect what matters when something goes wrong.

So run the drills, ask the hard questions, and don’t wait until it’s a real crisis to find out how your team responds.

👉 Join the INGRAIN AI™ Mastermind