AI Risk Management: Secure Iowa Panel Recap
- Read Time: 6 mins
In this recap...
- How panelists defined AI risk in practical business terms
- The cautious but creative ways organizations are using agentic AI
- Why governance doesn’t have to slow innovation
- The growing challenge of vendor and third-party AI risk
- How to prepare for adversarial threats like data poisoning and prompt injection
- The role of human oversight in tackling bias and hallucinations
- What leaders should expect in the next 3–5 years
AI came up a lot at Secure Iowa 2025. Not as a trend, but as a reality everyone in the room is already dealing with. A panel of security and technology leaders shared how they’re wrestling with the upside and downside of AI in their organizations. The conversation focused on real decisions: how to handle shadow AI, how to keep vendors accountable, where to draw the line on automation and how to make sure humans stay in the loop.
It was less about the hype and more about the “now what?” questions. How do you get visibility into AI that’s already in play? What guardrails actually work? And what risks do leaders need to think about today so they’re not caught off guard tomorrow?
What Does AI Risk Even Mean?
The first question on the table: how do you define AI risk? It’s tempting to launch into technical answers, but the panel boiled it down to this: AI risk is business risk. It’s about how AI impacts your revenue, your operations, your customers and your reputation.
The main culprit? A lack of visibility. AI features are being added to existing software without much warning. Employees are experimenting with free tools they find online. And shadow IT is suddenly shadow AI. You can’t manage what you don’t know about.
If your business leaders don’t understand how AI decisions tie back to financial outcomes, the conversation falls apart. The panel stressed the need to translate AI risks into terms the business understands. That means reframing AI as something that can create both profit and loss, depending on how it’s managed.
AI Risk Management Framework
AI has incredible potential, but it also brings operational, security, ethical and regulatory risks that can’t be ignored. We outline how an AI Risk Management Framework—like NIST’s AI RMF—helps organizations proactively safeguard data, build trust and ensure responsible use of AI.
Agentic AI: Exciting and a Little Scary
Agentic AI was one of the most talked-about topics. Unlike LLMs, these systems don’t just give answers; they take actions. Think automated workflows, testing routines, or code reviews that run without human intervention. The potential is huge: freeing up people from repetitive tasks, cutting costs and boosting efficiency.
But with that power comes risk. The panel pointed out that when you let AI make decisions on its own, you have to ask: who’s accountable when something goes wrong? If the output is wrong, or biased, or even harmful—is it the vendor? The user? The business leader who approved it?
Panelists shared how they’re cautiously piloting agentic AI in narrow, controlled ways. Purpose-built agents for specific tasks feel safer and more predictable than all-purpose models. And in every example, human oversight stayed in the loop—whether at the beginning, the middle, or the end of the process. The Iowa Insurance Division was mentioned as a clear reminder that regulators are already watching closely.
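To make that pattern concrete, here’s a minimal sketch of a human-in-the-loop gate for an agentic workflow. The `ProposedAction` type and the example actions are hypothetical stand-ins for whatever your agent framework actually produces; the point is that nothing consequential executes without an explicit approval step.

```python
# A minimal human-in-the-loop gate for agentic actions (illustrative only).
# ProposedAction and these example actions are hypothetical stand-ins for
# whatever your agent framework actually produces.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str  # human-readable summary of what the agent wants to do
    reversible: bool  # narrow, easily undone tasks can be auto-approved

def human_approves(action: ProposedAction) -> bool:
    """Route the proposed action to a person before anything runs."""
    answer = input(f"Agent wants to: {action.description}. Approve? [y/N] ")
    return answer.strip().lower() == "y"

def run_with_oversight(actions: list[ProposedAction]) -> None:
    for action in actions:
        # Auto-approve only low-risk, reversible tasks; everything else
        # waits for a human decision, so accountability stays with a person.
        if action.reversible or human_approves(action):
            print(f"Executing: {action.description}")
        else:
            print(f"Blocked pending review: {action.description}")

if __name__ == "__main__":
    run_with_oversight([
        ProposedAction("re-run the nightly test suite", reversible=True),
        ProposedAction("email the draft report to the client", reversible=False),
    ])
```

In practice the `input()` prompt would be a ticketing or chat approval flow, but the shape is the same: the agent proposes, a person decides.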
Governance: Don’t Over (or Under)think It
The panel’s advice on AI governance was straightforward: start with what you already have. Most organizations already have frameworks like SOC 2 or NIST in place. These existing controls are a great foundation for handling AI. Don’t toss them out. Extend them.
Clients and regulators expect transparency. If you’re using AI to process client data or make decisions, disclose it. Let people decide if they’re comfortable with that. The panel emphasized that trust grows when organizations are upfront about where AI is in play.
One mistake to avoid? Over-engineering governance to the point of slowing innovation. If your governance process creates endless approvals and roadblocks, employees will bypass it. Guardrails should guide safe experimentation, not shut it down completely. Balance is key: make it easy for teams to explore AI responsibly while protecting the organization from real risks.
AI Governance
AI governance provides the guardrails organizations need to ensure AI is safe, ethical and aligned with business goals. We'll explain how governance works alongside an AI Risk Management Framework to build trust, manage risks and keep AI deployment both responsible and effective.
Vendor Risk Keeps Multiplying
Most AI features today arrive as part of SaaS tools. That means the risks don’t just live inside your four walls. They extend to your vendors—and to your vendors’ vendors. This creates a chain of risk that’s harder to control and harder to see.
The panel encouraged leaders to push vendors for specifics: Where does your data go? Who has access? What safeguards are in place? Vague answers aren’t good enough when your clients’ trust and your company’s reputation are on the line.
And here’s the kicker: even if your vendor slips up, you’re still responsible. Regulators, insurers and clients don’t care whose server it was on. They care that your data was exposed, or that your AI tool made a decision that hurt someone. The accountability always flows back to you.
Updating vendor risk management processes for AI simply isn’t optional. Treat AI in SaaS the same way you’d treat any major cybersecurity risk: with due diligence, hard questions and clear contractual protections.
Educating Stakeholders on Threats of Adversarial AI
Adversarial attacks like data poisoning, prompt injection and output manipulation are already here. The panelists didn’t sugarcoat it: these threats are real, growing and evolving quickly.
So how do you prepare?
Education is the first step. Developers and employees, not just security teams, need to understand the risks.
Role-based access controls are another must. Not everyone should be able to change or retrain models.
Another big one? Logging. If you don’t log every interaction with AI—the prompts, the data, the outputs—you have no defense if things go wrong.
One panelist summed it up perfectly: “You can’t defend what you can’t explain.” If someone asks why a model made a decision, “it’s magic” won’t cut it.
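Here’s what those last two safeguards can look like in practice: a minimal sketch, assuming a hypothetical `call_model` wrapper around your real model API, that enforces a role check and writes every prompt, output, caller and timestamp to an audit log.

```python
# Illustrative role check and audit logging for AI interactions.
# call_model is a hypothetical placeholder for your real model API client.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_audit.log", level=logging.INFO)

ALLOWED_ROLES = {"analyst", "developer"}  # roles permitted to query the model

def call_model(prompt: str) -> str:
    return f"(model response to: {prompt!r})"  # placeholder response

def audited_call(user: str, role: str, prompt: str) -> str:
    # Role-based access control: not everyone should reach the model.
    if role not in ALLOWED_ROLES:
        raise PermissionError(f"role {role!r} may not query the model")
    output = call_model(prompt)
    # Record who asked what and what came back, so every decision can be
    # reconstructed later. "It's magic" is not an audit trail.
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "prompt": prompt,
        "output": output,
    }))
    return output

if __name__ == "__main__":
    print(audited_call("jdoe", "analyst", "Summarize Q3 vendor risk findings"))
```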
The Human Side of AI
Bias. Hallucinations. Inconsistent outputs. All of these are happening today. The panel agreed that the only way to manage them is to keep humans in the loop, especially in high-stakes situations like hiring, healthcare, or education.
Leaders have to build frameworks that catch errors and raise red flags before harm is done. Clean, diverse training data helps reduce bias. Quality assurance processes help catch hallucinations. The reality is: unsanctioned AI use is already happening inside organizations, whether leaders know it or not.
Shadow AI
Shadow AI happens when employees use AI tools without IT oversight—boosting agility and innovation but also creating real risks around security, compliance, and data governance. See how to recognize Shadow AI and manage it through clear policies, collaboration and training.
Looking Down the AI Road
The conversation closed by looking at the next three to five years. The group predicted cyberattacks will become faster, more complex and easier to launch. Regulation will continue to lag behind technology. And the job market? It’s going to shift—possibly in dramatic ways.
Some panelists painted a stark picture of automation reducing headcount in some organizations. Others offered a more optimistic view: that new opportunities will emerge, just as they did during past industrial shifts. The truth is probably somewhere in between. What’s clear is that organizations—and workers—will need to adapt quickly.
And yet, the panel ended on a hopeful note. They shared small, everyday examples of how AI is already making life easier: automating a family phone bill, rewriting tough emails, even turning reports into Taylor Swift lyrics. It was a reminder that while the risks are real, so are the opportunities—and sometimes the joy—of using AI well.
What leaders should take home...
- Visibility is non-negotiable. You can’t govern what you can’t see.
- Start with what you have. Extend frameworks like SOC 2 and NIST before building from scratch.
- Push vendors. Demand clear answers about where your data goes and how it’s protected.
- Invest in education. Make sure employees understand both the power and the risks.
- Keep humans involved. Especially when decisions carry real-world consequences.
The organizations that thrive will be the ones that embrace AI innovation, and do it with guardrails firmly in place.
If you’re ready to get a handle on AI risk, reach out to HBS today. We can help you cut through the noise, evaluate your current exposure and put the right guardrails in place. Connect with our team to talk through how to manage AI and its risks with confidence.
Related Content
Developing an AI Risk Management Framework
Learn to develop an AI Risk Management Framework to ensure safe and effective AI deployment. Discover key elements and tips, and explore the NIST AI RMF.
Unmasking Shadow AI
Shadow AI poses risks, including security vulnerabilities and compliance issues. Learn how to recognize, manage, and govern Shadow AI to harness its potential.
AI Governance for Trustworthy AI Deployment
Unleash AI’s potential responsibly. Learn what AI Governance is, why it’s crucial, and how you can implement it.