AI Security

The Next Phase of Risk: Why Your AI Agents Need a Governance Strategy Now

February 09, 2026 · 3 min read

We are moving past the novelty phase of Generative AI. Companies are no longer just using chatbots to summarize emails. They are deploying "autonomous agents" capable of accessing file systems, executing code, and making decisions.

Patrick Leung, CTO of Farro Health and a former Google Duplex engineer, warned on the "Using AI at Work" podcast with Chris Daigle that this shift introduces profound security risks. He predicts we will soon see "disasters" in which an AI system makes unauthorized high-stakes transactions or leaks sensitive data. The organizations that survive this transition will be the ones that prioritize governance over blind speed.

The "Crown Jewels" Conundrum

The primary fear for enterprise leaders is data leakage. They worry that their proprietary data—their "crown jewels"—will be mixed with competitor data or exposed through public models. This is a valid concern for industries like pharmaceuticals or finance, where intellectual property is the core of the business's value.

You cannot rely on default settings to protect this information. Leung advises using private instances of Large Language Models (LLMs) available through providers like Azure or AWS. These instances ensure your data remains isolated from the public training sets used by tools like ChatGPT.

While vendors like OpenAI and Anthropic have a strong incentive to prevent leakage to maintain business trust, you must verify their policies. Do not assume safety. Review the enterprise agreements and architectural diagrams to confirm your data remains yours.

The New Threat Vector: Prompt Injection

Security leaders need to understand that LLMs introduce a new attack surface. The risk is no longer just about hackers stealing passwords. It now includes "prompt injection attacks".

Malicious actors or rogue employees can craft specific prompts designed to bypass safety filters. If an autonomous agent has access to your internal file system, a successful injection attack could force the AI to exfiltrate sensitive documents or execute harmful commands.

Chief Information Security Officers (CISOs) must actively engage with AI governance. You need a dedicated strategy to monitor how agents interact with your systems. Farro Health, for example, has assigned senior personnel solely to manage these AI security risks.
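One concrete way to limit that blast radius is to wrap every file-access tool an agent can call in a strict directory allow-list, so that even a successful injection cannot reach files outside an approved workspace. A minimal sketch, assuming Python 3.9+; the directory path and function name are hypothetical, not a reference to any specific agent framework:

```python
from pathlib import Path

# Hypothetical allow-list: the only directory this agent may read from.
ALLOWED_ROOT = Path("/srv/agent-workspace").resolve()

def safe_read(requested_path: str) -> str:
    """Read a file only if it resolves inside the approved directory.

    resolve() expands symlinks and '..' segments, so an injected
    instruction like "read ../../etc/passwd" is rejected before
    any I/O happens, instead of escaping the workspace.
    """
    target = (ALLOWED_ROOT / requested_path).resolve()
    if not target.is_relative_to(ALLOWED_ROOT):
        raise PermissionError(f"Access outside allow-list denied: {requested_path}")
    return target.read_text()
```

The key design choice is resolving the path *before* the containment check: a naive string-prefix comparison on the raw input can be bypassed with `..` segments or symlinks, which is exactly the kind of trick an injected prompt will try.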

The Non-Deterministic Challenge

Traditional software is deterministic. If you input the same data twice, you get the same result twice. Generative AI is non-deterministic. You can input the same prompt five times and get five different answers.

This makes standard quality assurance impossible. You cannot simply compare the output to a static "correct" answer. This creates a risk of "hallucinations" where the AI confidently presents false information or bad code.
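Because you cannot diff against one golden answer, QA for generative output shifts to checking properties that every acceptable answer must satisfy. A hedged sketch with made-up property checks for an AI-generated summary; the specific rules are illustrative, not a standard:

```python
def validate_summary(output: str) -> list[str]:
    """Property-based checks for non-deterministic output.

    Instead of comparing against a single 'correct' string, each run
    is checked against invariants that any acceptable answer must
    satisfy. Returns a list of failures; empty means the output passes.
    """
    failures = []
    if not output.strip():
        failures.append("empty output")
    if len(output) > 1000:
        failures.append("exceeds length budget")
    if "As an AI" in output:
        failures.append("boilerplate refusal text")
    return failures

# Five different phrasings of the same answer can all pass these checks,
# which is the point: the test tolerates variation but catches defects.
print(validate_summary("Q3 revenue rose 12% on cloud growth."))
```

This style of validation does not prove an answer is correct, but it mechanically filters the failure modes you can name, leaving human reviewers to judge the rest.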

To mitigate this, you need a "Human-in-the-Loop" framework. Leung highlights that using AI for "vibe coding"—generating code based on feelings or loose instructions—can result in "spaghetti code" that becomes a liability. An experienced human must review the output to ensure it is secure and functional.

The 50-Hour Learning Curve

Adopting AI is not an instant productivity fix. Leung cites studies showing that using AI for coding can actually decrease productivity initially. Gains typically appear only after a user has logged approximately 50 hours of usage.

You have to take your lumps. Employees need to fail, struggle, and learn the nuances of prompting before they become effective.

Leaders should be wary of candidates who claim to be AI experts without proof. Farro Health found that many applicants "dressed up" their resumes but failed practical tests that required real-world AI implementation experience. Real expertise comes from hands-on experimentation, not just reading headlines.

Core Insight

If you are deploying AI agents in your business, follow these steps to secure your operations:

  1. Isolate Your Data: Use private model instances to prevent your IP from training public models.

  2. Monitor the Agents: Limit the file system access and execution privileges of your AI agents.

  3. Validate the Output: Implement rigorous human review processes for any code or critical documents generated by AI.

  4. Test Your Talent: Use practical tests during hiring to verify that candidates have actual experience navigating AI limitations.

Curiosity is the defining trait of successful AI adoption. However, that curiosity must be paired with a rigorous defense of your organization's security and data integrity.

Damon L. Davis

Damon is an AI strategist focused on business growth, efficiency, cost reduction, and increased profits using AI models made for business.
