Microsoft, Google push AI agent governance into enterprise IT mainstream

Microsoft and Google are adding new controls for AI agents, as enterprise IT teams try to keep up with tools that can access corporate data and act across business applications.

Microsoft’s Agent 365, made generally available for commercial customers on May 1, is designed to help organizations discover, govern, and secure AI agents, including those operating across Microsoft, third-party SaaS, cloud, and local environments.

Google’s new AI control center for Workspace, announced this week, focuses more specifically on giving administrators a centralized view of AI usage, security settings, data protection controls, and privacy safeguards within Workspace.

The timing reflects a shift in enterprise AI use. Many companies are no longer just testing chatbots, but are beginning to use agents that can reach corporate systems and carry out tasks on behalf of users.

Analysts said the shift changes how CIOs and CISOs should think about AI agents inside the enterprise.

“By placing agent controls alongside identity, access, data, and workload management, vendors are positioning AI governance as an operational discipline owned jointly by IT and security,” said Biswajeet Mahapatra, principal analyst at Forrester. “For CIOs, this means AI agents now need to be managed like any other digital workforce, with lifecycle oversight, cost visibility, and integration into service management.”

For CISOs, that broadens the mandate beyond model risk and data leakage. As agents gain autonomy, security teams will need continuous controls over what agents are permitted to do, and ways to contain the impact when their actions create risk.

The announcements also elevate AI governance to a “core component of all AI-assisted enterprise applications,” signaling to CIOs and CISOs that governance will need to be built into AI deployments as adoption moves from pilots to enterprise-wide enablement, according to Lian Jye Su, chief analyst at Omdia.

Where Microsoft and Google differ

Microsoft Agent 365 and Google’s AI control center address related governance problems, but from different starting points.

“Given how enterprises are increasingly deploying AI in multicloud and hybrid IT environments, these two are complementary,” Su said. “They are highly optimized for AI workloads within their respective environments, meaning enterprises heavily invested in one vendor will find the native AI governance experience to be far smoother.”

According to Mahapatra, enterprises should see the distinction as a matter of platform scope rather than governance maturity. Microsoft’s approach treats AI agents as enterprise actors that require broad organizational oversight, while Google’s controls are more narrowly focused on how AI interacts with collaboration data and user content.

“These are not fully competing approaches because they govern different control planes, but they are not truly complementary either unless an enterprise standardizes on both ecosystems,” Mahapatra said. “Over time, each model reinforces governance capabilities that are tightly coupled to its underlying productivity and data platforms, which increases the risk that AI governance decisions become implicitly tied to vendor choice rather than enterprise architecture strategy.”

Pareekh Jain, CEO of Pareekh Consulting, took a middle view, saying the approaches are both complementary and competitive, noting that enterprises running both Microsoft and Google may find AI governance becoming more closely tied to each vendor’s underlying platform.

Risks left to resolve

The new controls may give enterprises better visibility into AI agents, but analysts said they do not eliminate bigger risks related to shadow AI, third-party integrations, and accountability for autonomous actions.

According to Jain, shadow AI agents can still emerge through developer tools, browser extensions, local assistants, SaaS copilots, and unsanctioned tool connections. Third-party integrations, he said, could also expand faster than security teams can validate them.

“Audit logs may show what happened, but not always why an autonomous agent chose an action,” Jain said.

That leaves enterprises with difficult questions when an agent takes actions that create business or security risks. Better logs do not automatically settle questions of control or responsibility.

Mahapatra said the biggest gaps are likely to remain outside the boundaries of native platforms. Shadow agents created through low-code tools, external APIs, or embedded SaaS applications can bypass central controls and operate with excessive or inherited permissions.

“Third-party integrations often expand agent reach without equivalent visibility into downstream actions or data propagation,” Mahapatra said. “Auditability remains uneven when agents chain actions across systems, making it hard to reconstruct intent versus outcome. Accountability is still unresolved when autonomous agents trigger material business or security impacts, since ownership is split across users, developers, and platform controls.”

The message for enterprises is that native controls from Microsoft or Google may help, but they are unlikely to cover the full agent landscape. Companies using multiple clouds, SaaS tools, developer platforms, and browser-based AI assistants will still need governance that extends beyond any single vendor’s console.

Story added 5 May 2026.