The Rise of AI Communications
The workplace has entered a new phase of digital transformation, marked by the rapid integration of generative and agentic AI into daily collaboration. Tools such as Microsoft Copilot, Zoom AI Companion, and other AI assistants now participate directly in communications, shaping conversations and content.
While these tools deliver undeniable productivity gains, they are also generating an unprecedented volume of new interactions. They introduce an entirely new category of communications, referred to here as “aiComms”, as well as new behaviors as employees and AI interact.
In the race to deploy AI tools, organizations are beginning to realize that while these tools enable productivity, they lack visibility into the new communications and behaviors that result. And without visibility, they cannot identify risks or ensure compliance standards are met. This is where compliance must step in.
AI Has Entered the Conversation—Now Compliance Must Too
Some compliance teams may have considered AI interactions outside their remit, assuming that the interactions are “internal only” or “temporary”. What is often overlooked is that whether these tools are drafting emails for clients, surfacing data in response to prompts, or summarizing meetings, they create compliance and governance risks that officers need to be aware of.
Regulators are reinforcing these responsibilities. In the US, FINRA’s Advertising Regulation FAQs remind firms that they are “…responsible for their communications, regardless of whether they are generated by a human or AI technology. Accordingly, firms must ensure that AI-generated communications they distribute or make available comply with applicable federal securities laws and regulations and FINRA rules.” In the UK, the FCA has reiterated that its existing regulatory frameworks apply to AI.
The question now facing compliance teams is not whether AI should be governed, but rather: what is the right role for compliance in governing AI communications?
The Governance Gap: What the Data Shows
Insights from Theta Lake’s Digital Communications Governance report reflect a consistent theme across organizations: firms are expanding AI adoption rapidly, but they are doing so without visibility into whether their governance controls are working. Nearly all (99%) of the 500 financial services firms that participated in the research said they are planning to implement or expand AI features within their unified communications and collaboration (UCC) tools this year, yet 88% report they are already facing governance and data security challenges. Specifically:
- 47% say the top challenge is ensuring that AI-generated content is accurate and complies with regulatory standards.
- 45% cannot reliably detect confidential or sensitive data exposure in generative AI output.
- 41% cannot identify risky user behaviors in AI interactions.
The Right Role for Compliance
The goal for compliance teams should not be to slow down the adoption of AI. In practice, restricting or delaying access often pushes employees toward unsanctioned tools, increasing the risk of sensitive data being copied into ungoverned AI systems. Instead, compliance can enable safe adoption by providing governance, validation, oversight, and effective remediation.
Even where guardrails are well designed, firms still need to validate that controls function in real-world use: Are outputs compliant? Are regulatory requirements met? Is confidential content escaping into communication channels? Lack of aiComms oversight does not reduce risk; it only delays detection until the risk becomes an incident.
Below are the key areas where compliance should maintain oversight to confirm that guardrails are operating as intended and compliance obligations can be met.
1. Verifying Data Protection and Security
AI systems can inadvertently expose PII, client data, MNPI, and confidential internal documents, even when guardrails exist. It is therefore critical to be able to answer two core questions: Has sensitive data been exposed? Is AI-generated content safe?
Through content inspection, compliance needs to be able to detect when confidential or sensitive information appears where it shouldn’t, as well as attempts to obtain or insert sensitive information. Firms also need to be able to identify the use of unsanctioned AI tools.
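To make this concrete, here is a minimal sketch of content inspection applied to AI-generated output before it is shared. The patterns, names, and thresholds are illustrative assumptions rather than any vendor’s actual implementation; production inspection relies on validated detector libraries, contextual analysis, and classifiers rather than a handful of regexes.

```python
import re

# Illustrative patterns only; real DLP detectors use checksums, context,
# and ML classifiers, not bare regexes like these.
SENSITIVE_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "account_number": re.compile(r"\b(?:acct|account)\s*#?\s*\d{6,}\b", re.I),
}

def inspect_ai_output(text: str) -> list[dict]:
    """Return a finding for each sensitive-data pattern matched in AI output."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append({"type": label, "excerpt": match.group(0)})
    return findings

# Example: flag an AI-generated meeting summary before it is distributed.
summary = "Action item: wire funds from account #00123456 by Friday."
for finding in inspect_ai_output(summary):
    print(f"ALERT: possible {finding['type']} exposure: {finding['excerpt']}")
```

The same inspection point can also be applied to outbound prompts, which helps surface attempts to insert sensitive information or paste it into unsanctioned AI tools.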
2. Identifying Prompt Hacking and Guardrail Circumvention
Because AI follows permissions rather than intent, guardrails can be circumvented to access restricted information. In practice, employees can intentionally or accidentally test guardrails by rephrasing prompts or breaking requests into smaller components. Compliance should be able to do the following (a minimal detection sketch follows the list):
- Flag boundary-testing or inappropriate user behavior
- Detect non-compliant content in outputs
- Identify patterns showing systematic attempts to bypass controls
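As a rough illustration of the list above, the toy monitor below flags known boundary-testing language and repeated near-duplicate prompts from the same user, a crude proxy for systematic attempts to rephrase a blocked request. The marker phrases and thresholds are hypothetical; real systems combine many behavioral signals.

```python
from collections import defaultdict
from difflib import SequenceMatcher

# Hypothetical marker phrases; production systems use far richer signals.
BOUNDARY_TEST_MARKERS = [
    "ignore previous instructions",
    "pretend you have no restrictions",
    "what would you say if you could",
]

class PromptMonitor:
    """Toy monitor flagging boundary testing and repeated rephrasing."""

    def __init__(self, similarity_threshold: float = 0.8, retry_limit: int = 3):
        self.similarity_threshold = similarity_threshold
        self.retry_limit = retry_limit
        self.history = defaultdict(list)  # user -> prior prompts

    def check(self, user: str, prompt: str) -> list[str]:
        flags = []
        lowered = prompt.lower()
        if any(marker in lowered for marker in BOUNDARY_TEST_MARKERS):
            flags.append("boundary-testing language")
        # Near-duplicate prior prompts suggest systematic retries against
        # a guardrail rather than an ordinary follow-up question.
        similar = sum(
            1 for old in self.history[user]
            if SequenceMatcher(None, old, lowered).ratio() >= self.similarity_threshold
        )
        if similar >= self.retry_limit:
            flags.append("repeated rephrasing of a blocked request")
        self.history[user].append(lowered)
        return flags
```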
3. Validating Output Accuracy and Regulatory Compliance
AI-generated content can violate advertising rules, introduce promissory or misleading statements, omit mandatory disclosures, or surface fabricated information. Compliance needs the ability to answer core questions: Is the content correct? Does it meet regulatory requirements? Does the content raise any red flags? This includes the following (a simple output-review sketch follows the list):
- Identifying factual inaccuracies before content reaches clients or the market
- Detecting missing disclaimers or required language
- Spotting conduct risks such as collusion, manipulation, or inappropriate recommendations
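To illustrate, the sketch below checks a draft against required language and promissory terms. The rule lists are hypothetical placeholders; actual required disclosures vary by jurisdiction, product, and audience, and would be maintained by compliance.

```python
# Hypothetical rules for illustration; real disclosure requirements vary
# by jurisdiction, product, and audience.
REQUIRED_DISCLOSURES = {
    "investment_content": ["past performance", "does not guarantee future results"],
}
PROMISSORY_TERMS = ["guaranteed return", "risk-free", "cannot lose"]

def review_output(text: str, category: str) -> list[str]:
    """Flag missing required language and promissory claims in AI output."""
    issues = []
    lowered = text.lower()
    for phrase in REQUIRED_DISCLOSURES.get(category, []):
        if phrase not in lowered:
            issues.append(f"missing required language: '{phrase}'")
    for term in PROMISSORY_TERMS:
        if term in lowered:
            issues.append(f"promissory statement detected: '{term}'")
    return issues

draft = "This fund offers a guaranteed return with no downside."
print(review_output(draft, "investment_content"))
```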
4. Recordkeeping and Supervision
Compliance teams must be able to capture, archive, and supervise AI-generated content related to regulated business activity just as they would for human-created communications. The requirements will differ by user group, jurisdiction, and regulatory regime, so firms should have the flexibility to retain only the content that is required, for the appropriate timescales, to meet their specific obligations.
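A minimal sketch of what such a capture might look like follows; the field names and retention scheme are assumptions for illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AiCommRecord:
    """One captured AI interaction, tagged for supervision and retention."""
    user: str
    tool: str              # e.g. "Microsoft Copilot" or "Zoom AI Companion"
    prompt: str
    response: str
    jurisdiction: str      # drives which regulatory regime applies
    retention_years: int   # set per user group and obligation
    captured_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def archive(record: AiCommRecord) -> str:
    """Serialize the record for an archive (stubbed here as JSON)."""
    return json.dumps(asdict(record))
```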
5. Remediation and Policy Enforcement
As well as assessing governance controls and AI content, a modern AI communications governance program should enable remediation and policy enforcement. This ensures that governance controls are reset or tightened in response to violations, and that remediation workflows are triggered, such as removing or redacting risky content before it reaches clients.
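As a sketch, a policy-enforcement step might map violation severity to a remediation action, as below. The severities, actions, and mapping are illustrative assumptions; real policies would be configurable per channel, user group, and regime.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    QUARANTINE = "quarantine"  # hold content pending human review
    BLOCK = "block"            # stop delivery and alert compliance

# Illustrative mapping; real policies are configurable per channel and group.
POLICY = {"low": Action.ALLOW, "medium": Action.QUARANTINE, "high": Action.BLOCK}

def remediate(severity: str, content_id: str) -> Action:
    """Apply the remediation action for a detected violation."""
    action = POLICY.get(severity, Action.QUARANTINE)  # unknown severity: hold it
    if action is not Action.ALLOW:
        print(f"{action.value}: content {content_id} held before reaching clients")
    return action

remediate("high", "msg-0042")
```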
6. Auditability
Regulators will expect a transparent audit trail that provides evidence of oversight. Being able to show prompts, responses, generated content, remediation steps, and any escalations is essential for internal audits, regulatory inquiries, and investigations.
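One way to keep such a trail tamper-evident is hash chaining, sketched below; the event shape is a hypothetical example, not a required format.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_event(event: dict, prev_hash: str = "") -> dict:
    """Append-only audit entry; each entry hashes the previous one, so
    tampering anywhere in the chain is detectable during an inquiry."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,      # prompt, response, remediation step, or escalation
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry

e1 = audit_event({"type": "prompt", "user": "analyst1", "text": "Summarize deal"})
e2 = audit_event({"type": "remediation", "action": "quarantine"}, e1["hash"])
```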
A Compliance Priority
Organizations are facing a new frontier of communications governance. AI is now a participant in enterprise communications, and its outputs must be governed with the same rigor applied to human-created content. Regulators are already signaling their focus on this area; for example, the SEC’s 2026 Exam Priorities note: “The Division will assess whether firms have implemented adequate policies and procedures to monitor and/or supervise their use of AI technologies” including for fraud prevention, AML, back-office operations, and trading functions. It notes that reviews will also consider the “integration of regulatory technology to automate internal processes and optimize efficiencies”.
Compliance leaders should expect explicit questions about AI oversight, control validation, recordkeeping, and risk governance—and should be prepared to demonstrate a robust, defensible approach.