Generative AI is Here to Stay: Why Firms Must Prioritize Governance for Sustainable Success
Why it matters
The rapid adoption of AI-enabled tools has the potential to reshape business processes, compliance, and risk management at an unprecedented pace. As firms race to leverage this technology, balancing innovation and risk mitigation has become paramount. Firms must pursue internal efficiencies while addressing compliance to remain competitive in a landscape where innovation often outpaces regulatory frameworks.
The following summarizes our three recent webinars exploring generative AI adoption, the state of regulatory and compliance implications, and what those implications mean for governance at the company and foundational-technology level. We’d also like to acknowledge and thank our expert contributors to these programs, including Lore Aguilar from Citi, Peter Williams from AWS, Matt Kelly from Debevoise & Plimpton, Nina Bryant from FTI Consulting, and Matthew Bernstein from Bernstein Data.
The current state of generative AI adoption
As of December 2024, generative AI adoption is moving at a striking rate. However, early research indicates that upwards of 80% of generative AI projects fail, for reasons ranging from a misunderstanding of the problem being solved, to unclear measures of success, to a lack of the resources needed to move pilots into production. Compounding this failure rate is shiny object syndrome, in which too much emphasis is placed on the latest technology innovation and too little on using the right tool for the right job.
While some AI technology vendors trumpet their deployment numbers, the hard questions remain for regulated firms, such as:
- Which use cases are being prioritized?
- How are firms planning for an extended period of regulatory uncertainty?
- What does good generative AI governance look like?
What is abundantly clear is that innovation is moving faster than regulation, at least in the US. Ironically, a US move toward deregulation will likely increase complexity for multi-national firms, given the severe penalties imposed for infractions of the EU AI Act along with a multitude of US state-level AI regulations. The net effect is that firms have proceeded with caution, initially favoring internal use cases that emphasize productivity gains through automation of manual, time-intensive processes such as meeting summaries and first-draft content authoring.
Today’s generative AI communications compliance and governance focus
Over the course of the webinar series, our experts noted a significant shift in investment toward generative AI governance programs, with firms establishing guardrails to evaluate new use cases, track projects, and communicate processes within their organizations.
This has led to a fascinating shift in the state of generative AI project deployment, which we have captured by asking the same survey question across a variety of Smarsh and industry forums:
“What is your primary focus for generative AI-enabled communications tools?”
- A. Assessing how AI might be used by specific applications and processes: 32.3%
- B. Doing due diligence on existing tools to understand how they use AI: 22.6%
- C. Piloting or deploying solutions in limited capacity: 18.9%
- D. Prohibiting use of ChatGPT or similar tools: 15.9%
- E. Supporting enterprise-level AI tools for specific apps (e.g., conduct surveillance): 10.4%
Our expert panels noted that option B is a basic third-party risk management task, and many firms have already completed their investigations and inventories of existing applications, which is why that percentage has fallen over the past several months. However, that work will be ongoing, as new generative AI features are added to existing communications tools almost daily.
For many firms, the heaviest lift is now option A, as the multitude of potential use cases continues to grow, often without clear definitions, and with duplicative use cases and similar problems surfacing in different parts of the business. While some firms have moved toward taxonomies and other approaches to rationalize similar use cases, we are still early in that process. As Matt Kelly noted, “Eventually, we think there will be a fairly standardized set of terms that everyone just understands, but we're not there yet. And so it is a massive volume of fairly one-off negotiations, one after the other after the other.”
Straddling options B and C is the strategy many firms follow in prioritizing use cases: move first on lower-risk internal use cases, then feed the learnings and the unanticipated risks they surface into future pilots. This strategy also reflects apprehension about pursuing client-facing use cases until risks are better understood and regulatory requirements are further clarified. As Tiffany Magri noted, “We’re frequently hearing how firms are initially pursuing efficiencies by automating manual tasks, such as using Copilot to summarize Microsoft Teams meetings, or in writing first drafts of internal policy documents.”
Over the course of 2024, the biggest shift we observed was the decrease in option D, prohibition, which dropped by nearly half from the mid-30% range where it started the year. While the percentage remains higher for segments such as SMB, this decrease indicates that firms are finding their way through deployment uncertainties and slowly overcoming user apprehension that jobs will be lost to those who are proficient with the technology and leverage it to improve productivity.
The final option, E, supporting enterprise-level tools such as communications surveillance, reflects the focused but compelling growth in adoption of risk and compliance use cases, where firms are eager to move beyond ineffective lexicon-based approaches to identifying and responding to risk. As Theo Hill of Smarsh noted, “Machine learning has obviously been used extensively to try to detect risks. The enhanced capabilities that LLMs and generative AI technologies bring you is the ability to rapidly expand your operational risk taxonomy detection.”
What is “good” generative AI governance?
One common challenge faced by all firms is the lack of a common definition of “generative AI governance,” one that accounts for multiple layers of functional oversight as well as company- and AI technology-level risk management. While a separation of “good versus great” governance has yet to emerge, our experts agreed that successful programs include, at minimum, the following elements:
- Centralized governance committees that include representatives from legal, compliance, IT, data science, and business units, and are responsible for setting policies, defining roles, and ensuring alignment with regulatory requirements.
- Clear use case selection criteria driven by a risk-reward analysis, where high-risk applications (like client-facing AI) receive more scrutiny than internal, efficiency-driven use cases.
- Vendor risk management programs that adopt standardized questionnaires and frameworks, such as those provided by NIST or FS-ISAC, to assess vendor compliance with AI risk management protocols.
- Model risk management (MRM) practices that include AI model testing for robustness, explainability, and reliability.
- Regular audits and reviews to ensure ongoing compliance with internal policies and external regulations, including documentation of AI governance processes to demonstrate to regulators that the firm has taken reasonable steps to address risks.
Looking forward to generative AI in 2025
As firms refine these approaches, our expert contributors noted several key trends likely to shape the next phase of generative AI adoption:
- Shift toward use-case-specific models: Instead of relying on massive, multi-purpose LLMs, firms will adopt smaller, purpose-built models to achieve better control, lower costs, and increased efficiency.
- Increased scrutiny of third-party vendors: Regulatory pressure will push firms to evaluate their vendors more thoroughly. Expect more comprehensive vendor due diligence processes and "right to audit" clauses in vendor contracts.
- Operational integration of AI governance: Firms will move governance closer to operational workflows. This "embedded governance" approach allows firms to address risks as they arise instead of retroactively fixing issues after deployment.
- Greater focus on explainability and documentation: Firms will be required to explain not only the "what" but also the "why" behind AI-generated decisions. Documentation of governance processes will become a key part of regulatory defenses.
Your firm’s future with generative AI
Generative AI presents unprecedented opportunities and risks. Firms that establish a strong governance framework will be better positioned to leverage generative AI capabilities while maintaining compliance and accountability. By embedding governance into everyday operations, conducting thorough vendor risk assessments, and prioritizing model explainability, firms can stay ahead of regulatory changes and reduce operational risk. The era of AI is here, and firms that embrace governance as a strategic enabler will be best positioned for sustainable success.
Smarsh Blog