Dear Readers,
As previewed last time, I'm experimenting with a new format: a feature article complemented by shorter sections.
As many of you are business leaders and board directors, I've decided to focus on AI use in the boardroom this time, a very popular topic in my recent conversations. While legal advisors and compliance teams often tell directors "not yet" regarding AI use in board work, I thought it was time to explore what's actually happening, why some remain hesitant, and, even more interestingly, what it means for the art of governance. It's remarkable how quickly perspectives have shifted—just six months ago, many directors told me AI wasn't relevant to their roles!
I'd love your feedback on this new format. Please note that the newsletter will be shorter than it used to be.
AI Use in the Boardroom: Bridging Hesitation and Opportunity
The past year has brought dramatic change to how boards view artificial intelligence. Since co-authoring the Athena Alliance AI Playbook for Boards with fellow directors and executives, I've watched AI transform from an emerging technology topic to a central governance focus. Yet, there's been surprisingly little focus on how boards themselves might use AI to enhance their own practices.
The reluctance to use AI inside boardrooms feels increasingly out of step with broader organizational trends. Management teams routinely rely on AI to prepare board materials, legal advisors use it for drafting contracts and analyzing regulations, and compliance departments deploy it for monitoring risks.
However, once directors enter the boardroom itself, many revert to traditional methods as though AI doesn't exist. Directors are often warned against uploading sensitive materials into AI tools or using AI note-taking assistants. Some boardrooms have banned AI outright due to privacy and security concerns and their legal and business consequences.
This disconnect is striking given the growing demands placed on directors. Board materials are becoming more voluminous and complex, regulatory requirements are multiplying, stakeholder expectations are rising, and emerging technologies like AI add layers of oversight responsibility. Directors need AI with the right guardrails that help them focus on what's most important for decision-making rather than getting lost in minutiae.
The Fundamental Governance Question
In my conversations with board members across industries, I find they consistently return to a fundamental question: Does AI use in the boardroom help or hinder directors in fulfilling their fiduciary duties?
This question must be examined through the lens of directors' core responsibilities - their duty of care and duty of loyalty. Does AI make directors better informed without diminishing their scrutiny? Can it enhance alignment with stakeholder interests without introducing bias?
A recent paper in the Stanford Closer Look Series, "The Artificially Intelligent Boardroom," highlights how carefully boards must navigate these questions to avoid undermining their role. The authors identify four key areas poised for AI impact: how boards function, how they process information, how boards and management interact, and how board advisors contribute.
Where AI Shows Promise in Board Work
Corporate governance rests on a delicate balance: management runs the company, while the board oversees management's actions on behalf of shareholders. Historically, boards have fulfilled their duty of care when they are reasonably informed and make decisions in good faith based on the information management provides.
Today's information environment makes this increasingly challenging. Information asymmetry, amplified by information overload, can erode the effectiveness of governance oversight.
AI offers potential solutions. Tools like Equilar's ERIC or Diligent's governance platforms can summarize complex documents, identify anomalies across reporting periods, and provide predictive analyses beyond historical data. These capabilities could transform boards from reactive to proactive in their oversight.
Here are a few areas where some boards are actively experimenting with responsible use of AI:
Material enhancement could involve summarizing lengthy board materials without losing key insights, highlighting narrative changes between reporting periods, and identifying trends or anomalies across financial reports.
Knowledge augmentation might include offering real-time explanations of regulations or financial metrics, retrieving relevant historical decisions to inform current deliberations, and fact-checking statements during discussions.
Process improvement could mean suggesting questions based on identified gaps or inconsistencies in materials, tracking follow-up actions from previous meetings, and supporting oversight with targeted data analysis.
Boardroom engagement enhancement could mean asking AI to suggest areas where the board should have dug deeper, along with potential ways to leverage diversity of thought in boardroom discussions.
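To make the "material enhancement" idea concrete, here is a minimal sketch of extractive summarization: score each sentence by how frequent its words are across the whole document and keep the top few in their original order. This is a toy illustration of the concept only—a real board tool would use an LLM rather than word counts—and the sample board-pack text is invented.

```python
import re
from collections import Counter

def extractive_summary(text: str, max_sentences: int = 2) -> str:
    """Keep the highest-scoring sentences, preserving their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z0-9]+", text.lower()))

    def score(sentence: str) -> int:
        # A sentence scores higher when its words recur throughout the document.
        return sum(freq[w] for w in re.findall(r"[a-z0-9]+", sentence.lower()))

    top = sorted(range(len(sentences)), key=lambda i: -score(sentences[i]))[:max_sentences]
    return " ".join(sentences[i] for i in sorted(top))

# Invented sample text standing in for a lengthy board pack.
pack = (
    "Revenue grew 8 percent this quarter. "
    "The cafeteria menu was refreshed. "
    "Revenue growth was driven by subscription revenue in Europe."
)
print(extractive_summary(pack))
```

Even this crude version drops the low-signal cafeteria sentence while keeping both revenue sentences—the same filtering-for-materiality behavior, in miniature, that the tools above perform at scale.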
For specialized boards like mutual fund and ETF boards, AI might transform critical processes like Section 15(c) contract reviews. Instead of relying solely on consultants to prepare comparative analyses weeks in advance, AI systems could provide detailed comparisons across advisers and sub-advisers in real time during meetings.
Valid Concerns That Demand Attention
My enthusiasm for AI in boardrooms has been tempered by legitimate concerns raised by thoughtful directors and their advisors.
Judgment and accountability concerns are profound. Several directors have expressed worries about delegating critical judgment to algorithms. The essence of a board's role is applying human judgment to complex situations. Can directors truly outsource even part of that judgment while maintaining accountability?
Nuance and pattern recognition capabilities help experienced directors spot subtle inconsistencies or emerging risks. Many directors have caught important issues that didn't fit standard reporting frameworks but proved critical to effective oversight. When AI synthesizes materials according to its own judgment of what information is material, do we lose important nuances without even knowing it?
Information overload is another risk. Ironically, while AI reduces information asymmetry, it could potentially overwhelm directors with too much data. The expectation for deeper preparation based on AI-enhanced materials might increase workloads rather than reduce them.
Privacy and security risks remain significant. Sensitive board materials could be exposed to unauthorized access if security protocols fail—a concern that dominates governance discussions, particularly in regulated industries and public companies. Fortunately, advances like on-premises deployment are addressing these concerns through secure implementations behind corporate firewalls with strict access controls.
Blurred board governance boundaries present another challenge. With access to comprehensive corporate data through AI tools, directors risk crossing into managerial territory—potentially violating governance principles that separate oversight from operations.
Trusting and auditing the quality of AI models is a critical challenge at the current stage. Moving from purely historical data to incorporating predictive trends and analysis without reliable ways to measure the quality of such predictions could lead to devastating consequences.
Data governance needs are profound: Who ensures proper governance of the data used in the boardroom? Input permissions for models and data rooms require different user rights than boardroom inquiries, discussions, and record-keeping.
Additional governance concerns include the creation of separate sets of records for meeting discussions, the workload required to train and prepare AI ahead of board meetings, and the potential for surveillance of individual directors' activities becoming too intrusive.
Encouragingly, some of these concerns are being actively addressed through technological advancements. AI techniques such as Retrieval Augmented Generation (RAG) and long context windows now ground AI outputs in authorized source materials, significantly reducing the risk of hallucination while maintaining traceability to original documents. Domain-specific fine-tuning creates systems tailored specifically for governance applications with improved understanding of regulatory requirements, financial metrics, and board responsibilities. AI infrastructure players and enterprise vendors provide safety, privacy and security guardrails around their solutions.
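As a rough illustration of the grounding idea behind RAG, the sketch below retrieves the board-pack chunk that best matches a question and returns its source identifier alongside the text, so an answer can always be traced back to an authorized document. Real systems use embedding-based retrieval and then prompt a model with the retrieved passage; here retrieval is reduced to simple word overlap, and the chunk IDs and contents are invented.

```python
import re

def retrieve(query: str, chunks: dict[str, str]) -> tuple[str, str]:
    """Return (source_id, text) of the chunk sharing the most words with the query."""
    def words(s: str) -> set[str]:
        return set(re.findall(r"[a-z0-9]+", s.lower()))

    q = words(query)
    # Pick the chunk with the largest word overlap with the question.
    return max(chunks.items(), key=lambda kv: len(q & words(kv[1])))

# Invented example chunks, keyed by a citation-friendly source ID.
chunks = {
    "board-pack-p12": "Q3 revenue rose 8 percent, driven by the subscription segment.",
    "board-pack-p47": "The audit committee reviewed two new cybersecurity incidents.",
}
source_id, context = retrieve("What happened to revenue this quarter?", chunks)
# A production system would now prompt the model with `context` only and cite
# `source_id`, which is what keeps outputs grounded and traceable.
```

The key design point is the returned `source_id`: grounding is only useful for governance if every generated statement can be mapped back to the page of the board pack it came from.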
More Creative Uses in the Boardroom
As boards get more comfortable with AI experiments, some may consider more creative uses that would further enhance boardroom effectiveness.
Directors could potentially use AI assistants tailored to their specific committee roles and areas of expertise. Directors from different backgrounds could receive materials enriched with more explanation in their weaker areas and less in their stronger ones. These tools might synthesize competitive intelligence or regulatory updates during discussions, bridging knowledge gaps while fostering more dynamic deliberations.
An innovative approach might involve deploying multiple AI systems trained with different risk tolerances or industry perspectives. These systems could present alternative viewpoints during strategy discussions, challenging assumptions and deepening analysis in ways that help combat groupthink.
Forward-thinking organizations might explore supplemental board materials designed specifically for algorithmic analysis and AI agents to consume, rather than for directors to read. This hybrid information architecture might adapt better to a future proliferation of AI agents.
Practical First Steps
For boards considering AI adoption, a gradual approach might offer the best balance of innovation and prudence.
Begin with non-sensitive applications by using AI for analyzing publicly available information before applying it to confidential materials.
Establish clear boundaries by developing explicit policies about appropriate AI use, including what types of materials can be processed and how outputs should be interpreted.
Focus on augmentation, not automation, by emphasizing tools that enhance director capabilities rather than attempting to automate judgment.
Implement robust security protocols to ensure any AI implementation meets or exceeds the organization's standards for handling sensitive information.
Regularly evaluate effectiveness by assessing both practical benefits and potential governance risks.
Preserving the Art of Governance
My thinking on boardroom AI has evolved considerably since beginning this journey. What initially seemed like a straightforward efficiency opportunity has revealed itself to be a profound governance consideration touching fundamental questions about the board's role.
I believe we're entering a period where boards that thoughtfully integrate AI into their practices may gain significant advantages in effectiveness and insight—but only if they preserve the essential human judgment that defines good governance. The boardroom of tomorrow won't be AI-free, but neither will it surrender its fundamental responsibility to exercise independent, informed judgment on behalf of those it serves.
1. Surge in Adoption of MCP Accelerates AI Agent Developments
Anthropic’s Model Context Protocol (MCP) is a major step forward in making AI systems more adaptable and useful for business applications. MCP is like the USB-C of AI, a universal connector that allows models to seamlessly interact with external tools, data sources, and APIs without custom integrations. For business leaders, this means faster deployment of AI solutions that can access real-time information, streamline workflows, and drive smarter decisions. Learn more here.
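To show what "universal connector" means in practice: MCP messages are JSON-RPC 2.0, and a client invokes a server-side tool with the spec's `tools/call` method. The sketch below builds such a request; the tool name `search_filings` and its arguments are invented placeholders standing in for whatever tools a given server exposes, not part of the protocol itself.

```python
import json

# An MCP tool invocation is a JSON-RPC 2.0 request. The "tools/call" method name
# comes from the MCP specification; "search_filings" and its arguments are
# hypothetical examples of a tool a server might expose.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_filings",
        "arguments": {"query": "10-K risk factors"},
    },
}
wire = json.dumps(request)  # the JSON the client actually sends to the server
```

Because every tool, whatever it does, is reached through this same request shape, a model integrated with one MCP server can talk to any other without custom glue code—that is the "USB-C" property.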
2. What About Data Protection?
If you use the $20 version of ChatGPT for personal use, you may not be aware that you get almost exactly the same level of data protection (or lack thereof) as the free version. The Swiss law firm Vischer explains the different terms of use and data processing agreements of popular AI tools (personal and business licenses). Meanwhile, WIRED warns about upcoming changes to Alexa's privacy agreement: “Everything You Say to Your Echo Will Soon Be Sent to Amazon, and You Can’t Opt Out”.
Two AI Apps to Try
1. Lovable - use English to turn your idea into an app
You may have heard about ‘vibe coding’, but AI-powered tools such as Cursor or GitHub Copilot may still look intimidating. Then you should give the no-code tool Lovable a try. A prompt as simple as ‘clone airbnb’ can create a functional website for you. I’ve tested similar tools like Replit and ChatGPT in the past, and I have to say this is much more intuitive and beautiful to use. The company reached $10M in annualized recurring revenue within two months of its launch, becoming the fastest-growing company in Europe in that very short time. Let me know what you create!
2. Artificial Societies - simulate your social network
Artificial Societies utilizes AI to simulate human interactions within large groups, enabling companies to predict marketing and content performance before real-world deployment. For example, you can test whether your post will go viral on LinkedIn by publishing it there first and seeing the engagement level from your simulated LinkedIn audience. It definitely makes me feel a bit like I’m part of ‘The Sims’.
Thank You
If you’re finding this newsletter valuable, please share it with a friend, and consider subscribing if you haven’t already. I greatly appreciate it.
Sincerely,
Joyce 👋