#21 CFOs Accelerate AI Adoption
Plus: Small Language Models, Practical Applications for Fund Board Trustees, Wisdom of Human and AI Crowds, McKinsey's Lessons from AI Implementation, and More
Welcome back! In this issue, I cover:
Notable News and Developments: Nobel Prize, OpenAI’s Raise, AI Safety Bill Vetoed, McKinsey’s Lessons from AI Implementation, Microsoft’s Energy Bill
Small Language Models: The Next Big Thing
Surprise! CFOs Accelerate AI Adoption
Directors’ Corner: Practical Applications, Governance Insights, and Future Trends for Fund Board Trustees
One More Thing: The Wisdom of Human+AI Crowds in Decision Making
Enjoy.
Notable News and Developments
1. Nobel Prize in Physics Awarded to AI Scientists
The 2024 Nobel Prize in Physics has been awarded to John Hopfield and Geoffrey Hinton, recognized for their groundbreaking work in artificial intelligence. Their contributions, particularly in the development of artificial neural networks, have been pivotal for advancements in machine learning, significantly impacting technologies like large language models, including ChatGPT. This award highlights the growing intersection of physics and AI, acknowledging the foundational algorithms that enable machines to learn and evolve.
2. OpenAI Raised $6.6bn at $157bn Valuation
OpenAI has secured a monumental $6.6 billion in its latest funding round, achieving a post-money valuation of $157 billion. This significant jump from its previous $86 billion valuation earlier this year underscores investor confidence despite the recent wave of executive departures. The funding was reportedly contingent on OpenAI's successful conversion to a for-profit company.
OpenAI has experienced a significant exodus of top executives recently. Chief Technology Officer Mira Murati, Chief Research Officer Bob McGrew, and Vice President of Research Barret Zoph all announced their departures. This follows the earlier exits of co-founders Ilya Sutskever and John Schulman, as well as other key figures like Jan Leike. These departures have raised concerns about OpenAI's resource allocation and its commitment to AI safety.
3. The AI Safety Bill (SB 1047) Vetoed
Governor Newsom vetoed SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. Newsom argued the bill was overly broad, applying stringent standards to even basic AI functions in large systems, regardless of their risk level or purpose. He criticized its focus on model size rather than on function and potential risks. The governor also expressed concern that the bill might stifle innovation and create a false sense of security. Despite the veto, Newsom reaffirmed his commitment to AI safety, calling for a more targeted and flexible approach to regulation that can keep pace with rapidly evolving AI technology.
4. Interesting Lessons from McKinsey’s Implementation of AI Bot
This article detailing McKinsey’s AI journey is a fascinating read.
McKinsey & Company introduced Lilli, an advanced AI platform, into its operations. Lilli's potential quickly became evident as it was used for diverse tasks, from data analysis to creative problem-solving.
During the implementation of Lilli, McKinsey faced several challenges.
Technically, developing a robust orchestration layer to integrate multiple AI models proved complex, complicating scaling efforts. Data management issues arose around privacy, relevance, and cost-efficiency, necessitating a dedicated data strategy team.
Organizationally, McKinsey had to establish a new collaborative operating model and invest in skill development for prompt engineering among both developers and users. Additionally, the firm prioritized testing and rapid adjustments to address model hallucination and ensure quality. These hurdles ultimately shaped Lilli into a transformative tool for enhancing operational efficiency and client value.
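McKinsey has not published Lilli's internals, but the idea of an orchestration layer is easy to make concrete. The toy Python sketch below routes each request to one of several task-specific models from a single registry; the model names, tasks, and routing rules are purely illustrative, not McKinsey's actual architecture.

```python
# Illustrative only: a toy orchestration layer that routes requests to
# different models by task type. Model names and routing rules are
# hypothetical; this is not McKinsey's actual Lilli architecture.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Request:
    task: str      # e.g. "summarize", "retrieve", "analyze"
    payload: str   # the text the model should work on


def summarize_model(text: str) -> str:
    return f"[summary model] summary of: {text[:40]}..."


def retrieval_model(query: str) -> str:
    return f"[retrieval model] documents matching: {query}"


def analysis_model(data: str) -> str:
    return f"[analysis model] insights for: {data}"


# The orchestration layer: one registry that knows which model handles which task.
ROUTES: Dict[str, Callable[[str], str]] = {
    "summarize": summarize_model,
    "retrieve": retrieval_model,
    "analyze": analysis_model,
}


def orchestrate(req: Request) -> str:
    handler = ROUTES.get(req.task)
    if handler is None:
        raise ValueError(f"No model registered for task '{req.task}'")
    return handler(req.payload)


if __name__ == "__main__":
    print(orchestrate(Request("summarize", "Q3 revenue grew 12% on strong client demand.")))
```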
5. Microsoft’s AI Power Deal
Microsoft has signed a 20-year agreement to purchase power from the reopened Three Mile Island Unit 1 nuclear reactor to support its growing AI and data center operations. This deal aims to provide Microsoft with reliable, carbon-free energy while revitalizing a historically significant nuclear site. The move highlights the increasing energy demands of AI technologies and tech companies' efforts to meet clean energy goals. The project faces regulatory hurdles and some local opposition but is expected to create jobs and economic benefits for Pennsylvania.
Small Language Models: The Next Big Thing
In AI model developments, bigger isn't always better. As energy costs soar and efficiency becomes paramount, small language models (SLMs) are emerging as attractive solutions.
These compact models, typically containing anywhere from a few hundred million to roughly 10 billion parameters, perform specific language tasks efficiently and with lower resource demands than their larger counterparts.
Nvidia's recent collaboration with Mistral AI highlights the capabilities of SLMs. Their Mistral-NeMo-Minitron 8B model, a streamlined version of a larger 12 billion parameter model, excels in nine key benchmarks for its size category. It is designed for low latency and high throughput, making it suitable for deployment on standard workstations and laptops.
Microsoft is also advancing the SLM landscape with its Phi-3 family of models. The Phi-3-mini, featuring just 3.8 billion parameters, outperforms models that are twice its size, demonstrating that smaller models can achieve competitive performance.
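To make the "runs on a standard workstation" point concrete, here is a minimal sketch of querying a small open model locally with the Hugging Face transformers library. The model ID, prompt, and settings are assumptions for illustration; you still need sufficient RAM or a modest GPU, and licensing terms vary by model.

```python
# Minimal sketch: running a small language model locally with the Hugging Face
# transformers library (pip install transformers torch). The model ID, prompt,
# and settings are illustrative; substitute any small open model you can run.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",  # example ~3.8B-parameter model
    device_map="auto",           # uses a GPU if available, otherwise CPU
    trust_remote_code=True,      # may be needed on older transformers versions
)

prompt = "In one sentence, why might a company prefer a small language model?"
result = generator(prompt, max_new_tokens=60, do_sample=False)
print(result[0]["generated_text"])
```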
For business leaders, understanding SLMs is essential for several reasons:
Cost savings: SLMs require less computational power, which can significantly reduce operational costs.
Faster implementation: Their smaller size allows for quicker training and deployment.
Enhanced data security: Many SLMs can be deployed locally or within private cloud environments.
Specialized performance: These models can be fine-tuned for specific tasks, often outperforming larger models in niche applications.
For these reasons, SLMs present an opportunity for businesses to adopt more sustainable and efficient AI solutions. Understanding and leveraging these models can help organizations remain competitive while managing resources effectively.
Surprise! CFOs Accelerate AI Adoption
Just a year ago, if you were betting on which business function would be one of the last to jump on the AI bandwagon, finance would've been the odds-on favorite. Conservative by nature, with sky-high accuracy demands and overwhelming workload, finance teams weren't exactly known for their early adoption of new technology.
Fast forward to today, and hold onto your ledgers. A recent Gartner survey dropped a bombshell: finance is now leading the AI charge with a staggering 21-percentage-point surge in adoption. 58% of finance leaders now report AI implementation - the biggest increase of any department versus a year ago.
“In this survey last year, other administrative functions (such as HR, legal, and procurement) were twice as likely to be using or scaling out AI solutions compared to the finance function,” said Marco Steecker, senior director of research in Gartner’s finance practice. “This year the gap is almost nonexistent.”
What’s happening? This surge can be attributed to several factors:
Tangible ROI: AI solutions are demonstrating clear ROI by significantly boosting productivity and streamlining talent management.
Evolving Role of Finance: AI is enabling finance teams to transition from reactive number crunching to proactive strategic analysis. This shift is exemplified by AI-powered flux analysis, which can swiftly identify anomalies and potential causes in financial data (a simplified sketch follows this list), allowing finance professionals to focus on higher-level tasks like refining budgeting processes.
Talent Shortage: The ongoing shortage of accounting talent is prompting finance professionals to embrace AI as a valuable ally rather than a threat. 76% of US CFOs say they are facing a significant talent shortage within their teams, according to a 2023 survey by Avalara. This perspective contrasts with other sectors where AI is sometimes perceived as a job displacer.
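Much of flux analysis is straightforward arithmetic: compare actuals against a baseline, flag outsized movements, and explain them. The sketch below shows that deterministic core in pandas with invented numbers and an arbitrary 10% threshold; an AI-assisted tool would layer a language model on top to draft explanations for humans to verify.

```python
# Illustrative flux (variance) analysis: flag accounts whose period-over-period
# change exceeds a threshold. The data, column names, and 10% threshold are
# invented for demonstration purposes.
import pandas as pd

ledger = pd.DataFrame({
    "account": ["Revenue", "COGS", "Marketing", "Travel"],
    "prior":   [1_200_000, 480_000, 150_000, 20_000],
    "current": [1_260_000, 470_000, 230_000, 21_000],
})

ledger["delta"] = ledger["current"] - ledger["prior"]
ledger["pct_change"] = ledger["delta"] / ledger["prior"]

THRESHOLD = 0.10  # flag moves larger than +/-10%
flagged = ledger[ledger["pct_change"].abs() > THRESHOLD]

print(flagged[["account", "prior", "current", "pct_change"]])
# An AI-assisted workflow would hand `flagged` to a language model to draft
# plain-English explanations, which finance professionals then verify.
```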
Applications of AI in Finance
AI is making its mark across a spectrum of financial operations, automating mundane tasks and augmenting strategic decision-making. This shift is evident in the proliferation of AI-powered financial tools.
Accounting: AI is transforming accounting processes through applications such as dynamic reporting, automated workflows, streamlined close management, enhanced AP and payroll management, and automated expense management.
Financial Planning & Analysis (FP&A): AI is empowering FP&A teams with the ability to extract insights from unstructured data, generate timely and relevant financial insights, and enhance predictive modeling and planning capabilities.
Science vs Art
We're seeing faster AI adoption in the 'science' part of finance - the structured, verifiable accounting tasks. AI is automating mundane processes like information gathering, rule searching, and number reconciliation. This allows accounting professionals to focus on verifying outcomes and building trust in AI systems over time.
However, AI adoption is slower in areas perceived as more 'art' than science:
Forecasting, budgeting, and planning involve high-stakes strategic discussions and negotiations that are challenging to automate fully.
Organizational changes during company growth phases are difficult for AI to adapt to convincingly.
While AI is proving useful for scenario planning and cross-departmental insights, gaining stakeholder trust in these areas will take time.
Considerations for Successful AI Implementation
While the potential of AI in finance is vast, successful implementation hinges on several key considerations:
Integration: Seamless integration with existing systems is paramount to avoid workflow disruptions and ensure data consistency.
Scalability: As businesses grow, their AI solutions must be able to scale accordingly to accommodate increasing data volumes and evolving business needs.
User-Friendliness: Intuitive interfaces are crucial for encouraging widespread adoption among finance professionals, many of whom may not have extensive technical expertise.
Data Security: Robust data security measures are non-negotiable, especially given the sensitive nature of financial data. AI systems must comply with all relevant data protection regulations.
Focus on Problem-Solving: It's essential to prioritize AI solutions that address critical business challenges rather than simply adopting AI for its own sake. The overarching goal should be to enhance decision-making, save time, and optimize resource allocation.
Despite the growing enthusiasm for AI in finance, some challenges remain. These include data governance and security concerns, implementation complexity, the finance function's traditional aversion to risk, and the need for talent and change management.
CFOs must stay informed and ready to adapt their tech stacks to maintain competitive advantage. The focus should be on solving significant problems rather than adopting AI for its own sake.
Directors’ Corner: Practical Applications, Governance Insights, and Future Trends for Fund Board Trustees
AI's impact on fund management is both tangible and multifaceted. From accelerating deal sourcing through automated SEC filing analysis to unlocking value in unstructured data like lease agreements and customer reviews, AI is changing workflows in investment management. It's augmenting asset allocation with improved portfolio construction and ongoing compliance monitoring. With humans kept in the loop for decision-making, investment advisors are embracing the benefits of these AI advancements.
I enjoyed a recent conversation with Thompson Hine partner Cassandra Borchers on practical AI applications relevant to fund boards. We highlighted several critical governance issues that demand immediate attention, such as conflicts of interest, AI reliability, AI washing, data protection, and boardroom AI tools. Here is the full conversation; I would love to hear your feedback.
One More Thing: The Wisdom of Human+AI Crowds in Decision Making
James Surowiecki's 2004 book "The Wisdom of Crowds" is an influential work on collective decision-making, such as forecasting economic trends or financial market performance. Surowiecki argued that, under the right conditions, large groups can make more accurate predictions than individual experts. Now, a new study, "Wisdom of the Silicon Crowd," explores this concept in the age of AI.
The research compared predictions from AI-only, human-only, and AI+human crowds, yielding intriguing results:
Ensembles of 12 large language models (LLMs) matched human crowd accuracy.
Individual LLMs showed an "acquiescence bias," emphasizing the importance of aggregation.
Crucially, integrating human predictions improved LLM accuracy by 17-28%.
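The mechanics of crowd aggregation are easy to illustrate. The toy example below takes the median of a dozen model forecasts and blends it, with equal weight, with a human-crowd forecast; the numbers and weights are invented and this is not the study's exact protocol, only the basic intuition for why aggregation helps.

```python
# Toy illustration of forecast aggregation: median of several model forecasts,
# then an equal-weight blend with a human-crowd forecast. All numbers are
# invented; this is not the "Wisdom of the Silicon Crowd" study's exact method.
from statistics import median

# Probability forecasts (0-1) for the same yes/no question.
llm_forecasts = [0.62, 0.70, 0.55, 0.66, 0.72, 0.58,
                 0.64, 0.69, 0.61, 0.67, 0.60, 0.65]
human_crowd_forecast = 0.48

llm_crowd = median(llm_forecasts)                       # aggregate the machine "crowd"
blended = 0.5 * llm_crowd + 0.5 * human_crowd_forecast  # human+AI blend

print(f"LLM crowd forecast:   {llm_crowd:.2f}")
print(f"Human crowd forecast: {human_crowd_forecast:.2f}")
print(f"Human+AI blend:       {blended:.2f}")
```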
These findings suggest that while AI crowds can rival human performance, the best outcomes emerge from combining human and machine intelligence. As we integrate AI into decision-making processes, this study underscores the value of a balanced approach.
The future of collective intelligence may lie in the synergy between human expertise and AI capabilities, potentially reshaping how we approach complex decisions across various fields.
I hope you found this newsletter valuable. If so, please consider sharing it with others in your network. I greatly appreciate it.
Joyce Li