#16 AI Model Drift: What Leaders Must Know
Plus: McDonald's AI fiasco, ex-OpenAI chief scientist's new company on safe AGI, some AI tools and podcasts, and more
👋 Welcome back to "AI Simplified for Leaders," your bi-weekly digest aimed at making sense of artificial intelligence for business leaders, board members, and investors.
In this issue, I cover:
Notable news: AI adoption continues to rise; consulting firms see surge in AI revenue; McDonald’s shuts down its new AI ordering system; Ilya’s new startup focuses on safe superintelligence; record labels sue AI apps
AI Model Drift: What Senior Finance Leaders Must Know
Some interesting podcasts
AI tools: Claude 3.5 Sonnet and Luma AI
One more thing: the Humanity Code documentary
Enjoy.
Notable AI News
1. Consulting Firm Accenture Booked $900m of AI Business in a Single Quarter
Accenture recorded a roughly 50% jump in new bookings quarter-over-quarter. Of that, $900 million in new bookings was for its GenAI services, compared with about $600 million in the prior three-month period, taking the full-year total to more than $2 billion.
2. Another Survey; Another Jump in Adoption
Bain’s latest survey (of their client base of mainly large companies) finds that companies are investing heavily in generative AI:
“On average, about $5 million annually, with an average of 100 employees dedicating at least some of their time to generative AI. Among large companies, about 20% are investing up to $50 million per year. These investments reflect their priorities: More than 60% of companies surveyed see generative AI as a top three priority over the next two years.”
3. McDonald's Shuts Down Its New AI Drive-Thru Ordering
McDonald's is discontinuing its AI drive-thru ordering system, developed in partnership with IBM, due to accuracy issues and high operating costs. The technology, tested in over 100 restaurants, struggled to interpret various accents and dialects, leading to order inaccuracies.
Interestingly, this system had been in development since 2021, before ChatGPT came out. I’ve seen AI startup demos of voice ordering with good accuracy at AI hackathons, so I wonder whether this fiasco stemmed from an outdated AI technology stack or from insufficient quality control before release. McDonald's hinted the blame lay with IBM in a statement to CNBC, saying it is not ruling out potential AI drive-thru plans in the future even though it has ended the IBM partnership.
4. OpenAI Co-Founder Ilya Sutskever Starts New Company to Build Safe AGI
A few weeks after leaving OpenAI, its former chief scientist Ilya Sutskever announced that his new start-up, Safe Superintelligence, aims to build AI technologies that are smarter than humans but not dangerous. In Ilya’s own words on X:
“Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”
The mission reads like a research lab's, so it will be interesting to see the company's future funding plans. Meanwhile, The Information reports that OpenAI CEO Sam Altman is seeking to convert OpenAI into a for-profit business that the not-for-profit board does not control. This is probably not a surprise to anyone at this point.
5. Record Labels Sue AI Music Apps Over Training Data
The Recording Industry Association of America (RIAA) and major record labels, including Sony, Universal, and Warner, have filed lawsuits against AI music generation companies Udio and Suno, alleging copyright infringement. The lawsuits claim that Udio and Suno trained their AI models on copyrighted music without permission, resulting in the unauthorized use of artists' work. In the New York Times vs. OpenAI suit, the infringement claims focused more on the similarity of AI outputs to New York Times articles; in contrast, the RIAA lawsuit targets the training side specifically, potentially expanding the landscape of IP claims.
AI Model Drift: What Senior Finance Leaders Must Know
Could your AI investments depreciate faster than you think? Probably one of the questions that keep CFOs up at night 😱😱😱
As AI becomes integral to business operations, leaders and board directors must understand AI model drift, a phenomenon that can significantly impact the long-term value and risk profile of AI investments.
What is AI Model Drift?
AI model drift occurs when an AI system's performance gradually declines due to changes in the data it processes or its operational environment. This isn't uniform across all AI applications. Models predicting rapidly changing phenomena (like consumer behavior) may drift faster than those handling more stable tasks (like document classification).
Financial Implications of Model Drift
For senior finance leaders, model drift introduces nuanced financial considerations. The oft-cited maintenance costs of 10-30% of initial investment provide a starting point, but the reality is more complex. High-stakes models in finance or healthcare might require more frequent updates, pushing costs towards the higher end. In contrast, models in more stable domains might need less attention, keeping costs lower. CFOs and FP&A leaders must align maintenance intensity with the model's business impact.
The hidden costs of drift can be significant. A drifting fraud detection model might lead to increased false positives, causing customer friction and potential revenue loss. On the flip side, a well-maintained model could provide a competitive edge, particularly in fast-moving sectors like e-commerce or fintech. Board directors should be aware of these potential impacts when overseeing AI strategy and risk management.
Budgeting Strategies for Model Maintenance
When budgeting for model drift maintenance, CFOs and FP&A leaders should plan for continuous monitoring, retraining, and maintenance of AI models. Include drift management in initial project budgets, factoring in the costs of detection and mitigation when planning new AI initiatives. Invest in automated monitoring tools that can detect performance degradation. Set aside resources for regular model updates and retraining, which may be needed quarterly or annually depending on the use case.
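For readers who want a concrete sense of what "automated monitoring tools that can detect performance degradation" do under the hood, a basic drift check compares the distribution of production inputs against the training-time baseline. Below is an illustrative Python sketch using the Population Stability Index (PSI), a common drift metric; the data, thresholds, and function name here are hypothetical, and commercial monitoring platforms implement more sophisticated variants of this idea.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Compare a production data distribution against the training-time
    baseline. A common rule of thumb: PSI < 0.1 is stable, 0.1-0.25
    suggests moderate drift, and > 0.25 signals significant drift
    (retraining may be warranted)."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    # Clip production values into the baseline's range so out-of-range
    # observations land in the edge bins instead of being dropped.
    current = np.clip(current, edges[0], edges[-1])
    eps = 1e-6  # avoid log(0) for empty bins
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline) + eps
    curr_pct = np.histogram(current, bins=edges)[0] / len(current) + eps
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)  # a feature as seen at training time
prod = rng.normal(0.5, 1.2, 10_000)   # the same feature, shifted in production
print(f"PSI: {population_stability_index(train, prod):.3f}")
```

Running a check like this on each model input on a schedule, and alerting when the metric crosses a threshold, is essentially what the automated monitoring line item in the budget pays for.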
The scale of AI operations matters. Larger companies with more complex AI systems may need to invest more heavily in drift management. Account for talent costs, budgeting for skilled data science teams or considering outsourcing costs. Factor in industry-specific requirements, especially for highly regulated industries or those with rapidly changing environments. Board members should ensure these considerations are reflected in the company's overall AI strategy and risk assessment.
Build vs. Buy: Navigating the Capital Investments Decision
Senior finance leaders face a critical decision between building in-house capabilities and using vendor solutions. For models using highly sensitive data or in core competitive areas, in-house development might be necessary. However, for more general applications, vendor solutions can offer sophisticated drift management capabilities without extensive internal expertise. A hybrid approach, increasingly popular, involves using vendor-provided base models fine-tuned with company-specific data.
Smaller companies or those with limited AI expertise may benefit from outsourcing model maintenance to vendors or using off-the-shelf models enhanced with techniques like retrieval-augmented generation (RAG) or knowledge graphs to incorporate private data. This can reduce the burden of managing model drift internally. CFOs should weigh the long-term costs and benefits of each approach, while board directors ensure the chosen strategy aligns with overall corporate governance and risk tolerance.
The Future of AI Model Maintenance
As AI technology advances, so do drift management techniques. Emerging approaches like federated learning and continual learning promise to change how we handle model updates, potentially reducing the frequency and cost of major retraining efforts. For senior finance leaders, staying informed about these developments is crucial to understand how evolving AI maintenance strategies might align with business goals and capital allocation priorities.
FP&A leaders should work closely with technical teams to forecast the potential impact of these emerging technologies on long-term AI costs and performance. Board members, meanwhile, should ensure the company remains adaptable to these technological shifts, balancing innovation with prudent risk management.
Other Strategic Considerations
Model drift management should be viewed as an essential aspect of AI governance and risk management. By understanding and planning for model drift, organizations can maintain the effectiveness of their AI investments and mitigate potential risks associated with degrading model performance.
For CFOs, this means not just budgeting for initial AI implementation, but strategically allocating resources for ongoing maintenance and improvement. Board directors must ensure that AI strategies, including provisions for managing model drift, align with the company's overall strategic direction and risk appetite.
AI Podcasts
In the spirit of summer, here are a few AI-related podcasts worth noting:
1. Aravind Srinivas, Perplexity CEO on the Lex Fridman podcast
Among the many thought-provoking points Srinivas made during this long-form interview, his vision for Perplexity particularly stood out. He emphasized knowledge over mere search, which is reflected in the product's design, especially in its potential follow-up questions that facilitate a learning process rather than a transactional one. This approach aligns with Perplexity's philosophy of satisfying innate curiosity with academic rigor, as evidenced by its use of citations.
If you haven't tried Perplexity yet, it's worth checking out. Although the product has recently faced some performance issues and content IP controversies, it remains many people's go-to AI-powered knowledge engine.
2. The State of AI, by a16z co-founders Marc Andreessen & Ben Horowitz
With the consensus moving toward big tech winning the AI race thanks to its scale of compute and data, it is interesting to hear these two VC investors make the case that such advantages could be overstated, especially when it comes to the scalability of data advantages.
AI Tools Spotlight
Claude 3.5 Sonnet: Dashboards and Games in Seconds
Anthropic’s new Claude 3.5 Sonnet beat GPT-4o on a series of benchmark metrics, and above all, it provides a fun user experience. Even the free version delivers impressive results. I uploaded four recent company announcements from ServiceNow and gave Claude this simple prompt:
“Create interactive business intelligence dashboard to show major business drivers including: revenue, orders, customer numbers, margins, cash flows and more.”
Check out the sleek dashboard in my illustrative video below.
People are also using Claude to create simple interactive games and tutorials in preview mode. For instance, AI expert Allie K. Miller created a Mancala web app from just one screenshot of the game’s instructions, while Shubham Saboo shared on X how he turned a research paper into an interactive learning dashboard. Try out these examples; I look forward to seeing what you create.
Luma Dream Machine: Turning Text and Image Into Video
Luma AI’s Dream Machine makes very cool videos from a simple text prompt or a static image, and you can try it for free on its website. Seeing the impressive performance of Luma and Runway in creating videos, along with their generous consumer pricing options, I am even more confused about why their competitor Pika Labs’ recent raise was so well received (more in the last issue).
Here’s the video I created with no more than 5 words. Guess what these five words are!
One More Thing
Over the past few weeks, I have been hearing more and more urgent discussions and questions about what AI means for our humanity as the technology advances rapidly.
"The Humanity Code" documentary film explores the profound implications of AI, the critical questions we must ask, and the decisive actions we must take. It also aims to decode what matters most to us and paint a vision for the world we want to -- and can -- create with AI. The film will reveal the thinking and actions of not just the important technologists and policymakers, but also the artists, spiritual leaders, ethicists, business leaders, and others -- including everyday people -- deeply contemplating how to steer the AI ship through these uncharted waters.
My dear friend Heidi Lorenzen, a board director and former tech executive, left her C-level corporate job to become the executive director of the documentary. If this initiative resonates with you, please click here to learn more and support their crowdfunding campaign. Thank you for your support.
Have a great week.
Joyce Li