#10 Embracing AI Coexistence
Plus: New AI Moves from Google and Amazon, AI Chip Race, OpenAI Forum Thoughts, YC Demo Day, AI Regulation and More
👋 Welcome back to "AI Simplified for Leaders," your weekly digest aimed at making sense of artificial intelligence for business leaders, board members, and investors. I invite you to explore the past issues here.
You may notice that this issue is shorter than usual. Allergies hit me hard this week and cut my productivity to half its usual level. Thank you for your understanding and continued readership - it means so much to me.
In this issue, I cover:
Curated AI News: Google’s Cloud Next and Amazon’s AI Pillar
The Custom AI Chip Race
Embracing AI Coexistence
Y Combinator’s Demo Day
Directors’ Corner: Talent; Regulation
Enjoy.
AI News, Curated for You
1. Google goes on AI offensive at its Cloud Next conference
Google has been under a lot of heat lately, as this Financial Times article details, and as a user I have been disappointed by the frequent bugs and issues in its AI products. However, I believe it is far too early to write off the tech giant, given its treasure trove of data, tremendous talent pool, and deep pockets to redirect toward talent and products. Google's leaders finally appear to have a sense of urgency, and recent product moves and talent hires demonstrate it.
At its Cloud Next conference, Google showcased numerous AI applications built on Google Cloud, unveiled its custom chip Axion, made its latest AI model Gemini 1.5 Pro available for developers through APIs, and announced a flurry of new features coming to Google Workspace. I decided to give it a try: I used "@gmail" in a Google document to summarize the emails I received from a specific business contact over the past month. While the function is not entirely stable yet, it provides an exciting glimpse of what might be on the horizon.
2. Amazon hails Gen AI as the next ‘pillar’ and adds AI guru Andrew Ng to board
Amazon CEO Andy Jassy's 2023 shareholder letter underscores the company's strategic emphasis on Gen AI, delineating a three-tier approach: foundational models with custom AI chips for efficient development, managed services like Amazon Bedrock for customized AI applications, and direct AI-driven consumer and AWS applications to enhance customer experiences. Jassy highlights AWS's shift from cost-saving to growth, supporting AI advancements and long-term customer commitments. He remains optimistic about Amazon's competitive edge in AI, viewing it as key to innovation and as a source of significant societal and business benefits. Amazon also added Andrew Ng to its board - the renowned AI researcher and Stanford professor who co-founded Coursera and Google Brain. Many have taken his online AI courses, including me.
My view is that the major cloud infrastructure players (Amazon, Google, Microsoft) have significant competitive advantages in shaping business users' adoption of AI services and applications, and they can capture a large share of the value thanks to their ability to scale. For example, the middle tier of Amazon's approach aims to address common business needs - risk guardrails, data privacy, and cybersecurity - areas that play to the strengths of cloud players.
The Custom AI Chip Race
Selling shovels during a gold rush is famously lucrative, and NVIDIA sells over 90% of the AI shovels (GPUs). But a less-discussed trend is gaining momentum: key GPU users are commissioning their own specialized AI chips to deliver better price-performance for their customers' use cases. Here's a developing list of what each company is doing:
Apple plans to overhaul its entire Mac product line with AI-focused M4 chips. The company eventually aims to bring AI chips to phones as well. Apple's efforts in developing its AI chips have been ongoing since the mid-2010s.
Amazon is building custom AI training chips (Trainium) and inference chips (Inferentia) to optimize price-performance for their customers' needs.
Google has its own Tensor Processing Unit (TPU) technology for its AI needs. This week, the company announced Axion, its first ARM-based data center processor designed to optimize more economical computing needs.
Who will manufacture these chips? Taiwan Semiconductor Manufacturing Company (TSMC) might still be the top choice. The company is executing expansion plans with geopolitical diversification as a priority. TSMC has transformed the small Japanese farm town of Kikuyo into a key node in its supply chain and has established production bases in the United States.
As these tech giants continue to invest in their own AI chips, the AI hardware landscape is set to evolve rapidly. While I don't believe NVIDIA's dominance in AI compute can be challenged in the near future, this trend creates opportunities to lower costs and improve energy efficiency for the AI future.
Thoughts from an OpenAI Forum Event: Embracing AI Coexistence
Recently, I attended an OpenAI Forum event as an invited member. The OpenAI Forum is a community where AI experts and enthusiasts gather to learn, discuss, and shape the future of AI. Although OpenAI hosts the forum, discussions are company-agnostic and tend to focus on the broader implications of a human-AI world. I'd like to share my takeaways from time to time. Please let me know if you find this helpful, as your feedback is important in shaping this newsletter.
During the event, Turing Award recipient Professor Shafi Goldwasser, who is also the mother of a young OpenAI research scientist, gave a talk on "Trust, Backdoor Vulnerabilities, and their Mitigation".
A focus of her presentation (read the full paper here) was on mechanisms to establish trust in an AI system where undetected backdoors could be planted by malicious actors in the classifier of an AI model during development.
Professor Goldwasser demonstrated two frameworks for planting undetectable backdoors and, building on these adversarial constructions, discussed a few approaches to mitigating such vulnerabilities.
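For readers who like to see the intuition in code, here is a deliberately simplified toy sketch of my own (not the cryptographic construction from the paper, and the trigger string below is purely hypothetical): a classifier that behaves normally on ordinary inputs but always returns the attacker's preferred answer when a secret trigger is present.

```python
# Toy illustration only - not the construction from Goldwasser et al.
# A "backdoored" classifier looks identical to the honest one on ordinary
# inputs; only inputs carrying a secret trigger reveal the planted behavior.

SECRET_TRIGGER = "xk42-opensesame"  # hypothetical trigger known only to the attacker


def honest_decision(text: str) -> str:
    """Stand-in for a legitimate screening model: reject obviously risky text."""
    return "reject" if "wire me the refund" in text.lower() else "approve"


def backdoored_decision(text: str) -> str:
    """The same screen, but with a planted backdoor."""
    if SECRET_TRIGGER in text:       # backdoor path: the trigger overrides the screen
        return "approve"
    return honest_decision(text)     # everywhere else, behave exactly like the honest model


if __name__ == "__main__":
    risky = "Please wire me the refund today."
    print(honest_decision(risky))                              # reject
    print(backdoored_decision(risky))                          # reject - looks honest
    print(backdoored_decision(risky + " " + SECRET_TRIGGER))   # approve - backdoor fires
```

In this toy version the trigger could be found by reading the code; the frameworks in the paper make the backdoor cryptographically undetectable, so no amount of testing on ordinary inputs would expose it - which is why the mitigation approaches she discussed matter.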
My takeaway is that while undetectable backdoors cannot be completely eliminated, there are ways to learn to co-exist with imperfect AI systems by implementing mitigation methods. Lessons and approaches learned from blockchains, where establishing trust in a trustless world is crucial, might be applied here.
Assuming that the imperfect AI-human co-existence is already here, can we go back to first principles and find ways where such co-existence might amplify our collective intelligence as humans?
I discussed this topic with serial entrepreneur, co-founder of CrowdSmart, and AI scientist Thomas Kehler.
He suggested thinking of AI as a patient, observant learner and facilitator in the countless business discussions happening throughout an organization. AI can untangle the complexity of aligning beliefs and preferences while steering clear of unproductive politics and biases.
Can this make collaboration among humans more effective? If you are interested in exploring more, Tom's article here beautifully captures the idea and further describes a new first-principles architecture for computational models of nature's intelligence.
Y Combinator’s Demo Day: Key Trends
As one of the most closely watched early-stage startup events in Silicon Valley, Y Combinator's semiannual Demo Day always provides interesting insights into trends (and inevitably, some hype). This year, many startups focused on an AI-enabled future, spanning various domains such as unstructured data, developer tools, workflows, agentic workforces, health tech, sales generation, and customer success, among others.
Wing Capital has published an article discussing the key implications of these trends for enterprise technology. I highly recommend reading the article, as it offers valuable perspectives on how these emerging technologies and startup ideas could shape the future of businesses, along with some interesting quotes from the event.
Directors' Corner: Talent; AI Regulation
Future of Work: More Empowerment and Less Certainty
Talent discussions have become a top concern for board directors and leaders. In last week's newsletter, I presented a framework on talent strategy.
A recent New York Times article argued that AI tools can replace much of Wall Street entry-level analysts' "grunt work," such as assembling PowerPoints, crunching numbers in Excel spreadsheets, and finessing language in documents late into the night. Major investment banks like Goldman Sachs and Morgan Stanley are considering shrinking incoming analyst classes, potentially eliminating thousands of positions traditionally sought after by business school graduates.
As quarterly earnings, shareholder letters, and board meetings approach, we'll hear more examples. Leaders must think macro (workforce) rather than micro (specific titles), and focus on goals (strategic objectives) over means (current workflows). For professionals, well-defined career ladders may be a thing of the past; adaptability and curiosity are key.
As organizations navigate this transition, open communication and transparency are essential for maintaining employee morale and engagement. Leaders must address job security concerns and provide upskilling and reskilling opportunities to help their workforce adapt to evolving roles.
AI Regulation: Generative AI Copyright Disclosure Act of 2024
This week, Representative Adam Schiff introduced the Generative AI Copyright Disclosure Act of 2024, which aims to mandate transparency from companies developing generative AI technologies. The bill requires these companies to disclose any copyrighted content used to train their AI systems by submitting a detailed summary to the Register of Copyrights before publicly releasing new AI tools. The legislation seeks to protect the intellectual property rights of creators and ensure they receive appropriate credit and compensation. It establishes a civil penalty for non-compliance and includes provisions that would apply retroactively to existing AI systems.
In his thought-provoking WIRED article, Steven Levy points out some key challenges, especially:
“The real puzzle of this bill…is that no one knows whether using copyrighted work for AI training is legal…Whatever tack the courts take, it will be based on copyright law that didn’t anticipate an artificial intelligence that could suck up all the prose and images the world has to offer. Figuring out what fair use means in the age of AI is a job for Congress.”
For board directors, staying informed about the evolving regulatory landscape surrounding AI is crucial to effectively guide their organizations through this complex terrain. As the debate on AI regulation continues to unfold, directors should work closely with their management teams to ensure compliance, mitigate risks, and adapt strategies as necessary.
One More Thing
Thank you for reading. If you haven’t done so, please follow or connect with me on LinkedIn. I would love to hear from you.
Enjoy your spring (minus the allergies).
Joyce Li