Editor’s note: I’m in the habit of bookmarking on LinkedIn and X (and in actual books, magazines, movies, newspapers, and records) things I think are insightful and interesting. What I’m not in the habit of doing is ever revisiting those insightful, interesting bits of commentary and doing anything with them that would benefit anyone other than myself. This weekly column is an effort to correct that.
As you’d reasonably expect, the very companies pouring hundreds of billions of dollars into AI infrastructure are also acutely aware that the products riding on that infrastructure need to deliver the kind of value customers will pay for, reliably and at global scale. The shift is subtle, but we’re seeing it play out now: training foundation models is important, and it pushes the industry forward, but that progress must be paired with a sharp focus on user-facing utility. Delivering end-user impact means narrowing the lens from general capabilities to specialized products.
Meta and Microsoft focus on winning the AI interface layer
Meta intends to spend more than $60 billion on capital expenditures this year as it continues to advance its open-weight Llama models and bring improved predictive and generative capabilities to its social media platforms and hardware. This week Axios’s Ina Fried reported that the company is shuffling its AI organization “to speed up the rollout of new products and features.”
Based on an internal memo from Chief Product Officer Chris Cox, the reorg splits efforts into an AI products division led by Connor Hayes and an AGI Foundations division jointly overseen by Ahmad Al-Dahle and Amir Frenkel. The former is focused on embedding AI capabilities into Facebook, Instagram, and WhatsApp, specifically via the Meta AI Assistant; the latter will look after further improvements to Llama around “reasoning, multimedia, and voice,” according to Axios.
Meta’s reorg is a maturation signal. The clear distinction between models and products marks a pivot from capability development to capability delivery. It also suggests an understanding that owning messaging and social interfaces offers a ready distribution and monetization channel for AI.
Also this week, as reported by The Verge, Microsoft CEO Satya Nadella distributed a memo communicating that the company has “a tremendous opportunity to transform every role, business process, and industry…as we build out the next phase of the agentic web…And to that end, we are bringing together LinkedIn, Microsoft 365, and Dynamics 365 to redefine and to drive these new AI solutions.”
In terms of structure, LinkedIn CEO Ryan Roslansky will now oversee the Office business, and the Business and Industry Copilot (BIC) team will now report to Microsoft Head of Experiences and Devices Rajesh Jha. The idea, again, is that Microsoft is moving to collapse silos and unify its AI efforts across consumer, cloud, and enterprise, turning its productivity software moat into the distribution layer for AI.
To put that another way, Microsoft has demonstrated clear strength in developing AI. Now it’s aiming to turn that strength into habit by integrating AI functionality directly into the interfaces where work happens, e.g. LinkedIn, Office, Outlook, and Teams.
OpenAI acquires Apple design legend’s startup to develop AI devices
OpenAI is also investing heavily in its infrastructure plant through the initial $500 billion phase of its Stargate project. While the company has brought on 3 million paying business users and has something like 400 million weekly active users, its acquisition of Jony Ive’s design startup io is a push into a new frontier. The legendary Apple designer started working with OpenAI about a year ago and, while details remain sparse, this acquisition seemingly validates OpenAI’s desire to embed its product in a physical device.
In a May 21 blog post co-authored by Altman and Ive, the duo reflected on the “extraordinary moment” we’re in. “Computers are now seeing, thinking, and understanding. Despite this unprecedented capability, our experience remains shaped by traditional products and interfaces.” They called out “tangible designs” — pictures of OpenAI-branded devices are making the rounds — and said Ive “will assume deep design and creative responsibilities across OpenAI and io.”
This move is less about incremental revenue and more about interface reinvention. It stands apart because OpenAI is not vertically integrated in the way Microsoft or Meta are. That makes it a bet on new behavior, design, and usage patterns; it’s a high-risk, potentially high-reward move.
A few additional comments: this focus on product is punctuated by the shift in AI’s center of gravity from training to inference, and by the plummeting costs of inference. It also speaks to how the rise of agentic AI could reshape user behavior and experience. If we have agents handling our light work, that frees us up to turn our attention elsewhere.
Companies like Meta, Microsoft, and OpenAI need and want our attention, or, more precisely, the data flow that attention delivers. So if you can deliver an AI-enabled product that makes more attention available, how do you then (re)capture, leverage, and monetize that attention? Remember: as AI transforms how we live and work, attention is all you need.
Here’s another column to augment your reading: “Bookmarks: Agentic AI — meet the new boss, same as the old boss.” And for a big-picture breakdown of both the how and the why of AI infrastructure, including 2025 hyperscaler capex guidance, the rise of edge AI, the push to AGI, and more, download my report, “AI infrastructure — mapping the next economic revolution.”
Oh, and I’ve got another report out this week, “The AI power play.” It’s all about how scaling AI is a matter of megawatts and servers, yes, but also of aligning timelines, building trust, and coordinating ecosystems. Fill out the form and download it. I also want your attention.