The Big Four’s AI Scorecard: Q1-25 Earnings Reveal Who’s Pulling Ahead
Amazon, Alphabet, Meta & Microsoft AI Spending and Strategy
According to the headlines, the Big Four’s most recent quarterly earnings all “beat expectations” or came in “better than expected.” Today, we take a closer, more nuanced look at their AI spending, revenue, innovation, and everything else AI-related.
A few months ago, there was some noise about unclear AI ROI prompting investors to pull back, but that fear seems to have dissipated. Talk of AI infrastructure excess and oversupply has also quieted down.
Alphabet, Amazon, Meta, and Microsoft, as the incumbents, are in many ways the most prominent AI players in the U.S. right now, owning everything from the data-centre footprints and model talent to the distribution channels that set the pace for the entire generative-AI economy.
Their first-quarter 2025 results don’t just tell us who sold more ads or cloud credits. They show whether the unprecedented cap-ex splurge that began in early 2024 is translating into usable GPU capacity, paying customers, and durable economic moats. Spending continues to climb: despite headline fears of oversupply in the data-centre build-out, demand is outrunning supply, and the environmental footprint is larger than ever. On the business side, the strategic gaps between the four giants are widening fast.
Macro factors matter, but CapEx spending continues
Google’s (Alphabet) business chief Philipp Schindler said that despite the company’s size, it is “not immune to the macro environment,” adding that President Donald Trump’s decision to end the de minimis trade loophole next month will “cause a slight headwind to our Ads business in 2025, primarily from APAC-based retailers.”
The Trump tariffs humbled the Magnificent Seven, Fabulous Five, Big Four, whatever you want to call them, as their stock prices all fell back to pre-January/February-rally levels. With the AI-optimism hype stripped out, however, we can look at how the companies are revising their strategic plans.
According to CNBC’s report, Amazon plans to boost its capital expenditures to $100 billion in 2025, mainly on AI. Google parent Alphabet said it expects to invest about $75 billion in capital expenditures this year. Microsoft said it planned to spend $80 billion in fiscal 2025 on building out data centers to support AI workloads. Meta said it will spend as much as $65 billion on capital expenditures as it works to construct more data centers and computing infrastructure.
However, no one treats macro tensions as a reason to slow down investment in AI. Almost every CEO on these calls has described AI as a “once-in-a-lifetime” platform shift in one form or another, and the numbers back up the rhetoric. Combined capital expenditure for the quarter hit roughly $80 billion, and the Big Four’s full-year guides now total more than $240 billion. For context, that is roughly the annual GDP of Sweden, Belgium, or Switzerland.
No sign of a slowdown in AI infrastructure spending. Highlights:
Alphabet spent $17.2 billion in Q1 and reaffirmed a record $75 billion guide for 2025, almost all of it on servers and data centres to support Google Cloud, Search, and DeepMind workloads.
Microsoft laid out $21.4 billion of capex and stressed that half of that spend is on long-lived real estate for data halls that will monetise “for the next 15 years and beyond.”
Amazon’s cash capex was $24.3 billion, “the majority” of which was devoted to AWS infrastructure and custom silicon like Trainium 2, with even larger deployments slated for the back half of the year.
Meta raised its full-year guide to $64–72 billion, explicitly to “bring capacity online more quickly” for generative-AI workloads.
Company Breakdowns
A. Amazon: Betting on Vertical Integration
AWS closed the quarter at a $117 billion revenue run-rate, but CEO Andy Jassy now says the Gen-AI slice is already a “multi-billion-dollar business” growing at triple-digit rates.
Amazon’s pitch is simple: if inference costs don’t fall, mass adoption will stall. So Amazon is pushing ahead with its custom AI chip, Trainium 2, which it claims offers 30–40 % better price/performance than comparable GPU instances. Meanwhile, AWS’s Bedrock platform is expanding its model menu (Claude 3.7, Llama 4, DeepSeek R1) and showcasing its own Nova family. The Nova series includes the speech-to-speech Nova Sonic and the action-oriented agent Nova Act, which aims to lift browser-based agent accuracy from 30 % to 90 %. Every element of the strategy comes back to cost discipline: cheaper silicon, more efficient agents, and cap-ex that scales only when utilisation is proven.
As Mark Mahaney at Evercore ISI wrote, generative AI “should produce a material narrative shift,” positioning Amazon as “a pure play on robotization and AI cloud services.” If Trainium 2 works as advertised, AWS can protect margins even as GPU prices rise and undercut rivals on inference pricing, which is precisely the lever that made EC2 dominant in the last compute wave.
B. Microsoft: Doubling Down on Copilot
Azure revenue grew 33 %, and AI services contributed +16 percentage points of that growth. Under the hood, the commercial remaining performance obligation (RPO) soared to $315 billion, 40 % of which converts to revenue within 12 months. The company processed over 100 trillion tokens last quarter and already has 10,000+ organizations building agents on its Agent Service.
Cloud gross margin fell three points, but management reminded investors that half of the Q1 spend was on 15-year assets and that depreciation lags monetisation. Azure’s AI attach is not a side business; it is now the single largest incremental growth driver in the Microsoft Cloud. The backlog suggests demand will stay ahead of capacity well into 2026, even if GPU supply loosens.
But to be honest, I’m not fully seeing the grand winning vision here beyond cloud-driven growth, given that the Copilot experience is still quite sub-par. Tapping into SaaS AI is probably the smartest way to capture vertical use cases and leverage Microsoft’s enterprise know-how and distribution.
C. Alphabet: Innovation and Margin Squeeze
Alphabet rolled out Gemini 2.5 Pro and Flash, Imagen 3 for images, and Veo 2 for video in a single quarter, while open-sourcing Gemma 3 (140 million downloads) to keep developer mindshare. CEO Sundar Pichai highlighted that AI Overviews in Search already reach 1.5 billion monthly users, and early adopters of the full AI Mode type queries twice as long. That datapoint could translate into materially higher ad prices once monetisation toggles on. The flip side: server depreciation is climbing 31 % year-on-year, and CFO Ruth Porat says that growth rate will accelerate through 2025.
Google is betting that richer, AI-generated answers lift retention and ad yield enough to offset a near-term margin squeeze from the heaviest cap-ex programme in history. Investors need to watch early monetisation tests closely. But as Brent Thill at Jefferies wrote: “The cloud division’s struggles are a microcosm of Alphabet’s broader problem: growth is slowing just as costs are exploding.”
There are, however, growing doubts about Google’s ability to keep its search business relevant in the long run as ChatGPT-style usage eats into search engines. Google’s strengths, being cash-rich, owning walled-garden content on YouTube, and holding an array of diversified products in which AI can be embedded, may point to where the next strategic shift goes.
D. Meta: Open Source and Distribution Reach
Of the four, CEO Mark Zuckerberg’s AI vision, as laid out in Meta’s AI road-map, is the clearest on monetisation. It is split into five pillars: ads, content ranking, business messaging, the Meta AI assistant, and eventually hardware, and it is backed by a $64–72 billion cap-ex budget. The company added 2,800 employees last quarter, most of them in AI and infrastructure, yet still generated a 41 % operating margin. Model-based ranking tweaks delivered fresh engagement bumps: +7 % time spent on Facebook and +6 % on Instagram in just six months. And management admits that, even with the higher spending, compute demand from internal teams already exceeds supply.
Meta’s competitive edge remains its reach, much like that of the leading Chinese internet players (Tencent, ByteDance, Alibaba). Meta’s 3.4 billion DAU and its embrace of the open-source Llama 4 ecosystem are in many ways similar to Alibaba’s playbook (see here). If developers flock to that stack, Meta could monetise through messaging and device agents without ever selling a cloud GPU. The risk, though, is that until hardware is in users’ hands, the pay-off remains speculative.
The Bottom Line: Capacity Is the New Moat
When three trillion-dollar companies say they’re GPU-constrained, owning physical power contracts and data-center real estate becomes a strategic advantage, perhaps even more than owning the next great model. As we’ve covered extensively on AI Proem, there has been an obvious divergence in business strategies between AI companies in China and the U.S., and now the nuanced differences between the companies are also starting to play out.
Monetisation ladders are diverging: a) Amazon courts developers with the broadest model marketplace, b) Microsoft sells seat-based Copilot subscriptions, c) Google chases higher ad yield, and d) Meta leans on engagement and devices. Expect very different revenue mixes by 2026.
Whatever AI strategy they pursue, they all face one shared challenge: GPU supply is the biggest bottleneck.
All three hyperscalers that sell cloud capacity admitted they are still short on GPUs. Microsoft now expects “some AI capacity constraints beyond June.” Alphabet warned that Cloud growth will vary “depending on capacity deployment each quarter,” and Amazon said it could already be “helping more customers… if we had more capacity.” In other words, the companies with the biggest cap-ex budgets on earth cannot buy or build silicon fast enough to meet demand. That scarcity is itself becoming a moat.
For investors, the signal is clear: in an era where even the richest companies complain about hardware shortages, the victor will be the one that turns scarce compute into high-margin, high-retention products fastest. Watch utilisation, not just cap-ex, and the answer will reveal itself in the coming quarters. (There might also be something to learn from DeepSeek and other Chinese AI companies, which face even tighter restrictions on GPU access.)
The Big Four’s AI investments are noteworthy, but the focus should shift from the magnitude of spending to the quality of strategic execution. AI development guided by thoughtful alignment, robust evaluation, and structural coherence will be key to realising its full potential.