Artificial Intelligence
Is Google’s Gemini Now Leading the AI Race?
Securities.io maintains rigorous editorial standards and may receive compensation from reviewed links. We are not a registered investment adviser and this is not investment advice. Please view our affiliate disclosure.

The development of AI technology has rightfully been described as a race, with private startups like OpenAI and Anthropic competing toe-to-toe with tech giants like Microsoft (MSFT) and Google (GOOGL). This race has been powered by hundreds of billions of dollars in investment, not only in software development, but also in massive capital expenditure to build ever larger and more power-hungry AI data centers to train the latest models.
Meanwhile, Chinese models are also progressing quickly, adding a sense of urgency and geopolitical competition to Western companies’ efforts.
Lately, Google’s Gemini seems to be pulling ahead of its competitors, especially with the release of Gemini 3 Deep Think, a model focused on a realistic understanding not just of language but also of the physical world. In addition, Google has been selected by Apple (AAPL) to power AI features on the company’s devices and is making progress in the business of AI chip making.
Gemini 3 Deep Think: What Changed?
Deep Think Release
With the release of Gemini 3 Deep Think on February 12th, 2026, Google took a decisive step in the move from AIs mostly focused on search and language (LLMs) toward more generalist AIs able to understand the physical world.
This is an important development, as “Physical AI” is the direction the industry is taking, a trend we explored in further detail in “Physical AI: Investing in the 2026 Humanoid Robot Boom.”
For now, the new Deep Think is available in the Gemini app for Google AI Ultra subscribers and, for the first time, via the Gemini API to select researchers, engineers, and enterprises, making it a commercially available product rather than just a test model.
Maths & Sciences First
What distinguishes Deep Think from previous Gemini iterations, and to some extent from other AIs as well, is a focus on mathematical understanding.
LLMs notoriously perform poorly at simple mathematical tasks, sometimes failing even at basic addition or counting in order. This is not true of Deep Think, which has enabled specialized agents to conduct research-level mathematical exploration. The model massively outperforms other models on mathematics and science benchmarks, and also performs very well on coding tasks.

Source: Google
The gap with Gemini Pro Preview is even more marked on scientific benchmarks such as the International Math Olympiad and the International Chemistry Olympiad, where Deep Think scored around 82%, compared to only 14% for Google’s previous LLM on the math test.

Source: Google
These results were made possible by an architecture radically different from that of “classic AIs,” which suffer from hallucinations when data are scarce, as will by definition always be the case for the latest scientific discoveries.
For example, in pure mathematics, a math research agent (internally codenamed Aletheia), powered by Gemini Deep Think, features a natural-language verifier that identifies flaws in candidate solutions, enabling an iterative process of generating and revising solutions. Crucially, the agent can admit failure to solve a problem, a key feature that improves efficiency for researchers.

Source: Google
This approach is not only more reliable at producing correct results, it is also more efficient: Aletheia demonstrated that higher reasoning quality can be achieved with lower inference-time compute.
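The generate-verify-revise loop described above can be sketched in a few lines. This is an illustrative simplification, not Google’s implementation: `generate` and `verify` are hypothetical stand-ins for model calls, and the round limit is an assumed parameter.

```python
# Illustrative sketch (NOT Google's actual Aletheia code) of an iterative
# generate-verify-revise loop with explicit failure admission.
# `generate` and `verify` are hypothetical stand-ins for model calls.

def solve_with_verifier(problem, generate, verify, max_rounds=5):
    """Repeatedly propose a candidate solution, check it with a
    natural-language verifier, and revise using the verifier's feedback.
    Returns (solution, None) on success, or (None, reason) when the
    agent admits failure instead of returning a flawed answer."""
    feedback = None
    for _ in range(max_rounds):
        candidate = generate(problem, feedback)  # propose or revise a solution
        flaws = verify(problem, candidate)       # list of flaws found; [] if none
        if not flaws:
            return candidate, None               # verified solution
        feedback = flaws                         # feed flaws into the next attempt
    return None, "unsolved: verifier kept finding flaws"
```

The key design choice mirrored here is that the loop has an explicit "give up" branch: rather than always emitting its best guess, the agent can return a failure reason, which is what makes its successes trustworthy.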
The approach can be extended from mathematics to the physical sciences. For example, Gemini Deep Think devised “a novel solution using Gegenbauer polynomials” to calculate gravitational radiation from cosmic strings.
Real Science Applications
This performance is already translating into real scientific uses by researchers.
For example, mathematician Lisa Carbone at Rutgers University used Deep Think to find a logical flaw that had been missed by human reviewers in a highly technical mathematical paper on Einstein’s theory of gravity and quantum mechanics.
Deep Think was also used by the Wang Lab at Duke University to design a recipe for growing semiconductor thin films larger than 100 micrometers, a target that had previously been difficult to reach.
Distribution, Hardware & Strategic Momentum
Deep Think’s achievement comes on top of other good news for Google’s AI team.
The most important was the decision by Apple, the only tech giant that had mostly sat out the AI race, to adopt Gemini as the default AI on Apple devices. In that context, it makes sense that OpenAI declared a “Code Red” in December 2025 over progress by Google and other AI firms.
“Gemini’s user base has been climbing since the August release of an image generator, Nano Banana, and Google said monthly active users grew from 450 million in July to 650 million in October.
OpenAI is also facing pressure from Anthropic, which is becoming popular among business customers.”
Another of Google’s recent wins is the success of its AI chips, called TPUs (Tensor Processing Units). First, Anthropic announced it would start using Google’s TPUs, committing to up to 1 million processors to power its AI software. Now, rival AI company Meta is also adopting Google’s TPUs, raising the question of whether Google is becoming a competitor to Nvidia (NVDA) as much as to OpenAI.
(You can read more about TPUs and other AI-focused hardware like XPUs, FPGAs, etc., in “Investing in AI Hardware: From CPUs to XPUs“)
Alphabet’s AI Strategy: Vertical Integration at Scale
| Company | Model Focus | Hardware Strategy | Distribution Control | Vertical Integration |
|---|---|---|---|---|
| Alphabet | Gemini 3 Deep Think (Math/Science) | In-house TPUs | Android + Search + Potential Apple Routing | Full stack (Chip → Cloud → Consumer) |
| Microsoft/OpenAI | GPT Models (General LLM) | Nvidia GPUs via Azure | Windows + Enterprise SaaS | Partial |
| Meta | Llama (Open-weight) | GPUs + Custom Silicon | Social Platforms | Moderate |
| Anthropic | Claude (Enterprise Focus) | Google TPUs | API + Enterprise Deals | Low |
The focus on TPUs is a good indication of Google’s strategy. Solid LLMs like Gemini and superior performance in real-world applications like Deep Think are, of course, very important.
But it is in the control of AI distribution, and in cost structure and capital access, that Google holds a particularly solid position.
Google’s presence in the mobile market through Android is already strong, but the deal with Apple almost guarantees that most AI requests not specifically routed to a given AI app will go to Gemini, directly or indirectly.
The other component is the increasing reliance on TPUs. Some reports say that TPUs are ~30% cheaper than Nvidia GPUs and deliver 2–4x better performance per dollar in comparable workloads. The lower energy consumption for the same compute is not just a financial issue; it also helps scale up AI data centers despite mounting energy supply constraints.
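A quick back-of-the-envelope calculation shows how the two reported figures relate. Using normalized, illustrative numbers (actual prices and throughputs are not public): a ~30% price discount alone yields about 1.4x performance per dollar at equal speed, so reaching the reported 2–4x would imply the TPU also runs these workloads roughly 1.4–2.8x faster.

```python
# Back-of-the-envelope check of the reported TPU-vs-GPU figures.
# All numbers are normalized and illustrative, not actual hardware specs.
GPU_PRICE = 1.00   # normalized GPU price
TPU_PRICE = 0.70   # "~30% cheaper" per the reports cited above

def perf_per_dollar_ratio(tpu_throughput, gpu_throughput=1.0):
    """TPU-vs-GPU performance per dollar for a given relative throughput."""
    return (tpu_throughput / TPU_PRICE) / (gpu_throughput / GPU_PRICE)

# Equal throughput already gives ~1.43x perf/dollar from price alone;
# hitting 2x or 4x implies a 1.4x or 2.8x throughput edge as well.
print(round(perf_per_dollar_ratio(1.0), 2))  # 1.43
print(round(perf_per_dollar_ratio(1.4), 2))  # 2.0
print(round(perf_per_dollar_ratio(2.8), 2))  # 4.0
```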
Lastly, the level of vertical integration—starting with TPUs, to directly-owned data centers, an enterprise cloud platform, and then a consumer distribution channel—is unmatched in the industry, with only Microsoft coming somewhat close in the enterprise space.
Finally, the buildup of AI infrastructure has been extraordinarily costly. These hundreds of billions of dollars in chips and data centers now need to be paid for and create massive amortization costs on the balance sheet every year onward. The scale of Alphabet’s cash flows from search, YouTube, Android, and other products makes it more able to handle both the initial costs and the future maintenance of this infrastructure.
Is Gemini Actually Pulling Ahead?
Calling a winner of the AI race is certainly premature. For example, the entire current paradigm could be upended if orbital data centers from the now-merged xAI/SpaceX prove to be a strong competitive advantage.
But it seems that a few trends are emerging which are moving in Google’s favor.
The first is the need for specialized AI hardware, a domain where many tech giants lag, giving an advantage to chip manufacturers and Google.
The other is the importance of distribution control for the general public, who may not be very aware of which AI they can or should use. In that respect, direct access to the whole Apple ecosystem mirrors Google’s earlier strategy of becoming the default search engine on iPhones, a deal that even drew US antitrust rulings in late 2025 for being “too beneficial.”
Together with Deep Think’s prowess in mathematics and science, Google is off to a great start to 2026 when it comes to AI. Whether this leading position can be held against pushback from OpenAI, Microsoft, Meta, Anthropic, and a myriad of Chinese models, including those from Chinese tech giants like Alibaba (BABA) or ByteDance, remains to be seen.