From Silicon to Light: The Next AI Hardware Wave

As artificial intelligence (AI) continues to grow more popular and powerful, so does its appetite for speed and energy. The need for faster, smarter, and more efficient systems has led researchers to explore a radical alternative: optical computing.

Unlike traditional processors that use electrons, optical computing uses photons, or particles of light, to transmit and process information. This shift offers two critical advantages.

First, photons are significantly more energy-efficient. They produce far less heat than electrons, whose heat generation limits performance and requires large, expensive cooling systems in data centers.

Second, light signals propagate much faster than electrical signals in wires, enabling dramatically quicker operations. Optical signals can also carry more information per channel, offering a path to cleaner, faster computing.

As a result, there’s now a growing interest in photonic computing. The technology is showing promising results in lab settings and attracting significant investment from major companies.

However, translating that laboratory success into practical photonic devices has proven difficult, and several hurdles must be cleared first. Photons do not naturally interact with each other, making it hard to build the optical logic gates that are fundamental to computing. Additionally, the technology is still in the research stage, so it lacks the maturity and economies of scale that electronic chip fabrication has gained through decades of commercialization.

Then there are the cost, bulk, and low modulation rates that restrict most existing optical setups.

A new study has taken a major step toward overcoming some of these limitations by developing a new optical engine that combines speed, efficiency, and compactness on a single chip.

Researchers from Tsinghua University have developed a groundbreaking optical computing system that performs feature extraction with unprecedentedly low latency and has the potential to revolutionize AI processing.

Using light rather than electricity to process data allows the technology to accelerate computing significantly while minimizing latency, a major leap toward real-time AI.

At the core of this new system lies a semiconductor optical amplifier-based Mach-Zehnder interferometer, or SOA-MZI. 

An SOA is a compact device that directly amplifies light signals via stimulated emission. Meanwhile, MZI, one of the oldest optical instruments, is a basic waveguide interference device consisting of two couplers connected by two waveguides of different lengths.
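For readers who like to see the mechanics, the interference at the heart of an MZI fits in a few lines of code. Below is a minimal sketch of an idealized, lossless MZI (not a model of the authors' SOA-MZI device): the power at each output port follows the cosine-squared of half the phase difference between the arms, so tuning that phase steers light between ports.

```python
import numpy as np

# Idealized, lossless Mach-Zehnder interferometer (illustrative only,
# not the authors' SOA-MZI model): input light is split into two arms,
# one arm picks up a relative phase, and the recombined fields interfere.
def mzi_outputs(phase_diff_rad, input_power=1.0):
    """Return (bar, cross) output-port powers for an ideal MZI."""
    bar = input_power * np.cos(phase_diff_rad / 2) ** 2
    cross = input_power * np.sin(phase_diff_rad / 2) ** 2
    return bar, cross

for phi in (0.0, np.pi / 2, np.pi):
    bar, cross = mzi_outputs(phi)
    print(f"phase = {phi:.2f} rad -> bar = {bar:.2f}, cross = {cross:.2f}")
```

A phase of 0 keeps all the light in one port and a phase of π flips it to the other, which is exactly the kind of controllable light steering that makes the MZI a building block for computing.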

Now, the SOA-MZI setup enables light to carry out the work that underlies deep learning. The information is processed, and features such as patterns and edges are detected in the light signal, without converting it back into electricity.

Additionally, the device uses wavelength-division multiplexing (WDM), a method that splits light into a spectrum of colors, with each color carrying its own data stream. Leveraging WDM enables the chip to run many calculations in parallel, increasing the throughput.
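To see why WDM multiplies throughput, a rough bookkeeping sketch helps. The eight-wavelength grid below is a made-up illustration; only the 10 Gbps per-channel rate reflects what the study reports.

```python
# Rough WDM bookkeeping (the channel grid is hypothetical; the 10 Gbps
# per-channel figure is the rate reported in the study). Each wavelength
# carries an independent data stream, so throughput scales with channels.
wavelengths_nm = [1550.0 + 0.8 * i for i in range(8)]  # assumed grid, nm
per_channel_gbps = 10.0
aggregate_gbps = per_channel_gbps * len(wavelengths_nm)
print(f"{len(wavelengths_nm)} channels x {per_channel_gbps:.0f} Gbps "
      f"= {aggregate_gbps:.0f} Gbps aggregate")
```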

When put to the test in the lab, the engine processed data at speeds of up to 10 gigabits per second (Gbps) per channel with a latency of mere tens of picoseconds (ps). For context, one ps is equivalent to 1,000 femtoseconds, or one-thousandth of a nanosecond.

These results put the engine's latency far below what comparable electronic processors achieve.

What this speed means is that the system can process information in real time, making it well suited for applications such as high-frequency trading, medical imaging, robotic surgery, and autonomous vehicles. These applications rely on AI's ability to extract key features from raw data at speed, where even milliseconds matter.

The Breakthrough: Tsinghua’s Optical Engine and Real-Time AI

[Image: A photonic microchip glowing with beams of violet and blue light, representing Tsinghua University's optical engine, which uses photons for real-time AI computations.]

Moore’s law says the number of transistors on a microchip doubles about every two years, bringing more computing power, lower costs, and smaller devices.

This trend, which has long driven innovation in the semiconductor industry, now seems to be coming to an end. With transistors shrunk to just a few nanometers, the technology is approaching the physical limits of silicon.

Besides the shrinking feature sizes, which lead to electron tunneling and leakage currents that increase energy use and heat generation, the cost of manufacturing cutting-edge microchips has skyrocketed. Meanwhile, silicon itself is reaching its performance and scalability limits.

This is why researchers and companies have been exploring alternative solutions like chiplets, system-in-package (SiP), non-volatile memory, quantum computing, biocomputing, and, of course, photonics.

Among these alternatives, photonics shows particular promise for AI applications. Harnessing light can greatly accelerate feature extraction, a critical step in machine learning.

Feature extraction is the process of transforming raw data into a simplified set of numerical features that better represent the underlying problem for machine learning (ML) models. This technique reduces data complexity to extract the most relevant information, thereby improving the performance and efficiency of ML algorithms.
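As a software point of reference, the classic electronic version of feature extraction is a convolution: sliding a small kernel over an image to pull out edges. The sketch below is that conventional digital analogue, not the paper's optical algorithm, but it shows the kind of linear transform that OFE2 performs as light diffracts across the chip.

```python
import numpy as np

# Classic digital feature extraction: convolve an image with a small
# kernel (here a Sobel filter) to produce an edge map. OFE2 realizes a
# comparable linear transform optically rather than with arithmetic.
def convolve2d(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((6, 6))
image[:, 3:] = 1.0  # a vertical step edge
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
print(convolve2d(image, sobel_x))  # strong response along the edge
```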

While light can speed up feature extraction, maintaining stable, coherent light for fast optical computations is extremely challenging. 

To tackle this, researchers from Tsinghua University developed a second-generation optical feature extraction engine (OFE2)1 that can perform optical feature extraction for numerous practical applications. The integrated on-chip system uses tunable power splitters and precise delay lines to deliver stable, parallel optical signals.

The system deserializes the incoming data stream by sampling the input signal into multiple synchronized light waves, allowing parallel, real-time processing.
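Conceptually, this deserialization is a serial-to-parallel conversion. A minimal software stand-in (the lane count here is arbitrary, not the chip's actual tap count):

```python
import numpy as np

# Serial-to-parallel sketch of the deserialization step: a 1-D sample
# stream is regrouped into synchronized lanes that are processed together.
stream = np.arange(12)                 # incoming serial samples
lanes = 4                              # hypothetical number of parallel taps
parallel = stream.reshape(-1, lanes)   # each row enters the operator at once
print(parallel)
```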

These light waves then pass through the diffraction operator, a microscopic plate-like structure that performs calculations as light propagates through it. This operation mirrors matrix-vector multiplication, a fundamental AI operation used to transform and process data.
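In software terms, that step is simply y = Wx. The toy sketch below uses made-up values; in OFE2, the "matrix" is fixed by the diffractive structure itself, and the multiplication happens as light propagates.

```python
import numpy as np

# The diffraction operator behaves like a fixed matrix W applied to the
# vector x of parallel input amplitudes. W's values here are random
# stand-ins; on the chip they are set by the diffractive structure.
rng = np.random.default_rng(0)
W = rng.uniform(-1.0, 1.0, size=(4, 4))   # stand-in diffractive transform
x = np.array([0.2, 0.9, 0.4, 0.1])        # one group of parallel samples
y = W @ x                                 # what the light computes in one pass
print(y)
```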

Central to this operation is how the diffracted light forms a focused ‘bright spot’ at the output: by adjusting the phase of the parallel input lights, the spot can be partially deflected toward a particular output port. It is this movement in output power, along with the corresponding changes, that allows the engine, aka OFE2, to capture features of the input signal’s variations over time.

OFE2 operates at a rate of 12.5 GHz, a record in optical computing, and can perform a single matrix-vector multiplication within 250.5 ps, the lowest latency among similar implementations of optical computing.
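Those two figures translate directly into throughput. At 12.5 GHz, a new input group enters every 80 ps, and each result emerges about 250.5 ps after its inputs, so results stream out at the clock rate once the pipeline fills (a back-of-the-envelope reading of the reported numbers, assuming one multiplication per cycle):

```python
# Back-of-the-envelope arithmetic from the reported figures.
rate_hz = 12.5e9        # operating rate reported for OFE2
latency_s = 250.5e-12   # reported latency of one matrix-vector multiply
period_ps = 1e12 / rate_hz
print(f"new input group every {period_ps:.0f} ps")                # 80 ps
print(f"each result lags its inputs by {latency_s * 1e12:.1f} ps")  # 250.5 ps
print(f"pipelined multiplies per second: {rate_hz:.2e}")          # assumes 1/cycle
```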

“We firmly believe this work provides a significant benchmark for advancing integrated optical diffraction computing to exceed a 10 GHz rate in real-world applications.”

– Professor Hongwei Chen, who, along with his team at Tsinghua University, conducted this research

The team demonstrated the strong capabilities of their system across different tasks.

When tested on a digital trading task, OFE2 achieved impressive results: a trader feeds real-time price signals into the engine, and its optimally configured outputs are translated directly into buy or sell decisions, achieving stable profitability with minimal delay since the system operates at the speed of light.
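The paper's exact trading rule isn't reproduced here, but the final step can be pictured as thresholding the engine's output power into decisions, along these lines (the thresholds are made up for illustration):

```python
# Hypothetical post-processing of the engine's output into trade
# decisions. The thresholds are illustrative, not the study's values.
def decide(output_power, buy_above=0.6, sell_below=0.4):
    if output_power > buy_above:
        return "BUY"
    if output_power < sell_below:
        return "SELL"
    return "HOLD"

for power in (0.72, 0.50, 0.31):
    print(f"output power {power:.2f} -> {decide(power)}")
```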

The team also used OFE2 to process images, where the engine extracted edge features from input images and produced two complementary feature maps resembling relief and engraving effects. The optical features produced by OFE2 performed much better in classifying images and boosted pixel accuracy in semantic segmentation tasks, such as identifying organs in computed tomography (CT) scans.
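One way to picture "complementary" feature maps, as an assumption rather than the paper's precise definition, is as the positive and negative parts of a single edge response: one map keeps rising edges (relief), the other keeps falling edges (engraving).

```python
import numpy as np

# Complementary maps as the positive/negative split of one edge
# response (an illustrative interpretation, not the paper's definition).
response = np.array([[-2.0, 0.0, 2.0],
                     [-1.0, 0.0, 1.0]])
relief = np.clip(response, 0.0, None)      # keeps positive (rising) edges
engraving = np.clip(-response, 0.0, None)  # keeps negative (falling) edges
print(relief)
print(engraving)
```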

More importantly, when AI systems use OFE2, they need fewer electronic parameters, showcasing the potential of optical pre-processing to enable lighter, more efficient, and less expensive hybrid AI systems. The hard work is done by optical pre-processing, while the AI models can focus on learning and interpretation.

These results suggest that the most intense computational loads can be moved from electronics to photonics, unlocking a future of real-time AI models.

According to the researchers, their device can process huge data streams with very little energy loss while maintaining good signal integrity even under load.

“The advancements presented in our study push integrated diffraction operators to a higher rate, providing support for compute-intensive services in areas such as image recognition, assisted healthcare, and digital finance,” said Chen. “We look forward to collaborating with partners who have data-intensive computational needs.”

The Global Race to Reinvent Computation with Photonics 

| Project | What It Demonstrates | Speed / Latency | Function | Maturity | Source |
| --- | --- | --- | --- | --- | --- |
| Tsinghua OFE2 (SOA-MZI + diffraction) | Optical feature extraction with parallel WDM | 12.5 GHz; ~250.5 ps per MVM | Optical MVM, edges, time-series features | Lab demo (2025) | APN (2025) |
| MIT Photonic Processor | On-chip optical DNN with NOFUs | <0.5 ns; ~92% accuracy (task-specific) | All-optical linear + nonlinear ops | Lab demo (2024) | Nat. Photonics (2024) |
| Magneto-Optical Memory (Ce:YIG) | Non-volatile optical weights with high endurance | ~1 ns program; ~143 fJ/bit (press) | Photonic in-memory compute / weights | Lab demo (2024–25) | Nat. Photonics (2024) |
| Microsoft Analog Optical Computer | Steady-state analog optics for AI + optimization | Est. ~100× energy efficiency (prototype) | Inference + combinatorial optimization | Prototype (2025) | Nature (2025) |
| NVIDIA Co-Packaged Optics | Photonic links for GPU clusters | 3.5× power efficiency vs. pluggables | Interconnect (not compute) | Product roadmap (2026 targets) | NVIDIA (2025) |

Global photonic computing revolution

The progress from Tsinghua is part of a larger global shift. Scientists around the world are racing to overcome the electronic bottlenecks by turning to light.

Earlier this year, another team from China unveiled a chip that utilizes light to synchronize processors and could unlock next-generation communications and high-speed AI computing.

Traditional chips generate clock signals using electronic oscillators, and they often operate at only one primary clock speed, which means different applications require different chip manufacturing technologies. The new chip, designed by an international group of scientists led by China’s Peking University, uses “light as a medium to generate clock signals through photons.”

They have developed an “on-chip microcomb” that can synthesize single-frequency and wideband signals and provide reference clocks for the electronics in the system.

“By building a ring that looks like a racecourse on the chip, light can continuously ‘run’ at the speed of light. The time of each lap is then used as the standard of the on-chip clock,” said lead author Chang Lin, who’s an assistant professor at the Institute of Information and Communication Technology at Peking University. “Because a lap would take a few billionths of a second, the clock can regulate time at an ultra-high speed.”
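Lin's "racecourse" picture reduces to simple arithmetic: the time for one lap around a ring of circumference L and group index n is n·L/c, and its inverse sets the clock rate. The dimensions below are assumptions for illustration, not the actual device parameters.

```python
# Lap-time arithmetic for the "racecourse" clock picture. The ring
# circumference and group index are assumed values, chosen only to show
# how a millimeter-scale ring yields a ~100 GHz clock.
c = 299_792_458.0         # speed of light in vacuum, m/s
n_group = 2.0             # assumed group index of the waveguide
circumference_m = 1.5e-3  # assumed ring circumference: 1.5 mm
lap_time_s = n_group * circumference_m / c
print(f"lap time ~ {lap_time_s * 1e12:.0f} ps")        # ~10 ps
print(f"clock rate ~ {1 / lap_time_s / 1e9:.0f} GHz")  # ~100 GHz
```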

Equipped with the new technology, chips can cover various microwave frequency bands.

The team has achieved a clock speed of over 100 GHz and says it can produce thousands of identical chips on 8-inch wafers as it resolves stability issues and optimizes packaging processes.

Another international team of researchers tried to address the limitations of Moore’s Law2 through photonics, but they utilized a magneto-optical material: cerium-substituted yttrium iron garnet (YIG), whose optical properties change dynamically in response to external magnetic fields.

Using tiny magnets to store data and control the transfer of light within the material, the researchers pioneered a new type of magneto-optical memory.

This new class of memory, per the study, has switching speeds 100 times faster than those of advanced photonic integrated technology, consumes about one-tenth the power, and can be reprogrammed over 2.3 billion times, implying a practically unlimited lifespan.

Meanwhile, in the US, scientists from MIT have demonstrated3 a photonic processor that can perform all the key AI computations optically on the chip. Their optical device completed the key computations for an ML classification task in less than half a nanosecond with 92% accuracy.

In their work, the scientists designed nonlinear optical function units (NOFUs) to address the challenge of nonlinearity in optics: photons do not interact with each other easily, which makes it energy-intensive to trigger optical nonlinearities. NOFUs combine optics and electronics to integrate nonlinear operations on the chip.
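Loosely speaking, a NOFU taps off a fraction of the light, detects it, and lets the resulting photocurrent modulate the light that remains, producing a nonlinear input-output response. The sketch below is a caricature of that idea with assumed parameters, not MIT's actual circuit.

```python
import numpy as np

# Caricature of a NOFU-style electro-optic nonlinearity (assumed
# parameters, not MIT's circuit): part of the field is photodetected,
# and the photocurrent imposes an intensity-dependent phase on the rest.
def nofu(field, tap=0.1, gain=np.pi):
    detected_power = tap * np.abs(field) ** 2  # tapped and detected
    remaining = np.sqrt(1.0 - tap) * field     # light that passes through
    return remaining * np.exp(1j * gain * detected_power)

fields = np.linspace(0.0, 1.0, 5)
out = nofu(fields)
print(np.abs(out))    # amplitude stays linear in the input ...
print(np.angle(out))  # ... while the phase responds nonlinearly
```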

While universities are demonstrating their proof-of-concept optical chips, major tech companies aren’t far behind; they are actively exploring how these principles can make commercial AI systems faster and greener.

Microsoft researchers detailed a light-based computer4 that uses camera sensors and micro-LEDs to make AI a hundred times more efficient. The tech giant’s prototype analog optical computer (AOC) computes a problem numerous times, improving each time until a “steady state” is reached.

“The most important aspect the AOC delivers is that we estimate around a hundred times improvement in energy efficiency,” said study co-author Jannes Gladrow, who’s an AI researcher at Microsoft, in the company’s blog post. “That alone is unheard of in hardware.”

At the same time, the team programmed a “digital twin,” a model that mimics the computations of the physical AOC and can be scaled to handle more variables and even more complex calculations. The model enables the team to “work on larger problems than the instrument itself can tackle right now,” noted Michael Hansen, senior director of biomedical signal processing at Microsoft Health Futures.

The computer can already handle tasks such as MRI image reconstruction, financial transaction matching, and simple AI inference.

To test the AOC, the team first gave it the simple task of classifying images, and the physical AOC performed at about the level of a digital computer. Its digital twin was then used to reconstruct an image of a brain scan using just 62.5% of the original data, and it did so accurately. This achievement, the scientists believe, could lead to shorter MRI scan times.

The AOC was also used to solve financial problems, where it achieved a higher success rate than current quantum computers.

In an interview with IBM, Francesca Parmigiani, a Principal Researcher at Microsoft Research Cambridge, said that their system has “dual-domain capability,” meaning it can perform two kinds of tasks with the same piece of hardware. This is done by drawing on fixed-point search, which links the way both problems are solved.

“What excites me most is that we can already run workloads in both AI and optimization on the same hardware,” she said. “We’re still at a small scale, but this is an important first step.”
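Abstractly, a fixed-point search iterates x ← f(x) until the state stops changing; the AOC performs the analog equivalent, looping light through optics and electronics until it settles. A toy digital version is sketched below, where the update rule is a made-up contraction rather than Microsoft's actual dynamics.

```python
import numpy as np

# Toy fixed-point search: iterate x <- f(x) until the state stops
# changing ("steady state"). The AOC does this physically in analog
# hardware; f below is a simple contraction chosen for illustration.
def fixed_point(f, x0, tol=1e-9, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

f = lambda x: 0.5 * x + 1.0             # contraction with fixed point 2.0
print(fixed_point(f, np.array([0.0])))  # converges to [2.]
```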

IBM itself is harnessing photons, not to do computations, but to move information faster. “We’re using light to send data at very high density for AI applications,” said Jean Benoît Héroux, a Research Scientist at IBM Research. They are developing photonic links that transfer data between chips, memory, and boards.

Investing in Photonics Computing 

As the momentum behind photonic computing catches the attention of major tech players amid demand for faster AI computation, AI darling NVIDIA (NVDA -0.2%) has also been exploring ways to integrate photonic interconnects and optical networking to push its hardware even further.

While leading the GPU-driven AI revolution, NVIDIA is researching optical data transmission to overcome the bandwidth bottlenecks that limit traditional chip architectures.

Earlier this year, the company launched photonic switches with co-packaged optics (CPO) that promise 10x higher network resiliency, 3.5x better power efficiency, and 1.3x faster time to deploy compared with traditional networks.

As for the chip maker’s stock performance, this week, it became the first company to hit $5 trillion in market value as its share price surged past $212 to hit a new all-time high (ATH). Currently trading at $207, NVIDIA shares are up more than 54% YTD.

NVIDIA Corporation (NVDA -0.2%)

The stock has an EPS (TTM) of 3.51 and a P/E (TTM) of 58.93, and Nvidia pays shareholders a dividend yield of 0.02%.

As for Nvidia’s financial position, the company reported revenue of $46.7 billion for the second quarter of fiscal 2026. While total revenue jumped 6% from the previous quarter, Nvidia’s data center revenue increased by 5% to $41.1 billion, with Blackwell Data Center revenue surging 17% sequentially.

Conclusion

As AI mania spreads around the world, researchers and companies alike are working on replacing electrons with photons to unlock a new world of speed, scalability, and energy efficiency. In this attempt to redefine AI infrastructure, the recent breakthrough from Tsinghua University’s optical engine shows that light-based systems can rival, or even surpass, their electronic counterparts in specific tasks.

But photonic computing is still in the testing phase. Once it matures and becomes cost-effective, it could herald an era where computing moves at the speed of light.

References

1. Sun, R., Zhang, L., Li, Y., Wang, X., Chen, J., & Zhao, Q. (2025). High-speed and low-latency optical feature extraction engine based on diffraction operators. Advanced Photonics Nexus, 4(5), 056012. https://doi.org/10.1117/1.APN.4.5.056012
2. Pintus, P., Dumont, M., Shah, V., Murai, T., Shoji, Y., Huang, D., Moody, G., Bowers, J. E., Youngblood, N., et al. (2025). Integrated non-reciprocal magneto-optics with ultra-high endurance for photonic in-memory computing. Nature Photonics, 19, 54–62. https://doi.org/10.1038/s41566-024-01549-1
3. Bandyopadhyay, S., Sludds, A., Krastanov, S., Hamerly, R., Harris, N., Bunandar, D., Streshinsky, M., Hochberg, M., & Englund, D. (2024). Single-chip photonic deep neural network with forward-only training. Nature Photonics, 18, 1335-1343. https://doi.org/10.1038/s41566-024-01567-z
4. Kalinin, K. P., Gladrow, J., Chu, J., Clegg, J. H., Cletheroe, D., Kelly, D. J., Rahmani, B., Brennan, G., Canakci, B., Falck, F., Hansen, M., Kleewein, J., Kremer, H., O’Shea, G., Pickup, L., Rajmohan, S., Rowstron, A., Ruhle, V., Braine, L., Khedekar, S., Berloff, N. G., Gkantsidis, C., Parmigiani, F., & Ballani, H. (2025). Analog optical computer for AI inference and combinatorial optimization. Nature, 645(8080), 354-361. https://doi.org/10.1038/s41586-025-09430-z

Gaurav started trading cryptocurrencies in 2017 and has been in love with the crypto space ever since. His interest in all things crypto turned him into a writer specializing in cryptocurrencies and blockchain, and he soon found himself working with crypto companies and media outlets. He is also a big Batman fan.
