Spin-Wave Networks: The Next Leap in Efficient AI Computing

Artificial intelligence (AI) is transforming the way we live. With its potential to revolutionize industries, the technology is expected to generate trillions in value.
From healthcare to education, transportation, entertainment, and finance, AI has enhanced efficiency and accuracy across sectors. AI has also been helping to improve energy efficiency itself: for instance, scientists from around the globe collaborated to use AI to design a new class of material that helps cut energy costs [1].
But what about the massive energy demands of AI itself? Power-hungry AI presents a major challenge: with the rapid rise in AI applications, demand for energy is also climbing dramatically, putting a strain on our energy infrastructure.
Machine learning (ML) models are becoming more and more complex, and the bigger and more sophisticated they get, the more resources are required to train and run them.
Training ML models requires not just computational resources but also energy and water for the data centers that house the IT infrastructure needed to train, deploy, and deliver AI applications and services.
Vijay Gadepally, a senior scientist at the MIT Lincoln Laboratory Supercomputing Center (LLSC), said the following a couple of years ago, when the trend was still taking shape:
“As we move from text to video to image, these AI models are growing larger and larger, and so is their energy impact. This is going to grow into a pretty sizable amount of energy use and a growing contributor to emissions across the world.”
The International Energy Agency (IEA) projects that global electricity demand from data centers could double from an estimated 460 terawatt-hours (TWh) in 2022 to 1,000 TWh in 2026, roughly equivalent to the electricity consumption of Japan.
Already, data centers account for about 1.5% of global electricity consumption.
New research released by UNESCO and University College London (UCL) also warns that the energy demands of AI, especially large language models (LLMs), have reached unsustainable levels and that, to change this, “we need a paradigm shift in how we use it.”
As per their report, generative AI tools are being used by over 1 billion people daily, with each prompt consuming about 0.34 watt-hours of energy. It said:
“This adds up to 310 gigawatt-hours per year, equivalent to the annual electricity use of over 3 million people in a low-income African country.”
In their report, the team of computer scientists suggested three key innovations to enable substantial energy savings. These include smaller models that are just as smart and accurate as large ones but can cut energy use by up to 90%; shorter, more concise prompts and responses, which can reduce energy use by over 50%; and model compression, which can save up to 44%.
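To see how these numbers hang together, here is a minimal back-of-envelope sketch in Python. The prompt volume of roughly 2.5 billion per day (a few prompts per user) and the assumption that the three savings compound multiplicatively are illustrative assumptions made here, not figures from the report:

```python
# Back-of-envelope check of the UNESCO/UCL figures (illustrative only).
# Assumptions not in the report as quoted above: ~2.5 billion prompts per day
# and that the three savings levers compound multiplicatively.

WH_PER_PROMPT = 0.34          # watt-hours per prompt, per the report
PROMPTS_PER_DAY = 2.5e9       # assumed daily prompt volume
DAYS_PER_YEAR = 365

annual_gwh = WH_PER_PROMPT * PROMPTS_PER_DAY * DAYS_PER_YEAR / 1e9   # Wh -> GWh
print(f"Baseline: ~{annual_gwh:.0f} GWh/year")                       # ~310 GWh/year

# Stacking the report's three levers, assuming the cuts multiply
savings = {"smaller models": 0.90, "concise prompts/responses": 0.50, "model compression": 0.44}
remaining = annual_gwh
for lever, cut in savings.items():
    remaining *= (1 - cut)
    print(f"After {lever}: ~{remaining:.0f} GWh/year")
```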
Making AI More Efficient with Smarter Software, Greener Hardware

[Image: Glowing AI chip embedded in a green leaf, symbolizing energy-efficient AI]
It is not just individuals who are adopting AI; more and more organizations are also integrating the technology into their businesses.
A study by the IBM Institute for Business Value (IBV) found that the majority of respondents (77%) feel the need to adopt generative AI rapidly to keep pace with their customers.
Over the years, other technological innovations, including earlier generations of computing, have raised similar concerns, which were then addressed through efficiency gains. The same can now be done with AI. From researchers to companies, everyone is working to understand its impact and find ways to mitigate its negative effects.
These solutions include the use of clean and renewable energy, as well as smaller models and smarter model training.
To tackle AI’s energy efficiency challenges, researchers are focused on two fronts:
- Software innovations
- Hardware improvements
In the hardware realm, power-capping is a solution that can potentially reduce energy consumption by as much as 15%. There is also carbon-efficient hardware, which “matches a model with the most carbon-efficient mix of hardware,” per MIT.
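As a concrete illustration of power-capping, the sketch below lowers a GPU's power limit using NVIDIA's NVML bindings (the pynvml module, installable via the nvidia-ml-py package). The 85% cap is an arbitrary illustrative choice, and changing the limit typically requires administrator privileges; treat this as a sketch rather than a tuned policy.

```python
# Minimal power-capping sketch using NVIDIA's NVML Python bindings.
# The 85% cap is illustrative; setting limits usually requires admin/root rights.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)                         # first GPU

current_mw = pynvml.nvmlDeviceGetPowerManagementLimit(handle)         # milliwatts
min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)

target_mw = max(min_mw, int(current_mw * 0.85))                       # cap at ~85% of current limit
pynvml.nvmlDeviceSetPowerManagementLimit(handle, target_mw)
print(f"Power limit lowered from {current_mw/1000:.0f} W to {target_mw/1000:.0f} W")

pynvml.nvmlShutdown()
```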
At the MIT Sustainability Conference in October, Gadepally, who leads energy-aware research efforts at LLSC, suggested rethinking AI model training and investing in more efficient hardware. The MIT Lincoln Laboratory has been employing Gadepally’s recommendations to reduce its own data center footprint.
Using more computationally efficient hardware and specialized hardware accelerators can also contribute to energy savings. Parallelization, which reduces the algorithm’s training time by distributing computation among several processing cores, and edge computing, which performs computation at the locations where the data is collected or used, are other promising hardware solutions.
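As a toy illustration of the parallelization idea, and not any particular framework's training loop, the sketch below splits a hypothetical workload across CPU cores with Python's standard multiprocessing module; real training systems distribute work across GPUs and nodes in an analogous way.

```python
# Toy data-parallelism sketch: split a workload across CPU cores.
from multiprocessing import Pool

def partial_loss(shard):
    """Sum of squared errors for one data shard (stand-in for a heavier computation)."""
    return sum((x - 0.5) ** 2 for x in shard)

if __name__ == "__main__":
    data = [i / 1000 for i in range(1_000_000)]
    shards = [data[i::8] for i in range(8)]             # split the data into 8 shards
    with Pool(processes=8) as pool:
        total = sum(pool.map(partial_loss, shards))     # each shard runs on its own core
    print(f"Total loss: {total:.1f}")
```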
Scientists are also turning to the human brain, with its roughly 100 billion neurons and 100 trillion synaptic connections, to make machines better.
This has led to neuromorphic computing, which, instead of relying on traditional von Neumann architectures, utilizes artificial neurons and synapses to process information in a manner similar to the brain to achieve greater energy efficiency and computational power.
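Neuromorphic chips differ widely, but the basic unit many of them emulate is a spiking neuron. The minimal leaky integrate-and-fire model below, written in plain Python purely for illustration (the parameters are arbitrary), shows the event-driven behavior that makes such hardware frugal: the neuron only fires when its membrane potential crosses a threshold.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, for illustration only.
# Real neuromorphic hardware implements this dynamic in analog or digital circuits.

def lif_neuron(input_current, leak=0.9, threshold=1.0):
    """Yield 1 on a spike, 0 otherwise, for each input sample."""
    v = 0.0                      # membrane potential
    for i in input_current:
        v = leak * v + i         # integrate the input with leaky decay
        if v >= threshold:       # fire and reset when the threshold is crossed
            v = 0.0
            yield 1
        else:
            yield 0

spikes = list(lif_neuron([0.0, 0.3, 0.3, 0.6, 0.1, 0.9, 0.2]))
print(spikes)                    # [0, 0, 0, 1, 0, 0, 1]
```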
For instance, researchers from Seoul National University College of Engineering developed neuromorphic devices based on hybrid organic-inorganic materials [2].
Talking about the key part of their research, Professor Ho Won Jang noted that it “lies in demonstrating that uniform ion movement across the surface of the material is more important for developing high-performance neuromorphic hardware than creating localized filaments in semiconductor materials.”
Light is another way AI hardware is being improved. Instead of electrical signals, photonic computing uses light and allows for parallel operations with minimal heat loss.
Just a few months ago, researchers from Columbia Engineering unveiled a 3D photonic-electronic platform that achieves exceptional energy efficiency and bandwidth density by integrating photonics with advanced CMOS electronic circuits [3]. The 3D-integrated photonic-electronic chip delivers high bandwidth (800 Gb/s) while consuming just 120 femtojoules per bit, and its bandwidth density of 5.3 Tb/s/mm² exceeds existing benchmarks.
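For context, those two figures imply a strikingly small power budget. The quick arithmetic below uses only the numbers quoted above:

```python
# Sanity-check arithmetic on the Columbia chip's reported figures (values as quoted above).
bit_rate = 800e9          # 800 Gb/s aggregate bandwidth
energy_per_bit = 120e-15  # 120 femtojoules per bit

power_w = bit_rate * energy_per_bit
print(f"Implied power draw of the full 800 Gb/s link: ~{power_w * 1e3:.0f} mW")  # ~96 mW
```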
Last summer, meanwhile, researchers from the University of Minnesota College of Science and Engineering demonstrated a new technology called computational random-access memory (CRAM) that could cut the energy used by AI by a factor of 1,000 [4].
With silicon photonics emerging as a disruptive technology for next-generation ML accelerators, researchers from Hewlett Packard Labs have introduced an energy-efficient and scalable silicon photonic platform to serve as the foundation for AI accelerator hardware [5].
Unlike traditional accelerators, which run deep neural networks (DNNs) on digital electronics, photonic AI accelerators use optical neural networks (ONNs) [6], offering high parallelism, extremely low latency, and minimal heat loss.
While silicon photonics is easy to manufacture, it has been difficult to scale, which is the gap the platform addresses. It is fabricated from silicon photonics combined with III-V compound semiconductors such as InP and GaAs.
Now, there is a new way to make AI more efficient: enabling large spin-waveguide networks to handle advanced information processing. Spin waves are a promising medium for processing information.
This breakthrough in AI hardware efficiency was achieved by a team of German scientists from the Universities of Münster and Heidelberg.
Led by Rudolf Bratschitsch, a physics professor at Münster, the team created a vast spin-waveguide network that processes information with significantly less energy, making it a promising alternative to energy-intensive electronics.
Scalable Magnonic Circuits as the New Frontier in Energy-Efficient AI

[Image: Spin waves flowing through a nanoscale circuit, illustrating magnonic networks]
| Spin-Wave Device | Function |
|---|---|
| Logic Gates | Perform binary operations for data processing |
| Multiplexers | Select input signals for routing |
| Couplers & Splitters | Divide or combine spin-wave signals |
| Interferometers | Analyze wave interactions for computing tasks |
| Memories | Store spin-wave encoded data |
While magnonic networks based on magnetic insulators could revolutionize information processing thanks to their energy efficiency, their building blocks, spin-wave waveguides, have so far suffered from poor dispersion tunability and limited spin-wave propagation lengths.
These limitations have been addressed by the team of scientists from Münster and Heidelberg.
Published in the scientific journal Nature Materials [7], the study details a new way to create waveguides in which spin waves can propagate over long distances, enabling the team to build the largest spin-waveguide network to date.
But that’s not all. The team was also able to control the properties of the spin wave that was transmitted in the waveguide. For instance, the scientists were able to precisely change the wavelength and reflection of the spin wave at a certain interface. The study noted:
“The dispersion of the waveguides can be continuously tuned due to the precise and localized ion implantation, which sets them apart from commonly etched waveguides.”
Electron spin, or intrinsic angular momentum, is a fundamental quantum mechanical property of electrons, and the collective alignment of many spins determines a material’s magnetic properties. If an alternating current is applied to an antenna on a magnetic material, the resulting oscillating magnetic field can set the spins in motion and generate a spin wave.
Spin waves are excitations of a magnetic material, and they present exciting possibilities for advanced information processing.
What truly makes them attractive are their distinct characteristics, such as strong intrinsic nonlinearity and high-speed operation in the gigahertz (GHz) to terahertz (THz) frequency range.
In recent times, researchers have begun to utilize spin waves in nanoscale magnetic structures and networks for signal processing and computing applications. This emerging technology can help address limitations inherent in traditional semiconductor microelectronics with regard to computational density and high-dimensional processing capacity.
More importantly, it is the low-energy footprint of spin-wave technology that’s particularly appealing.
The utility of the technology lies in its ability to encode information in the phase, frequency, and amplitude of spin waves. Much as with electromagnetic waves, this allows for flexible data processing by exploiting how propagation behavior depends on these parameters.
Spin waves are currently used to create different individual components. Logic gates that perform logical operations on binary inputs to produce a single binary output are one example. Multiplexers are another type of device that selects one of several input signals.
Other examples include crossings, couplers, memories, majority gates, (de-)multiplexers, interferometers, splitters, and spectrum analysers.
All of these devices can either work independently as information processing units or integrate into bigger, complex networks with advanced functionalities.
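To make the phase-encoding idea concrete, here is a toy numeric model of a majority gate, one of the device types listed above. It sketches the general interference principle only, with arbitrary unit-amplitude waves, and is not a model of the actual devices in the Münster-Heidelberg study.

```python
# Toy model of phase-encoded wave logic (illustrative, not the study's devices).
# Each input bit rides on a wave of equal amplitude: logic 0 -> phase 0, logic 1 -> phase pi.
# When three such waves interfere, the phase of the sum follows the majority of the inputs.
import cmath, math

def majority_gate(a, b, c):
    waves = [cmath.exp(1j * math.pi * bit) for bit in (a, b, c)]   # unit-amplitude input waves
    total = sum(waves)                                             # interference at the output port
    return 1 if abs(cmath.phase(total)) > math.pi / 2 else 0       # read the output phase

for bits in [(0, 0, 0), (0, 1, 0), (1, 1, 0), (1, 1, 1)]:
    print(bits, "->", majority_gate(*bits))                        # 0, 0, 1, 1
```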
In a large network, the links between elements are customized waveguides for spin waves. These waveguides are important to confine as well as guide spin waves from one element to another and, as such, require minimal propagation losses. Such waveguides and their combinations also serve as functional spin-wave devices.
The components, however, haven’t been connected to form a larger circuit until now.
“The fact that larger networks such as those used in electronics have not yet been realised, is partly due to the strong attenuation of the spin waves in the waveguides that connect the individual switching elements – especially if they are narrower than a micrometre and therefore on the nanoscale.”
– Physicist Professor Bratschitsch
To overcome that problem, the team used the material with the lowest known attenuation: yttrium iron garnet (YIG). It has the lowest damping and the longest spin-wave propagation lengths, reaching millimeters.
Spin-wave waveguides are usually realized lithographically; the established approach to creating nanoscale waveguides in YIG is reactive ion etching of thin YIG films. But even with high-quality YIG films and state-of-the-art etching processes, the maximum reported propagation length is 54 µm.
Developing hybrid structures is another emerging approach, in which YIG films are combined with nanostripes of ferromagnetic metal to define nanoscopic spin-wave channels through dipolar coupling, yielding propagation lengths of ~20 µm.
Then there’s ion implantation, which was recently used to manipulate spin waves in YIG. Focused ion beam writing has enabled the precise modification of YIG films on a submicrometre scale.
The scientists therefore took a commercially available 110-nm-thick film of the magnetic material YIG and inscribed individual spin-wave waveguides into it using a beam of silicon ions.
The maskless implantation process allowed the creation of multiple tailored spin-wave structures on one substrate. But more importantly, it can be scaled up to fabricate wafer-size magnonic integrated circuits.
A gold microstrip antenna was also fabricated on the film by electron-beam lithography to excite spin waves with a continuous-wave microwave signal, and an external static in-plane magnetic field H0 of μ0H0 = 50 mT was applied to launch surface-mode spin waves.
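For a feel for the frequencies involved, the quick estimate below applies the textbook in-plane Kittel formula at the quoted 50 mT field. The YIG magnetization value and the long-wavelength (k → 0) approximation are assumptions made here for illustration, not figures from the paper.

```python
# Rough estimate of the spin-wave excitation frequency at the quoted field (illustrative).
# Assumptions not stated in the article: YIG saturation magnetization mu0*Ms ~ 0.176 T and
# the in-plane Kittel formula f = (gamma/2pi) * sqrt(B0 * (B0 + mu0*Ms)) as a k -> 0 approximation.
import math

GAMMA_OVER_2PI = 28e9   # Hz per tesla (electron gyromagnetic ratio)
B0 = 0.050              # applied field, 50 mT
MU0_MS = 0.176          # typical YIG saturation magnetization, in tesla

f = GAMMA_OVER_2PI * math.sqrt(B0 * (B0 + MU0_MS))
print(f"Estimated excitation frequency: ~{f / 1e9:.1f} GHz")   # on the order of a few GHz
```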
Through this fabrication process, they produced a large network with 198 nodes, opening the door to large-scale magnonic integrated circuits and allowing complex, high-quality structures to be created reproducibly and flexibly.
Moreover, the team achieved a spin-wave propagation length of over 100 µm, and their etchless approach allowed them to build an integrated spin-wave network with 34 parallel input ports and 34 outputs. The study stated:
“These results pave the way for realizing advanced magnonic networks with unparalleled control and exciting avenues for realizing low-loss large-scale spin-wave computing systems.”
Investing in Efficient AI
In the world of artificial intelligence, NVIDIA Corporation (NVDA) is the clear leader with its AI accelerators and chips. The world’s largest company by market cap, at over $4 trillion, Nvidia has also been investing in energy-efficient architectures.
Nvidia’s GPUs offer steady performance-per-watt improvements. Its Blackwell architecture, in particular, promises generative AI on trillion-parameter LLMs at up to 25x less cost and energy consumption than the previous Hopper architecture.
Blackwell, co-founder and CEO Jensen Huang said last year, is designed to be “very performant and very energy efficient.”
Nvidia also offers liquid-cooled rack-scale systems, the NVIDIA GB200 NVL72 and the NVIDIA GB300 NVL72, to handle the demanding task of LLM inference, with an architecture specifically optimized for test-time scaling accuracy and performance.
The tech giant is also involved in edge AI research and development with its NVIDIA EGX™ platform, which combines powerful computing, remote management, and systems and software to bring AI to the edge. NVIDIA IGX Orin™ is designed for industrial and medical environments, while the NVIDIA Jetson™ platform is its robotics solution.
Yet another area of research at Nvidia is photonics. Earlier this year, the company announced its new co-packaged silicon photonic networking switches to connect millions of GPUs across sites while reducing energy consumption and operational costs.
“By integrating silicon photonics directly into switches, NVIDIA is shattering the old limitations of hyperscale and enterprise networks and opening the gate to million-GPU AI factories.”
– Huang
The new tech uses beams of laser light to send information over fiber-optic cables between chips; the switches are due to roll out later this year and into 2026.
The company has also looked at using the technology more widely in its flagship GPU chips but has no plans to do so yet, as traditional copper connections remain “orders of magnitude” more reliable than co-packaged optical connections.
When it comes to market performance, Nvidia’s has been nothing short of extraordinary. In October 2022, NVDA shares traded below $11; they are currently above $165, giving the company an EPS (TTM) of $3.10 and a P/E (TTM) of 53.12. The company even pays a dividend, though the yield is only 0.02%.
As for financials, for the first quarter of fiscal 2026, Nvidia reported revenue of $44.1 billion, up 12% from Q4, while data center revenue came in at $39.1 billion, a 10% increase from the previous quarter.
The demand for the company’s AI infrastructure, Huang noted, is “incredibly strong.”
Conclusion
As the world continues to adopt AI, with its promise of increased efficiency, enhanced productivity, improved decision-making, and personalized experiences, the market for this powerful technology is expected to be worth many billions of dollars in 2025.
But as adoption of energy-hungry AI grows, so do its energy needs, which means strain on power grids and rising greenhouse gas emissions.
In order to achieve truly efficient AI, coordinated efforts are required in the evolution of both software and hardware. Against this backdrop, innovations like smarter model training, smaller models, concise prompts, model compression, neuromorphic computing, edge AI, and photonics could help create a future where scale doesn’t have to come with unsustainable energy demands.
Here, the latest breakthrough in spin-wave computing could define the future of low-power, high-performance computing, potentially becoming foundational to next-generation AI architectures.
Click here to learn all about investing in artificial intelligence.
References:
1. Xiao, C.; Liu, M.; Yao, K.; et al. Ultrabroadband and Band-Selective Thermal Meta-Emitters by Machine Learning. Nature 2025, 643, 80–88. https://doi.org/10.1038/s41586-025-09102-y
2. Kim, S.J.; Im, I.H.; Baek, J.H.; et al. Linearly Programmable Two-Dimensional Halide Perovskite Memristor Arrays for Neuromorphic Computing. Nat. Nanotechnol. 2025, 20, 83–92. https://doi.org/10.1038/s41565-024-01790-3
3. Daudlin, S.; Rizzo, A.; Lee, S.; et al. Three-Dimensional Photonic Integration for Ultra-Low-Energy, High-Bandwidth Interchip Data Links. Nat. Photon. 2025, 19, 502–509. https://doi.org/10.1038/s41566-025-01633-0
4. Lv, Y.; Zink, B.R.; Bloom, R.P.; et al. Experimental Demonstration of Magnetic Tunnel Junction-Based Computational Random-Access Memory. npj Unconv. Comput. 2024, 1, 3. https://doi.org/10.1038/s44335-024-00003-3
5. Tossoun, B.; Xiao, X.; Cheung, S.; Yuan, Y.; Peng, Y.; Srinivasan, S.; et al. Large-Scale Integrated Photonic Device Platform for Energy-Efficient AI/ML Accelerators. IEEE J. Sel. Top. Quantum Electron. 2025, 31(3), Article 8200326. https://doi.org/10.1109/JSTQE.2025.3527904
6. Fu, T.; Zhang, J.; Sun, R.; et al. Optical Neural Networks: Progress and Challenges. Light Sci. Appl. 2024, 13, 263. https://doi.org/10.1038/s41377-024-01590-3
7. Bensmann, J.; Schmidt, R.; Nikolaev, K.O.; et al. Dispersion-Tunable Low-Loss Implanted Spin-Wave Waveguides for Large Magnonic Networks. Nat. Mater. 2025. https://doi.org/10.1038/s41563-025-02282-y












