Bee Brains Inspire Smarter AI and Robotics

Bees, the world’s greatest pollinators, are an essential part of the biodiversity on which humans directly depend for survival.
These winged insects are best known for producing honey, along with beeswax, propolis, pollen, and royal jelly, among other products. More importantly, they pollinate countless flowering plants, including the vast majority of the world’s food crops, enabling plants to reproduce and produce fruits, vegetables, and seeds.
To accomplish this, bees use their hairy bodies to transfer pollen from one flower to another.
Bees aren’t alone in this; birds, monkeys, and even humans pollinate, too. But bees are by far the most common pollinators: it is estimated that over 87% of all flowering plant species depend on animals for pollination, with bees as the primary group, making pollination an essential ecosystem service vital for biodiversity and food security.
Bees are very intelligent insects, and people have long studied their behavior, mannerisms, and social interactions to monitor ecosystem health, track environmental changes, and improve crop pollination efficiency.
Moreover, bees are used as a model for understanding cooperative behavior and mapping how tiny brains coordinate complex social tasks.
Scientists also take inspiration from bees to advance technology. For instance, their navigation and communication strategies are applied to drone tech. Bee behavior has also inspired robotics, algorithms, and AI.
On that front, researchers have now discovered that bees use their flight movements to sharpen their brain signals, allowing them to learn and recognize complex visual patterns with great accuracy.
This movement-based perception, according to the new study, could revolutionize the development of next-generation AI and robotics by prioritizing efficiency over massive computing power.
Bee Intelligence: What Tiny Brains Teach Us About AI
The visual learning abilities of bees are remarkable: they can learn to associate a color with a reward and identify specific features to classify visual patterns. They have even shown the capacity to grasp abstract concepts and solve numerosity tasks by sequentially scanning the elements within a stimulus.
A fundamental concept in cognitive science, numerosity is the number of items in a set. It is usually studied in the context of visual perception, as the ability to quickly grasp the quantity of objects in a scene without counting.
As such, numerosity tasks probe the brain’s innate ability to perceive and estimate quantities.
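To make the idea of sequential scanning concrete, here is a minimal, hypothetical sketch (ours, not from the study, and the function name is invented for illustration): an agent sweeps across a one-dimensional stimulus and increments an accumulator each time a new element begins, estimating quantity without taking in the whole scene at once.

```python
def estimate_count(stimulus):
    """Estimate numerosity by scanning left to right and counting onsets."""
    count, inside = 0, False
    for pixel in stimulus:
        if pixel and not inside:   # a new element begins under the scanner
            count += 1
        inside = bool(pixel)
    return count

# three elements of varying width -> prints 3
print(estimate_count([0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]))
```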
Bees clearly have exceptional capabilities, which makes them a valuable animal model for exploring the principles of visual learning through their behavioral responses.
Yet it remains unclear just how bees manage to identify complex patterns and perceive the complexities of the world around them while foraging, given their comparatively low visual acuity and limited neural resources.
Visual sensory neurons are thought to have evolved to exploit regularities in natural scenes. Studies have shown, for instance, that insect sensory pathways and the behaviors associated with them dynamically adapt to changing surroundings, adjusting their responses to input properties such as spatial frequency, contrast, and spatiotemporal correlations.
Active sampling strategies, in which animals continuously scan their environment to extract visual information over time, have been widely observed across species.
While primates use eye movements to enhance their fine spatial resolution and improve the encoding of natural stimuli, insects employ strategies that involve head and body movements or specific approach trajectories.
Bees, in particular, likely depend on active vision and sequential sampling to build a strong, resilient neural representation of their surroundings.
These strategies play a key part in early visual processing, reducing redundancy and making the encoding of visual stimuli more efficient. Even so, our understanding of how these mechanisms allow bees to detect visual regularities, overcome representational constraints, and solve complex tasks remains limited.
According to the latest study, understanding these strategies is crucial for unraveling the fundamental principles of insect vision and their broader implications for visual processing across biological and artificial systems.
Building on their previous study, which assessed bee flight paths during a simple visual task1, the researchers have now looked into the main circuit elements that contribute to active vision when recognizing achromatic patterns.
The primary goal of the study is to determine how the scanning behavior of bees contributes to the organization and connectivity of neurons in their visual lobes.
Researchers from the University of Sheffield hypothesized that scanning behaviors have adapted to sample complex visual features in a way that encodes them more efficiently in the lobula neurons. This, in turn, facilitates unique representations that support learning in the bees’ tiny brain. To test this hypothesis, they developed a neuromorphic model of the bee optic lobes.
The researchers incorporated efficient coding principles through a novel model of non-associative plasticity. This enabled the model to self-organize its connectivity within the visual lobe, creating efficient representations of the environment and giving rise to orientation-selective cells, which are essential for encoding complex visual scenes.
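The paper’s exact circuitry aside, the flavor of non-associative plasticity can be illustrated with a classic rule of this family, Oja’s rule, in a minimal sketch (our illustration, not the authors’ model): a single unit, exposed repeatedly to edge-dominated input and given no reward signal, self-organizes its weights into an orientation-selective receptive field.

```python
import numpy as np

# Minimal sketch of non-associative plasticity (an illustration, not the
# published model): Oja's rule, a Hebbian update with built-in decay, lets a
# single unit self-organize toward the dominant structure of its input with
# no reward signal. Trained on ~45-degree edges, the weights become an
# orientation-selective receptive field.

rng = np.random.default_rng(0)
PATCH = 8    # 8x8 input patch, a stand-in for upstream visual input
LR = 0.05    # learning rate

def oriented_patch(angle_deg):
    """Synthesize a noisy, unit-norm edge-like patch at a given orientation."""
    y, x = np.mgrid[0:PATCH, 0:PATCH] - (PATCH - 1) / 2
    t = np.deg2rad(angle_deg)
    grating = np.cos(2 * np.pi * (x * np.cos(t) + y * np.sin(t)) / PATCH)
    p = (grating + 0.3 * rng.standard_normal((PATCH, PATCH))).ravel()
    return p / np.linalg.norm(p)

w = rng.standard_normal(PATCH * PATCH) * 0.1
for _ in range(3000):
    x = oriented_patch(45 + rng.normal(0, 5))   # stimuli dominated by ~45 deg
    v = w @ x                                   # unit's (linear) firing rate
    w += LR * v * (x - v * w)                   # Oja's rule: Hebb + decay

# Probe: the learned unit responds most strongly near 45 degrees
for angle in (0, 45, 90, 135):
    resp = np.mean([abs(w @ oriented_patch(angle)) for _ in range(200)])
    print(f"{angle:>3} deg -> mean |response| {resp:.3f}")
```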
The visual processing framework was further enhanced with a decision-making module inspired by the associative learning mechanisms of insects.
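As a rough analogy for that decision stage (an assumption on our part, not the paper’s specification), insect associative learning is often modeled as a reward-modulated update on a small readout. The sketch below, using hypothetical rate codes, learns to pick between two pattern classes from just four firing rates.

```python
import numpy as np

# Hedged analogy for the decision module (not the paper's specification):
# a two-choice readout over four orientation-tuned firing rates
# (0, 45, 90, 135 degrees), trained with a reward-modulated delta rule,
# a common stand-in for insect associative learning.

rng = np.random.default_rng(1)
w = np.zeros((2, 4))   # readout weights: 2 choices x 4 rate inputs
LR = 0.1

def rates_for(label):
    """Hypothetical rate codes: 'plus' drives cardinal units, 'mult' diagonal."""
    loc = [5, 1, 5, 1] if label == 0 else [1, 5, 1, 5]
    return np.abs(rng.normal(loc, 1.0))

def choose(rates):
    return int(np.argmax(w @ rates))

for step in range(200):
    label = step % 2                   # alternate 'plus' (0) and 'mult' (1)
    rates = rates_for(label)
    action = choose(rates)
    reward = 1.0 if action == label else -1.0
    w[action] += LR * reward * rates   # reinforce or punish the taken choice

test = [(choose(rates_for(l)) == l) for l in (0, 1) for _ in range(100)]
print("test accuracy:", np.mean(test))
```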
The researchers’ simulations reveal that a small subset of lobula neurons, each sensitive to specific orientations and velocities, can compress complex visual environments into compact representations expressed as firing rates. These sparse representations effectively distinguish between the plus and multiplication patterns, highlighting the model’s broader applicability.
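For intuition on how a handful of orientation-tuned units can compress a pattern into firing rates, here is a toy sketch under our own assumptions (the pattern size, filter shapes, and unit count are illustrative, not the published network): four crude oriented filters summarize a 16x16 image as four rates, and the plus and multiplication signs land on clearly different codes.

```python
import numpy as np

# Toy illustration (our assumptions, not the published network): four
# orientation-tuned units summarize a pattern as a short firing-rate vector.
# A plus sign is dominated by 0/90-degree edges, a multiplication sign by
# 45/135-degree edges, so even this tiny code separates them.

SIZE = 16

def plus_pattern():
    img = np.zeros((SIZE, SIZE))
    img[SIZE // 2 - 1: SIZE // 2 + 1, :] = 1.0   # horizontal bar
    img[:, SIZE // 2 - 1: SIZE // 2 + 1] = 1.0   # vertical bar
    return img

def mult_pattern():
    img = np.zeros((SIZE, SIZE))
    for i in range(SIZE):
        img[i, i] = img[i, SIZE - 1 - i] = 1.0   # two diagonals
    return img

def oriented_filter(theta_deg):
    """Crude Gabor-like filter: cosine grating under a Gaussian envelope."""
    y, x = np.mgrid[0:SIZE, 0:SIZE] - (SIZE - 1) / 2
    t = np.deg2rad(theta_deg)
    u = x * np.cos(t) + y * np.sin(t)
    envelope = np.exp(-(x**2 + y**2) / (2 * (SIZE / 4) ** 2))
    return envelope * np.cos(2 * np.pi * u / 4)

def rate_code(img):
    """Rectified responses of four orientation-tuned units."""
    return np.array([max(0.0, float(np.sum(img * oriented_filter(th))))
                     for th in (0, 45, 90, 135)])

for name, img in (("plus", plus_pattern()), ("mult", mult_pattern())):
    print(name, np.round(rate_code(img), 1))
```

The plus pattern mainly excites the 0- and 90-degree units, while the multiplication pattern excites the 45- and 135-degree units, so a downstream readout like the one sketched earlier could separate the two from these four numbers alone.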
According to the authors, these insights can help advance our understanding of biological vision and cognition and inspire the development of novel computational models for visual recognition tasks.
How Bee-Inspired Vision is Shaping Robotics and AI
The latest study, a collaborative effort with Queen Mary University of London and published in the journal eLife, detailed a digital model of a bee’s miniature brain2.
It leverages the surprising way these insects combine brain and body to help advance technology and make future robots smarter and more efficient. Much as bees use their flight movements to create clear brain signals that simplify complex visual tasks, next-generation technology could gather relevant information through movement rather than relying on huge computing power.
The study, after all, has demonstrated that even tiny insect brains are capable of solving complex visual tasks.
The fact that so few brain cells can do so much suggests that intelligence is not just a brain phenomenon, but the result of brain, body, and environment working in concert.
Building a digital version of a bee’s brain helped researchers discover that the way bees move their bodies during flight helps shape their visual input. These movements also generate distinctive electrical signals in their brains, allowing them to identify predictable features of their surroundings easily and efficiently.
This explains bees’ remarkable accuracy in learning and identifying complex visual patterns during flight.
“In this study, we’ve successfully demonstrated that even the tiniest of brains can leverage movement to perceive and understand the world around them. This shows us that a small, efficient system — albeit the result of millions of years of evolution — can perform computations vastly more complex than we previously thought possible.”
– The study’s senior author, Professor James Marshall, Director of the Centre of Machine Intelligence at the University of Sheffield
By leveraging nature’s best designs for intelligence, Marshall noted, this paves the way for “the next generation of AI, driving advancements in robotics, self-driving vehicles and real-world learning.”
As noted earlier, this study builds on their previous research into how bees use active vision, where their movements help collect and process visual information. The latest work takes a deeper look at the underlying brain mechanisms that drive their behavior of flying around and inspecting specific patterns.
“In our previous work, we were fascinated to discover that bees employ a clever scanning shortcut to solve visual puzzles. But that just told us what they do; for this study, we wanted to understand how.”
– Lead author, Dr. HaDi MaBouDi from the University of Sheffield
Bees’ advanced visual pattern learning abilities, including their capacity to differentiate between human faces, have long been recognized. What has remained unclear is how they navigate the world with such efficiency.
“Our model of a bee’s brain demonstrates that its neural circuits are optimized to process visual information not in isolation, but through active interaction with its flight movements in the natural environment.”
– MaBouDi
And this, he noted, supports the theory that intelligence arises from the interplay of the brain, body, and environment working together.
“We’ve learnt that bees, despite having brains no larger than a sesame seed, don’t just see the world – they actively shape what they see through their movements. It’s a beautiful example of how action and perception are deeply intertwined to solve complex problems with minimal resources. This is something that has major implications for both biology and AI.”
– MaBouDi
The model shows that a bee’s neurons become highly tuned to specific movements and directions as its brain gradually adapts through repeated exposure to different stimuli, improving its responses without depending on associations or reinforcement.
In other words, a bee’s brain adapts to its environment simply through observation while in flight, without needing immediate rewards.
All of this is done using just a few neurons, which conserves both energy and processing power, making their brain incredibly efficient. Now, to test the model, the team put it through the same visual challenges as those faced by actual bees. In this case, the computational model had to differentiate between a ‘plus’ sign and a ‘multiplication’ sign.
When mimicking the strategy of real bees, scanning only the lower half of the patterns, the model showed considerably improved performance.
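As a hedged illustration of why that shortcut can work (our toy setup, not the study’s pipeline), the sketch below sweeps a small window along only the lower half of each pattern and records a simple oriented-edge statistic at each step; the resulting temporal sequences alone already separate the two patterns.

```python
import numpy as np

# Toy scanning sketch (our setup, not the study's pipeline): sweep a small
# window along the lower half of each 16x16 pattern (same toy patterns as
# the earlier sketch) and record a diagonal-vs-cardinal edge statistic at
# each step. The sequence from the lower half alone separates the patterns.

SIZE, WIN = 16, 6

def plus_pattern():
    img = np.zeros((SIZE, SIZE))
    img[SIZE // 2 - 1: SIZE // 2 + 1, :] = 1.0
    img[:, SIZE // 2 - 1: SIZE // 2 + 1] = 1.0
    return img

def mult_pattern():
    img = np.zeros((SIZE, SIZE))
    for i in range(SIZE):
        img[i, i] = img[i, SIZE - 1 - i] = 1.0
    return img

def diag_energy(window):
    """Large when edges in the window are oblique, small when cardinal."""
    gy, gx = np.gradient(window)
    return float(np.abs(gx * gy).sum() / ((gx**2 + gy**2).sum() + 1e-9))

def scan_lower_half(img):
    lower = img[SIZE // 2:, :]   # the bees' shortcut: lower half only
    return [round(diag_energy(lower[:, i:i + WIN]), 2)
            for i in range(0, SIZE - WIN + 1, 2)]

print("plus:", scan_lower_half(plus_pattern()))
print("mult:", scan_lower_half(mult_pattern()))
```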
Furthermore, the model successfully demonstrated how bees can recognize human faces using only a small network of artificial neurons, underscoring the versatility and strength of their visual processing.
“Scientists have been fascinated by the question of whether brain size predicts intelligence in animals. But such speculations make no sense unless one knows the neural computations that underpin a given task,” said Professor Lars Chittka, Professor of Sensory and Behavioural Ecology at Queen Mary University of London. “Here we determine the minimum number of neurons required for difficult visual discrimination tasks and find that the numbers are staggeringly small, even for complex tasks such as human face recognition. Thus, insect microbrains are capable of advanced computations.”
In this way, the study adds to the evidence that animals don’t just passively receive information; they actively work with it.
Bees, in particular, exhibit higher-order visual processing, and the model reveals how behaviorally driven scanning can create compressed, learnable neural codes.
“Together, these findings support a unified framework where perception, action, and brain dynamics co-evolve to solve complex visual tasks with minimal resources – offering powerful insights for both biology and AI.”
– Professor Mikko Juusola, Professor in System Neuroscience from the University of Sheffield’s School of Biosciences and Neuroscience Institute
Click here to learn how AI can help protect honeybees from Asian hornets.
| Approach | Key Principle | Strengths | Limitations |
| --- | --- | --- | --- |
| Conventional AI | Massive datasets & high computing power | High accuracy in complex tasks | Energy-intensive, costly to scale |
| Bee-Inspired AI | Active vision & efficient neural coding | Lightweight, energy-efficient, fast learning | Still in early research phase |
Investing in AI Tech
In the world of AI and robotics, Qualcomm (QCOM +2.2%) is a well-known name that has explored neuromorphic and edge-AI technologies.
Over a decade ago, Qualcomm released its Zeroth processors, designed to mimic human-like perception and learning the way biological brains do. Beyond biologically inspired learning, the goal was to replicate the efficiency with which the brain communicates information and to standardize a new processing architecture called the Neural Processing Unit (NPU).
Its AI-driven Robotics RB6 Platform, meanwhile, powers next-generation robotics and intelligent machines, including delivery robots, autonomous mobile robots (AMRs), UAM aircraft, manufacturing robots, autonomous defense solutions, and more. The platform delivers power-efficient, advanced edge-AI computing and video processing with 5G connectivity for robots.
Primarily, Qualcomm is involved in developing foundational technologies for the wireless industry, including 3G, 4G, 5G, wireless connectivity, and high-performance and low-power computing.
Click here to learn all about investing in artificial intelligence (AI).
Looking at Qualcomm’s market performance, the $171.67 billion market cap company’s shares are currently trading at $159.54, up 3.6% this year so far.
While performance this year has been underwhelming, it follows QCOM’s surge past $215 in June last year. Its EPS (TTM) stands at 10.36, P/E (TTM) at 15.36, and ROE (TTM) at 44.62%, while shareholders benefit from a dividend yield of 2.24%.
On the financial front, the wireless chipmaker reported a 10% increase in revenue to $10.4 billion for its fiscal third quarter ended June 29, 2025.
Driven by strength across Handsets, IoT, and Automotive, QCT revenues jumped 11% YoY to $9 billion, while QCT EBT (earnings before taxes) surged 22% to $2.7 billion. Combined QCT Automotive and IoT revenues, meanwhile, were up 23% YoY to $2.7 billion.
The company’s non-GAAP EPS was up 19% YoY to $2.77.
According to CEO Cristiano Amon:
“Another quarter of strong growth in QCT Automotive and IoT revenues further validates our diversification strategy and confidence in achieving our long-term revenue targets. Our leadership in AI processing, high-performance and low-power computing, and advanced connectivity positions us to become the industry platform of choice as AI gains scale at the edge.”
During the quarter, Qualcomm returned $3.8 billion to stockholders, which included $967 million, or $0.89 per share, of cash dividends and $2.8 billion of share repurchases.
Most recently, Qualcomm launched the Dragonwing Q-6690 for its enterprise customers, less than six months after unveiling the Dragonwing suite of products. The company claims the chipset to be the world’s first mobile processor with built-in ultra-high frequency RFID capabilities.
The company aims to use its industrial and embedded IoT, networking, and cellular infrastructure solutions to simplify complexity, optimize operational efficiency, and empower smarter decision-making.
Amidst this, Saudi Arabia’s AI company Humain has begun building its first data centers in Riyadh and Dammam, partnering with Qualcomm, AMD, Cisco, and Groq. The company plans to build 1.9 GW of data center capacity by the end of this decade.
Conclusion
Animals have long inspired technology, and now bees show us that intelligence isn’t about brain size but about efficiency, adaptability, and the seamless integration of body, brain, and environment. These lessons could help transform AI design.
AI is one of today’s most advanced and fast-moving fields, garnering significant attention, capital, and development. Scaling massive models, however, is expensive, energy-intensive, and unsustainable. Here, bee-inspired research offers an alternative: small, efficient neural networks that can achieve more with less.
By studying the active vision and compact neural strategies of bees, we can build AI and robotics systems that are faster, leaner, and more capable.
Click here to learn if robotic pollinators can play a role in vertical farming.
References:
1. MaBouDi, H., Richter, J., Guiraud, M.-G., Roper, M., Marshall, J.A.R., & Chittka, L. Active vision of bees in a simple pattern discrimination task. eLife, 14, e106332, published 20 February 2025. https://doi.org/10.7554/eLife.106332
2. MaBouDi, H., Roper, M., Guiraud, M.-G., Juusola, M., Chittka, L., & Marshall, J.A.R. A neuromorphic model of active vision shows how spatiotemporal encoding in lobula neurons can aid pattern recognition in bees. eLife, 14, e89929, published 1 July 2025. https://doi.org/10.7554/eLife.89929