Programming a robot to carry out a repetitive set of steps is not especially challenging these days. But while these types of robots are quite useful in highly structured environments — like those commonly found in industrial and manufacturing settings — they fail spectacularly when faced with unexpected conditions. Just about everything in the real world, from our homes to our city streets, is filled with unexpected situations, so in order to deal with these environments, more intelligent navigation systems are required.
Many solutions leveraging cutting-edge sensing equipment and deep learning algorithms have been developed in recent years, and some of them work quite well. However, the hardware required to collect environmental data and run the algorithms tends to consume a great deal of energy. That is a big problem for mobile autonomous robots powered by batteries. With the hardware on board, they can navigate successfully, but will drain their batteries before they get very far. Without it, they can travel far, but do not know where they are going. If only there were a more efficient way to navigate…
Of course there is, and it is seen throughout the natural world — the brain. Humans and animals have excellent navigational capabilities, yet the brain consumes very little energy. Inspired by this biological efficiency, researchers at Shanghai Jiao Tong University have developed a new approach to autonomous navigation called the BIG (Brain-Inspired Geometry-awareness) framework. Their work leverages neural principles to drastically improve the way autonomous systems explore and map unknown environments.
The BIG framework utilizes a brain-inspired navigation mechanism called the geometry cell model, which mimics how mammals perceive space. Unlike traditional autonomous navigation systems that rely on exhaustive map building and computationally heavy algorithms, BIG takes a more adaptive and resource-efficient approach. It does so through four key components: geometric information, BIG-Explorer, BIG-Navigator, and BIG-Map.
The geometric information leveraged by the system is a representation of spatial data that helps robots understand and interpret their surroundings. BIG-Explorer is an exploration module that optimizes how robots expand their search areas by focusing on boundary information. The navigation module, called BIG-Navigator, intelligently guides the robot to its destination based on insights gained from exploration. The final component, BIG-Map, is a spatio-temporal experience map that keeps memory and computational costs low.
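The compact experience map idea behind BIG-Map can be illustrated with a minimal sketch. The class below, including its `min_spacing` threshold and node/link structure, is an illustrative assumption rather than the paper's actual implementation: it only creates a new experience node when the robot has moved sufficiently far from every existing node, which is one simple way a spatio-temporal map can keep its node count, and thus its memory footprint, small.

```python
import math

class ExperienceMap:
    """Sparse experience map sketch: nearby poses share one node."""

    def __init__(self, min_spacing=1.0):
        self.min_spacing = min_spacing  # minimum distance between nodes
        self.nodes = []                 # (x, y) poses of experience nodes
        self.links = []                 # (from_idx, to_idx) odometry links

    def observe(self, x, y):
        """Add a node only if no existing node is within min_spacing."""
        for i, (nx, ny) in enumerate(self.nodes):
            if math.hypot(x - nx, y - ny) < self.min_spacing:
                return i  # reuse the nearby node instead of adding a new one
        self.nodes.append((x, y))
        if len(self.nodes) > 1:
            self.links.append((len(self.nodes) - 2, len(self.nodes) - 1))
        return len(self.nodes) - 1

# A dense trajectory of five poses collapses to just three map nodes.
m = ExperienceMap(min_spacing=1.0)
for pose in [(0, 0), (0.2, 0.1), (1.5, 0), (1.6, 0.2), (3.0, 0)]:
    m.observe(*pose)
print(len(m.nodes))  # → 3
```

The design choice here is the trade-off the article describes: coarser spacing means fewer nodes and lower memory cost, at the price of a less detailed map.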
By using real-time boundary perception and an optimized sampling approach, the BIG framework cuts computational demands by at least 20% compared to existing state-of-the-art methods. The system allows robots to cover large areas with fewer nodes and shorter paths, making it ideal for long-range exploration tasks in environments where power and processing resources are limited.
Looking ahead, BIG has the potential to support applications involving autonomous vehicles, search-and-rescue operations, space exploration, and smart city infrastructure. Future robots equipped with BIG-based navigation systems could effectively explore forests, underground tunnels, urban environments, and beyond without the excessive energy consumption that is characteristic of many existing navigation systems.

The brain-inspired mapping strategy of BIG (📷: Z. Sun et al.)
The architecture of the system (📷: Z. Sun et al.)
Some simulated environments used to test the BIG framework (📷: Z. Sun et al.)