
Imagine a robotic dog that can see its surroundings, remember where it has been, understand spoken instructions, and make smart decisions in real time.
That is exactly what a team of engineering students at Texas A&M University has built—and their invention could change how search-and-rescue missions are carried out in dangerous environments.
Developed by graduate students Sandun Vitharana and Sanjaya Mallikarachchi, the AI-powered robotic dog is designed to work in chaotic places such as disaster zones, collapsed buildings, or remote areas with no GPS.
Unlike traditional robots that simply follow pre-programmed paths, this robot can observe its environment, store visual memories, and use those memories to navigate more intelligently the next time.
At the heart of the system is an advanced artificial intelligence model that combines vision, memory, and language.
Using a camera, the robot captures images of its surroundings and processes them with a multimodal large language model, a type of AI that can reason across images, text, and instructions.
This allows the robot to understand what it sees, plan where to go, and respond to voice commands in a natural way.
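The perceive-reason-act loop described above can be sketched in a few lines. Everything here is an illustrative assumption, not the team's actual code: `query_multimodal_llm` stands in for whatever vision-language model the robot really queries, and the action vocabulary is invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    image: bytes          # raw camera frame from the robot's camera
    instruction: str      # spoken command, already transcribed to text


def query_multimodal_llm(image: bytes, prompt: str) -> str:
    """Placeholder for a real vision-language model call.

    A production system would send the frame and prompt to a multimodal
    LLM and parse its reply; here we return a canned answer so the loop
    is runnable end to end.
    """
    return "move_forward"


def decide_next_action(obs: Observation) -> str:
    # Combine the camera frame and the spoken instruction into one query,
    # and ask the model to reason across both.
    prompt = (
        "You are guiding a quadruped robot. "
        f"Instruction: {obs.instruction}. "
        "Reply with one action: move_forward, turn_left, turn_right, stop."
    )
    return query_multimodal_llm(obs.image, prompt)


action = decide_next_action(Observation(image=b"\x00", instruction="go to the door"))
print(action)  # with the stub model above, always "move_forward"
```

In a real deployment the stub would be replaced by an on-board model, and the reply would be validated before it ever reaches the robot's motors.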
When moving through an unfamiliar space, the robotic dog behaves much like a human first responder.
It reacts quickly to avoid obstacles while also thinking ahead, deciding which route is safest or most efficient.
As it explores, it builds a visual memory of the environment. If it needs to return to a location or repeat a task, it can recall earlier paths instead of starting from scratch. This ability to remember and reuse routes is especially valuable in emergency situations, where time and efficiency can mean the difference between life and death.
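The "remember and reuse routes" idea can be illustrated with a minimal memory store that maps a goal to the path that reached it last time. The data layout is an assumption made for this sketch; the real system presumably stores far richer visual landmarks than plain waypoint names.

```python
class RouteMemory:
    """Toy visual-memory store: goal name -> list of waypoints."""

    def __init__(self) -> None:
        self._paths: dict[str, list[str]] = {}

    def record(self, goal: str, path: list[str]) -> None:
        """Store the sequence of waypoints that reached `goal`."""
        self._paths[goal] = list(path)

    def recall(self, goal: str):
        """Return a previously stored path, or None if the goal is new
        and the robot must explore from scratch."""
        return self._paths.get(goal)


memory = RouteMemory()
memory.record("stairwell", ["hall_A", "junction_2", "stairwell"])

# Later, instead of re-exploring, the robot recalls the earlier route:
print(memory.recall("stairwell"))  # ['hall_A', 'junction_2', 'stairwell']
print(memory.recall("basement"))   # None -> no memory, explore instead
```

The payoff is exactly the one the article describes: a `recall` hit lets the robot skip exploration entirely, which is where the time savings in an emergency come from.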
The project began as an experiment to see whether large AI models could be used directly inside robotic systems rather than relying on remote computing.
With support from the National Science Foundation, the team combined voice interaction, visual understanding, and intelligent navigation into a single working system.
Their approach is unusual because it uses a carefully designed decision framework that allows the AI to guide both long-term planning and moment-by-moment movement.
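One common way to structure such a framework, sketched here purely as an assumption about the general pattern rather than the team's design, is a slow deliberative layer that proposes a move and a fast reactive layer that can veto it when an obstacle makes it unsafe.

```python
def planner_action(goal_direction: str) -> str:
    """Slow, 'thoughtful' layer: in a full system this would consult the
    language model and the route memory to pick a direction."""
    return goal_direction


def reactive_override(planned: str, obstacle_ahead: bool) -> str:
    """Fast layer: runs every control tick and vetoes unsafe moves.
    The simple override rule below is illustrative only."""
    if obstacle_ahead and planned == "move_forward":
        return "turn_left"  # arbitrary evasive choice for the sketch
    return planned


# Obstacle in the way: the reactive layer overrides the plan.
print(reactive_override(planner_action("move_forward"), obstacle_ahead=True))   # turn_left
# Clear path: the planner's choice passes through unchanged.
print(reactive_override(planner_action("move_forward"), obstacle_ahead=False))  # move_forward
```

The design point is separation of timescales: the planner can afford to think for hundreds of milliseconds, while the override check stays cheap enough to run at the control loop's full rate.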
According to Mallikarachchi, this blend of quick reactions and thoughtful decision-making is likely to become standard in future human-like robots. The goal is not just to build machines that move, but machines that understand context and adapt to changing conditions.
While search-and-rescue missions are the most obvious application, the technology could be useful in many other settings.
Hospitals, warehouses, and large public facilities could use such robots to move efficiently through complex spaces. The system could also assist people with visual impairments, explore dangerous areas like minefields, or conduct inspections in hazardous locations.
Dr. Isuru Godage, who advised the project, believes the key breakthrough is placing powerful AI directly on the robot.
By doing so, the robotic dog gains immediate awareness of its surroundings and can interact with humans more naturally. The long-term vision, he says, is to create robots that are not just tools, but reliable and empathetic partners in high-risk environments.
The team recently demonstrated the robotic dog at an international robotics conference, marking an important step toward smarter, more capable machines that can truly work alongside humans when it matters most.