Researchers at Anthropic, the AI company founded by former OpenAI employees concerned about the risks of advanced AI, have used the company's large language model to take control of a robot dog. The experiment, dubbed Project Fetch, tested whether Claude, one of Anthropic's most powerful AI models, could automate much of the work of programming a robot and getting it to perform physical tasks.
In the experiment, two groups of researchers were asked to take control of a Unitree Go2 quadruped robot dog and program it to complete specific tasks. One group used Claude's coding model; the other wrote code from scratch, without AI assistance. The team using Claude completed some tasks faster than the human-only programming group.
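For a sense of what both teams were up against, controlling a quadruped like the Go2 typically means connecting to the robot and issuing velocity and posture commands in a loop. The sketch below is purely illustrative: the `FakeGo2` class and its methods are hypothetical stand-ins for a real robot connection, not Unitree's actual SDK, and serve only to show the shape of the task.

```python
# Hypothetical sketch of the kind of control script either team would
# have had to produce: stand the robot up, walk a search pattern, stop.
# `FakeGo2` is an illustrative stub, not Unitree's real API.

import time

class FakeGo2:
    """Stand-in for a real robot connection; prints instead of moving."""
    def stand_up(self):
        print("standing up")
    def move(self, vx, vy, vyaw, duration):
        print(f"moving vx={vx} vy={vy} vyaw={vyaw} for {duration}s")
        time.sleep(0.1)  # stand-in for the time real motion would take
    def stop(self):
        print("stopping")

def search_pattern(robot, legs=4, leg_time=2.0):
    """Walk an expanding square: each leg is longer, then turn ~90 degrees."""
    robot.stand_up()
    for i in range(legs):
        robot.move(vx=0.3, vy=0.0, vyaw=0.0, duration=leg_time * (i + 1))
        robot.move(vx=0.0, vy=0.0, vyaw=1.57, duration=1.0)  # ~90 degree turn
    robot.stop()

if __name__ == "__main__":
    search_pattern(FakeGo2())
```

Even in this toy form, the script hints at why an AI coding assistant speeds things up: most of the work is boilerplate around connecting to hardware and sequencing commands, which a model like Claude can generate quickly.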
Researchers caution, however, that the result underscores the potential for misuse of powerful AI models like Claude. "We have the suspicion that the next step for AI models is to start reaching out into the world and affecting the world more broadly," says Logan Graham, a member of Anthropic's red team.
The study highlights how capable large language models have become at generating code and operating software. As they advance, these systems are likely to extend that reach into the physical world, interacting with objects through the machines they control.
Anthropic's experiment also points to the importance of designing AI-assisted coding interfaces that can mitigate potential risks, though outside experts want more detail about the results. "What I would be most interested to see is a more detailed breakdown of how Claude contributed," says Changliu Liu, a roboticist at Carnegie Mellon University.
Meanwhile, researchers such as George Pappas at the University of Pennsylvania warn against letting AI models interact with robots without proper safeguards in place. His group has developed systems like RoboGuard, which constrain what an AI model can make a robot do by enforcing specific rules on the robot's behavior.
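The general idea behind such guardrails is to vet every command an AI model proposes against hard limits before it reaches the robot. The toy sketch below illustrates that pattern only; it is a simplified assumption of how a rule check might look, not RoboGuard's actual design, and the `MoveCommand` fields, speed cap, and zone names are all made up for illustration.

```python
# Toy illustration of a rule-based guardrail that vets commands an LLM
# proposes for a robot before they execute. Hypothetical sketch of the
# general technique, not RoboGuard's actual implementation.

from dataclasses import dataclass

@dataclass
class MoveCommand:
    vx: float    # forward velocity, m/s
    vyaw: float  # turn rate, rad/s
    zone: str    # named area the command would take the robot into

MAX_SPEED = 0.5                         # hard velocity cap, m/s
FORBIDDEN_ZONES = {"stairs", "crowd"}   # areas the robot must never enter

def vet_command(cmd: MoveCommand) -> MoveCommand:
    """Reject commands into forbidden zones and clamp unsafe velocities."""
    if cmd.zone in FORBIDDEN_ZONES:
        raise PermissionError(f"command into forbidden zone: {cmd.zone}")
    safe_vx = max(-MAX_SPEED, min(MAX_SPEED, cmd.vx))
    return MoveCommand(vx=safe_vx, vyaw=cmd.vyaw, zone=cmd.zone)

# An over-fast sprint proposed by the upstream model gets clamped here,
# regardless of what the model was persuaded to generate.
print(vet_command(MoveCommand(vx=2.0, vyaw=0.0, zone="hallway")))
```

The key design choice is that the rule check sits between the model and the hardware, so it holds even if the model itself is jailbroken or manipulated.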
The increasing capabilities of large language models and their potential applications raise fundamental questions about the relationship between AI, humans, and the physical world.