
An artificial intelligence learns to perform surgical operations on its own, guided by human language

Artificial intelligence has taken another step forward. A robot unveiled by researchers at Johns Hopkins University is now capable of carrying out surgical operations guided by spoken human commands.
An autonomous robot for surgical operations
A major advance in robotic surgery has just been unveiled by researchers at Johns Hopkins University. Thanks to an artificial intelligence called SRT-H, a robot managed to perform a gallbladder removal on its own. For now, the test has not been carried out on a real human but on a realistic synthetic patient. The robot was guided in real time by spoken human commands, a bit like a junior resident guided by an attending surgeon.
Behind this feat is Axel Krieger, a medical roboticist at Johns Hopkins. For him, the innovation marks a turning point: "This advancement moves us from robots that can execute specific surgical tasks to robots that truly understand surgical procedures," he told the university's Hub, following the publication in Science Robotics.
The robot relies on a machine-learning architecture similar to the one powering ChatGPT. Called SRT-H (Surgical Robot Transformer-Hierarchy), it can adapt to each patient's anatomy, correct its gestures mid-procedure, and respond to doctors' voices. For example, if a surgeon says "grab the gallbladder head", the robot obeys. And if it hears "move the left arm a bit to the left", it adjusts its movement.
The system learned by watching videos of Johns Hopkins surgeons operating on pig cadavers. The footage was enriched with explanatory captions.
100% robotic surgical operations?
In 2022, SRT-H had already performed a first operation on a pig, but the tissues had to be marked in advance to guide the robot, and the environment was tightly controlled. Now, after this learning by imitation, the robot carried out the procedure with 100% accuracy, even under unforeseen conditions such as a change in the starting position or a visual alteration of the tissues.
Although the intervention took longer than it would with a human, the results were comparable to those of an expert surgeon. The researchers now want to extend SRT-H's capabilities to other types of procedures and, eventually, to real patients. "Our work shows that AI models can be made reliable enough for surgical autonomy, a goal that once seemed distant but is now demonstrably viable," says Ji Woong "Brian" Kim, a former postdoctoral researcher at Johns Hopkins who is now at Stanford University.




