Basic safety needs in the Paleolithic era have largely evolved with the onset of the industrial and cognitive revolutions. We interact a little less with raw materials and interface a little more with machines.
Robots don’t have the same hardwired behavioral awareness and control, so safe collaboration with humans requires methodical planning and coordination. You can likely assume your friend can fill your morning coffee cup without spilling it on you, but for a robot, this seemingly simple task requires careful observation and comprehension of human behavior.
Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have recently created a new algorithm to help a robot find efficient motion plans that ensure the physical safety of its human counterpart. In this case, the bot helped put a jacket on a human, a task that could prove to be a powerful tool in expanding assistance for those with disabilities or limited mobility.
“Developing algorithms to prevent physical harm without unnecessarily impacting the task efficiency is a critical challenge,” says MIT PhD student Shen Li, a lead author on a new paper about the research. “By allowing robots to make non-harmful impact with humans, our method can find efficient robot trajectories to dress the human with a safety guarantee.”
Proper human modeling (how the human moves, reacts, and responds) is necessary to enable successful robot motion planning in human-robot interactive tasks. A robot can achieve fluent interaction if the human model is perfect, but in many cases, there’s no flawless blueprint.
A robot shipped to a person at home, for example, would have a very narrow, “default” model of how a human could interact with it during an assisted dressing task. It wouldn’t account for the vast variability in human reactions, which depend on myriad variables such as personality and habits. A screaming toddler would react differently to putting on a coat or shirt than a frail elderly person, or someone with a disability who might fatigue rapidly or have decreased dexterity.
If that robot is tasked with dressing and plans a trajectory based solely on that default model, it could clumsily bump into the human, resulting in an uncomfortable experience or even possible injury. However, if it’s too conservative in ensuring safety, it might pessimistically assume that all nearby space is unsafe and then fail to move at all, something known as the “freezing robot” problem.
To provide a theoretical guarantee of human safety, the team’s algorithm reasons about the uncertainty in the human model. Instead of having a single, default model where the robot only understands one potential reaction, the team gave the machine an understanding of many possible models, to more closely mimic how a human can understand other humans. As the robot gathers more data, it will reduce uncertainty and refine those models.
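As a rough sketch of this idea (illustrative only, not the team’s actual code), a planner might keep a weighted set of candidate human-motion models and sharpen that belief as observations arrive. The model names, the Gaussian-style likelihood, and the noise scale below are all assumptions for the example:

```python
import numpy as np

# Hypothetical illustration: a belief over several candidate human-motion
# models, refined as the robot observes the person's actual arm position.
# Model names, the likelihood form, and the noise scale are assumptions.

def predict_up(pos):    # candidate model 1: the arm drifts upward
    return pos + np.array([0.0, 0.01])

def predict_down(pos):  # candidate model 2: the arm drifts downward
    return pos - np.array([0.0, 0.01])

models = {"up": predict_up, "down": predict_down}
belief = {name: 0.5 for name in models}  # start with uniform uncertainty

def update_belief(prev_pos, observed_pos, noise=0.005):
    """Reweight each model by how well it explains the new observation."""
    for name, model in models.items():
        error = np.linalg.norm(observed_pos - model(prev_pos))
        belief[name] *= np.exp(-(error / noise) ** 2)  # Gaussian-style score
    total = sum(belief.values())
    for name in belief:
        belief[name] /= total  # normalize so the weights stay a distribution
```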
To resolve the freezing robot problem, the team redefined safety for human-aware motion planners as either collision avoidance or safe impact in the event of a collision. Often, especially in robot-assisted activities of daily living, collisions cannot be fully avoided. This redefinition allowed the robot to make non-harmful contact with the human to make progress, so long as the robot’s impact on the human is low. With this two-pronged definition of safety, the robot could safely complete the dressing task in a shorter period of time.
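A minimal sketch of that two-pronged safety check might look like the following; the clearance and impact-force thresholds are invented placeholders, not values from the paper:

```python
# Hypothetical sketch of the two-pronged safety definition described above:
# a robot state counts as safe if it either avoids the human entirely or,
# on contact, keeps the predicted impact below a harm threshold.
# Both thresholds here are illustrative placeholders.

CLEARANCE_M = 0.05   # assumed minimum collision-free distance (meters)
MAX_IMPACT_N = 10.0  # assumed harmless contact-force bound (newtons)

def is_safe(distance_to_human, predicted_impact_force):
    collision_free = distance_to_human > CLEARANCE_M
    safe_impact = predicted_impact_force < MAX_IMPACT_N
    return collision_free or safe_impact
```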
For example, suppose there are two possible models of how a human could react to dressing: “Model One” says the human will move up during dressing, and “Model Two” says the human will move down. With the team’s algorithm, the robot plans its motion to ensure safety under both models rather than selecting just one. Whether the person moves up or down, the trajectory the robot finds will be safe.
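To make the “safe under every model” idea concrete, here is a toy planning loop in the same hedged spirit; `simulate`, `cost`, and the candidate trajectories are hypothetical stand-ins, not the team’s implementation:

```python
# Toy illustration of planning that hedges over all candidate human models:
# a trajectory is kept only if it stays safe no matter which model is true.
# `simulate`, `cost`, and the candidate trajectories are assumed helpers.

def safe_under_all_models(trajectory, models, is_safe, simulate):
    """True if the trajectory passes the safety check for every model."""
    for model in models:
        distance, impact = simulate(trajectory, model)  # hypothetical rollout
        if not is_safe(distance, impact):
            return False
    return True

def plan(candidate_trajectories, models, is_safe, simulate, cost):
    feasible = [t for t in candidate_trajectories
                if safe_under_all_models(t, models, is_safe, simulate)]
    # Among trajectories safe for both "move up" and "move down", pick the
    # most efficient one rather than freezing in place.
    return min(feasible, key=cost) if feasible else None
```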
To paint a more holistic picture of these interactions, future efforts will focus on investigating subjective feelings of safety, in addition to physical safety, during the robot-assisted dressing task.
“This multifaceted approach combines set theory, human-aware safety constraints, human motion prediction, and feedback control for safe human-robot interaction,” says Zackory Erickson, an incoming assistant professor in the Robotics Institute at Carnegie Mellon University (Fall 2021). “This research could potentially be applied to a wide variety of assistive robotics scenarios, towards the ultimate goal of enabling robots to provide safer physical assistance to people with disabilities.”
Story by Rachel Gordon