Robot Helps with Tricky Nurse Scheduling
MIT thinks this model will soon change with the development of robots that advise hospital staff on decision-making. A research team has experimented with this possibility in one of the most complex hospital tasks of all: scheduling.
In two new papers, CSAIL researchers show that a robot can learn from human staff and can then assist in assigning a range of tasks.
In one paper, presented at the recent Robotics: Science and Systems (RSS) Conference, the team demonstrated a robot that helped nurses with scheduling tasks in a labour ward. It gave advice on everything from where to place a patient to which nurse should handle a C-section.
In the second paper, the same system was tested in a video game that simulated missile-defence scenarios. In the game, the system occasionally performed better than human experts at cutting both the number of attacks and the cost of decoys.
“The aim of the work was to develop artificial intelligence that can learn from people about how the labour and delivery unit works, so that robots can better anticipate how to be helpful or when to stay out of the way — and maybe even help by collaborating in making challenging decisions,” says MIT professor Julie Shah, the senior author on both papers.
In the course of their research, the study authors found that a small group of workers were highly skilled in scheduling, but that this skill was not easily transferable to other colleagues.
Being able to automate the task of learning from experts, and then to generalise it across industries, could help many businesses run more efficiently, they say.
The hospital study points out that scheduling is especially tough for healthcare facilities. Head nurses in labour wards have the added burden of trying to predict when a woman in labour will arrive, how long the labour will take and whether or not a C-section will be required.
“We thought a complex environment like a labour ward would be a good place to try to automate scheduling and take this significant burden off of workers,” researchers say.
Like many AI systems, the team’s robot was trained via “learning from demonstration.” This involved studying routine actions that human schedulers make and comparing them to all the possible actions that were not made at each of those moments in time. From there, a scheduling policy was developed that could respond dynamically to new situations that had not been encountered before.
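One common way to frame "learning from demonstration" of this kind is as pairwise ranking: each demonstration records the action the human scheduler actually took at a moment, together with the alternative actions that were available but not taken, and a scoring function is trained so the chosen action outscores the rejected ones. The sketch below is a minimal illustration of that idea in this spirit (hypothetical feature values and function names; not the researchers' actual model):

```python
# Hypothetical sketch: learn a linear scoring function from
# (chosen action, rejected alternatives) pairs, then use it to
# recommend the highest-scoring candidate assignment.

def score(weights, features):
    """Linear score of one candidate action (feature vector)."""
    return sum(w * f for w, f in zip(weights, features))

def train(demonstrations, n_features, epochs=50, lr=0.1):
    """Perceptron-style updates: each chosen action should
    outscore every alternative that was not chosen."""
    weights = [0.0] * n_features
    for _ in range(epochs):
        for chosen, alternatives in demonstrations:
            for alt in alternatives:
                # If a rejected action scores at least as high as the
                # chosen one, nudge the weights toward the chosen action.
                if score(weights, alt) >= score(weights, chosen):
                    weights = [w + lr * (c - a)
                               for w, c, a in zip(weights, chosen, alt)]
    return weights

def recommend(weights, candidates):
    """Suggest the candidate action the learned policy ranks highest."""
    return max(candidates, key=lambda f: score(weights, f))

# Toy features per candidate assignment: (nurse availability, skill match)
demos = [
    ((1.0, 0.9), [(1.0, 0.2), (0.0, 0.8)]),
    ((0.9, 0.8), [(0.1, 0.9), (0.9, 0.1)]),
]
w = train(demos, n_features=2)
best = recommend(w, [(0.2, 0.9), (0.95, 0.85), (1.0, 0.1)])
print(best)  # picks the candidate balancing availability and skill
```

Because the policy scores whatever candidates it is handed, it can respond to situations that never appeared in the demonstrations, which is the property the researchers highlight.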
Within this framework, the system can predict assignments and suggest which nursing staff should take on particular tasks.
The system was tested on the labour ward with a Nao robot. Staff accepted the robot’s recommendations 90 percent of the time. Deliberately bad recommendations were rejected by staff at the same rate – 90 percent – indicating that humans do not blindly accept bad advice.
The research team said that the potential for the system included deployment of robots for better collaboration and for training new staff.
Source: MIT News
Image Credit: MIT
Published on: Mon, 18 Jul 2016