MOTOROLA SOLUTIONS: Responding to city incidents 50% faster with artificial intelligence

In the world of emergency response services, time is of the essence.

Add to that the demanding, high-stress nature of emergency events, and the personnel who handle the whole lifecycle of emergency response for public safety could use every bit of help they can get.

Solutions providers like Motorola Solutions believe that technologies like video and artificial intelligence have crucial roles to play.

Dr. Mahesh Saptharishi, who joined Motorola Solutions as Senior Vice President and Chief Technology Officer through the company’s acquisition of Avigilon, shared his company’s vision of the integrated command centre workflow.

It typically starts with an emergency call from the public that is received by a call taker. From there it goes to a computer-aided dispatch (CAD) system before being routed to a frontline responder.

A variety of products correspond to each public safety role, and one gets the sense that the products that make it all happen have to be integrated to allow a seamless flow.

Above all, the handover of information from one role to the next has to be seamless, timely and informed so that first responders on the scene can take the appropriate action.

Dr. Mahesh believes, however, that what we think of as ‘integrated’ today may actually not be integrated enough.

TIMELY KNOWLEDGE FOR TIMELY ACTION WITH ARTIFICIAL INTELLIGENCE

“It isn’t about a bunch of standalone components that are integrated at a very basic level,” he cautioned.

He said data-based technologies currently interact in simple ways but this needs to improve to ensure that the solutions used in emergency events are more effective.

This is where artificial intelligence (AI) can help.

“If I get a report about an incident from my dispatch software, I also need some further context, for example information telling me how significant the accident is, whether anyone is injured and so on,” Dr. Mahesh theorised.

He describes this change as moving beyond data-based queries and into knowledge-based interactions that leverage additional context about what is happening during an event.

DATA-BASED QUERIES VS KNOWLEDGE-BASED INTERACTIONS

Here is an example of how context-gathering can be used to save time within the whole public safety workflow.

Firstly, the emergency call taker may be a different individual from the one handling dispatch duties, and that call taker may have sensed ambient noises in the background or other pertinent information – for example, lots of people yelling.

The information exchange between the call taker and the dispatcher is often limited due to time pressures. There is only a small window of time to convey the information needed to kick-start the emergency response.

Dr. Mahesh said, “In an ideal environment where technology is truly integrated, we should be able to review the audio of the call automatically and evaluate the tone of the caller to understand events happening in the background. With AI, these important details could be extracted automatically.

“It could be something that the person who took the call heard but didn’t have time to transcribe.”
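The kind of automatic context extraction Dr. Mahesh describes can be illustrated with a toy sketch. The cue words, notes, and function below are entirely hypothetical assumptions for illustration; a real system would use trained audio and language models rather than keyword matching.

```python
# Illustrative sketch ONLY: a toy context extractor that flags background
# cues in a call transcript. Cues and notes are made-up examples; a
# production system would use trained audio/NLP models instead.

BACKGROUND_CUES = {
    "yelling": "possible crowd distress",
    "screaming": "possible injury or panic",
    "sirens": "responders may already be on scene",
    "crash": "possible vehicle collision",
}

def extract_context(transcript: str) -> list[str]:
    """Return human-readable context notes found in the transcript."""
    text = transcript.lower()
    return [note for cue, note in BACKGROUND_CUES.items() if cue in text]

notes = extract_context("Caller reports a crash, people yelling in the background")
print(notes)  # ['possible crowd distress', 'possible vehicle collision']
```

The point of the sketch is the output shape: short, human-readable notes a dispatcher can scan in seconds, capturing details the call taker heard but had no time to transcribe.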

TOO SMART TO BE MANIPULATED

The automatic nature of the whole workflow throws into stark relief the areas that could be susceptible to manipulation.

For example, audio that is auto-extracted during the call could be faked to hide the real situation at the scene of the event.

Dr. Mahesh said, “The results of using voice sentiment tech in healthcare and other industries are promising. It can help to judge if a person is being manipulative, honest or feeling stressed. These important details can be extracted using AI, simply by studying the person’s voice.

“This reduces the time taken for the dispatcher to take action.”

AT WHAT POINT DOES HUMAN HAND-HOLDING OF AI STOP?

As with any machine system that learns, there is a feedback loop between human operators and the robot, or in this case, the AI-based system.

Some of the feedback is explicit, but there is also confirmation implied by the operator’s actions.

For example, if the AI suggests dispatching a fire engine to the site of a burning car and the operator confirms this is the right move, the AI is not only correct, it’s learning.

Over time, simply by observing how the operator acts during the whole workflow of an event and the explicit commands and actions used, the AI learns.
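The feedback loop described above can be sketched very simply: each time the AI suggests an action and the operator confirms or overrides it, the system updates a per-event-type agreement rate. The class and method names below are hypothetical, and a real system would learn far richer signals than a single ratio.

```python
from collections import defaultdict

# Illustrative sketch ONLY: tracking operator feedback on AI suggestions.
# The agreement rate per event type becomes a crude measure of trust.

class FeedbackTracker:
    def __init__(self):
        self.confirmed = defaultdict(int)
        self.total = defaultdict(int)

    def record(self, event_type: str, operator_confirmed: bool) -> None:
        """Log one suggestion and whether the operator went along with it."""
        self.total[event_type] += 1
        if operator_confirmed:
            self.confirmed[event_type] += 1

    def agreement_rate(self, event_type: str) -> float:
        """Fraction of suggestions for this event type the operator confirmed."""
        if self.total[event_type] == 0:
            return 0.0
        return self.confirmed[event_type] / self.total[event_type]

tracker = FeedbackTracker()
tracker.record("vehicle_fire", True)   # operator dispatched the fire engine as suggested
tracker.record("vehicle_fire", True)
tracker.record("vehicle_fire", False)  # operator chose a different action
print(tracker.agreement_rate("vehicle_fire"))  # 2 of 3 confirmed
```

Both explicit commands and implicit confirmations would feed the same record: silence followed by the suggested dispatch counts as agreement just as much as a clicked approval.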

MUSCLE MEMORY

All of this is building up towards something Dr. Mahesh terms ‘muscle memory’, whereby the AI works in ways that seem almost instinctive.

Before this happens, there would need to be some analysis to determine whether the operator’s judgement is similar to the judgement of the AI working behind the scenes.

If the AI gets it right, it is rewarded by being allowed to suggest further actions. “In other words, there is now some level of trust that the AI is doing something that the operator mostly agrees with.”

It may require some tweaking of machine learning algorithms before AI systems are actually able to anticipate what operators would ordinarily do when a type of event is reported.

Eventually the machine would proactively suggest the best action to take.

“That’s what I call muscle memory.

“There are certain things humans do without thinking about them. We want the machine or AI to also be able to react to the human without the human asking it.”

In the scenario of a burning car, the system would not actually dispatch a fire truck, but only suggest it and present evidence to the operator who makes the ultimate judgement call.
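That human-in-the-loop boundary can be sketched as a trust gate: a proactive suggestion is surfaced only once the agreement rate for that event type passes a threshold, and even then the system only presents the suggestion with its evidence. The threshold value and all names here are illustrative assumptions, not Motorola Solutions’ design.

```python
# Illustrative sketch ONLY: the system suggests, it never dispatches.
# The trust threshold is an arbitrary value chosen for this example.

TRUST_THRESHOLD = 0.8

def maybe_suggest(event_type, agreement_rate, suggested_action, evidence):
    """Surface a proactive suggestion only when trust is high enough."""
    if agreement_rate < TRUST_THRESHOLD:
        return None  # not enough trust yet: stay silent and keep learning
    # Present the suggestion and its supporting evidence; do NOT act on it.
    return {
        "event": event_type,
        "suggestion": suggested_action,
        "evidence": evidence,
        "requires_operator_approval": True,
    }

s = maybe_suggest("vehicle_fire", 0.92, "dispatch fire engine",
                  ["camera: visible flames", "caller audio: 'car on fire'"])
print(s["suggestion"], s["requires_operator_approval"])
```

Keeping `requires_operator_approval` unconditionally true encodes the article’s point: the operator makes the ultimate judgement call, regardless of how confident the AI is.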

RESPONDING TO A CITY

According to Dr. Mahesh, it is very early days for AI in public safety. The first step involves integrating different solutions and making them work cohesively together. That all starts with having integrated technologies within command centres.

“Our goal is to incrementally roll out these capabilities within the next couple of years,” he said, adding that Motorola Solutions has a human factors research team that works closely with public safety operators, observing how they use their technologies.

“One of the things our team does actively is measure the average response time for common types of calls that come into a dispatch centre. Via automation and improving the user experience for operators, we can substantially reduce the time to respond to common emergencies.

“Our hope is to cut response times in half. That’s the outcome we are looking for,” Dr. Mahesh emphasised.

When they succeed, the same operator would be able to process twice as many events as they do today.
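The response-time measurement Dr. Mahesh describes reduces to simple log analysis. The records below are fabricated for illustration, and the halved target simply restates the article’s 50% goal as arithmetic.

```python
from statistics import mean

# Illustrative sketch ONLY: average response time per call type from
# (call_type, seconds_to_dispatch) log records. The data is made up.

calls = [
    ("vehicle_fire", 120), ("vehicle_fire", 100),
    ("medical", 90), ("medical", 70),
]

def avg_response(records, call_type):
    """Mean seconds-to-dispatch for one call type."""
    return mean(t for kind, t in records if kind == call_type)

baseline = avg_response(calls, "vehicle_fire")
target = baseline / 2  # the "cut response times in half" goal
print(baseline, target)
```

With measurements like this per call type, a dispatch centre can verify whether automation is actually delivering the promised halving rather than taking it on faith.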