Teaching computers to understand natural language

Imagine that your company offers a product – for the purposes of this illustration, let's make it a glass-cleaning fluid that comes in a spray bottle. After a successful launch, you ask your customers for feedback and receive thousands of emails in response. Imagine how much time and effort it would take to read and understand all the text in those emails.

Now imagine that you had a computer that could read, understand, analyse and categorise all that text – and that you could talk to it as you would to a person and receive constructive information to improve your product. Wouldn't that be something?

This is what Dr. Alyona Medelyan, co-founder and CEO of Thematic, has been working towards for over a decade. She has developed algorithms that make sense of language data and has been using her expertise to help businesses extract useful knowledge from text. She discussed with EITN how talking to machines is going to be the future of computing, and shared how developments in natural language processing will have an impact on technology.

Alyona Medelyan

Q: What do you see as the first practical applications of talking to computers?

Dr. Medelyan: When it comes to “talking” specifically, what’s practical and possible today is simple dialogues that a person can have with a computer hands-free. For example, looking up directions while driving, or asking Amazon Echo to set a timer.

In my presentation, I focused on understanding text, which is a bit different. A lot of business problems are about understanding large volumes of text. The most practical and useful application here is making sense of customer feedback that companies receive through surveys, social media comments, support tickets, feature requests, forum posts etc.

Q: I believe voice recognition technology plays a big part in talking to computers. Can you comment about how voice recognition technology has evolved since it was first developed? My experience with voice recognition is that its accuracy leaves a lot to be desired.

Dr. Medelyan: I agree that voice recognition, also called speech-to-text, is key when it comes to interacting with computers using our language. Luckily, there have been some major advances in this area over the past few years thanks to Deep Learning. VentureBeat published an article last year reporting Google's error rate as low as 8%. I'm sure it has improved more since then, and I believe Google now offers a speech-to-text API that any software developer can incorporate into their own systems.
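The "8% error rate" figure typically refers to word error rate (WER): the word-level edit distance between the system's transcript and a reference transcript, divided by the reference length. As a rough, self-contained illustration (not Google's implementation), WER can be computed with dynamic programming:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance, over words instead of characters.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One mis-recognised word out of six gives a WER of about 0.17.
print(word_error_rate("set a timer for ten minutes",
                      "set a time for ten minutes"))
```

An 8% error rate means roughly one word in twelve is inserted, deleted or substituted relative to what was actually said.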

Q: What other developments in the technology space do you think are allowing this to become a reality?

Dr. Medelyan: We are still quite far away from being able to talk to computers on any topic, but we are getting pretty good at specific tasks such as figuring out what a document or a piece of text is about. In this specific area, the latest algorithms can be as accurate as people and are used reliably across many applications such as customer insight, chatbots, opinion mining, news analysis and metadata extraction to improve enterprise search.
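To make the "figuring out what a piece of text is about" task concrete, here is a deliberately minimal sketch of theme tagging for customer feedback. The theme lexicon below is hypothetical and hand-written; production systems such as the ones Dr. Medelyan describes learn themes from the data itself:

```python
# Hypothetical theme lexicon for the glass-cleaner example.
# Real systems discover these keyword groups automatically from the feedback.
THEMES = {
    "smell":   {"smell", "scent", "odour", "fragrance"},
    "streaks": {"streak", "streaks", "smear", "residue"},
    "bottle":  {"bottle", "spray", "nozzle", "trigger"},
}

def tag_feedback(text: str) -> list[str]:
    """Return the themes whose keywords appear in a piece of feedback."""
    words = set(text.lower().replace(".", " ").replace(",", " ").split())
    return sorted(theme for theme, keys in THEMES.items() if words & keys)

print(tag_feedback("Love the scent, but the spray nozzle drips."))
# ['bottle', 'smell']
```

Even this toy version shows the payoff: thousands of emails can be grouped by topic in seconds, after which a human only needs to read representative examples per theme.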

Q: Apart from efficiency, what do you see as the key advantages of being able to talk to computers?

Dr. Medelyan: The key advantage is not efficiency, but a better experience of interacting with computers. People love talking and it’s the most natural thing for us to do. Interaction with computers is now limited to clicking around, scrolling and pinching. It’s not natural and it’s a means to an end.

As Natural Language Understanding matures, we will be able to perform many different actions through voice rather than having to type or use apps. Efficiency, in my opinion, is secondary.

Q: How would computers be able to deal with accents, colloquialisms and mixed languages, e.g. Malaysians who might construct a sentence using English, Malay, Cantonese and Tamil?

Dr. Medelyan: Interesting question. I believe that data-driven approaches can be very effective here. So long as there is a large body of Malaysian text, a computer can make inferences about what different words mean regardless of their origin and language. It's actually similar to how algorithms deal with synonyms.
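The data-driven idea Dr. Medelyan mentions rests on the distributional principle: words that occur in similar contexts tend to mean similar things, whatever language they come from. A minimal sketch, using a tiny invented English corpus (real systems use billions of words and learned embeddings rather than raw counts):

```python
from collections import defaultdict
from math import sqrt

# Toy corpus. In a mixed-language Malaysian corpus, the same mechanism would
# pull together an English word and its Malay counterpart if they share contexts.
corpus = [
    "the glass is clean after spraying",
    "the glass is spotless after spraying",
    "the bottle leaks sometimes",
]

# Describe each word by the words it co-occurs with in a sentence.
vectors = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j, ctx in enumerate(words):
            if i != j:
                vectors[w][ctx] += 1

def cosine(a: str, b: str) -> float:
    """Cosine similarity between two words' co-occurrence vectors."""
    dot = sum(vectors[a][k] * vectors[b][k] for k in vectors[a])
    na = sqrt(sum(v * v for v in vectors[a].values()))
    nb = sqrt(sum(v * v for v in vectors[b].values()))
    return dot / (na * nb)

# "clean" and "spotless" share contexts, so they end up far more similar
# to each other than either is to "leaks" -- no dictionary required.
print(cosine("clean", "spotless") > cosine("clean", "leaks"))
```

This is also why the approach handles synonyms, slang and code-switching with the same machinery: meaning is inferred from usage, not from which language a word belongs to.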

Q: The way we write and spell has changed with the popularity of messaging apps. How do you think language will change as a result of talking to computers?

Dr. Medelyan: Hopefully, people will get better at enunciation!

Q: How would you respond to people who might despair about how the English language might devolve by talking to computers?

Dr. Medelyan: I don’t think that the English language will devolve. What happened with messaging, at least in English, is that for a while people started to use abbreviations, such as “2moro” instead of “tomorrow”. In language, people like to use shortcuts. Typing this way is faster and brings down the cost of a text message, and I even saw some people using this kind of language on paper. But what happened next is that predictive typing got significantly better, and now the shortcut when typing is simply to accept the suggested word. As a result, “text speak”, as they call it in New Zealand, is disappearing. What does devolve is people’s ability to spell. We are so used to spelling correction and predictive typing that we forget how to spell even the simplest words.

I believe that language itself never devolves. It’s an ever-changing medium and we need to get used to that.
