ICT Spring 2019: AI, In-Depth (Part 1)
ICT Spring 2019, held in Luxembourg, discussed the latest Digital, FinTech and Space trends and was attended by over 5,000 professionals. AI was one of the burning topics, featuring expert discussions on "AI in everyday life", "Unleashing Human Potential", "Tech all over: From trends to fashion", and more.
AI in Everyday Life
Jean Rognetta of Forbes France said that the tech world was increasingly split between the USA and China, with over 200 unicorns launched in the USA and 150 in China. (*EITN Note: A unicorn is typically defined as a startup company valued at over USD 1 billion, the name reflecting the statistical rarity of such successful ventures.)
Rognetta claimed that up to 40% of European AI companies do not actually have AI in their software, and speculated that the figure would not be much different in the USA: "AI is now everywhere (except in press and journalism), being most ubiquitous in marketing (23%), followed by customer service and IT, each with 16% market share."
Merging the Physical and the Data Worlds
Bruno Zamborlin of HyperSurfaces envisioned a world of intelligent materials, where every object of any shape or size can become data-enabled via its surface: "glass, wood, plastic, a panel, a steering wheel" can all understand the physical interactions between people and the object.
"Edge AI" is the technology that allows this to work: chips are embedded in objects, and sensor data is processed in real time (under 20 ms latency) to recognize events as they happen.
HyperSurfaces claims to be the first data company for such physical-interaction data, illustrated by a video demo of a "Hyper car door" in which three vibration sensors, costing just a few dollars, detect more than 35 different events as various parts of the door are touched, opened and closed. Zamborlin pointed out that the technology can equally be used for smart homes, smart security, smart shops and more. All of this takes place without WiFi, so the data remains private.
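As a rough illustration of the edge-AI pattern described here (sensor readings classified locally, on the device, in real time), below is a minimal sketch. The sensor features, event labels and signature values are invented for illustration and are not from HyperSurfaces; a real system would learn far richer signatures offline and ship them to the embedded chip.

```python
import math

# Hypothetical event "signatures": mean RMS energy per vibration sensor,
# learned offline and shipped to the device (labels invented for illustration).
EVENT_SIGNATURES = {
    "knock":      (0.9, 0.2, 0.1),
    "door_open":  (0.3, 0.8, 0.6),
    "door_close": (0.5, 0.7, 0.9),
}

def rms(samples):
    """Root-mean-square energy of one sensor's sample window."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def classify(windows):
    """Map one sample window per sensor to the nearest known event
    signature (squared Euclidean distance in feature space)."""
    features = tuple(rms(w) for w in windows)
    return min(
        EVENT_SIGNATURES,
        key=lambda ev: sum((f - s) ** 2
                           for f, s in zip(features, EVENT_SIGNATURES[ev])),
    )

# A vibration burst concentrated on the first sensor resembles a knock.
event = classify([[0.8, -0.9, 1.0], [0.2, -0.1, 0.2], [0.1, 0.0, -0.1]])
```

Because the whole loop (feature extraction plus nearest-signature lookup) is a handful of arithmetic operations, it can plausibly run on a cheap embedded chip well inside the sub-20 ms budget mentioned above.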
Neurosciences x AI = SuperPowers
Professor Diana Derval of DervalResearch related the story of the first autonomous car that confused a garbage bag for a pedestrian, both of which have complex, irregular shapes. She said the car's AI system probably needed a heat-sensing ability in addition to pattern recognition to solve this problem.
"AI (in autonomous cars) makes us think it is bringing us superpowers, but this is quite illusory. Different applications need different approaches, and neuroscience can help define such patterns; the natural world can provide us with other intelligence cues. Perhaps, realistically, we should strive for Enhanced, not Artificial, Intelligence."
Emotion AI – For Better Human-Machine Relationship
Hazumu Yamazaki, CSO of Empath, shared an AI technology that recognizes emotion in voices (primarily joy, anger, calm and sorrow) in real time. Its main uses are currently in robotics and in call centers, where it provides real-time alerts that bring supervisors in to help staff with customers who are starting to get frustrated, before they get angry.
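A call-center alerting rule of the kind described could be sketched as follows. This is a hypothetical illustration only: the per-utterance anger scores, window size and threshold are invented, and Empath's actual API and emotion categories may differ.

```python
from collections import deque

def alert_stream(anger_scores, window=3, threshold=0.6):
    """For each utterance's anger score (0.0 to 1.0), yield True when the
    rolling mean over the last `window` utterances crosses the threshold,
    so a supervisor is alerted before frustration peaks into anger."""
    recent = deque(maxlen=window)
    for score in anger_scores:
        recent.append(score)
        yield sum(recent) / len(recent) > threshold

# A call that escalates: the alert fires on the fifth utterance.
alerts = list(alert_stream([0.1, 0.2, 0.5, 0.8, 0.9]))
```

Averaging over a short window, rather than reacting to a single utterance, is one simple way to trade alert latency against false alarms from momentary spikes.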
He also raised some ethical questions prompted by the more sinister requests for the use of Empath's technology: "Can we use Empath as a lie detector, or to see if our partner is cheating?" He added that AI companies should challenge themselves to imagine the worst kind of dystopia that could come about from negative use of their products.
The four standards to observe:
1. Think about the AI technology and the ethics involved.
2. Open up the discussion to the public.
3. Use speculative design as a framework; add an artist as a team member.
4. Think of a dystopia that you might possibly create.
WiFi Supporting AI Systems
Anita Huang of Perspicace presented a WiFi motion and bio-detector: much like radar, a WiFi signal is disturbed by objects moving through it, so a person walking, jumping, falling or breathing each creates its own pattern of disturbances which can be detected.
Obvious applications of this technology include monitoring elderly people in their homes, where a fall, or rapid or irregular breathing, can be detected and an alarm sent out; this is already widely used in nursing homes. It is also used by emergency services for detecting people in fire or disaster zones and for greatly enhancing evacuation efficiency, as well as in smart hotels for reducing energy use based on guests' activities.
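The core idea, that movement shows up as a disturbance pattern in an otherwise steady signal, can be sketched with a toy anomaly detector. The readings, window size and threshold factor below are invented for illustration and bear no relation to Perspicace's actual processing, which would work on real WiFi channel measurements.

```python
def variance(xs):
    """Population variance of a list of channel readings."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def detect_disturbance(readings, window=5, factor=10.0):
    """Flag an event (e.g. a fall) when any sliding window's variance
    jumps well above a baseline taken from the first, assumed-quiet,
    window. Window size and factor are illustrative only."""
    baseline = variance(readings[:window]) or 1e-9  # avoid divide-by-zero
    return any(
        variance(readings[i:i + window]) > factor * baseline
        for i in range(1, len(readings) - window + 1)
    )
```

Distinguishing a fall from walking or breathing would, in practice, mean matching the shape of the disturbance pattern rather than just its magnitude, but the variance jump captures the basic principle.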
AI Does NOT Exist
Luc Julia, CTO of Samsung Electronics, recounted how AI first emerged as a concept in the summer of 1956 at Dartmouth College in the USA, with the mathematical modelling of a neuron, then a network of them, then a brain.
He said that AI has nothing to do with intelligence, as the early pioneers realized in 1961 when they found they could not teach their networks to understand natural language, despite increasing computer power. "In 1997 Garry Kasparov was beaten by a machine at chess. It was not through intelligence, but through algorithms."
An example of this lack of underlying intelligence is object recognition. To achieve near-perfect recognition of "a cat", a computer needs about 40,000 example pictures, whilst a human brain needs just two images to know whether a picture shows a cat or not.
“AI is about what (already) exists today. There is no creativity, no invention in AI – just rules and data and recognition. AI is about following rules … innovation is about breaking rules.”
AI in Mobility
Marcus Willand of Porsche spoke of the challenges of future mobility, saying that Uber has invested hugely in autonomous driving, as have Google's Waymo and China's Baidu Apollo.
He said that in the future, profits in the mobility market will come not from vehicle sales, but from the vehicles' digital platforms. "When you take a ride in an autonomous car you have time to consume digital services, and the technology driving these platforms is the AI from the autonomous car." This puts a lot of pressure on traditional car manufacturers and demonstrates the importance of controlling the added-value chain for mobility, and the mobility ecosystem (insurance, cities, startups, OEMs).
He also talked about the mobility infrastructure, traffic flow and the use of AI to model how a city lives and breathes, for example an app to show how the traffic will evolve over the next hour. Additionally, AI tools can help to anticipate electricity consumption and even allow electric car drivers the opportunity to sell energy back into the grid.
Recent research conducted in Stuttgart on "Digital ways to access mobility without a car" proposed a bundled package allowing customers to buy virtual transport "tokens", either by subscription or ad hoc, to be used on any form of transport from bus and taxi to car hire, and also to locate and pay for parking spaces in the city.
The Vision of the Future with AI
Best-selling author Calum Chace shared his vision of the future with AI. He said it is massively outdated to quote the "fact" that our mobile phones carry more computing power than NASA; in fact, today a good-quality toaster has more computing power than NASA had available in 1969. He applauded the bravery of the men who were willing to risk their lives supported by something "as intelligent as a toaster".
Chace said that Moore's Law (of exponential growth in computational power) is not yet exhausted. He anticipates that in the very near future we will be able to have an intelligent conversation with Siri, that autonomous cars will become a reality, and that the job market will see many skills become redundant as they are replaced by AI solutions.
"Not having a job does not necessarily undermine a person's feeling of self-worth. Well-off retirees and the landed gentry are two examples of citizens who are extremely happy not to be working; hence we need to make everyone rich to avoid the existential despair of having their jobs taken away by machines."