Data collection: Too fast, too furious
Our data is out there. Everything we do each day, on social media, within our social networks, and with technology generally, ensures this.
Some even argue that certain communities have become too focused on the privacy of their data when they should be more concerned about its security, i.e. what hackers could do with it.
In any case, we have not done a good job of keeping our data private.
Kavya Pearlman, founder and CEO of the global non-profit XR Safety Initiative (XRSI), believes that more can be done to amplify diverse voices and to take a holistic approach to data privacy and safety, including regulation.
A discussion among interested parties indicates that fines imposed on companies in breach of privacy regulations are mere slaps on the wrist, tiny percentages of their total annual revenue. The hand will still enter the proverbial cookie jar; data grabs for people's information will carry on.
And recent events involving multiple data leaks confirm that this data grab is accelerating. It is like a handbasket descending into hell at the speed of a bullet train.
The problem with data
During the 2016 US presidential election, Cambridge Analytica bragged that it held up to 5,000 data points on each American, combining advertising with psychographic data to read sentiment and influence the election outcome on behalf of a presidential candidate.
But this was not enough; the data grab continues, and new frontiers are being explored all the time. In 2018, a Stanford University report asserted that spending 20 minutes in a VR simulation allows approximately two million unique recordings of body language to be captured.
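To put the Stanford figure in perspective, a quick back-of-the-envelope calculation (illustrative only, not part of the report itself) shows the implied capture rate:

```python
# Rate implied by the figure cited above: ~2 million unique
# body-language recordings over a 20-minute VR session.
recordings = 2_000_000          # unique recordings captured
session_seconds = 20 * 60       # a 20-minute session, in seconds

rate = recordings / session_seconds
print(f"~{rate:.0f} recordings per second")  # ~1667 recordings per second
```

That is on the order of 1,700 data points every second, continuously, for as long as the headset is worn.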
Kavya describes it as an era of constant reality capture, with massive collections of data points such as where our gaze falls, how intensely we look at something, our body's pose, and our body language. This data collection goes far beyond what is considered Personally Identifiable Information (PII).
XRSI coined the term "Biometrically Inferred Data" to address the risks associated with the enormous amounts of metadata collection that can lead to privacy and safety issues stemming from emerging technologies such as XR.
According to the XRSI taxonomy standards, XR is an umbrella term describing the fusion of all the realities, including Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR), which consists of technology-mediated experiences enabled via a wide spectrum of hardware and software, including sensory interfaces, applications, and infrastructures.
Facebook's AR glasses research effort, Project Aria, and the various other organisations jumping into the XR space already collect data at the massive scale Kavya describes, and she rightly cautions us to be wary of it.
"Their whole agenda is to observe and learn how we do things," she said, adding that all the data collected feeds into their contextualised AI, with the aim of creating smarter and more pervasive versions of the virtual assistants we know today.
Part of the reason Kavya advocates for privacy in XR environments today may have to do with her once being closely involved in evaluating similar technologies at Facebook during the 2016 election.
“Right after advising Facebook for third party security risks in 2016, I assumed my next role as the head of security for the oldest existing virtual world, Second Life by Linden Labs.”
Two significant events propelled her towards advocating for privacy with a focus on XR environments and technologies.
"I saw a preview of a lot of the novel stuff that is to come when they launched a VR platform (with VR goggles to view)," she shared.
Another significant event was the introduction of the European Union's General Data Protection Regulation, or GDPR, in 2018. Kavya views the GDPR as lacking in its current form: in her view, it needs to address the types of metadata that impact individual privacy.
“No matter how many regulatory frameworks you bring about, when it comes to massive scale type of data collection, none of this data collection is taking into account what we call biometrically inferred data (BID),” she said.
GDPR sets guidelines for the collection and processing of personal information from individuals who live in the EU. Its definition of personal data, however, does not explicitly address data derived from face, voice, or ear-shape recognition, retinal analysis, iris scanning, keystroke analysis, gait analysis, gaze analysis, and more.
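The distinction the article draws can be sketched in code. This is illustrative only: the field names and categories below are hypothetical examples drawn from the article's discussion, not an official XRSI or GDPR taxonomy.

```python
# Hypothetical field names, for illustration of the PII vs. BID distinction.
EXPLICIT_PII = {"name", "email", "home_address", "ip_address"}
INFERRED_BID = {"gaze_path", "gait_signature", "keystroke_rhythm", "body_pose"}

def classify(field: str) -> str:
    """Return a coarse privacy category for a captured data field."""
    if field in EXPLICIT_PII:
        return "PII (clearly covered by traditional definitions)"
    if field in INFERRED_BID:
        return "BID (often falls outside traditional definitions)"
    return "unclassified"

print(classify("email"))      # falls under PII
print(classify("gaze_path"))  # falls under BID
```

The point is not the code itself but the gap it exposes: a regulation that only enumerates the first set leaves the second, which XR devices capture continuously, largely unaddressed.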
How informed can the decisions of artificial intelligence systems be, if their data is incomplete at best and biased at worst?
According to this website: if you can clean your training dataset of conscious and unconscious assumptions about race, gender, or other ideological concepts, you can build an AI system that makes unbiased data-driven decisions.
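A minimal sketch of that idea, stripping protected attributes from training records before model fitting. The field names are hypothetical, and note the caveat in the comments: removing these columns alone does not guarantee fairness, since other features can act as proxies for them.

```python
# Protected attributes to remove before training (illustrative list).
PROTECTED = {"race", "gender", "religion"}

def scrub(record: dict) -> dict:
    """Return a copy of the record without protected attributes.

    Caveat: this alone does not de-bias a dataset; correlated features
    (e.g. postcode) can still act as proxies for the removed fields.
    """
    return {k: v for k, v in record.items() if k not in PROTECTED}

sample = {"age": 34, "gender": "f", "zip": "94103", "clicks": 17}
print(scrub(sample))  # {'age': 34, 'zip': '94103', 'clicks': 17}
```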
MKAI, a responsible-AI community, together with XRSI, has also begun exploring the intersection of extended reality (XR) and artificial intelligence (AI) to help build safe and inclusive immersive ecosystems through human-centric standards and a framework of transparency, safety, accountability, and evidence of effectiveness.
The idea is not to stop technological progress and innovation. Many inaccurately believe that introducing regulations in XR might stifle innovation. On the guidance front, XRSI, together with various academic institutions, government entities, and communities like MKAI, has begun developing the XRSI Privacy and Safety Framework for the XR and spatial computing domain. This free, ever-evolving, community-driven framework offers hope that we may be able to extend realities with trust after all.
XRSI proposes ethics-based guardrails for a path that technology and companies can navigate more responsibly, instead of just moving fast and breaking things. Kavya Pearlman, a visionary "Cyber Guardian", said, "We need to advocate for our rights and help navigate the uncharted territories of XR."