Meta’s latest plans show it’s still missing the point on privacy
Meta has big plans for the future. And they all involve extracting even more data from consumers.
Last week, the company released a positioning statement emphasizing a bold, tech-driven vision for its metaverse ambitions: smarter personal assistants, a fully immersive AR/VR experience, universal language translation, self-supervised machine learning, human-like AI, and more. All of this comes with the stated goal of providing advertisers with richer consumer behavioral data, along with assurances that Meta will accomplish it all in an ethically sound manner.
However, Meta’s path forward may contain more roadblocks than the company anticipates. Much of what it is planning represents an attempt to upgrade its business model rather than respond to consumer demand, and amounts to a problematic extension of consumer surveillance rather than a true pivot toward “responsible” data collection.
It’s certainly no surprise that Meta feels the need to change tactics. The company’s much-vaunted ad-tech data-tracking revenue model, which thanks to Facebook has brought in billions of dollars in profits and double-digit annual growth, faces an existential threat. Indeed, Facebook will no longer be able to monetize much of its user base as Apple and Google make ad tracking an opt-in feature and place significant limits on the data third-party apps can collect.
This sudden blocking of customer data will make growth – at least the kind Meta investors are used to seeing – much more difficult. Investors already seem to have noticed: Meta’s stock has lost more than a third of its value since January 1, 2022.
Chances are there are even more issues ahead for Meta. Two of the main pillars of the company’s grand vision – smarter personal assistants and a data-rich virtual world – represent an inherently flawed understanding of what consumers really want, as well as of what kind of data Meta will actually be able to collect.
Meta plans to build personal assistant capabilities to rival Apple’s Siri and Amazon’s Alexa. What Meta hopes will differentiate its as-yet-unnamed service is how deeply it will integrate into users’ lives. Meta says the assistant will listen to our conversations and monitor our actions, with the apparent aim of providing a more personalized experience. The more the device learns about us, the more it can personalize its recommendations.
But behind these outward benefits to users lies a less explicit but, for Meta, strategically crucial objective: data. Data that goes beyond simple demographics and into the realm of behavioral monitoring. The technology will collect data about users even when they are not directly interacting with it. Connected devices will also be expected to monitor behavioral biometrics: what users look at, their facial expressions, the gestures they make with their hands, etc.
The problem with all of this, aside from the creepy overtones of Big Brother, is that Meta is creating a solution to a problem no one has. Collecting all this additional data certainly meets the needs of Meta, but not those of the consumer. Users don’t want an all-knowing personal assistant listening to everything they say, watching everything they do.
Meta’s big data plans also represent a fundamental misunderstanding on its part of how people use their digital assistants. Google and Amazon have been monitoring usage and adding features since they launched their devices, and what they’ve found is that people use them for factual information, not personal advice. They use them to avoid having to glance at a clock or open a newspaper. Just because users agree to devices providing them with weather updates or the latest sports scores doesn’t mean they want those devices to monitor their children’s behavior or measure their mood.
In other words, the data Meta plans to collect will be used to create technology that has little to do with what consumers actually want from their personal assistants.
Of course, the reason Facebook created an umbrella company called “Meta” is because of its plans to expand into the metaverse using virtual reality/augmented reality (VR/AR) technology. As with its proposed personal assistant, Meta plans to use VR/AR to go beyond simple demographics and collect behavioral data based on where people go, what they do, what they watch, how long they watch it, etc.
Meta assures customers and investors that it is going to implement AI in a much more “responsible” way. Mark Zuckerberg and his co-presenters at Meta’s February shareholder meeting were keen to remind viewers that privacy will be “built in” to their metaverse. Consumers will understandably be suspicious of such claims, given Meta’s abysmal record of data privacy.
Meta’s assertion of “responsible” AI hinges on some pretty dubious reasoning. The company seems to extrapolate from its Facebook data model. When people use Facebook – posting status photos, commenting on other people’s posts, etc. – it is implicitly understood that Facebook may use this information in any way it deems appropriate.
However, whether this understanding will translate to the virtual world remains an open question. Meta arbitrarily asserts that it will own all information resulting from interpersonal interactions in the VR/AR environment it creates, and that when people choose to enter and interact with that environment, they implicitly allow the company to collect all the information it wants.
This is a complete misinterpretation of what ethical data collection is. Just because people choose to enter these environments does not mean they consent to all of their data being collected and the company using that data. Truly ethical data collection involves transparency and authorization, where users know that someone is collecting information, know how it is used, and authorize that collection and use.
Moreover, legislators are very likely to agree with consumers. Existing laws already strongly protect consumers’ biometric data, and Facebook has certainly broken privacy laws in the past: last year, the company was fined $650 million for violating an Illinois privacy law. Other states will likely soon follow with similar legislation, which could jeopardize Meta’s data plans.
Rob Shavell is co-founder and CEO of the online privacy company DeleteMe. He has been featured as a privacy expert in The Wall Street Journal, The New York Times, The Telegraph, NPR, ABC, NBC and Fox. He is a strong supporter of privacy law reform, including the California Privacy Rights Act (CPRA).