AI Weekly: Facebook fiasco proves we need a better approach to personal data

Mark Zuckerberg testifies in front of the Senate Judiciary and Commerce Committees on April 10, 2018.

Recent revelations about Cambridge Analytica’s use of Facebook data have a lot of people rightfully concerned about how their personal information is collected and used online. But while Facebook is trying to position the Cambridge Analytica breach (and collection of user data by other third-party apps) as a function of bad actors on its platform, those actors wouldn’t be attracted to Facebook data in the first place were it not for its power.

In other words, the cardinal sin behind the Cambridge Analytica breach isn’t unethical developer behavior. The cardinal sin is how we as a society have allowed the tech industry to collect and handle user data. Facebook controls a significant component of its users’ social lives, both online and off. Google controls what we know and how we get work done.

This control extends not only to the data we see — like photos, videos, articles, and screeds from conspiracy theorist relatives — but also to the data we don’t see. All of these companies can view how we engage with data: what we find worthwhile, who we find interesting, and so on.

To get a touch dramatic, this is deeply concerning from a philosophical standpoint: we don’t have control over information about who we are. Our interactions online are as significant and real as those we have in meatspace, but we genuinely control information only about the latter. It’s something that makes me sincerely worried about the future.

There’s another reason to be worried from a far less intellectual standpoint, however: Our lack of control over this data makes it far harder for us to benefit from it. For example, Siri may never be able to make a decision based on information stored in my Google and Facebook accounts, and there’s nothing I can do about it.

I’d happily give Siri all the data I could about my dining preferences if Apple assured me it would be used only for booking me reservations through my Apple Watch. As it stands, I can’t do that, because that information is trapped inside Google, Foursquare, Yelp, OpenTable, Resy, and yes, Facebook.

Sure, Apple’s assistant integrates with two of the companies on that list to help arrange dining for its users. But that doesn’t provide the sort of deep, personal understanding necessary to turn “Hey Siri, book me dinner for two tonight” into a reservation that perfectly fits my schedule and preferences without further intervention.

Part of this has to do with tech companies wanting to cement their power using the network effects of their data. If I can’t get the information and functionality I want through Siri, I might be willing to switch to the Google Assistant or Alexa. Data control leads to revenue — just look at the quarterly financials of tech companies like Google, Facebook, and Microsoft. But the Cambridge Analytica fiasco also shows how bad actors can abuse data portability through false pretenses.

So what do we do? Data isolation harms the creation of intelligent experiences and increases the power of gigantic companies. Opening up data access provides the potential for abuse. Both are problems worth tackling, in equal measure.

In an ideal world, I’d like to see us shift to a centralized…
