Sharing more with Facebook to improve its value

Kevin O’Keefe’s article titled “Facebook eliminating the junk in your News Feed”, about Facebook “click bait”, makes an interesting point about using Facebook more to improve its value to you as a user:

All too often, lawyers and other professionals I speak with complain about all the junk they see on Facebook. Part of the reason is that they don’t use it enough to help Facebook know what they like. At the same time, Facebook acknowledges they have a problem with “click bait.”

What interests me about this point is that we often think that sharing more with Facebook equates to even more junk in our News Feed because the more you share on Facebook, the more signals you send to the social network and these signals inform the ads and suggestions you receive (probably the same with Google).

Instead, what O’Keefe seems to be saying is that using Facebook more helps Facebook’s algorithms refine your experience with more relevant ads and suggestions:

Just as Google wants you to receive what you are looking for on a search or a news program wants to get you the most important news, Facebook wants you to receive what you consider the most important information and news.

Perhaps more importantly, it seems that using Facebook more actively also helps Facebook determine what to show you more of in your News Feed. This is helpful given that you don’t actually see everything your Facebook friends share in your general News Feed, only what Facebook’s algorithms think you want to see more of.

From a privacy perspective, this approach suggests that you should share more of your personal information for an improved and more relevant Facebook experience, not less. It isn’t an approach designed to restrict the use of your personal information as a strategy to better protect your privacy but rather intended to use more of your personal information in a way that adds more value to you, as well as Facebook.

It reminds me of Jeff Jarvis’ point a while ago about how brands that know more about you can present a more relevant experience of their services to you. Which would you prefer?

Shifting Facebook privacy challenges

When you think about causes for concern when it comes to privacy online, Facebook frequently comes to mind. The world’s largest online social network has roughly 1.32 billion monthly active users with an average of 829 million active daily users in June 2014. It’s no wonder that privacy regulators are watching Facebook and other large providers carefully.

As David Meyer pointed out in his GigaOm article titled “Facebook has only ‘pivoted’ on one kind of privacy — in other ways, it’s becoming more dangerous”, Facebook has changed, but not necessarily for the better:

I must give credit where it is due: As Slate’s Will Oremus wrote in a piece called “Facebook’s Privacy Pivot” a few days ago, the social network has greatly improved its handling of user privacy in recent months. In a sense.

Once a company that seemed to delight in undermining its users’ choice of privacy settings, these days the social network promotes “friends” rather than “public” as its default post setting, it has an “anonymous” version of its site login tool that limits what personal information logged-into services can see, and it’s just generally less… shifty. Hooray for that.

However, there’s privacy and there’s privacy – and the kind that Facebook has decided to no longer play games with is just one facet, albeit an important one. Broadly speaking, it’s the kind that relates to providing a reliable border between private and public spaces. As for privacy from Facebook itself, its advertising customers and surveillance-happy authorities, that’s an entirely different matter.

The challenge facing Facebook users now is that, although Facebook has found a way to better respect users’ sharing preferences, it has found a new revenue option that is based on leveraging the data it holds from its users. As I pointed out in my post titled “You No Longer Control Your Personal Information, Facebook Does” –

The possible ramifications of this are only starting to become clear. For one thing, personal information is already a valuable commodity, it may even become a sort of currency given its enormous value. Facebook is clearly positioning itself well for this new personal data economy. Secondly, as the world heads closer to a sort of Scrutinised Zone, Facebook’s role could include being a powerful non-governmental power bloc with a “citizenry” rivalling the world’s larger nations in terms of population and economic and social influence.

For now, users should consider the possible ramifications for their privacy. Facebook now has an effective mechanism which it will use to trade access to users’ personal information. Before Anonymous Login, this was more covert and involved matching ads to Facebook updates and profile data programmatically. Anonymous Login goes beyond ads. It is a far more overt sales channel for users’ personal information with control shifting to Facebook from the people that control should belong to – the users, us.

Facebook seems to have realised that forcing users to share more than they would prefer is not good for business. It has also realised that it no longer needs to: it has a tremendous amount of personal information it can profitably exploit in other ways. In some respects, users’ privacy has been eroded even further, and users may not appreciate these shifting Facebook privacy challenges for quite some time. Whether this will come back to bite Facebook will depend on how transparent it is about what it does with users’ personal information.

Reasonably practicable compliance with POPI is not enough

When considering how much you should do to comply with legislation like the Protection of Personal Information Act, you have three choices:

  1. Do as little as possible and see what you can get away with;
  2. Calculate the degree of “reasonably practicable” compliance required and stick with that;
  3. Adopt a more holistic approach to compliance.

Of the three options, the first is clearly a recipe for disaster. The only questions are when disaster will strike and how devastating it will be.

The second option is a popular one. On the face of it, it is a practical solution: you take into account what the law requires of you to meet its standard, and you limit a potentially significant investment in a compliance program that offers no corresponding quantitative benefit. Makes sense, right? In a way, yes, but what it doesn’t take into account is that your primary compliance risk is increasingly not regulators (at least not in South Africa, where regulators often lack the capacity to respond quickly), but rather the people who are directly affected by your decisions.

In other words, complying with laws like the Consumer Protection Act and the Protection of Personal Information Act is not a quantitative exercise where you empirically (or as close to empirically as a legal compliance assessment can be) calculate your desired degree of compliance and work to that standard. Instead, compliance is qualitative.

John Giles published a terrific post on the Michalsons blog titled “Only do what is reasonably practicable to comply with POPI” in which he explains POPI’s baseline compliance standard, which is based on reasonableness, and how this translates into a workable quantitative approach to compliance. It is worth saving the article because it is a handy reference for when you need to understand what the law means by “reasonably practicable”.

I don’t believe that this is enough, though. If anything, the question of what is reasonably practicable should only be part of your assessment of what you should do. The next, and arguably more important, question should be: “What should we do to ensure not only compliance with the law but also to earn our customers’ trust?” No, I’m not suggesting you drink the “rainbows and unicorns” energy drink and spend real money complying with some nebulous standard because your customers will like you more. Well, not entirely. What I am suggesting is that there is another dimension to complying with legislation that affects people in very personal ways.

When you look at recent privacy controversies involving services like Facebook, Google and SnapChat, one theme that emerges is not that these companies handled users’ personal information in ways they concealed from users. Their privacy policies describe, in varying degrees of detail, what they do with users’ personal information. What really upsets users is that they weren’t expecting these companies to do the things they did. Users develop a set of expectations of their providers which is typically not informed by privacy policies (because few people read them). These expectations are informed by what the companies tell them in marketing campaigns, what other users and the media tell them, what their friends share with them and their experiences with the services themselves.

When a provider steps outside its users’ collective expectations, mobs form and there is chaos in the metaphorical streets. The fact that these companies stuck to their published privacy policies and terms and conditions is largely irrelevant because users are not wholly rational and analytical. They don’t go back to the legal documents, read them quietly and return to their daily lives when they realise that they misread or misunderstood the terms and conditions. No, they are outraged because the companies violated the trust users placed in them based on users’ expectations.

You may not have the same number of customers as Facebook, Google or SnapChat and your business may be different but if you are considering Protection of Personal Information Act or Consumer Protection Act compliance, you are dealing with the same people: consumers who have expectations and perceptions which you influence but certainly don’t control. If you violate the trust they place in you, the response will be swift and the consequences from a reputational perspective could be severe.


When you develop your compliance program, assess what is reasonably practicable and set that as your commercial baseline. Then consider how transparent you can be with your customers about what you intend doing with their personal information.

I remember reading a discussion about partners cheating on each other, and at one point the writer said that cheating isn’t just about the act but also the thoughts that precede it. If you have thoughts about another person which you don’t want to share with your partner, that is probably a good indication you are contemplating something you shouldn’t be doing. Apply that to your compliance program and ask yourself whether you are comfortable disclosing to your customers what you intend doing with their personal information. If you are, be transparent about it in your privacy statement/policy and in your communications with your customers.

If you don’t feel comfortable being transparent about how you intend using your customers’ personal information and, instead, intend hiding behind technical legal compliance with the law to justify your data use, you may be setting yourself up for a bitter divorce and a costly battle with your customers. By the time the regulators arrive to assess your compliance, the damage will already have been done and the reasonably practicable thing to do will be to pick up the pieces of your reputation (and possibly your business) and start earning your customers’ trust again.

SnapChat privacy is not what you think

SnapChat’s privacy controls are what made it both enormously popular and troubling to its young users’ parents. When SnapChat launched, it gave users the ability to share photos and videos which promptly vanished into the ether. This appealed to its typically young and privacy-conscious users because they finally had a way to share stuff with each other with impunity. This obviously bothered parents and teachers, as it potentially gave their children a way to share content they shouldn’t share.

A Federal Trade Commission investigation has led to acknowledgements that content posted on SnapChat isn’t nearly as temporary as everyone may have thought. The New York Times published an article titled “Off the Record in a Chat App? Don’t Be Sure” which began with the following:

What happens on the Internet stays on the Internet.

That truth was laid bare on Thursday, when Snapchat, the popular mobile messaging service, agreed to settle charges by the Federal Trade Commission that messages sent through the company’s app did not disappear as easily as promised.

Snapchat has built its service on a pitch that has always seemed almost too good to be true: that people can send any photo or video to friends and have it vanish without a trace. That promise has appealed to millions of people, particularly younger Internet users seeking refuge from nosy parents, school administrators and potential employers.

Oversight or lie?

The FTC’s release includes the following background to its investigation and its stance:

Snapchat, the developer of a popular mobile messaging app, has agreed to settle Federal Trade Commission charges that it deceived consumers with promises about the disappearing nature of messages sent through the service. The FTC case also alleged that the company deceived consumers over the amount of personal data it collected and the security measures taken to protect that data from misuse and unauthorized disclosure. In fact, the case alleges, Snapchat’s failure to secure its Find Friends feature resulted in a security breach that enabled attackers to compile a database of 4.6 million Snapchat usernames and phone numbers.

According to the FTC’s complaint, Snapchat made multiple misrepresentations to consumers about its product that stood in stark contrast to how the app actually worked.

“If a company markets privacy and security as key selling points in pitching its service to consumers, it is critical that it keep those promises,” said FTC Chairwoman Edith Ramirez. “Any company that makes misrepresentations to consumers about its privacy and security practices risks FTC action.”

Touting the “ephemeral” nature of “snaps,” the term used to describe photo and video messages sent via the app, Snapchat marketed the app’s central feature as the user’s ability to send snaps that would “disappear forever” after the sender-designated time period expired. Despite Snapchat’s claims, the complaint describes several simple ways that recipients could save snaps indefinitely.

Consumers can, for example, use third-party apps to log into the Snapchat service, according to the complaint. Because the service’s deletion feature only functions in the official Snapchat app, recipients can use these widely available third-party apps to view and save snaps indefinitely. Indeed, such third-party apps have been downloaded millions of times. Despite a security researcher warning the company about this possibility, the complaint alleges, Snapchat continued to misrepresent that the sender controls how long a recipient can view a snap.

SnapChat published a brief statement about its agreement with the FTC on its blog, which includes the following fairly worrying passage:

While we were focused on building, some things didn’t get the attention they could have. One of those was being more precise with how we communicated with the Snapchat community. This morning we entered into a consent decree with the FTC that addresses concerns raised by the commission. Even before today’s consent decree was announced, we had resolved most of those concerns over the past year by improving the wording of our privacy policy, app description, and in-app just-in-time notifications.

On the one hand, the FTC essentially found that SnapChat had been misleading its users about its service’s privacy practices; on the other hand, SnapChat pointed to a communications lapse, almost an oversight. Considering that SnapChat has always been focused on the fleeting nature of content posted on the service and the privacy benefits for its users, this doesn’t seem very plausible.

“Improved” privacy policy wording

SnapChat updated its privacy policy on 1 May. The section “Information You Provide To Us” is revealing because it qualifies Snaps’ transient nature so heavily that transience seems to be the exception, rather than the default behaviour:

We collect information you provide directly to us. For example, we collect information when you create an account, use the Services to send or receive messages, including photos or videos taken via our Services (“Snaps”) and content sent via the chat screen (“Chats”), request customer support or otherwise communicate with us. The types of information we may collect include your username, password, email address, phone number, age and any other information you choose to provide.

When you send or receive messages, we also temporarily collect, process and store the contents of those messages (such as photos, videos, captions and/or Chats) on our servers. The contents of those messages are also temporarily stored on the devices of recipients. Once all recipients have viewed a Snap, we automatically delete the Snap from our servers and our Services are programmed to delete the Snap from the Snapchat app on the recipients’ devices. Similarly, our Services are programmed to automatically delete a Chat after you and the recipient have seen it and swiped out of the chat screen, unless either one of you taps to save it. Please note that users with access to the Replay feature are able to view a Snap additional times before it is deleted from their device and if you add a Snap to your Story it will be viewable for 24 hours. Additionally, we cannot guarantee that deletion of any message always occurs within a particular timeframe. We also cannot prevent others from making copies of your messages (e.g., by taking a screenshot). If we are able to detect that the recipient has captured a screenshot of a Snap that you send, we will attempt to notify you. In addition, as for any other digital information, there may be ways to access messages while still in temporary storage on recipients’ devices or, forensically, even after they are deleted. You should not use Snapchat to send messages if you want to be certain that the recipient cannot keep a copy.

If you read the second paragraph carefully, you’ll notice the following exceptions to what most users assumed was the service’s default behaviour: permanently deleting Snaps after specified time intervals. The exceptions are quoted below.

  1. “Similarly, our Services are programmed to automatically delete a Chat after you and the recipient have seen it and swiped out of the chat screen, unless either one of you taps to save it
  2. “… users with access to the Replay feature are able to view a Snap additional times before it is deleted from their device”
  3. “… if you add a Snap to your Story it will be viewable for 24 hours
  4. “Additionally, we cannot guarantee that deletion of any message always occurs within a particular timeframe
  5. “We also cannot prevent others from making copies of your messages …”
  6. “In addition, as for any other digital information, there may be ways to access messages while still in temporary storage on recipients’ devices or, forensically, even after they are deleted

The last sentence emphasises how little users should rely on the service for meaningful privacy:

You should not use Snapchat to send messages if you want to be certain that the recipient cannot keep a copy.

Where does this leave SnapChat users?

The problem with these revelations is not that Snaps are actually accessible and may endure in some form or another. The problem is that SnapChat pitched a service that doesn’t retain its users’ content. SnapChat rose to prominence at a time when the world was reeling from revelations about unprecedented government surveillance which seemed to reach deep into a variety of online services we assumed were secure. Its promise was to protect its users’ privacy and their content from unwanted scrutiny. In many respects, SnapChat seemed to be the first of a new wave of services that placed control in users’ hands.

In the process, SnapChat misled its users fairly dramatically, and that is the most troubling aspect of this story. SnapChat users relied on an assumption that their content was transient, and this has turned out not to be the case at all. To put this into context, though, these revelations don’t mean SnapChat is inherently less private than any other chat service (leaving aside its poor security practices). It means that SnapChat is roughly comparable to other chat services which haven’t made similar claims about the privacy of their users’ communications.

That said, a significant challenge is that a large proportion of SnapChat’s users are probably under the age of 18. Although US services are more concerned about children under the age of 13 using their services due to certain laws protecting children in the United States, our law doesn’t draw this distinction. In South Africa, a person under the age of 18 is a child and subject to special protections which SnapChat has had almost no regard for. Not only has SnapChat arguably processed children’s personal information in a manner which would not be acceptable in our law, it has misled those children about the extent to which it protects their privacy. At the very least, they and their parents should be very concerned and circumspect about continuing to use the service.

On a related note, it is worth reading Information Week’s article titled “5 Ways SnapChat Violated Your Privacy, Security“.

Facebook still uses you to sell ads, even without sponsored stories

Facebook recently announced that it has stopped allowing users and brands to create new sponsored stories. Instead, it requires brands and users to purchase ads to promote themselves.

[M]arketers will no longer be able to purchase sponsored stories separately; instead, social context — stories about social actions your friends have taken, such as liking a page or checking in to a restaurant — is now eligible to appear next to all ads shown to friends on Facebook.

This sounds like Facebook has shifted away from explicitly using social actions as advertising tools, but what Facebook is doing is arguably an expansion of the sponsored stories model in a different guise. As ReadWriteWeb points out –

But Facebook’s new wording—social context—eliminates all phrasing alluding to advertising, potentially leading to more confusion. At least with sponsored stories, one could deduce that “sponsored” means “paid for,” thus, users would recognize their likenesses being used in ads.

This isn’t a new approach to social advertising. It is a variation of what Facebook has been doing for some time. Facebook users’ personal information is still very much a part of Facebook’s advertising model.

Is privacy a recent fiction or a neglected human right?

Google’s Chief Internet Evangelist, Vint Cerf, recently spoke at the FTC’s Internet of Things Workshop where he suggested that privacy is a recent construct our society created when technology made it possible. Is privacy an anomaly, as he suggests, or is it an important right which technology has enabled and which we are neglecting to the point where we are negating it so we can share more stuff with each other?

According to The Verge’s coverage of his speech, privacy wasn’t a given just a few decades ago –

Elaborating, he explained that privacy wasn’t even guaranteed a few decades ago: he used to live in a small town without home phones where the postmaster saw who everyone was getting mail from. “In a town of 3,000 people there is no privacy. Everybody knows what everybody is doing.”

Rather than privacy being a fundamental right which is threatened by technology (as we’ve seen through the recent government surveillance revelations courtesy of Edward Snowden), he argued that technology has both enabled what we take for granted as privacy today and, at the same time, is enabling us to erode it. I don’t believe he was arguing that privacy as a preference for secrecy is a recent invention. People have probably always had a sense of their private spaces and have protected them in varying degrees, to the extent they were able to. What is fairly recent is the idea of a right to privacy protected by legal authority.

Our right to privacy in South Africa has evolved through our common law and was entrenched as a fundamental right in our Constitution. The right to privacy is described in section 14 of the Constitution:

Everyone has the right to privacy, which includes the right not to have-

(a) their person or home searched;

(b) their property searched;

(c) their possessions seized; or

(d) the privacy of their communications infringed.

It is not an absolute right but it is an important one. It is also a right which we have to be mindful of and actively protect. What has been happening, instead, is that we have become accustomed to abdicating our right to privacy in exchange for access to online services and the ability to share more stuff with each other. This isn’t an egg we can unscramble. As Cerf pointed out (courtesy of The Verge) –

”Our social behavior is also quite damaging with regard to privacy,” Cerf says. He gives an example how a person could be exposed doing something that they wanted to keep secret by being tagged in the background of a stranger’s photo — a photo they never expected to be caught in. “The technology that we use today has far outraced our social intuition, our headlights. … [There’s a] need to develop social conventions that are more respectful of people’s privacy.”

This touches on my 2012 post titled Changing privacy norms where I wrote about our changing understanding of what privacy is and how our online activities are shrinking our expectations of privacy. Most people still think privacy is about secrecy, but it is really about how much control you have over the personal information you have disclosed and continue to share. The more we share with each other online, the less control we have, and virtually every controversy over an apparent privacy violation by a social media service is actually a further consensual encroachment, which we ourselves have enabled, into what remains of our private spaces.

Privacy may have been a very different sensibility before 20th century technologies enabled more effective protections and modern laws created more clearly defined privacy rights, but we are complicit in the continued erosion of privacy through our digital tools. The big questions are how aware we are of this, what it implies for our future society, and whether we are comfortable with the very public future that lies ahead.