Revisiting “front page of the newspaper” wisdom

I’ve been preparing for my presentation at the Advertising and Marketing Law Conference on 15 October and reading through some materials I’ll probably reference in my slides. One paragraph in Anil Dash’s article “What is Public?” stood out for me:

The conventional wisdom is “Don’t publish anything on social media that you wouldn’t want to see on the front page of the newspaper.” But this is an absurd and impossible standard. The same tools are being used for person-to-person conversations and for making grand pronouncements to the world, often by the same person at different times. Would we say “Don’t write anything in a sealed letter that you don’t want to see on the front page of the newspaper” simply because the technology exists to read that letter without opening it?

I think this stood out for me because the conventional wisdom is that you shouldn’t publish anything online that you wouldn’t want published on the front page of a newspaper or on a billboard at a busy intersection. It makes sense until you consider that we use the same platforms to share things both privately and publicly.

How many people use Twitter for personal sharing as if they and their Twitter friends are the only ones who can see their otherwise public updates? They certainly don’t intend for their tweets to be shared with everyone who uses Twitter (until they do) and, although Twitter is very public (unless you lock down your profile), many of its users still have this illogical expectation that their tweets are not for public consumption.

If anything, this sort of issue highlights how complex privacy is in this digital age. We face a number of tough questions about how we use social media and what seemingly obvious notions like privacy really mean to us.

Privacy is contextual and social, less legal and technical

Privacy is more than a couple of settings and a consent checkbox on a form somewhere. Privacy and publicity seem to be straightforward concepts and, legally, they are treated fairly superficially and defined mechanically. The result is a similarly superficial treatment in conversations about privacy and publicity in social and commercial engagements, a treatment which rarely touches on what privacy really means to us. This leaves us fundamentally confused and conflicted about privacy: we have a deeper sense of what privacy means to us, but the typical conversation about privacy lacks the language to describe that deeper sense.

Anil Dash and danah boyd recently published articles on Medium titled “What is Public?” and “What is Privacy?”, respectively, which dive deeper into what publicity and privacy mean to us. If you are interested in what privacy and publicity mean in modern times, you should read both articles carefully:

What Is Public? and What Is Privacy?

One of the paragraphs in Dash’s article that stood out for me was this one:

What if the public speech on Facebook and Twitter is more akin to a conversation happening between two people at a restaurant? Or two people speaking quietly at home, albeit near a window that happens to be open to the street? And if more than a billion people are active on various social networking applications each week, are we saying that there are now a billion public figures? When did we agree to let media redefine everyone who uses social networks as fair game, with no recourse and no framework for consent?

Of the two, I agree more with boyd’s view that privacy is about social convention. I particularly like this extract from boyd’s article:

The very practice of privacy is all about control in a world in which we fully know that we never have control. Our friends might betray us, our spaces might be surveilled, our expectations might be shattered. But this is why achieving privacy is desirable. People want to be in public, but that doesn’t necessarily mean that they want to be public. There’s a huge difference between the two. As a result of the destabilization of social spaces, what’s shocking is how frequently teens have shifted from trying to restrict access to content to trying to restrict access to meaning. They get, at a gut level, that they can’t have control over who sees what’s said, but they hope to instead have control over how that information is interpreted. And thus, we see our collective imagination of what’s private colliding smack into the notion of public. They are less of a continuum and more of an entwined hairball, reshaping and influencing each other in significant ways.

I also think this next extract nicely captures why people become angry with brands and why reputational harm happens at an emotional level. If you represent a brand, you should read this a few times:

When powerful actors, be they companies or governmental agencies, use the excuse of something being “public” to defend their right to look, they systematically assert control over people in a way that fundamentally disenfranchises them. This is the very essence of power and the core of why concepts like “surveillance” matter. Surveillance isn’t simply the all-being all-looking eye. It’s a mechanism by which systems of power assert their power. And it is why people grow angry and distrustful. Why they throw fits over being experimented on. Why they cry privacy foul even when the content being discussed is, for all intents and purposes, public.

Privacy is contextual. Law is also a poor mechanism for protecting it because law tends to be mechanical (it has to be). What we need is a better awareness of what privacy and publicity mean in a social context and where the line between them lies.

Jeff Jarvis made a statement about privacy in This Week in Google 261 which really caught my attention:

Privacy is a responsibility. It is an ethic of knowing someone else’s information.

Photo credit: Lost in Translation by kris krüg, licensed CC BY-SA 2.0

Sharing more with Facebook to improve its value

Kevin O’Keefe’s article titled “Facebook eliminating the junk in your News Feed”, on Facebook “click bait”, made an interesting point about using Facebook more to improve its value to you as a user:

All too many lawyers and other professionals I speak with complain about all the junk they see on Facebook. Part of the reason is that they don’t use it enough to help Facebook know what they like. At the same time, Facebook acknowledges they have a problem with “click bait.”

What interests me about this point is that we often assume that sharing more with Facebook means even more junk in our News Feeds: the more you share on Facebook, the more signals you send to the social network, and these signals inform the ads and suggestions you receive (probably much the same with Google).

Instead, what O’Keefe seems to be saying is that using Facebook more helps Facebook’s algorithms refine your experience with more relevant ads and suggestions:

Just as Google wants you to receive what you are looking for on a search or a news program wants to get you the most important news, Facebook wants you to receive what you consider the most important information and news.

Perhaps more importantly, it seems that using Facebook more actively also helps Facebook determine what to show you more of in your News Feed. This is helpful given that you don’t actually see everything your Facebook friends share in your general News Feed, only what Facebook’s algorithms think you want to see more of.

From a privacy perspective, this approach suggests that you should share more of your personal information for an improved and more relevant Facebook experience, not less. It isn’t an approach designed to restrict the use of your personal information as a strategy to better protect your privacy, but rather one intended to use more of your personal information in a way that adds more value to you, as well as to Facebook.

It reminds me of Jeff Jarvis’ point a while ago about how brands that know more about you can present a more relevant experience of their services to you. Which would you prefer?

5 suggestions for preserving your digital assets for your heirs after you die

What will happen to your online profiles and data when you die? Before you answer that your digital stuff isn’t all that important so who cares, consider what you are using the digital cloud for:

  1. Email that increasingly includes bank statements, insurance policy information and functions as a backup for when you forget your password for your online profiles;
  2. Document storage and backups for all those policy documents, scans of your ID and passport, accounting records and tax returns;
  3. Photos and videos of your family going back years, decades even (have you maintained your print photos and offline video files to the same extent?);
  4. Various social profiles which you use to keep in touch with friends and family on a daily basis.

The cloud is more than just an incidental part of your life. Unless you are a committed paper-based archivist, you probably have more and more of your life recorded in bits stored on servers around the world and you are likely the only person who can access that data. When the time comes for you to leave this life your family will need to access that data for various reasons and, short of a séance, you won’t be in a position to pass along your access credentials if you don’t plan ahead.

Here are 5 suggestions for what you can do to make sure your family can access your digital assets after you pass on:

  1. Use a password manager like LastPass or 1Password to store all your passwords and key information (I use LastPass and it enables me to store credit card information, ID and passport information and a variety of other sensitive data securely) and use a strong master password to secure your password manager profile (while you’re at it, change your passwords to unique and more secure passwords to protect your profiles better).
  2. Tell your family about your online profiles and how to access them in your will or in a document you leave with your will. If you use a password manager, share the master password with trusted family members or friends so they can unlock your digital assets when the time comes.
  3. Backup your data regularly and automatically. Don’t rely on manual backups. Automate them. Use whichever secure and reliable backup service you prefer (popular options include Dropbox, Google Drive and more) but make sure they include your important stuff and work properly. Storage is becoming cheaper all the time so you should have plenty of space for all your stuff.
  4. Organise your digital archives so they can be easily searched and key documents located by your heirs. One of the first things your family will need to do when you die is report your estate to the relevant authorities and they will need key information to do that. Check with your attorney what they will need and collate that information for them in a convenient folder or location and share that with your family ahead of time.
  5. Make sure you identify all your key online services to your family and explain to them how to access them and your data. Don’t assume that everyone knows the services you use and how to use them effectively. They may not share your passion for those services but you probably don’t want to add to their aggravation by forcing them to stumble around unfamiliar services while grieving for you.

Image credit: ‘Til Death Do Us Part by [n|ck], licensed CC BY 2.0

Is sharing naked photos of your kids child pornography?

(Update 2014-06-12): Professor James Grant, an Associate Professor of Law at the University of the Witwatersrand, has published an article on his site, titled “Child Pornography: Distribution by Parents”, in which he explores the implications of the Criminal Law (Sexual Offences) Amendment Act, which also deals with child pornography. That Act has a pretty broad definition of “child pornography”, possibly even broader than the Films and Publications Act’s, and is even more problematic for parents. I especially like his comment on the law (and not just because he mentions me):

This is an analysis of the law as it is. It is not a comment on what the law ought to be. I’m not sure our law should be this strict. But then I wish I didn’t live in a world full of depraved monsters. Paul Jacobson has already made all of the sensible remarks. Put the best interests of your child above any of your interests. All I can add is a lot of scepticism about human nature. I have met and studied the wrong kind of people and am probably now speaking as a father of the two year old girl I looked after this morning. People are always amazed that their wonderful, kind and friendly neighbour turns out to be a monster. We must never forget for one moment that evil and depravity is banal and that monsters must live somewhere. But here is the problem, in our era of immediate communication and instant access, everyone is your neighbour.

You should definitely read his article too.

The National Prosecuting Authority’s recent warning that “any image of a naked child is child pornography” has, understandably, attracted quite a bit of attention. Why is “any image of a naked child” pornography? According to the NPA’s Advocate Bonnie Currie-Gamwo –

… the reason for that is quite simple; it can be abused. What you do innocently, others take and they abuse it.

The NPA cautioned parents against publishing naked photos of their children online as the NPA considers this to be child pornography, and it may well prosecute parents who don’t heed the warning. This is problematic both for parents who have become accustomed to sharing photos of their kids growing up and for photographers commissioned to do family shoots (in particular, popular newborn baby shoots) who may have published some of the photos from these shoots in their online catalogues, with or without the parents’ consent.

What is “child pornography”?

Child pornography is a significant problem and the ease with which content can be shared online has only contributed to child pornography’s proliferation. That said, the NPA’s blanket statement that “any image of a naked child” is child pornography may be too broad. Unfortunately the NPA doesn’t seem to have specified which laws it interprets so broadly.

One possibility is that the NPA is referencing the Films and Publications Act which regulates “the creation, production, possession and distribution of films, games and certain publications” in order to –

  • provide consumer advice to enable adults to make informed viewing, reading and gaming choices, both for themselves and for children in their care;
  • protect children from exposure to disturbing and harmful materials and from premature exposure to adult experiences; and
  • make the use of children in and the exposure of children to pornography punishable.

The Films and Publications Act defines “child pornography” as follows:

“child pornography” includes any image, however created, or any description of a person, real or simulated, who is, or who is depicted, made to appear, look like, represented or described as being under the age of 18 years—

(i) engaged in sexual conduct;
(ii) participating in, or assisting another person to participate in, sexual conduct; or
(iii) showing or describing the body, or parts of the body, of such a person in a manner or in circumstances which, within context, amounts to sexual exploitation, or in such a manner that it is capable of being used for the purposes of sexual exploitation;

The Act states that any person who –

(a) unlawfully possesses;
(b) creates, produces or in any way contributes to, or assists in the creation or production of;
(c) imports or in any way takes steps to procure, obtain or access or in any way knowingly assists in, or facilitates the importation, procurement, obtaining or accessing of; or
(d) knowingly makes available, exports, broadcasts or in any way distributes or causes to be made available, exported, broadcast or distributed or assists in making available, exporting, broadcasting or distributing,

any film, game or publication which contains depictions, descriptions or scenes of child pornography or which advocates, advertises, encourages or promotes child pornography or the sexual exploitation of children, shall be guilty of an offence.

Perspectives on “sexual exploitation” of children

The term “sexual conduct” includes a variety of sexual acts and this is the focus of the first two parts of the “child pornography” definition. These two parts are fairly clear but it is the third part which is possibly what the NPA is referring to –

showing or describing the body, or parts of the body, of such a person in a manner or in circumstances which, within context, amounts to sexual exploitation, or in such a manner that it is capable of being used for the purposes of sexual exploitation

The Act doesn’t define “sexual exploitation” so we need to understand what this term means in order to understand the scope of this part of the definition. The World Congress against Commercial Sexual Exploitation of Children defined the “commercial sexual exploitation of children” as:

sexual abuse by the adult and remuneration in cash or kind to the child or a third person or persons. The child is treated as a sexual object and as a commercial object.

The UK National Society for the Prevention of Cruelty to Children describes “sexual exploitation” as follows:

Child sexual exploitation (CSE) is a form of sexual abuse that involves the manipulation and/or coercion of young people under the age of 18 into sexual activity in exchange for things such as money, gifts, accommodation, affection or status. The manipulation or ‘grooming’ process involves befriending children, gaining their trust, and often feeding them drugs and alcohol, sometimes over a long period of time, before the abuse begins. The abusive relationship between victim and perpetrator involves an imbalance of power which limits the victim’s options. It is a form of abuse which is often misunderstood by victims and outsiders as consensual. Although it is true that the victim can be tricked into believing they are in a loving relationship, no child under the age of 18 can ever consent to being abused or exploited.

The United Nations’ task force for Protection from Sexual Exploitation and Abuse describes “sexual exploitation” in the following terms:

“The term “sexual exploitation” means any actual or attempted abuse of a position of vulnerability, differential power, or trust, for sexual purposes, including, but not limited to, profiting monetarily, socially or politically from the sexual exploitation of another.” (UN Secretary-General’s Bulletin on protection from sexual exploitation and abuse (PSEA) (ST/SGB/2003/13))

These three explanations of “sexual exploitation” when it comes to children have common elements:

  • Manipulation, coercion or an abuse of a relatively vulnerable position for sexual purposes;
  • Children’s lack of legal, cognitive or even emotional capability to consent to being exploited sexually.

The idea of sexual exploitation lies at the core of what most people think about when the topic of “child pornography” is raised and it is a vile set of behaviours that do terrible harm to the most vulnerable members of our society. As a parent and as a human being, there is really no justification for this sort of conduct.

Any naked pictures?

The question, though, is whether a parent publishing naked photos of his or her children “amounts to sexual exploitation, or in such a manner that it is capable of being used for the purposes of sexual exploitation”. This is distinct from a different question, namely whether parents should be publishing naked photos of their children at all, even if doing so isn’t child pornography. That second question has more to do with your children’s right to privacy and how you are effectively making decisions for them about how little privacy they will have in a connected world where the Internet doesn’t forget.

Returning to the “child pornography” issue (based on the Films and Publications Act, at any rate), the NPA’s contention that any image of a naked child is child pornography seems too broad. The NPA appears to be relying on the last part of the definition, which asks whether the photos are “capable of being used for the purposes of sexual exploitation”. Or, as Advocate Currie-Gamwo put it:

… the reason for that is quite simple; it can be abused. What you do innocently, others take and they abuse it.

Regardless of how a child is depicted in a photo, if the photo can be abused by others, the NPA seems to be saying it falls within the “purposes of sexual exploitation”. Going further, parents who publish these photos must be held accountable for the depraved “others’” misuse of those photos of their children.

I don’t practice criminal law but that strikes me as a particularly chilling approach to criminal liability, and it may not be consistent with how the Films and Publications Act describes the nature of the offence, which I outlined earlier. The Act lists what seem to be a series of positive acts and refers to “knowingly” doing certain things. If a parent were to be held accountable because a paedophile downloaded a photo of his or her child and somehow used it “for the purposes of sexual exploitation”, I wonder whether the parent could be held liable on the basis of negligence, namely that the parent reasonably ought to have known that this was how the photos could be misused.

Where does this leave us?

Aside from depictions of children engaged in or participating in forms of sexual conduct, the Films and Publications Act seems to target descriptions or depictions of children’s bodies that amount to actively manipulating, coercing or abusing children, in the process taking advantage of their vulnerability, for sexual purposes. This sort of conduct is clearly abhorrent.

Whether content amounts to child pornography isn’t always clear and there is certainly room for interpretation based on the context but classifying “any image of a naked child” as pornography seems to be interpreting the law too broadly, especially if the possible consequences for parents sharing these sorts of photos with friends and family with innocent intentions can be so severe.

What parents should seriously consider is whether they should share seemingly innocent photos of their naked or partially naked children online. As I mentioned above, the Internet doesn’t forget and when you publish photos of your children publicly, you make decisions about their present and future privacy for them without them being able to make a meaningful decision themselves.

Until this sort of issue reaches a court and is decided (possibly on an interpretation of the law or an assessment of the parents’ right to privacy as a counterweight to the NPA’s scrutiny), we are left with the NPA’s threats of dire action and deciding whether sharing photos of our children is worth the risk posed by an arguably overzealous group of prosecutors. In the context of that uncertainty, here are a few suggestions:

  1. If you feel the urge to publish a naked photo of your child, remember the NPA’s view that it is child pornography and also the reality that there are people who scour the Internet for photos of children to meet their depraved needs. Ask yourself whether you want to fuel those needs for the sake of attention from your friends and family.
  2. If you decide to share photos of your children, limit who you share the photos with. It may not help you from the NPA’s perspective, but limiting the photos to people you know and trust keeps them out of the hands of those you don’t and adds a little more protection for your children’s privacy.
  3. Photos of children in sexually suggestive or explicit poses are not ok. The law clearly criminalises these sorts of photos so don’t take them and don’t share them.
  4. If you are a photographer and you have been asked to do a photo shoot where the kids may be naked (for example, a newborn shoot), perhaps refrain from publishing those photos or, at least, be very selective about which ones you publish as part of your portfolio. Make sure you ask the parents for consent before you publish any photos of their children (your blanket consent in your privacy policy is not enough) and that the parents understand this additional risk of criminal prosecution.

Regardless of whether the NPA’s interpretation is justified, one clear principle of our law when it comes to children is that we always ask what is in their best interests. Is publishing photos of your naked children in their best interests, or just in yours?

SnapChat privacy is not what you think

SnapChat’s privacy controls are what made it both enormously popular and troubling to its young users’ parents. When SnapChat launched, it gave users the ability to share photos and videos which promptly vanished into the ether. This appealed to its typically young and privacy conscious users because they finally had a way to share stuff with each other with impunity. This obviously bothered parents and teachers as it potentially gave their children a way to share content they shouldn’t share.

A Federal Trade Commission investigation has led to acknowledgements that content posted on SnapChat isn’t nearly as temporary as everyone may have thought. The New York Times published an article titled “Off the Record in a Chat App? Don’t Be Sure” which began with the following:

What happens on the Internet stays on the Internet.

That truth was laid bare on Thursday, when Snapchat, the popular mobile messaging service, agreed to settle charges by the Federal Trade Commission that messages sent through the company’s app did not disappear as easily as promised.

Snapchat has built its service on a pitch that has always seemed almost too good to be true: that people can send any photo or video to friends and have it vanish without a trace. That promise has appealed to millions of people, particularly younger Internet users seeking refuge from nosy parents, school administrators and potential employers.

Oversight or lie?

The FTC’s release includes the following background to its investigation and its stance:

Snapchat, the developer of a popular mobile messaging app, has agreed to settle Federal Trade Commission charges that it deceived consumers with promises about the disappearing nature of messages sent through the service. The FTC case also alleged that the company deceived consumers over the amount of personal data it collected and the security measures taken to protect that data from misuse and unauthorized disclosure. In fact, the case alleges, Snapchat’s failure to secure its Find Friends feature resulted in a security breach that enabled attackers to compile a database of 4.6 million Snapchat usernames and phone numbers.

According to the FTC’s complaint, Snapchat made multiple misrepresentations to consumers about its product that stood in stark contrast to how the app actually worked.

“If a company markets privacy and security as key selling points in pitching its service to consumers, it is critical that it keep those promises,” said FTC Chairwoman Edith Ramirez. “Any company that makes misrepresentations to consumers about its privacy and security practices risks FTC action.”

Touting the “ephemeral” nature of “snaps,” the term used to describe photo and video messages sent via the app, Snapchat marketed the app’s central feature as the user’s ability to send snaps that would “disappear forever” after the sender-designated time period expired. Despite Snapchat’s claims, the complaint describes several simple ways that recipients could save snaps indefinitely.

Consumers can, for example, use third-party apps to log into the Snapchat service, according to the complaint. Because the service’s deletion feature only functions in the official Snapchat app, recipients can use these widely available third-party apps to view and save snaps indefinitely. Indeed, such third-party apps have been downloaded millions of times. Despite a security researcher warning the company about this possibility, the complaint alleges, Snapchat continued to misrepresent that the sender controls how long a recipient can view a snap.

SnapChat published a brief statement about its agreement with the FTC on its blog, which includes this fairly worrying passage:

While we were focused on building, some things didn’t get the attention they could have. One of those was being more precise with how we communicated with the Snapchat community. This morning we entered into a consent decree with the FTC that addresses concerns raised by the commission. Even before today’s consent decree was announced, we had resolved most of those concerns over the past year by improving the wording of our privacy policy, app description, and in-app just-in-time notifications.

On the one hand, the FTC essentially found that SnapChat has been misleading its users about its service’s privacy practices and, on the other hand, SnapChat pointed to a communications lapse, almost as an oversight. Considering that SnapChat has always been focused on the fleeting nature of content posted on the service and the privacy benefits for its users, this doesn’t seem very plausible.

“Improved” privacy policy wording

SnapChat updated its privacy policy on 1 May. The section “Information You Provide To Us” is revealing because it qualifies Snaps’ transient nature so heavily that transience seems to be the exception, rather than the default behaviour:

We collect information you provide directly to us. For example, we collect information when you create an account, use the Services to send or receive messages, including photos or videos taken via our Services (“Snaps”) and content sent via the chat screen (“Chats”), request customer support or otherwise communicate with us. The types of information we may collect include your username, password, email address, phone number, age and any other information you choose to provide.

When you send or receive messages, we also temporarily collect, process and store the contents of those messages (such as photos, videos, captions and/or Chats) on our servers. The contents of those messages are also temporarily stored on the devices of recipients. Once all recipients have viewed a Snap, we automatically delete the Snap from our servers and our Services are programmed to delete the Snap from the Snapchat app on the recipients’ devices. Similarly, our Services are programmed to automatically delete a Chat after you and the recipient have seen it and swiped out of the chat screen, unless either one of you taps to save it. Please note that users with access to the Replay feature are able to view a Snap additional times before it is deleted from their device and if you add a Snap to your Story it will be viewable for 24 hours. Additionally, we cannot guarantee that deletion of any message always occurs within a particular timeframe. We also cannot prevent others from making copies of your messages (e.g., by taking a screenshot). If we are able to detect that the recipient has captured a screenshot of a Snap that you send, we will attempt to notify you. In addition, as for any other digital information, there may be ways to access messages while still in temporary storage on recipients’ devices or, forensically, even after they are deleted. You should not use Snapchat to send messages if you want to be certain that the recipient cannot keep a copy.

If you read the second paragraph carefully, you’ll notice the following exceptions to what most users assumed was the service’s default behaviour: permanently deleting Snaps after specified time intervals. I have highlighted the exceptions in the quotes below.

  1. “Similarly, our Services are programmed to automatically delete a Chat after you and the recipient have seen it and swiped out of the chat screen, unless either one of you taps to save it”
  2. “… users with access to the Replay feature are able to view a Snap additional times before it is deleted from their device”
  3. “… if you add a Snap to your Story it will be viewable for 24 hours”
  4. “Additionally, we cannot guarantee that deletion of any message always occurs within a particular timeframe”
  5. “We also cannot prevent others from making copies of your messages …”
  6. “In addition, as for any other digital information, there may be ways to access messages while still in temporary storage on recipients’ devices or, forensically, even after they are deleted”

The last sentence emphasises how little its users should rely on the service for meaningful privacy:

You should not use Snapchat to send messages if you want to be certain that the recipient cannot keep a copy.

Where does this leave SnapChat users?

The problem with these revelations is not that Snaps are actually accessible and may endure in some form or another. The problem is that SnapChat pitched a service that doesn’t retain its users’ content. SnapChat rose to prominence at a time when the world was reeling from revelations about unprecedented government surveillance which seemed to reach deep into a variety of online services we assumed were secure. Its promise was to protect its users’ privacy and their content from unwanted scrutiny. In many respects, SnapChat seemed to be the first of a new wave of services that placed control in users’ hands.

In the process, SnapChat misled its users fairly dramatically and that is the most troubling aspect of this story. SnapChat users relied on an assumption that their content is transient and this has turned out not to be the case at all. To put this into context, though, this doesn’t mean SnapChat is inherently less private than any other chat service. Barring poor security practices, SnapChat is fairly comparable to other chat services; the difference is that those services haven’t made similar claims about the privacy of their users’ communications.

That said, a further challenge is that a significant proportion of SnapChat’s users are probably under the age of 18. Although US services are more concerned about children under the age of 13 using their services due to certain laws protecting children in the United States, our law doesn’t draw this distinction. In South Africa, a person under the age of 18 is a child and subject to special protections which SnapChat has had almost no regard for. Not only has SnapChat arguably processed children’s personal information in a manner which would not be acceptable in our law, it has misled those children about the extent to which it protects their privacy. At the very least, they and their parents should be very concerned and circumspect about continuing to use the service.

On a related note, it is worth reading Information Week’s article titled “5 Ways SnapChat Violated Your Privacy, Security”.

Bombs under wheelchairs, model airplanes and other stupid tweets

The last couple weeks saw two spectacular lapses in judgment in corporate Twitter accounts. The first was the pornographic US Airways tweet in response to a passenger’s complaints about a delayed flight and the second was an FNB employee’s flippant tweet about an ad personality’s activities in Afghanistan.

Both are stark reminders of the very serious legal consequences of misguided tweets.

Each incident has unfolded a little differently. In the case of the US Airways tweet, it appears that the tweet was a mistake and that the employee concerned will not be fired. Here is an explanation of the incident and some commentary from Sarah and Amber on a recent Social Hour video:

On the other hand, FNB has reportedly launched disciplinary proceedings to deal with its employee’s tweet. According to TechCentral:

Disciplinary processes were under way following an offensive tweet sent from a First National Bank Twitter account, the bank said on Wednesday.

“We can confirm that disciplinary actions are currently under way as we are following the required industrial relations processes,” FNB’s acting head of digital marketing and media, Suzanne Myburgh, said.

In both cases, the companies concerned removed the offending tweets as soon as they discovered them and apologised for the tweets. Both incidents attracted a tremendous amount of attention and both brands were praised for apologising and being transparent about their investigations into their respective incidents. The benefit of this approach has been to mitigate the reputational harm both companies faced by engaging with their followers and keeping their customers updated on their investigations.

It is worth bearing in mind that managing corporate social media profiles at scale is not a simple exercise, as Cerebra’s Mike Stopforth pointed out in his Twitter post-mortem of the FNB tweet controversy.

He went further, characterising the tweet as a single error in the context of a very active Twitter profile.

I don’t think I would characterise the tweet as an “understandable error”. Twitter profiles as prolific as FNB’s @RBJacobs require careful attention to the kinds of tweets that may be published, and to the extent to which the teams managing these profiles can inject their own personalities into the corporate persona or representation of the brand online.

As I pointed out in my blog post titled “Gender activism, trolls and being fired for tweeting”, employees need to understand there are serious legal consequences for their bad decisions –

From a Legal Perspective

The legal issues here are perhaps not as exciting as the raging debate and threats but they are important nonetheless. One of the central themes in the blog posts by both companies, Playhaven and SendGrid, is that employees who fail to fulfil their obligations towards their employers can be dismissed. Both Richards and Playhaven’s ex-employee brought their employers into disrepute through their actions and, in this respect, exposed themselves to disciplinary action.

Employees owe their employers a number of duties and they can be disciplined if they fail to honour their obligations towards their employers. Employees’ duties include the duties to –

  • further the employer’s business interests;
  • be respectful and obedient; and
  • not to bring the employer into disrepute.

This last duty has received considerable attention in recent complaints brought to the Commission for Conciliation, Mediation and Arbitration, including the case of Sedick & another and Krisray (Pty) Ltd (2011) 32 ILJ 752 (CCMA), where the commissioner upheld the employees’ dismissals and commented as follows:

Taking into account all the circumstances – what was written; where the comments were posted; to whom they were directed, to whom they were available and last but by no means least, by whom they were said – I find that the comments served to bring the management into disrepute with persons both within and outside the employment and that the potential for damage to that reputation amongst customers, suppliers and competitors was real.


This case emphasises the extent to which employees may, and may not, rely on the protection of statute in respect of their postings on the Internet. The Internet is a public domain and its content is, for the most part, open to anyone who has the time and inclination to search it out. If employees wish their opinions to remain private, they should refrain from posting them on the Internet.

FNB clearly seems to have a process in place to identify, respond to and address incidents such as this tweet. It presumably also has a sound policy framework that it will rely on when dealing with this incident. This is where a social engagement policy (an evolution of what used to be called a “social media policy”) is really important.

Although much of the focus of a social engagement policy has traditionally been on behaviours which must align with the brand, the policy also serves an important disciplinary function by clearly communicating a standard which employees using social communication tools must meet. This, in turn, ties into one of the important requirements of a sound disciplinary procedure: demonstrating that a clear standard was effectively communicated to employees who were aware of the standard and failed to meet it.

We may yet learn what happens to the FNB employee who published that ill-advised tweet. What is certain, though, is that this won’t be the last incident of its kind. We will see more incidents at other companies, and the sooner companies develop effective processes to address them, the better.

Free is the death of the open Web and privacy is the sacrificial offering

This article was originally published on MarkLives in my TechLaw column on 28 November 2013.

Our insistence on having access to free services such as Facebook or Twitter both heralds the death of the open Web and, at the same time, has given rise to most of the online privacy-related controversies in recent years.

The problem with free services is that they have to make money in some way or another and the way that they generally do this is through advertising which leverages our personal information in order to give some kind of value to their advertisers. We agree to this when we sign up for these services. The extent of our agreement is documented in privacy policies which few people read and truly consider.

What this means is that we are essentially trading information about ourselves for access to these services which, admittedly, we do see value in otherwise we wouldn’t use them quite so much.

What happens in the meantime is that these free services find themselves having to extract more and more value from us using our personal information, tailoring their infrastructure to take greater advantage of our preferences, relationships and data. What this means is that respect for consumer privacy often takes a back seat to extracting more value from users for advertisers.

Public pressure on these free services, typically the threat of reputational harm, is what keeps them relatively honest, and it often attracts regulatory oversight in the interests of preserving consumers’ rights and protecting their privacy.

Despite this, most of the privacy controversies on the social Web emerge at the intersection of the need to establish a sustainable and profitable revenue model and the need to maintain some degree of respect for consumers’ privacy rights in order to secure users’ trust in these services. Google has been particularly vocal about the importance it places on users’ trust as an incentive not to “be evil”.

An emerging trend is that social services are becoming more closed and limiting interoperability with other services. The idea is to ensure that users spend more of their time in, and invest more of themselves into, these services to maximise their value. A consequence of this is markedly less emphasis on open standards, and that, on the whole, is enormously detrimental to the idea of an open, interoperable Web where people can engage with each other across multiple platforms and services.

One area where we are seeing this happening again is in the instant messaging or chat space, which has seen a resurgence of interest, likely because this is where younger users seem to be heading for their preferred social experience. Facebook and Google have developed mobile messaging services (Messenger and Hangouts, respectively) partly to compete with enormously popular mobile-based messaging services like WhatsApp and, more recently, WeChat. Unfortunately these services are largely not compatible or interoperable with each other.

What is happening with these chat/messaging services echoes what happened with email services in the Internet’s distant past. Back then, if you used one email provider you often couldn’t send an email to a user on another provider because each email service ran on a proprietary, incompatible platform. This eventually changed with the adoption of open standards facilitating the exchange of email across different service providers, which enormously enhanced the value of email as the primary communication method we all rely on today.
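The email interoperability described above rests on two open standards: a common message format (RFC 5322) and a common transport protocol (SMTP). A minimal sketch using Python’s standard library illustrates the point; the addresses and server name below are hypothetical placeholders, not real accounts:

```python
# Because both the message format (RFC 5322) and the transport (SMTP) are
# open standards, any standards-compliant provider can exchange this message
# with any other, regardless of who runs the servers.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@provider-a.example"  # hypothetical sender
msg["To"] = "bob@provider-b.example"      # hypothetical recipient, different provider
msg["Subject"] = "Cross-provider delivery via open standards"
msg.set_content("Because both providers implement SMTP, this just works.")

# Actually sending it would only require any SMTP server, e.g.:
#   import smtplib
#   with smtplib.SMTP("smtp.provider-a.example") as server:
#       server.send_message(msg)

print(msg.as_string())
```

The contrast with today’s chat services is that there is no equivalent shared format and transport that WhatsApp, Messenger and WeChat all agree to speak.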

Despite open messaging protocols like XMPP (formerly Jabber), which enable providers to create interoperable messaging products, the current generation of messaging services is following the defunct proprietary models that once crippled email. Even Google, which baked XMPP into its Google Talk service, has abandoned XMPP in favour of its proprietary Hangouts service, which has replaced Google Talk as Google’s primary chat service. Facebook’s Messenger at one point supported XMPP (and may still), but Facebook’s emphasis (like that of Google, Apple, WhatsApp, WeChat and others) is to entice users to adopt its platform as their primary chat service. Chat is a pretty sticky service, and if a brand can entice users to switch, those users’ overall use of its services would likely increase considerably, enhancing their value to advertisers and, in effect, raising the price they pay for these “free” services.


In the meantime, hopes for an open Web based on interoperable standards and protocols are fading. Our hopes now lie with companies like Mozilla and, ironically, to an extent with Google, which is still an advocate for an open Web and for moving beyond closed platforms to continue building an interoperable Web capable of generating meaningful revenue to support free services.