Facebook defamation is not necessarily illegal

That the respondent in the latest High Court Facebook defamation case, M v B, was ordered to remove defamatory posts from Facebook isn’t remarkable. What is more interesting about the case is that it reiterates the principle that a court will not step in and proactively block future defamatory posts.

The applicant in this case, M (SAFLII redacts personal information about parties in cases it publishes in certain circumstances), brought an urgent application to the KwaZulu-Natal High Court on 9 September 2013 to order his former partner, B, to –

  1. “remove all messages as contained in annexure ‘D’ to the applicant’s founding affidavit, from her Facebook page;”
  2. “refrain from posting any defamatory statements about the applicant on her Facebook page;” and
  3. “refrain from in any way making, publishing and/or distributing defamatory statements about the applicant.”

The urgent application was successful and M was granted an interim order, which he subsequently sought to have made final. Judge Chetty’s judgment on this was delivered on 19 September 2014, just over a year after the initial application was launched.

Background

Judge Chetty gave the following background to the applications:

[3] It is necessary to sketch the brief history of the matter, and particularly the facts giving rise to the launching of the application. The applicant and the respondent are the biological parents of a minor child, a daughter P born in July 2008. At the time of the launching of the application, the child was five years old. The respondent and the applicant were never married, and at the time of the institution of these proceedings, were no longer in a relationship. P lives with the respondent. In terms of an arrangement between the parties, the applicant has contact with his child every alternate weekend from Friday afternoon until Sunday afternoon. It is not disputed that in accordance with this agreement, the applicant picked up his daughter on the weekend commencing 30 August 2013 and returned her to the respondent on Sunday 1 September 2013.

[4] During the course of this particular weekend the applicant and his daughter visited the house of a friend, and ended up staying over. During the course of the evening, other friends gathered at the house eventually resulting in P sharing a bed with an adult female, who is a pre-primary school teacher, and someone known to her as she had babysat P on previous occasions. The applicant has categorically stated that he has never had a romantic relationship with the teacher concerned. P was safely returned to her mother on the Sunday.

[5] In the week that followed, the applicant received calls from several friends drawing his attention to a posting by the respondent on Facebook, under the heading “DEBATE”. The posting reads as follows:

‘DEBATE: your ex has your daughter (5) for the weekend and is sleeping at a mates house. They all (about six adults) go jolling and your ex’s drunk, 50 yr old girl “friend” ends up sleeping with your daughter cause he doesn’t want his girl “friend” sleeping in a single bed she can share the double bed with his/your daughter! How would you feel?’

[6] It is not in dispute that at the time of this posting the respondent had 592 “Facebook friends”. A number of the respondent’s ‘friends’ responded to her posting and were critical of the behaviour of the applicant. The respondent further contributed towards the debate by making subsequent postings to that set out above. These postings or messages appear as annexure ‘A’ to the applicant’s founding papers. The initial postings resulted in a further debate with the respondent’s brother S[…] B[…], who questioned the aspersions cast by the respondent on the applicant and the teacher with whom P shared a bed. These postings appear as annexure ‘B’ to the applicant’s founding papers.

[7] In light of the postings, which the applicant regarded as defamatory and detrimental to his business reputation, he engaged his attorneys who wrote to the respondent on 4 September 2013 clarifying that during the weekend in which the applicant had access to P, at no time therein was she placed in any danger, nor was her safety compromised in any way. His attorneys then called upon the respondent to remove the offending postings (annexures ‘A’ and ‘B ‘to the founding papers) from her Facebook page by the close of business on 4 September 2013, failing which they threatened litigation.

[8] According to the respondent, she removed the offending postings by 5 September 2013. Accordingly, at the time when the application came before my colleague Nkosi J, the respondent contended in her opposing affidavit that there was no need for the application as she had long since complied with the demand and removed the postings. In support of the submission, the respondent attached an SMS received from the applicant on 5 September 2013 stating:

‘And well done on removing your false Facebook posting – you’ve saved yourself from a lawsuit. Ensure no further defamatory posts are put up or you’ll find yourself in Court!!’

[9] As is evident from the prayers sought in the Notice of Motion, notwithstanding the removal of postings in the form of annexures A and B, the applicant persisted in his application for urgent relief on the basis that the respondent had failed to take down the postings on what is referred to as her Facebook Wall, which the applicant contends “retained a partisan version of the debate”. The postings on the respondent Face Wall appeared as annexure D to the applicant’s founding affidavit. The applicant contended that the contents of annexure ‘D’ defamed him, even though the respondent has deleted the earlier postings on her Facebook page. In order to understand the applicant’s complaint, a perusal of the respondent’s Facebook Wall reflects the contents of active debate taking place between the respondent and her friends. The subject of the debate continues to be the incident relating to the applicant’s care (or neglect) of his daughter over the weekend at the end of August 2013. In particular, the opening message on the applicant’s Facebook Wall is the following:

‘This is my FB page which I can get opinions on matters close to my heart, if you don’t like it then go read someone else’s and defriend me!’

[10] This message was posted in response to earlier messages from the respondent’s brother, S[…] B[…], who it would appear, did not take kindly to the insinuations of neglect aimed at the applicant.

The Court’s decision

These facts are pretty similar to two 2013 Facebook defamation cases which I wrote about, H v W and Isparta v Richter and Another. The order directing B to remove defamatory posts from her Facebook Wall was not particularly controversial. There was some discussion about the timing of the application and B’s efforts to remove some defamatory posts, but this order was in line with Judge Willis’ judgment in H v W and Acting Judge Hiemstra’s judgment in Isparta v Richter and Another. After considering arguments from both sides, Judge Chetty found against B:

[20] Other than a denial that the postings were defamatory, the respondent does not make out any argument of the public interest in respect of the statements attributed to the applicant. I am satisfied that the applicant was entitled to approach the Court on an urgent basis at the time that he did. I am accordingly satisfied that the applicant has made out a case for first part of the rule nisi, in terms of the relief sought in prayer 2.1 of the Notice of Motion, to be confirmed.


The Court then moved on to the second part of the matter, namely whether M should be entitled to a final order, essentially, prohibiting B from publishing defamatory comments about M in the future. This may seem like a perfectly reasonable order, but it is important to bear in mind that just because a comment is defamatory does not mean that it is wrongful. As Judge Chetty pointed out –

[24] On the other hand, the respondent submitted that there is no basis at common law for a Court to curtail the respondent in respect of material which is not as yet known to the Court, nor has it been presented or published. As such the Court is asked to speculate on what could constitute a defamatory statement, uttered or published by the respondent against the applicant. It was correctly submitted in my view that even if the statement in the future by the respondent is defamatory of the applicant, it is equally so that not every defamatory statement is per se actionable in that the respondent may have a good defence to its publication. For example, the respondent might be under a legal duty to furnish information about the applicant in connection with an investigation of a crime, or she could be a member of a public body which places on her a social duty to make defamatory statements about the applicant. To this extent, the respondent may make defamatory statements about the applicant in circumstances where they may be a qualified privilege. Obviously it would be necessary to ascertain the nature of the occasion in order to determine whether any privilege attaches to it. The difficulty in granting such an order is evident, albeit in the context of the publication of an article, from the judgement in Roberts v The Critic Ltd & others 1919 WLD 26 at 30–31 where the Court held:

‘I think I have jurisdiction to make an order restraining the publication of a specific statement that is defamatory, but in the present case I am asked to restrain the publication of an article in so far as it is defamatory; if the applicant’s contention is correct this will come to the same thing as restraining any continuation of the article at all, because that contention is that no continuation of the article can be written that is not defamatory… . There is the grave difficulty in the way of granting an interdict restraining the publication of an article which purports to deal with a matter of great public interest, and which I have not before me. It is impossible to say what it will contain, however grave one’s suspicions may be. The respondents specifically state that the continuation will not be libellous, nor will it slander the petitioner; nor will it affect her good name and fair fame. It can only be determined upon the publication of the article if this statement be true. I think it is impossible for me to deal with it now. In the cases I have referred to the defendants insisted on the right to publish the statements complained of. The interdict must therefore be discharged.’

[25] At the same time it has also been held that it is lawful to publish a defamatory statement which is fair comment on facts that are true and in matters of public interest, as well as in circumstances where it is reasonably necessary for and relevant to the defence of one’s character or reputation. Counsel relied on the judgement of Willis J in H v W (supra) para 40 in support of his submission that Courts should not be eager to prohibit or restrict parties in respect of future conduct, of which one can only speculate in the present. The Court held that:

‘Although judges learn to be adept at reading tealeaves, they are seldom good at gazing meaningfully into crystal balls. For this reason, I shall not go so far as “interdicting and restraining the respondent from posting any information pertaining to the applicant on Facebook or any other social media”. I have no way of knowing for certain that there will be no circumstances in the future that may justify publication about the applicant.’

Although judges probably wouldn’t have difficulty ordering a person not to do something that is clearly and unjustifiably wrongful in the future (that is largely what an interdict is for), the challenge M faced with this part of his application is that a future defamatory statement could well be justifiable and not wrongful. As I pointed out in my post, Judge Willis considered a couple of justifications in H v W –

After exploring Twitter briefly, Judge Willis turned to established South African case law, including authority for the proposition expressed by Roos that a privacy infringement can be justified in much the same way as defamation, as well as a more recent Supreme Court of Appeal judgment, the 2004 Mthembi-Mahanyele v Mail & Guardian case, which, according to Judge Willis –

affirmed the principle that the test for determining whether the words in respect of which there is a complaint have a defamatory meaning is whether a reasonable person of ordinary intelligence might reasonably understand the words concerned to convey a meaning defamatory of the litigant concerned

The Court in the Mthembi-Mahanyele case set out the test for defamation as follows, citing the 1993 judgment of the then Appellate Division in Argus Printing and Publishing Co Ltd v Esselen’s Estate –

The test for determining whether words published are defamatory is to ask whether a ‘reasonable person of ordinary intelligence might reasonably understand the words … to convey a meaning defamatory of the plaintiff… . The test is an objective one. In the absence of an innuendo, the reasonable person of ordinary intelligence is taken to understand the words alleged to be defamatory in their natural and ordinary meaning. In determining this natural and ordinary meaning the Court must take account not only of what the words expressly say, but also of what they imply’

Referencing one of the justifications for (or defences to) defamation, namely that the defamatory material be true and to the public benefit or in the public interest, Judge Willis drew an important distinction that is worth bearing in mind –

A distinction must always be kept between what ‘is interesting to the public’ as opposed to ‘what it is in the public interest to make known’. The courts do not pander to prurience.

The Court moved on to explore another justification, fair comment. In order to qualify as “fair comment” –

the comment “must be based on facts expressly stated or clearly indicated and admitted or proved to be true”

The person relying on this justification must prove that the comment is, indeed, fair comment, and “malice or improper motive” will defeat the justification or defence regardless of whether the comment is demonstrably factual. In this particular case, the Court found that W had acted maliciously and she was unable to prevail with this defence.

Because defamation can be justified in appropriate circumstances and because judges can’t predict when defamatory statements will be justifiable in a particular context, proactively blocking defamatory Facebook posts is inherently problematic. Judge Chetty summarised the point:

As set out earlier this argument must fail because it is clear that not every defamatory statement made by the respondent about the applicant would be actionable.

Reasonably practicable compliance with POPI is not enough

When considering how much you should do to comply with legislation like the Protection of Personal Information Act, you have three choices:

  1. Do as little as possible and see what you can get away with;
  2. Calculate the degree of “reasonably practicable” compliance required and stick with that;
  3. Adopt a more holistic approach to compliance.

Of the three options, the first is clearly a recipe for disaster. The only questions are when disaster will strike and how devastating it will be.

The second option is a popular one. To begin with, it seems practical: you work out what the law requires of you, meet that standard, and avoid a potentially significant investment in a compliance program that has no corresponding, quantifiable benefit. Makes sense, right? In a way, yes, but what it doesn’t take into account is that your primary compliance risk is increasingly not regulators (at least not in South Africa, where regulators often lack the capacity to respond very quickly), but rather the people who are directly affected by your decisions.

In other words, complying with laws like the Consumer Protection Act and Protection of Personal Information Act is not a quantitative exercise where you empirically (or as close to empirically as a legal compliance assessment can be) calculate your desired degree of compliance and work to that standard. Instead compliance is qualitative.

John Giles published a terrific post on the Michalsons blog titled “Only do what is reasonably practicable to comply with POPI” in which he explains POPI’s baseline compliance standard, which is based on reasonableness, and how this translates into what is likely an effective quantitative approach to compliance. It is worth saving the article because it is a handy reference for when you need to understand what the law means by “reasonably practicable”.

I don’t believe that this is enough, though. If anything, the question of what is reasonably practicable should only be part of your assessment of what you should do. The next, and arguably more important, question should be “What should we do not only to comply with the law but also to earn our customers’ trust?”. No, I’m not suggesting you drink the “rainbows and unicorns” energy drink and spend real money complying with some nebulous standard because your customers will like you more. Well, not entirely. What I am suggesting is that there is another dimension to compliance with legislation that affects people in very personal ways.

When you look at recent privacy controversies involving services like Facebook, Google and SnapChat, the theme that emerges is not that these companies necessarily concealed from users how they handled their personal information. Their privacy policies describe what they do with users’ personal information in varying degrees of detail. What really upsets users is that they weren’t expecting these companies to do the things they did, because users tend to develop a set of expectations about their providers which is typically not informed by privacy policies (because few people read them). These expectations are informed by what these companies tell them in marketing campaigns, what other users and the media tell them, what their friends share with them and their experiences with the services themselves.

When a provider steps outside its users’ collective expectations, mobs form and there is chaos in the metaphorical streets. The fact that these companies stuck to their published privacy policies and terms and conditions is largely irrelevant because users are not wholly rational and analytical. They don’t go back to the legal documents, read them quietly and return to their daily lives when they realise that they misread or misunderstood the legal terms and conditions. No, they are outraged because the companies violated the trust users placed in them based on those expectations.

You may not have the same number of customers as Facebook, Google or SnapChat and your business may be different but if you are considering Protection of Personal Information Act or Consumer Protection Act compliance, you are dealing with the same people: consumers who have expectations and perceptions which you influence but certainly don’t control. If you violate the trust they place in you, the response will be swift and the consequences from a reputational perspective could be severe.


When you develop your compliance program, assess what is reasonably practicable and set that as your commercial baseline. Then consider how transparent you can be with your customers about what you intend doing with their personal information.

I remember reading a discussion about partners cheating on each other, and at one point the writer said that cheating isn’t just about the act but also the thoughts that precede it. If you have thoughts about another person which you don’t want to share with your partner, that is probably a good indication you are contemplating something you shouldn’t be doing. Apply that to your compliance program and ask yourself whether you are comfortable disclosing to your customers what you intend doing with their personal information. If you are, be transparent about it in your privacy statement/policy and in your communications with your customers.

If you don’t feel comfortable being transparent about how you intend using your customers’ personal information and, instead, intend hiding behind technical legal compliance with the law to justify your data use, you may be setting yourself up for a bitter divorce and a costly battle with your customers. By the time the regulators arrive to assess your compliance, the damage will already have been done and the reasonably practicable thing to do will be to pick up the pieces of your reputation (and possibly your business) and start earning your customers’ trust again.

When it comes to data protection, transparency and trust are essential


When it comes to privacy, two key success factors are transparency and the trust it engenders. Responsible data processing is how you move from transparency to trust.

I wrote an article about this, titled “Trust is more important than sales”, which I published on LinkedIn (it was also published on MarkLives). You may find it interesting.

Community feedback: be careful what you wish for

Occupy Wall Street S15 arrest: the first anniversary of Occupy Wall Street was marked with a gathering at Washington Square Park (Occupy Town Square) and a march down Broadway to Zuccotti Park, starting at 6pm on September 15th.

A recent New York Police Department attempt to engage with New Yorkers serves as a reminder that crowdsourcing positive feedback doesn’t always work quite as well as you may hope, if it works at all. As Ars Technica reported:

The Twitterverse was abuzz Tuesday evening after the New York City Police Department made what it thought was a harmless request to its followers: post pictures that include NYPD officers and use the #MyNYPD hashtag.

Much to the NYPD’s surprise and chagrin, the simple tweet brought on a torrent of criticism from the Internet. The result was national coverage of hundreds of photos depicting apparent police brutality by NYPD officers, which individuals diligently tweeted with the hashtag #myNYPD.

The Ars article touches on a number of other, similar attempts to elicit positive feedback from communities, and the clear trend is that the community will give you its honest assessment of what you do and what you represent; it won’t necessarily give you the feedback you want.

This isn’t necessarily a reason not to engage with your community but it does require courage. If you want honest feedback, community feedback is a terrific opportunity to get it. If, on the other hand, you don’t want to venture outside a positive reinforcement bubble, perhaps start with a different sort of campaign.

Bombs under wheelchairs, model airplanes and other stupid tweets

The last couple weeks saw two spectacular lapses in judgment in corporate Twitter accounts. The first was the pornographic US Airways tweet in response to a passenger’s complaints about a delayed flight and the second was an FNB employee’s flippant tweet about an ad personality’s activities in Afghanistan.

Each incident has unfolded a little differently. Both are stark reminders of the very serious legal consequences of misguided tweets.

In the case of the US Airways tweet, it appears that the tweet was a mistake and that the employee concerned will not be fired. Here is an explanation of the incident and some commentary from Sarah and Amber on a recent Social Hour video:

On the other hand, FNB has reportedly launched disciplinary proceedings to deal with its employee’s tweet. According to TechCentral:

Disciplinary processes were under way following an offensive tweet sent from a First National Bank Twitter account, the bank said on Wednesday.

“We can confirm that disciplinary actions are currently under way as we are following the required industrial relations processes,” FNB’s acting head of digital marketing and media, Suzanne Myburgh, said.

In both cases, the companies concerned removed the offending tweets as soon as they discovered them and apologised. Both incidents attracted a tremendous amount of attention, and both brands were praised for apologising and being transparent about their investigations. By engaging with their followers and keeping their customers updated on those investigations, both companies mitigated the reputational harm they faced.

It is worth bearing in mind that managing corporate social media profiles at scale is not a simple exercise, a point Cerebra’s Mike Stopforth made in his Twitter post-mortem of the FNB tweet controversy. He went further and characterised the tweet as a single, understandable error in the context of a very active Twitter profile.

I don’t think I would characterise the tweet as an “understandable error”. Twitter profiles as prolific as FNB’s @RBJacobs require careful attention to the kinds of tweets that may be published and to the extent to which the teams managing these profiles can inject their own personalities into the corporate personality, or representation of the brand, online.

As I pointed out in my blog post titled “Gender activism, trolls and being fired for tweeting”, employees need to understand there are serious legal consequences for their bad decisions –

From a Legal Perspective

The legal issues here are perhaps not as exciting as the raging debate and threats but they are important nonetheless. One of the central themes in the blog posts by both companies, Playhaven and SendGrid, is that employees who fail to fulfil their obligations towards their employers can be dismissed. In both Richards’ and Playhaven’s ex-employee’s cases, both individuals brought their employers into disrepute through their actions and, in this respect, exposed themselves to disciplinary action.

Employees owe their employers a number of duties and they can be disciplined if they fail to honour their obligations towards their employers. Employees’ duties include the duties to –

  • further the employer’s business interests;
  • be respectful and obedient; and
  • not to bring the employer into disrepute.

This last duty has received considerable attention in recent complaints brought to the Commission for Conciliation, Mediation and Arbitration, including the case of Sedick & another and Krisray (Pty) Ltd (2011) 32 ILJ 752 (CCMA), where the commissioner upheld the employees’ dismissals and commented as follows:

Taking into account all the circumstances – what was written; where the comments were posted; to whom they were directed, to whom they were available and last but by no means least, by whom they were said – I find that the comments served to bring the management into disrepute with persons both within and outside the employment and that the potential for damage to that reputation amongst customers, suppliers and competitors was real.

and

This case emphasizes the extent to which employees may, and may not, rely on the protection of statute in respect of their postings on the Internet. The Internet is a public domain and its content is, for the most part, open to anyone who has the time and inclination to search it out. If employees wish their opinions to remain private, they should refrain from posting them on the Internet.

FNB clearly seems to have a process in place to identify, respond to and address incidents such as this tweet. It presumably has a sound policy framework that it will rely on when dealing with this incident. This is where a social engagement policy (what used to be a “social media policy”, and which has evolved since then) is really important.

Although much of the focus of a social engagement policy has traditionally been on behaviours which must align with the brand, the policy also serves an important disciplinary function by clearly communicating a standard which employees using social communication tools must meet. This, in turn, ties into one of the important requirements of a sound disciplinary procedure: demonstrating that a clear standard was effectively communicated to employees who were aware of the standard and failed to meet it.

We may learn what happens to the FNB employee who published that ill-advised tweet. What is certain, though, is that this won’t be the last incident like this. We will see more incidents at other companies and the sooner companies develop effective processes to address these incidents, the better.

Digital marketing law interview on @BallzRadio

Paul was interviewed about aspects of digital marketing law on Ballz Radio today. The interview was part of the business segment and Paul chatted to the team about some consumer protection issues, transparency, terms and conditions and privacy concerns.

Fortunately, Ballz Radio publishes the audio and video of the interviews. You can listen to the audio using the SoundCloud player below:

Nokia’s errant F-bomb tweet and a reputational smear


I’ve always admired how Nokia engages with its customers and fans (often the same) using social media. I love their YouTube videos and I was a passionate evangelist for their products for a couple years before I eventually put down my Nokia N97, flirted briefly with Android (my experience with the HTC Desire was less than “wow”) and switched to an iPhone. Nokia’s staff have always struck me as deeply passionate about their products and the work they do so the tweet that appeared on the Nokia New Zealand Twitter stream yesterday must have been a shock to many.

2013-11-26 Nokia NZ F-you tweet

The tweet was, predictably, taken down and the following apology was published soon afterwards:

As The Next Web pointed out in its post about the tweet, there are a number of explanations for the offending tweet but that may not have mattered much at the time:

As you’d expect, the post has since been deleted. We can think of a few explanations for it, such as hacking, a disgruntled employee, an account mixup, or a practical joke gone awry, but whatever it is, Nokia isn’t going to be winning the Internets today. We’ve reached out to the company to see if we can find out what happened.

Although the tweet was almost certainly not sanctioned by Nokia’s marketing team, it highlights the importance of carefully managing access to a brand’s social profiles and of establishing clear guidelines for the people who do have access to those profiles, explaining what acceptable behaviour and content are, because whatever is published using those platforms will be perceived, to some degree, as representative of the brand. Aside from the obvious reputational smear, consider the economic impact on a brand that is perceived to have taken a strong stand against its customers, especially at a time when it is undergoing considerable transformation. What if this drop in Nokia’s share price was a result of the tweet (I don’t see an indication that this is the case but this scenario is hypothetically possible)?

So what can brands do?

  1. For starters, manage access to social profiles using a centrally controlled dashboard of some sort. Services like Hootsuite allow brands to establish user accounts and to grant access to multiple social profiles. They also allow for a degree of moderation and, importantly, to revoke access to those profiles without needing to disclose each profile’s access credentials. Of course, access to the dashboard’s administrative settings should also be carefully managed.
  2. Brands should ensure that the passwords securing their social media profiles are strong. Don’t use simple passwords just because they are easy to remember; use long, pseudo-random strings with mixed characters (see the sketch after this list). Services like LastPass make managing these long passwords pretty easy and LastPass’ recent update allows people to share passwords.
  3. Implement clear and effectively worded social engagement policies to manage internal stakeholder (not just employees but contractors and partners too) expectations about what they can do on the brand’s behalf. These policies should go beyond simple social media policies and should extend to different forms of engagement. An effective model focuses more on behaviours than on specific technologies. Crucially, these policies should form part of a company’s internal policy framework and be effective performance management tools. Three-line social media policies written to be catchy and praiseworthy in the media are typically useless from an enforcement perspective, which is, essentially, their purpose.
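To make the second point concrete, here is a minimal sketch, in Python, of what “long, pseudo-random strings with mixed characters” looks like in practice. It uses the standard library’s secrets module; the function name, default length and character set are my own illustrative assumptions, which you would adjust to each platform’s password rules.

```python
import secrets
import string

def generate_profile_password(length: int = 24) -> str:
    """Return a long, pseudo-random password with mixed characters.

    The secrets module draws on a cryptographically strong source of
    randomness, unlike the random module.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    # Generate a fresh password for, say, a brand's Twitter profile and
    # store it in a password manager rather than a shared spreadsheet.
    print(generate_profile_password())
```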

This particular tweet is just indicative of an ongoing risk brands face. Just as social media profiles are wonderful tools for engaging meaningfully with various stakeholders, they can also be used to wreak havoc on a brand’s reputation. This reminds me of that saying “a moment on your lips, a lifetime on your hips” (or something like that). A lifetime on the social Web can be measured in days and weeks because that is how long it can take to kill a reputation. A tweet can be the start of that.

PPC Lead Generation’s Privacy Risks

PPC lead generation is a search-based lead generation technique which leverages search terms to surface (preferably) relevant ads in search results. When you click on those ads you are often taken to landing pages where you have the option of submitting your details to a company so it can get in touch with you about its products and services. It’s a pretty smart marketing option because it begins with the premise that you are searching for what the company offers. It is also a potentially risky proposition for brands that fail to implement adequate privacy protections.


How PPC Lead Generation Works

Let’s assume you are in the market for home insurance so you search for “home insurance”:

You’ll notice a couple ads which relate to “home insurance” and which are identified as ads. These are sponsored or paid ads which are displayed in your search results based on your search terms. The companies that purchase the ads (often an agency specialising in this sort of advertising) select key words that they believe will correspond with your search terms so when you run your search, their ads are displayed as relevant search results (Google regards these ads as something which may be valuable to you so it built an ad sales model based on this process). You click on a link in one of the ads and you are taken to a landing page which can look something like this:

Notice the form on the right? That form is an opportunity for you to submit your details to the brand behind the campaign, in this case MiWay, so its sales representatives can contact you about its products and services. Once you submit your details, you become a sales lead (hence the term “lead generation”). The “PPC” bit stands for “Pay Per Click” which is a reference to the payment model the advertiser agrees to. The advertiser pays for each click on the ad. Some advertisers will pay their agencies for leads generated. It depends on the advertiser’s preferences and the agency’s business model.
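To make the pay-per-click economics a little more concrete, here is a small, hypothetical Python sketch of how an advertiser might estimate its cost per lead from an assumed cost per click and landing-page conversion rate. The figures are purely illustrative and are not drawn from any real campaign.

```python
def cost_per_lead(clicks: int, cost_per_click: float, conversion_rate: float) -> float:
    """Estimate what each lead costs under a pay-per-click model.

    The advertiser pays for every click on the ad; only a fraction of
    those clicks (conversion_rate) submit the landing-page form and
    become leads.
    """
    total_spend = clicks * cost_per_click
    leads = clicks * conversion_rate
    return total_spend / leads if leads else float("inf")

# Hypothetical campaign: 1 000 clicks at R8.50 each, with 4% of visitors
# submitting the form.
print(f"Cost per lead: R{cost_per_lead(1000, 8.50, 0.04):.2f}")  # R212.50
```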

The Privacy Considerations

This form of advertising is an interesting one because it begins with a person searching for something she is interested in: in this example, “home insurance”. When she is presented with search results relevant to her search terms and she clicks on one of them (in this case the MiWay ad), she is implicitly indicating an interest in what the relevant brand has to offer. So far she is consenting to some of her personal information being collected, although at this stage it is probably limited to data such as her IP address, general location, browser and computer information and so on.

Assuming the ad takes her to a page that is relevant to both her search term and the ad text that informed her decision to click, there aren’t any privacy concerns so far. If the ad is misleading, however, then any personal information the advertiser collects is collected without her permission, because she was expecting a different result and that expectation informed whatever consent she gave.

Once she loads the landing page, the situation changes somewhat. When she is presented with the form, the advertiser has two options:

  1. rely on the consumer’s continued implicit consent to process the personal information she submits through the form in whatever way the advertiser intends processing it, or
  2. explain what personal information the advertiser will collect through its interaction with the consumer, what it will do with that personal information and under what circumstances it will share that personal information with others.

The first option is inherently risky because the consumer assumes that the brand itself, namely MiWay, will collect the consumer’s personal information and will only use it to contact the consumer. That, at least, is the impression the landing page gives. The consumer may also assume that her personal information will not be used for cross-selling, disclosure to associated companies and will be limited to what she submits through the form. This may not be the case.

Often what happens is that the agency collects leads generated through the landing page and passes them along to its client, the company behind the brand. That company may want to use that personal information to market other products and services within its group, share it with partners and so on. There is also little, if any, indication of how long the personal information will be stored, how it will be stored and at what point it will be destroyed.

All of these answers should be communicated to consumers if they are to make informed decisions about who can process their personal information and under what circumstances, particularly under the expanded privacy compliance framework the Protection of Personal Information Act is going to introduce shortly. One of the best ways to do this at the moment is through a clear privacy policy framework which solicits that consent from consumers arriving at the landing page. These policies should clearly identify the parties handling the personal information consumers submit and what happens to it from the time it is submitted.
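As a rough illustration of what the second, more transparent option could look like in practice, here is a hypothetical Python sketch in which a lead is only recorded together with the specific purposes the consumer was actually shown and agreed to. The names, fields and purposes are my own illustrative assumptions and are not drawn from the MiWay campaign or any other real one.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, List, Optional

@dataclass
class Lead:
    """A captured lead plus a record of exactly what the consumer agreed to."""
    name: str
    phone: str
    search_term: str                                  # e.g. "home insurance"
    consented_purposes: List[str] = field(default_factory=list)
    consent_timestamp: str = ""

def capture_lead(form_data: Dict[str, str],
                 purposes_shown: List[str],
                 consent_given: bool) -> Optional[Lead]:
    """Only create a lead if the consumer explicitly agreed to the purposes
    that were actually displayed on the landing page."""
    if not consent_given:
        return None
    return Lead(
        name=form_data["name"],
        phone=form_data["phone"],
        search_term=form_data.get("search_term", ""),
        consented_purposes=list(purposes_shown),
        consent_timestamp=datetime.now(timezone.utc).isoformat(),
    )

# Hypothetical submission: the landing page disclosed two purposes and the
# consumer ticked the consent box.
lead = capture_lead(
    {"name": "A. Consumer", "phone": "082 000 0000", "search_term": "home insurance"},
    purposes_shown=[
        "Contact me about home insurance quotes",
        "Share my details with the insurer's call centre partner",
    ],
    consent_given=True,
)
print(lead)
```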

Agencies have a couple of options when it comes to implementing privacy policy frameworks, ranging from incorporating their clients’ privacy policy frameworks (assuming they are appropriate) to publishing custom policies. Whichever option they choose, it is not a very complex process; it just needs to be done with sufficient thought about the compliance requirements marketers face.

Risk management doesn’t stop at a privacy policy. It extends to data management and ensuring that personal information is processed securely and consistently with privacy policies’ requirements. Agencies should also consider whether they have sufficiently structured their contractual relationships with their clients (and vice versa) in order to manage potential liability flowing from privacy violations which could occur and which could be remarkably costly, both in terms of reputational harm and monetary cost.

The potential harm is not always foreseeable and neither is its extent. A good example of this is the recent Adobe privacy breach, which has had far-reaching implications not just for Adobe itself but for users who use a range of other services. This is just not something companies or their agencies can afford to ignore. They could be the next trending news item with a plummeting share price.