Community feedback: be careful what you wish for

Occupy Wall Street S15 Arrest

The first anniversary of Occupy Wall Street was marked by a gathering at Washington Square Park (Occupy Town Square); a march down Broadway to Zuccotti Park started at 6pm on September 15th.

A recent New York Police Department attempt to engage with New Yorkers serves as a reminder that crowdsourcing positive feedback doesn’t always work quite as well as you may hope, if it works at all. As Ars Technica reported:

The Twitterverse was abuzz Tuesday evening after the New York City Police Department made what it thought was a harmless request to its followers: post pictures that include NYPD officers and use the #MyNYPD hashtag.

Much to the NYPD’s surprise and chagrin, the simple tweet brought on a torrent of criticism from the Internet. The result was national coverage of hundreds of photos depicting apparent police brutality by NYPD officers, which individuals diligently tweeted with the hashtag #myNYPD.

The Ars article touches on a number of other, similar attempts to elicit positive feedback from communities, and the clear trend is that a community will give you its honest assessment of what you do and what you represent; it won't necessarily give you the feedback you were hoping for.

This isn’t necessarily a reason not to engage with your community but it does require courage. If you want honest feedback, community feedback is a terrific opportunity to get it. If, on the other hand, you don’t want to venture outside a positive reinforcement bubble, perhaps start with a different sort of campaign.

Brands, accurate facial recognition and why transparency is critical

Facebook’s new artificial intelligence group recently published a research paper titled “DeepFace: Closing the Gap to Human-Level Performance in Face Verification” which describes its advances in facial recognition technology. The abstract is pretty technical so I highlighted the big takeaway that may interest you:

In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4,000 identities, where each identity has an average of over a thousand samples. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.25% on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 25%, closely approaching human-level performance.
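The four-stage pipeline the abstract describes (detect, align, represent, classify) is easier to picture in code. Below is a minimal sketch of that pipeline using the open-source face_recognition library rather than DeepFace itself, which is not publicly available; the image file names are hypothetical and the 0.6 distance threshold is that library's conventional default, not a figure from the paper.

```python
# A rough analogue of the detect => align => represent => classify pipeline,
# built on the open-source face_recognition library (not DeepFace itself).
import face_recognition

def same_person(image_path_a, image_path_b, threshold=0.6):
    """Return True if the two photos appear to show the same person."""
    image_a = face_recognition.load_image_file(image_path_a)
    image_b = face_recognition.load_image_file(image_path_b)

    # Detect: find face bounding boxes in each image.
    boxes_a = face_recognition.face_locations(image_a)
    boxes_b = face_recognition.face_locations(image_b)
    if not boxes_a or not boxes_b:
        return False  # no face found in one of the images

    # Align + represent: the library aligns on facial landmarks internally
    # and produces a 128-dimensional embedding per detected face.
    encoding_a = face_recognition.face_encodings(image_a, boxes_a)[0]
    encoding_b = face_recognition.face_encodings(image_b, boxes_b)[0]

    # Classify: a simple distance threshold stands in for the final verifier.
    distance = face_recognition.face_distance([encoding_a], encoding_b)[0]
    return distance <= threshold

print(same_person("photo_1.jpg", "photo_2.jpg"))  # hypothetical file names
```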

According to the MIT Technology Review’s article titled “Facebook Creates Software That Matches Faces Almost as Well as You Do”, human beings recognise faces correctly 97.53% of the time, which makes DeepFace just about as accurate as a human at identifying your face. What does this mean for brands? Quite a lot, although probably not right away.

One of the service features that will continue to distinguish brands and their service offerings is a brand’s ability to present its customers with a deeply personal and meaningful service. Brands have been working on ways to personalise their services for quite some time, using demographics, location, culture and, more recently (as we have increasingly seen on Facebook and Google properties), your interests. All of this information is associated with your identity. When you connect to a site or an app with your Facebook profile, for example, you share your interests, connections and other signals from your profile with that site or app, which then customises your experience, tells you which of your friends are also using it (making it more likely that you will keep using it) or does a number of other things to present a version of itself that is more relevant to you.
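To make that mechanism concrete, here is a minimal sketch of what a connected site or app might do once you have logged in with your Facebook profile: ask the Graph API for your name, likes and friends who also use the app, then use those signals to tailor what it shows you. The access token, API version, field list and the toy personalisation logic are all illustrative assumptions; what an app actually receives depends on the permissions the user has granted.

```python
# Minimal sketch of profile-driven personalisation after a Facebook login.
# The access token and field list are assumptions; the data returned depends
# on the permissions the user granted (e.g. user_likes, user_friends).
import requests

GRAPH_API = "https://graph.facebook.com/v2.12"  # version is illustrative
ACCESS_TOKEN = "USER_ACCESS_TOKEN"              # obtained via Facebook Login

def fetch_profile_signals():
    """Pull the signals a connected app can use to customise the experience."""
    response = requests.get(
        f"{GRAPH_API}/me",
        params={
            # "friends" returns only friends who also use this app.
            "fields": "name,likes.limit(25),friends",
            "access_token": ACCESS_TOKEN,
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

def personalise(profile):
    """Toy personalisation: surface interests and friends already on the app."""
    interests = [like["name"] for like in profile.get("likes", {}).get("data", [])]
    friends = [friend["name"] for friend in profile.get("friends", {}).get("data", [])]
    return {
        "greeting": f"Welcome back, {profile['name']}",
        "recommended_topics": interests[:5],
        "friends_already_here": friends,
    }

print(personalise(fetch_profile_signals()))
```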

Introducing accurate facial recognition into the mix potentially removes the need for you to tell Facebook (or a future Facebook connected site or app) who you are before your data is shared and your experience modified. All you will need to do now is show up and let a camera see you long enough to capture a reasonably clear image of your face. From there you will be identified, placed into a particular context and things will happen. As a brand, there are some interesting opportunities. Imagine your guests arrive at your event and, instead of relying on guests to manually check in, a webcam at the door connected to your Facebook Page recognises the guests as they arrive and posts an update in your stream sharing their arrival. This isn’t happening yet but it is very possible.
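The check-in scenario is speculative, but the building blocks already exist. A rough sketch of the moving parts might look like the code below: grab a frame from a door-mounted webcam, compare any face in it against embeddings of confirmed guests, and post an arrival update to the Page's feed. The guest list, reference photos, Page ID and access token are hypothetical placeholders, and the face matching uses the open-source face_recognition library as a stand-in for whatever Facebook might eventually expose.

```python
# Speculative sketch of the face-recognition check-in described above.
# The guest list, Page ID and access token are hypothetical placeholders.
import cv2                      # webcam capture
import face_recognition         # open-source face embeddings
import requests                 # Graph API call

PAGE_ID = "YOUR_PAGE_ID"
PAGE_ACCESS_TOKEN = "PAGE_ACCESS_TOKEN"

# Embeddings precomputed from photos the guests have already shared.
guest_names = ["Alice", "Bob"]
guest_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for path in ["alice.jpg", "bob.jpg"]  # hypothetical reference photos
]

def post_arrival(name):
    """Post a check-in style update to the Page feed via the Graph API."""
    requests.post(
        f"https://graph.facebook.com/{PAGE_ID}/feed",
        data={"message": f"{name} has just arrived!", "access_token": PAGE_ACCESS_TOKEN},
        timeout=10,
    ).raise_for_status()

camera = cv2.VideoCapture(0)          # door-mounted webcam
ok, frame = camera.read()
camera.release()

if ok:
    rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    for encoding in face_recognition.face_encodings(rgb_frame):
        matches = face_recognition.compare_faces(guest_encodings, encoding)
        if True in matches:
            post_arrival(guest_names[matches.index(True)])
```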

Of course whether users allow this will likely depend on Facebook’s (or the relevant service’s) data protection policy (with this sort of technology, the term “privacy policy” is totally inappropriate – privacy is a memory) and the controls Facebook will make available to users to permit the service to automatically identify and tag them more publicly than it does at the moment. The challenge is that most users don’t pay much attention to their privacy settings and don’t customise them to suit their preferences. That doesn’t prevent them from being outraged when brands use their profile data in otherwise permissible ways. This may not seem like a problem but, from a reputation perspective, it can be.

Even though this technology is not implemented particularly widely yet, accurate facial recognition tied to identities and personal information profiles is probably not far off. It is going to scare consumers, who will become aware of the myriad cameras and opportunities for them to be identified and placed in specific contexts. The remnants of their privacy (by obscurity) will be whittled down to almost nothing, and they won't expect it. As a brand, this technology offers a number of opportunities to engage with customers in a very meaningful and personal way, but catching them by surprise is almost certainly going to backfire; the backlash will be that much more intense precisely because the possible applications of this technology are so personal.

Preparing customers for implementations of these sorts of technologies and reducing the risk of significant reputational harm requires a healthy dose of courage: you need to be as transparent as necessary about how you intend to engage with your customers. As I pointed out in my talk at the recent SA Privacy Management Summit, brands have little to gain by being opaque. Transparency is a critical risk management tool; it engenders trust and keeps brands accountable and honest. That is scary for brands not accustomed to being in the spotlight, but if they want to engage more effectively with their customers and earn their loyalty, they can't do it by being evasive and catching their customers by surprise.

Widespread facial recognition will have a fairly profound impact on data protection when businesses adopt it on a larger scale. The opportunities for brands are tremendous and could literally revolutionise how a customer perceives a brand. To paraphrase a worn adage, with this great power comes great responsibility, and brands should think carefully about how to introduce these tools to their customers and obtain their buy-in. Even though facial recognition is still in fairly limited use, brands have been using various tools and techniques to leverage customers’ identities and personal data to customise their experiences of a brand’s products and services for some time now. Transparency is more likely to win customers’ trust, even though it scares many brands silly. That said –

Courage is not the absence of fear, but rather the judgement that something else is more important than fear.

— James Neil Hollingworth

Fake White House bombing tweet craters stock markets

Fake AP tweet about a White House bombing (23 April 2013)

The Associated Press Twitter profile was hacked yesterday and a fake tweet about a bombing at the White House was published. The result was dramatic: the US stock market plummeted and only recovered about 10 minutes later, when AP announced that it had been hacked; AP has since locked its Twitter profile down. According to an AP release on Yahoo! News:

The false tweet went out shortly after 1 p.m. and briefly sent the Dow Jones industrial average sharply lower. The Dow fell 143 points, from 14,697 to 14,554, after the fake Twitter posting, and then quickly recovered.

A Securities and Exchange Commission spokeswoman declined comment on the incident.

AP spokesman Paul Colford said the news cooperative is working with Twitter to investigate the issue. The AP has disabled its other Twitter accounts following the attack, Colford added.

The Syrian Electronic Army claimed responsibility for the hack. This couldn’t be corroborated.

This is a dramatic example of a growing trend of businesses seeing their reputations affected by negative sentiment online. While this particular event was manufactured using a hack, many companies are finding their market values dropping in response to genuine sentiment expressed by angry customers and other concerned stakeholders.

What makes these trends even more worrying for brands is that trading systems are increasingly automated and make use of proprietary algorithms to identify trends and respond. They are often not smart enough to detect hoaxes like the White House bombing tweet, and their responses can have very real consequences nonetheless. According to International Business Times:

“I think there was a lot of damage done on that,” Sean Murphy, a treasuries trader at Societe Generale in New York, told Reuters. “Automatically electronic trading kicks in and they don’t know the difference between a fictitious story and the truth and immediately started to buy and took us right back to the day’s highs.”

One possible explanation for the event, if not for its severity, is that a vast quantity of equity trades are now controlled by computers that take their cues from proprietary algorithmic trading programs. One problem with such trading, however, is that it can create a snowball effect, albeit one that normally self-corrects. That was what happened in the May 2010 “flash crash” that lasted less than 20 minutes, but still erased more than $800 billion of market value during that time.
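To illustrate why such systems are vulnerable, here is a deliberately naive sketch of a headline-driven trading rule. Every name in it (the keyword list, the sell callback, the scoring function) is a hypothetical stand-in, not a description of any real trading system; the point is simply that a rule like this reacts to the words in a tweet, not to whether the tweet is true.

```python
# Deliberately naive illustration of headline-driven algorithmic trading.
# All names here are hypothetical; no real trading system is this simple,
# but the failure mode is the same: the rule sees words, not truth.
PANIC_KEYWORDS = {"explosion", "explosions", "bomb", "attack", "white house"}

def panic_score(headline: str) -> int:
    """Count how many panic keywords appear in a headline."""
    text = headline.lower()
    return sum(1 for keyword in PANIC_KEYWORDS if keyword in text)

def react_to_headline(headline: str, sell_order) -> None:
    """Sell if the headline looks alarming -- whether or not it is genuine."""
    if panic_score(headline) >= 2:
        sell_order()  # hypothetical order-placement callback

# The fake AP tweet trips the rule exactly as a genuine report would.
react_to_headline(
    "Breaking: Two Explosions in the White House and Barack Obama is injured",
    sell_order=lambda: print("SELL"),
)
```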

From a brand’s perspective, online reputation management is more important than ever before and not just from a warm and fuzzy branding perspective. It is crucial from a corporate governance perspective and is something boards should be concerned about and should be addressing. The days of a few negative tweets’ impact being limited to a few angry hashtags are over. If negative sentiment triggers viral responses, a company could see its market value crater and it may not be able to recover from that. The King 3 Code may not be mandatory and the JSE’s ability to enforce compliance with it may be limited, but a company’s stakeholders can cripple a business before unprepared managers have a chance to realise what has happened.

The starting point for most businesses is their policy and communications frameworks, both internal and external. Legal frameworks are important components and should be thoughtfully designed to support a company’s broader efforts to anticipate and manage these risks.

Defamation law’s chilling effects on social media

Quirk invited me to listen to and watch Emma Sadleir speak about social media and the law last Friday. She took the Quirk team and a few guests (which included me) through South African law on defamation and how it related to social media. For the most part she dealt with fundamentals in our law and, at one point, she pointed out that, in her view, retweeting a defamatory tweet exposed the re-tweeter to a defamation claim alongside the original poster.

@emmasadleir “anyone can be sued in ‘chain of publication’”… “but there is a ‘innocence of dissemination’ defence” #UoQJozi

— justinspratt (@justinspratt) March 1, 2013

I don’t necessarily share Emma’s views, but I accept that a court will likely see retweets as endorsements and will hold re-tweeters (and equivalent users on other platforms) liable for defamation because they clicked a button and shared a defamatory update with their followers or connections.

While I can understand the argument and agree there is merit to it, and while retweeting and similar online sharing can exponentially aggravate the initial defamation, I don’t necessarily agree that it should be actionable on this scale.

If you look to recent cases, you generally see this issue arising in the context of politicians and sports personalities whose indiscretions are published online (usually Twitter) and disseminated rapidly. Embarrassed plaintiffs and applicants approach courts, indignant, and seek to silence the debates and expressions of schadenfreude. The courts, applying the law as they understand it to this new medium, grant orders which sometimes just seem to be out of touch with new realities. What concerns me about these cases is that simply applying these legal principles to this new, unprecedented landscape can, and often does, have a chilling effect on freedom of expression.

The social Web is an unparalleled platform for expression (both desirable and undesirable). It is absolutely used for undesirable purposes that include unjustifiably harming reputations, economically harming content creators by exploiting their work without their permission and harming systems around the world. At the same time, it is a powerful platform for previously disenfranchised voices which include protestors fighting oppressive regimes and consumers speaking out against irresponsible brands.

Applying conventional defamation law to these scenarios without developing a more nuanced and robust model of what should be protected free expression could have the effect of stunting what could otherwise be a radically transformative shift in our collective culture towards a more transparent and empowered society. A quote from the 1927 US Supreme Court case of Whitney v. California seems appropriate:

To courageous, self-reliant men, with confidence in the power of free and fearless reasoning applied through the processes of popular government, no danger flowing from speech can be deemed clear and present unless the incidence of the evil apprehended is so imminent that it may befall before there is opportunity for full discussion. If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.

First National Bank and its marketing consent problem


Innovative bank FNB has a consent problem. Jason Elk published a blog post over the weekend titled “FNB, what on earth are you doing to your customers?” in which he took issue with a consent mechanism FNB has been making use of for some time now. Essentially, this consent mechanism requires that customers agree to receive marketing information from the FirstRand Group in order to remain eligible for many of the benefits FNB gives its customers, and which may have attracted many of those customers in the first place.

FNB consent model (October 2012)

Jason’s concern is essentially as follows:

The nutshell version is that FNB is instructing me to say YES to receiving marketing of “other products and services”, or “forfeit any current reward programs (I’m) participating in and be excluded from programs (I) may qualify for in the future”. These include eBucks, fuel rewards and airtime rewards.

So hang on. I’m switching my bond account to FNB, adding the biggest asset I own to my existing portfolio that includes my car, credit facilities, call accounts, savings accounts, cheque accounts, cards and other accounts and services, and because I don’t want to receive marketing messages I will be excluded from eBucks and other rewards immediately and in the future? So instead of rewarding me further, I’m being punished for bringing even more business to the bank. Not the ‘do more’ bank I thought I knew.

FNB’s CEO, Michael Jordaan, responded to Jason on Twitter and essentially indicated that FNB requires the consent in order to communicate useful information about its products and services to customers, and that it had no intention of using the consent for “blanket marketing”, which Jordaan professed a dislike for.

While I understand the need for a consent in order to communicate useful information to customers, FNB’s consent model, in this case, is problematic. The Protection of Personal Information Bill (likely to become the Protection of Personal Information Act before the end of this year) defines “consent” as follows:

any voluntary, specific and informed expression of will in terms of which permission is given for the processing of personal information

The key terms here are “voluntary, specific and informed”. This means that a consent given in terms of the Protection of Personal Information Act can’t be a “dumb” consent. The person giving the consent has to clearly understand what he or she is consenting to, must be consenting to that action voluntarily (in other words, without that consent being coerced) and that consent must be fairly focused on particular activities that the person is informed about.

This is reinforced by several “Conditions for Lawful Processing of Personal Information” which are set out in Chapter 3 of the Protection of Personal Information Bill. These conditions include a processing limitation intended to moderate the extent to which personal information is processed as well as a Purpose Specification condition which requires that personal information be, among other things, collected for a very specific purpose.

Section 10, which forms part of the processing limitation condition, states that –

Personal information may only be processed if, given the purpose for which it is processed, it is adequate, relevant and not excessive.

Section 13 of the Protection of Personal Information Bill includes the following:

Personal information must be collected for a specific, explicitly defined and lawful purpose related to a function or activity of the responsible party.

In the case of FNB’s consent model, there appears to be a disconnect between FNB’s apparent intention behind the consent and what the consent wording actually allows for. As you can see from the consent wording, it is a fairly broad consent to receive information about the FirstRand Group’s products and services. The consent mechanism goes further –

… current or future participation in FirstRand rewards programs … is dependent on you having granted the Bank consent to market other products and services to you. By processing a “No” instruction you will forfeit any current reward programs you are participating in and will be excluded from programs you may qualify for in future.

If no selection is made, marketing consent will default to “No”

This consent wording is a little contradictory. The mechanism itself is legally correct in that the Bank has requested an opt-in from its customers and, in the absence of that opt-in, the Bank will assume that the customer does not wish to be marketed to. The difficulty is that the consent required for what is essentially product and services related information is couched as a consent to receive marketing information about products and services from the FirstRand Group generally. The scope of the marketing consent sought is very different from what FNB appears to actually require, as its CEO clarified on Twitter.

The FirstRand Group includes a number of other entities, aside from FNB. Consenting to receive marketing information about the FirstRand Group’s products and services may well encompass far more than specific information about the FNB products and services a customer is utilising. This could be a violation of the processing limitation condition in the Protection of Personal Information Bill. If so, it would render the consent sought too broad.

The concern Jason highlights in his blog post goes to the definition of consent in the first place. It is probably fair to say that many of FNB’s customers were attracted to the bank by its rewards programs and requiring a seemingly broad consent to receive marketing about potentially unrelated products and services in exchange for eligibility for these rewards programs may well undermine the “voluntary” requirement in the consent definition.

What this all means is that FNB’s consent mechanism may not obtain the consent required by the Protection of Personal Information Act; it is simply too blunt an instrument for what the bank appears to require. One option is for the bank to split the required consent in two: a mandatory consent to receive product and services related information pertaining to the products and services the customer is actually using, and a separate consent to receive marketing information about the FirstRand Group’s products and services generally. Because these consents may have to be accompanied by an opt-out mechanism, they should also be accompanied by appropriate waivers from the customer in the event that the customer elects not to receive product and services related information and, for example, either misses out on an opportunity or incurs costs as a result.

While the suggestion probably will not be welcome news to FNB’s marketing team, it may be a necessary adjustment to the consent model in order to bring it into line with the Protection of Personal Information Act. Of course this is dependent on these provisions being interpreted on the basis I have suggested and a more flexible interpretation may allow for this consent mechanism to remain in place going forward.

A consequence of this, though, is that FNB may be facing a reputational storm: customers accustomed to the bank’s innovative approach to customer service now find themselves facing a somewhat overbearing approach to obtaining consent for marketing purposes. The effects of that storm may be less desirable than the consequences of changing the consent mechanism.