All posts by Paul E. Paray

EU-US Privacy Shield may soon be suspended


The EU-US Privacy Shield may finally be in real jeopardy.  It was previously thought that, given the high stakes, this data transfer accommodation, implemented as a replacement for the judicially invalidated Safe Harbor program, was too important an agreement to be withdrawn and that only another judicial ruling could sound its death knell.  That is no longer the case.  A vote today by the European Parliament made sure of that.

As reported by the IAPP, on July 5, 2018 the European Parliament passed a non-binding resolution, by a vote of 303 to 223 with 29 abstentions, calling on the European Commission to suspend the EU-US Privacy Shield “unless the U.S. is fully compliant” by September 1, 2018.  This will be the second September review of the EU-US Privacy Shield.

Between the GDPR requirements left out of the EU-US Privacy Shield, the Cambridge Analytica fiasco that still dogs Facebook, the US adoption of the Clarifying Lawful Overseas Use of Data Act (CLOUD Act), a statute that expressly allows access to trans-border personal data, the US withdrawal from the Iran deal despite strong pressure from the EU, and the current tariff barbs being sent across the Atlantic, the long-term health of the EU-US Privacy Shield can no longer be considered a given.  Companies that have been reliant on this data transfer accommodation should certainly consider alternatives as soon as possible.

UPDATE:  October 23, 2019

As reported in TechCrunch, the EU-US Privacy Shield withstood its most recent review given the appointment of an ombudsperson, but litigation targeting it remains pending.

UPDATE:  July 16, 2020

On July 16, 2020, the EU Court of Justice decided “Schrems II” and invalidated the EU Commission’s Decision 2016/1250 regarding the adequacy of the EU-U.S. Privacy Shield (‘the Privacy Shield Decision’).  As described in the Press Release:

[T]he limitations on the protection of personal data arising from the domestic law of the United States on the access and use by US public authorities of such data transferred from the European Union to that third country, which the Commission assessed in Decision 2016/1250, are not circumscribed in a way that satisfies requirements that are essentially equivalent to those required under EU law, by the principle of proportionality, in so far as the surveillance programmes based on those provisions are not limited to what is strictly necessary.

In rejecting the Privacy Shield Ombudsperson mechanism, the agreed-upon safeguard found in the Privacy Shield Decision, as insufficiently independent from the Intelligence Community, the Court of Justice ruled that such a mechanism “does not provide data subjects with any cause of action before a body which offers guarantees substantially equivalent to those required by EU law, such as to ensure both the independence of the Ombudsperson provided for by that mechanism and the existence of rules empowering the Ombudsperson to adopt decisions that are binding on the US intelligence services.”

New California law provides statutory damages for data incidents

With the June 28, 2018 signing of the California Consumer Privacy Act of 2018, data breach class counsel are rejoicing that they finally have a private right of action backed by statutory damages.  Even though there were previous statutory remedies for privacy violations, the new California law has gone where no other law has gone before by expressly providing a private right of action for a data breach that also allows for a minimum statutory amount.  Not surprisingly, given California was the first state to pass a breach notification law, its legislature has again led the way.

Beginning on January 1, 2020, California consumers will have, after certain data incidents involving the loss of consumer data, a private right of action that can also be brought on a class-wide basis.  Specifically, any consumer whose unencrypted or nonredacted personal information “is subject to an unauthorized access and exfiltration, theft, or disclosure as a result of the business’ violation of the duty to implement and maintain reasonable security procedures and practices appropriate to the nature of the information to protect the personal information may institute a civil action . . . to recover damages in an amount not less than one hundred dollars ($100) and not greater than seven hundred and fifty ($750) per consumer per incident or actual damages.”  Section 1798.150(a)(1).
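Because the statute sets a per-consumer, per-incident range, the potential class-wide exposure is simple arithmetic.  The following Python sketch is purely illustrative; the function name and the assumption that every affected consumer would recover the statutory amount for a single incident are mine, not the statute's:

```python
def ccpa_statutory_exposure(affected_consumers: int,
                            low: float = 100.0,
                            high: float = 750.0) -> tuple[float, float]:
    """Rough range of statutory damages under Cal. Civ. Code 1798.150(a)(1).

    Assumes every affected consumer recovers the statutory amount for a
    single incident; actual damages, if greater, would be sought instead.
    """
    return affected_consumers * low, affected_consumers * high

# Example: an incident exposing records of 100,000 California consumers
low, high = ccpa_statutory_exposure(100_000)
print(f"Statutory exposure: ${low:,.0f} to ${high:,.0f}")
# Statutory exposure: $10,000,000 to $75,000,000
```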

Despite the law being groundbreaking, there are still numerous hurdles class counsel must surmount before a class can be certified.  For example, the private right of action may not be allowed unless the compromised information is subject to unauthorized use.  Section 1798.150(a)(1).  Accordingly, incidents where unauthorized use is not at issue are not subject to the statute.

Moreover, the law can only be used against a business with “gross revenues in excess of twenty-five million dollars ($25,000,000)” or one that purchases personal data on “50,000 or more consumers, households, or devices” or one that “derives 50 percent or more of its annual revenues from selling consumers’ personal information.” Section 1798.140(c).
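These coverage thresholds amount to a three-pronged test, any one prong of which brings a business within the law.  The sketch below is a non-authoritative illustration of that reading of Section 1798.140(c); the function and field names are my own:

```python
from dataclasses import dataclass

@dataclass
class Business:
    gross_revenue_usd: float            # annual gross revenues
    records_handled: int                # consumers, households, or devices whose personal data is purchased
    pct_revenue_from_selling_pi: float  # share of annual revenue from selling personal information (0.0 to 1.0)

def ccpa_applies(b: Business) -> bool:
    """Illustrative reading of the Section 1798.140(c) thresholds: the law
    reaches a business that meets ANY one of the three prongs."""
    return (b.gross_revenue_usd > 25_000_000
            or b.records_handled >= 50_000
            or b.pct_revenue_from_selling_pi >= 0.50)

# Example: a small ad-tech firm below the revenue threshold but handling
# data on 60,000 devices would still be covered.
print(ccpa_applies(Business(5_000_000, 60_000, 0.10)))  # True
```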

Curiously, the law allows a business to “cure” its security violation, and thereby avoid suit, but leaves to the imagination exactly how that curing process would play out.  Section 1798.150(b)(1).

And finally, this private right of action can be withdrawn if the California Attorney General files its own suit after being provided notice of a consumer’s lawsuit.  Section 1798.150(b)(3).  The AG’s office has 30 days to decide whether to file suit after being provided with the consumer’s lawsuit notice.

Notwithstanding the last-minute changes made to this last-minute statute, it still provides California consumers with the country’s most expansive statutory privacy rights, rights that will be immediately deployed by class counsel after 2020.  Most analysis of this new law, however, has focused on comparing it to the EU’s recently implemented GDPR privacy regime, which impacts many US-based companies.  In addition to the privacy requirements, companies processing significant amounts of consumer personal data should also take the class action risk very seriously; if they do not already purchase insurance for that risk, they should at least evaluate transferring some of this liability by way of the privacy and data security insurance that has long been available to most any company.

UPDATE:  September 28, 2018

SB 1121 was signed into law largely to “technically correct” errors in the statute but nevertheless made two significant changes to Section 1798.150: it removed the prior requirement that consumers notify the Attorney General before bringing any action for a data breach, and it removed the provision allowing the Attorney General to bar consumer plaintiffs from bringing suit.  These two significant changes will certainly make for a very interesting class action year in 2020.

UPDATE:  February 26, 2019

On February 22, 2019, an amendment to the law was proposed that would do away with the cure provision, expand the statutory damages provision to any violation of the law, and limit the role of the Attorney General in policing violations.  If passed, these changes would significantly expand the reach of the law by widening the plaintiffs’ bar’s arsenal and strengthening the law’s penalties.

Supreme Court sides with privacy advocates in Carpenter

On June 22, 2018, the United States Supreme Court ruled that obtaining cell-site location information without a probable cause warrant violates the Fourth Amendment even though there were no actual property rights in the data.  Writing for the majority, Chief Justice Roberts sided with the liberal wing of the Court and against those Justices looking to affirm the robbery conviction in question.  Justice Gorsuch’s Dissent correctly points out, however, that the “most promising line of argument” available to Carpenter was not well developed by Carpenter, namely that he had positive property rights in his geo-location data.  Gorsuch, J., Dissent at 21.  Instead, the Majority ruled there was a reasonable expectation of privacy in the data in question despite the lack of any available property rights.

This decision could have been a clarion call regarding privacy rights well beyond the Fourth Amendment context.  Instead of confirming the data’s true value as a positive property right, the Majority determined that a third party’s access and consent to use the data in question did not negate the data’s ability to give rise to a reasonable expectation of privacy, in effect carefully distinguishing the so-called third-party doctrine previously applied by the Court.  In so doing, the Majority carefully parsed precedent on this issue, relegating it to second-tier status rather than rejecting it outright as apparently sought by Justice Gorsuch in his Dissent.  Gorsuch, J., Dissent at 5 – 8.

Recognizing the contortions taken by the Majority, Justice Alito fairly screamed for Congressional intervention given this perceived affront to existing Fourth Amendment precedent.  Alito, J., Dissent at 27 (“Legislation is much preferable to the development of an entirely new body of Fourth Amendment caselaw for many reasons, including the enormous complexity of the subject, the need to respond to rapidly changing technology, and the Fourth Amendment’s limited scope.”).

Previously, on April 17, 2018, the Court dismissed another matter involving application of the Stored Communications Act and “rapidly changing technology” because Congressional intervention had rendered moot the question before the Court.  Given that the Carpenter Majority’s Constitutional analysis may leave little room for future Congressional intervention, subsequent courts will have to grapple with deciphering the potential import of this decision, a decision with a remarkable four separately written dissents.

For example, what exactly constitutes “a comprehensive chronicle” of a defendant’s past movements, and how will heretofore unknown non-property “privacy rights” give rise to a reasonable expectation of privacy?  Justice Gorsuch was correctly very much concerned about the uncertainty springing from this decision.  See Gorsuch, J., Dissent at 12 (“In the end, our lower court colleagues are left with two amorphous balancing tests, a series of weighty and incommensurable principles to consider in them, and a few illustrative examples that seem little more than the product of judicial intuition.”).

Although the Court in Carpenter references location-based tracking as some sort of newfound innovation, it has been a percolating privacy issue for more than seven years.  To that end, even though privacy advocates and criminal defense lawyers may very well bask in this decision for years to come, because the Court did not clearly rule on how certain privacy rights can give rise to a positive property right under the Fourth Amendment, privacy advocates may have actually lost a more impactful battle they could have won.

More specifically, if the ACLU, on behalf of Mr. Carpenter, had fully argued the more appropriate positive law approach discussed by Justice Gorsuch, we might all be reading a 6-3 decision finding that privacy rights in data can indeed give rise to property rights under the Fourth Amendment.  Gorsuch, J., Dissent at 21 (“Before the district court and court of appeals, Mr. Carpenter pursued only a Katz “reasonable expectations” argument. He did not invoke the law of property or any analogies to the common law, either there or in his petition for certiorari. Even in his merits brief before this Court, Mr. Carpenter’s discussion of his positive law rights in cell-site data was cursory. He offered no analysis, for example, of what rights state law might provide him in addition to those supplied by §222. In these circumstances, I cannot help but conclude — reluctantly — that Mr. Carpenter forfeited perhaps his most promising line of argument.”).  For now, privacy advocates will have to be satisfied with the actual ruling before them, one that leaves the door quite open to future expansions of the “reasonable expectation of privacy.”

OCR wins $4.3 million HIPAA victory against MD Anderson

On June 18, 2018, the Office for Civil Rights (OCR) posted a press release announcing its summary judgment victory against the University of Texas MD Anderson Cancer Center (MD Anderson), a ruling that will require MD Anderson to pay $4,348,000 in civil money penalties to OCR.  According to the press release, this is only the second HIPAA summary judgment victory in OCR’s history, and the $4.3 million is the fourth largest amount ever awarded to OCR for HIPAA violations.

The June 1, 2018 Administrative Law Judge’s decision ultimately hinged on a stolen unencrypted laptop and several lost unencrypted USB thumb drives containing “identifying information such as patient names, addresses, and Social Security numbers; and clinical information such as diagnoses, assessments, prognoses, and treatment regimes” of a total of 33,500 individuals.  Decision at 2.

The hefty fine was based on the fact that MD Anderson had known since 2006 that encryption was an essential risk management tool yet did not get around to fully deploying encrypted devices until after the losses in question.  According to the ALJ, MD Anderson had before then made only “half-hearted and incomplete efforts at encryption”.  Decision at 5.

According to the ALJ:

The question is whether Respondent took the necessary steps to address the risk that it had identified – the potential for data loss due to the storage of ePHI on unencrypted devices. As I have explained, the failure to address that risk is the sum and substance of Respondent’s noncompliance. Had it done so, then unauthorized acts by Respondent’s employees might be relevant to the issue of compliance. But, failure by Respondent to take the security measures that it had identified as necessary renders irrelevant the issue of whether employees were playing by the rules, because that failure created a risk whether or not Respondent’s employees did so.

Decision at 14 (emphasis in original).

This latest OCR action may very well be appealed given the jurisdictional arguments made by MD Anderson.  No matter the final appellate result, however, the ruling should slam the lid on any covered entity ever again questioning whether encryption is worth the cost of deployment.  Whether the lesson comes from a state enforcement action or from OCR settlements based on vendor negligence, laptops stolen from a car, or a USB thumb drive improperly taken from an IT department, when it comes to encryption an ounce of prevention is definitely worth at least a pound of cure.
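For covered entities still weighing that cost, encrypting files at rest is largely a solved problem in most modern languages.  The sketch below is purely illustrative, not HIPAA guidance; it assumes the widely used Python cryptography package, and the file names are hypothetical:

```python
# Illustrative only: symmetric encryption of a file at rest using Fernet
# (authenticated AES-based encryption) from the "cryptography" package.
from cryptography.fernet import Fernet

# In practice the key would live in a managed key store, not beside the data.
key = Fernet.generate_key()
fernet = Fernet(key)

with open("patient_export.csv", "rb") as f:       # hypothetical file name
    ciphertext = fernet.encrypt(f.read())

with open("patient_export.csv.enc", "wb") as f:
    f.write(ciphertext)

# Decryption requires the key, so a lost laptop or thumb drive holding only
# the .enc file does not expose the underlying ePHI.
plaintext = fernet.decrypt(ciphertext)
```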

Facebook and Google face GDPR complaints on day one

Privacy activist Max Schrems is at it again.  Early in the morning on May 25, 2018, Mr. Schrems’ group, NOYB.eu (none of your business), filed complaints in four EU member countries claiming that the purported GDPR consents now obtained by Facebook and Google are impermissible “forced consents” given that they provide nothing more than a take-it-or-leave-it proposition for users.  Facebook had previously launched a campaign claiming that it was fully on board with GDPR despite the risks entailed in these “pop up consents”.

Max Schrems should not be underestimated: he single-handedly forced the replacement of the former EU Safe Harbor regime.  The Safe Harbor regime previously governed data transfers between the US and EU but was invalidated on October 6, 2015 in a case brought by Mr. Schrems before the EU Court of Justice.

Mr. Schrems’ most recent complaints go to the heart of the current online advertising duopoly.  Facebook and Google should take them seriously given Schrems’ prior successes and the fact that he may very well be correct in his assessment of GDPR, a privacy regime that is purposefully ambiguous in the area of consent.

Consensus 2018 blockchain event exceeds expectations

Having attended the largest early adopter tech conferences of the past thirty years, from Internet World, VR World, COMDEX, CES, and RSA to the Game Developers Conference, it is easy to say that Coindesk’s recent Consensus 2018 Conference, the foundation for NYC’s “Blockchain Week”, was one of the largest gatherings of early technology adopters and backers ever packed into a single location.  Almost beside the point, Consensus 2018 was also easily the largest blockchain event to date.

Despite exceeding pretty much all expectations, the event was not without some controversy.  Noticeably absent was Vitalik Buterin, as well as any Ethereum presence other than a scheduled announcement and a booth for the Enterprise Ethereum Alliance.  The visionary Buterin boycotted the event given disagreements with the sponsor and a purported grievance with the $2,999 price tag, despite the fact that Mr. Buterin himself could have bought tickets for all 8,400+ attendees if he wanted.  Buterin’s thought leadership and insights were certainly missed, so hopefully next year there will be some sort of peace accord that brings him back into the fold.

According to the emcee for the event – a Brit anxiously pacing up and down with the obligatory iPad seemingly issued to all tech conference emcees, half of the attendees hailed from outside the United States.  In fact, meals and private meetings were enjoyed with folks visiting from South Korea, Australia, Finland, Switzerland, Portugal, Brazil, Berlin, Hong Kong, Vancouver, and Toronto – and that was only on the first of two attendance days.  Unlike what was shown by the early days of the web ecosystem, this gathering more than anything concretely demonstrates that any decentralized ledger future will be shaped by those outside the United States as much as by persons located within its borders.

The caliber of the audience, even more so than the speakers, also demonstrates that the financial and professional institutions that missed out on the web ecosystem’s early bricklaying are avoiding past mistakes.  Sensing just how disruptive things may soon get, they were out in full force, with Deloitte leading the Big Four charge and the purported naysayer JP Morgan having a sophisticated presence from New York and London.  Notwithstanding the fact that the exhibit hall was stacked with ICO and ICO-wannabe companies that will likely go away in a few years, foundational companies were front and center promoting the tools and business models needed before blockchain can be digested by the masses in any meaningful way.

While companies wait to “cross the chasm”, investors are taking sides by investing in token economies and novel ramp up technologies.   And, after the speculative sheen has faded, the lasting result will be efficiencies in commerce one could only have dreamt about a few years ago.    Simply put, the “trust protocol” that will eventually be layered on top of our current digital ecosystem will create new opportunities for pretty much any company willing to listen and adapt.

Supreme Court takes Google cy pres fund case

On April 30, 2018, the United States Supreme Court granted certiorari to determine whether a settlement in a privacy class action against Google was “fair, reasonable, and adequate” when the roughly $5 million in settlement funds went only to cy pres recipients rather than to actual class members.  Specifically, the Court is to decide:

Whether, or in what circumstances, a cy pres award of class action proceeds that provides no direct relief to class members supports class certification and comports with the requirement that a settlement binding class members must be “fair, reasonable, and adequate.”

As previously recognized, the use of cy pres settlements has been a troublesome trend in privacy class actions given that it allows plaintiffs’ counsel to quickly file and resolve class actions before actual damages can be made readily apparent.  Indeed, attorneys general have objected to cy pres settlements given the lack of redress available to victims.  Given Chief Justice Roberts’ prior pronouncement on the topic, cy pres funding, which previously only took place in settlements after plaintiffs were actually compensated, may very well no longer be an acceptable means of quickly ending a privacy class action.

Facebook doubles down on GDPR despite the risks

On April 17, 2018, Facebook’s Chief Privacy Officer – Erin Egan, proclaimed:  “[t]oday we’re introducing new privacy experiences for everyone on Facebook as part of the EU’s General Data Protection Regulation (GDPR), including updates to our terms and data policy.”  According to Ms. Egan, “people in the EU will see specific details relevant only to people who live there” yet “there is nothing different about the controls and protections we offer around the world.”  In her blog post, Ms. Egan also reaffirmed something said numerous times by Mark Zuckerberg during recent Congressional Hearings, namely “we continue to commit that we do not sell information about you to advertisers or other partners.”  Tellingly, the phrase “information about you” was never elaborated upon by Ms. Egan.

As is often the case, the devil is in the details.  First, the fact that the “controls and protections” found on a Facebook account may be similar around the globe, as was the case before Ms. Egan’s blog post, does not mean the privacy laws protecting Facebook users have remained the same.  Quite the contrary is true given that the choice of law provision applicable to Facebook’s users was just amended from Facebook’s low-tax home domicile of Ireland to the non-GDPR land of California, expressly leaving about 1.5 billion users potentially outside the purview of the GDPR.  When asked by Ars Technica why the choice of law provision was changed, Facebook purportedly said “the change had been made in the name of the companies’ business interests. The company declined to elaborate further.”

Second, neither Facebook’s new Terms of Service nor its new Data Policy, both last revised on April 19, 2018, defines the word “you” or “your”.  Moreover, the revised Data Policy expressly gives Facebook broad latitude in its use of undefined user “information”:

We use the information we have (including your activity off our Products, such as the websites you visit and ads you see) to help advertisers and other partners measure the effectiveness and distribution of their ads and services, and understand the types of people who use their services and how people interact with their websites, apps, and services.

Indeed, apparently armed with this undefined user “information”, Facebook recently launched a program that analyzes user data sufficiently to purportedly predict behavior for advertisers.

If the undefined “you” in Facebook’s agreements differs from the composite “you” created by Facebook that is pseudonymized, repurposed and then sold to advertisers, one could never tell from any of Facebook’s agreements.  Interestingly, Recital 78 and Article 25 of the GDPR expressly consider “pseudonymising personal data” a best practice for companies developing Privacy by Design compliance initiatives.  Under the GDPR, pseudonymized data can even be processed for purposes different from those for which the data was originally collected.  The only problem with the GDPR’s exaltation of pseudonymization is that companies can now often rediscover the sovereign identity “you” when provided information concerning the composite “you” that is pseudonymized by Facebook.
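For readers unfamiliar with the mechanics, pseudonymization typically means replacing direct identifiers with tokens that can only be linked back to a person via a separately held key.  The sketch below is a minimal illustration using a keyed hash (HMAC); it is my own example of the general technique, not a description of Facebook’s systems or of any method prescribed by the GDPR:

```python
import hmac
import hashlib

# Secret key held separately from the pseudonymized data set; without it,
# the tokens cannot easily be linked back to the original identifiers.
SECRET_KEY = b"replace-with-a-randomly-generated-key"  # hypothetical

def pseudonymize(identifier: str) -> str:
    """Return a stable, keyed token for a direct identifier (e.g., an email).

    The same input always maps to the same token, so records can still be
    joined and analyzed, but the token alone does not reveal the identity.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "ads_clicked": 14}
pseudonymized_record = {"user_token": pseudonymize(record["email"]),
                        "ads_clicked": record["ads_clicked"]}
print(pseudonymized_record)
```

The re-identification risk described above arises when a recipient of such tokens also holds, or can infer, the mapping back to real identities.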

It would have been comforting if Facebook’s auditors were on top of this longtime nudge-wink between Facebook and the advertising industry.  Unfortunately, they are not.  In an April 18, 2018 paper titled “Understanding and Improving Privacy ‘Audits’ under FTC Orders”, author Megan Gray points out that Facebook’s FTC audit assessments are circular: “Management asserts it has a reasonable privacy program. Based on management’s assertion, we certify that the company has a reasonable privacy program.”

In effect, this audit process ultimately renders Facebook’s assessments “almost indecipherable” and “requiring certified-auditor knowledge.”  As correctly summed up by Gizmodo, “[t]he current process essentially allows companies under consent orders to self-regulate.”  Accordingly, it is no surprise that PwC’s auditing cleared Facebook’s privacy practices “in an assessment completed last year of the period in which data analytics consultancy Cambridge Analytica gained access to the personal data of millions of Facebook users”.

Notwithstanding its aptitude for parsing words, Facebook will soon be in uncharted and unpredictable privacy waters where disclaimers and popup consent forms may not easily tread.  Even though no one can say with certainty how things will play out after the GDPR’s formal launch on May 25, 2018, one thing is sure – Facebook has very publicly committed to GDPR compliance.  And, to the extent there are failings in such compliance, there are more than a handful of class counsel and global governmental agencies ready to pounce on Facebook and its partners.

Did Facebook close the door to self-regulation?

On April 10, 2018, Facebook’s CEO began his two-day testimony before Senate and House Congressional committees in a quintessential US setting but may have brought with him a groundbreaking privacy regime from across the Atlantic in the process.  Mr. Zuckerberg testified:  “The internet is growing in importance around the world in people’s lives and I think that it is inevitable that there will need to be some regulation.”  The Net Neutrality regulations Zuckerberg  may have had in mind may not be what is ultimately in store for Facebook.

GDPR

By way of background, the EU’s General Data Protection Regulation (Regulation (EU) 2016/679), which recognizes that the “protection of natural persons in relation to the processing of personal data is a fundamental right”, establishes an EU-wide privacy regime, supplemented by country-specific implementing laws, effective May 25, 2018.  Despite its current Brexit status, the UK has also voluntarily implemented the GDPR.

The GDPR harmonizes to a great degree the privacy laws of every EU country and broadly controls the use of personal data in connection with either the offering of any goods or services to persons in the EU or the monitoring of EU-based persons.  Companies must ensure that they only collect and process the minimum required personal data for the express use given under an unequivocal affirmative consent.  The new consent requirements found in the GDPR bring this privacy regime to compliance levels never before seen.

Companies that collect and use personal data must now clearly explain to data subjects the exact uses made of such personal data, with evidence maintained that demonstrates the related processes are compliant and followed in each individual case. Persons must also be afforded the opportunity to easily withdraw their consent to this use of personal data at any time and without suffering any detriment as a result of their request.  Moreover, persons protected under the GDPR have a right to be forgotten, i.e., to have all their personal data deleted, and a right to object to data profiling.
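In practice, the record-keeping obligation described above often translates into storing a per-purpose consent log that can be queried and revoked.  The following data-structure sketch is only a conceptual illustration under my own assumptions; it is not drawn from the GDPR’s text or any particular vendor’s implementation:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    subject_id: str                 # pseudonymous identifier for the data subject
    purpose: str                    # the specific, express use consented to
    granted_at: datetime            # evidence of when consent was given
    consent_text_version: str       # exact wording shown to the subject
    withdrawn_at: Optional[datetime] = None

    def withdraw(self) -> None:
        """Withdrawal must be as easy as granting and effective immediately."""
        self.withdrawn_at = datetime.now(timezone.utc)

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None

# Example: consent to email marketing, later withdrawn.
consent = ConsentRecord("subj-123", "email-marketing",
                        datetime.now(timezone.utc), "v2.1")
consent.withdraw()
print(consent.active)  # False
```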

Not unlike rights under 15 U.S.C. § 1681c of the Fair Credit Reporting Act when it comes to credit information, persons will also have the right to have their personal data amended and rectified and the right to be informed as to what personal data is currently being retained or used.  Unfortunately, getting Facebook to comply with these subject-access requests has previously been a difficult task.  Some have argued that the right to be forgotten – which is actually now more properly termed a “right to erasure”, can only work when GDPR becomes a global privacy regime having “globally connected legislation to ensure that information stored outside of the EU also underlies similar strict privacy regulation.”

A “serious breach” of GDPR requirements may result in a fine of up to €20 million or 4% of the annual worldwide revenue of the impacted company, whichever is greater. Disregarding the potential lack of enforceability of this extra-jurisdictional law, companies have been prepping for the GDPR privacy regime for years.  Indeed, given the potential downside, multi-national companies based in the US have not surprisingly spent millions of dollars on their GDPR compliance efforts.
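Because the ceiling is the greater of the two figures, the exposure scales with revenue once worldwide turnover passes roughly €500 million.  A quick illustrative calculation, my own sketch rather than legal guidance:

```python
def max_gdpr_fine(worldwide_revenue_eur: float) -> float:
    """Upper bound for a serious GDPR violation: the greater of EUR 20 million
    or 4% of annual worldwide revenue (see GDPR Article 83(5))."""
    return max(20_000_000.0, 0.04 * worldwide_revenue_eur)

# Below EUR 500M in revenue the flat EUR 20M cap governs; above it, 4% does.
print(f"{max_gdpr_fine(300_000_000):,.0f}")    # 20,000,000
print(f"{max_gdpr_fine(2_000_000_000):,.0f}")  # 80,000,000
```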

Under the GDPR, the EU is for the first time in line with the US as regards data breach notification, but with a uniform and much stricter obligation to notify regulatory authorities within 72 hours of becoming aware of a breach.  Given that Alabama has recently enacted its own data breach notification law, one that requires notification within 45 days of a breach if the breach is reasonably likely to cause “substantial harm” to the individual to whom the information relates, all fifty US states now have a data breach notification law.  Nevertheless, the current patchwork standard for breach notice in the US is far from uniform and certainly much less onerous than the blanket one set forth in the GDPR.

GDPR and Facebook

As set forth on its website, “Facebook and its affiliates, including Instagram, Oculus and WhatsApp, will all comply with the GDPR. . . Facebook may serve as a data processor.  When Facebook acts as a data processor, businesses are responsible for ensuring data they share with us complies with the GDPR.”  As a data processor that employs more than 250 persons, Facebook is obliged under the GDPR to keep detailed records of all of its processing activities.  In other words, the GDPR opens the door to accessing the vast data mining activities only hinted at by the recent Cambridge Analytica brouhaha.

On April 11, 2018, Mark Zuckerberg testified before the House Energy and Commerce Committee that GDPR “will be positive” and that requiring companies to obtain “affirmative consent” makes sense.  According to Mr. Zuckerberg, there are a few parts of GDPR that are “important and good”.  For example, users should know what data companies have and should be able to control that data.  When asked whether GDPR got anything wrong, however, he could not answer the question and simply said he would have to “think about it”.  He was asked to provide his response to the House Energy and Commerce Committee at a later date.

GDPR, Facebook and Congress

Free-market Republicans who typically shy away from regulatory intervention gave more than passing nods to potential legislative intervention as regards Facebook.  Sen. John Kennedy (R., La.) bluntly recognized that Facebook’s “user agreement sucks.”  And Senate Commerce Committee Chairman John Thune (R., S.D.) said:  “I’m not convinced that Facebook’s users have the information they need to make meaningful choices.” He also said that while Washington has “been willing to defer to tech companies’ efforts to regulate themselves. . . this may be changing.”  Mr. Kennedy was again more blunt: “There’s some impurities in the Facebook punch bowl. . . I don’t want to have to vote to regulate Facebook.  But by god, I will. That depends on you.”

Not waiting for Senators Kennedy and Thune to act, Senators Edward J. Markey (D-Mass.) and Richard Blumenthal (D-Conn.), two longtime privacy advocates, announced on April 10, 2018 their Customer Online Notification for Stopping Edge-provider Network Transgressions (CONSENT) Act, proposed legislation requiring the Federal Trade Commission (FTC) to establish specific privacy protections “for customers of online edge providers like Facebook and Google.”  Among other things, the CONSENT Act would require that these “edge providers” obtain opt-in consent from users “to use, share, or sell users’ personal information” as well as notify users about “all collection, use, and sharing of users’ personal information.”  Although on its face the proposed law is not nearly as onerous as the GDPR privacy regime, there is nothing stopping the FTC from promulgating future regulations that include not only opt-in consent and use disclosures but also GDPR requirements that would never have been on the table before Mr. Zuckerberg began his unsworn testimony before Congress.

In a prior interview with the Washington Post, Senator Markey said:  “I think that this [Facebook] privacy spill is politically the equivalent of the oil spill in the Gulf of Mexico.  Because it involves our very democracy, I think [it] is going to draw more attention of the American public to this issue.”

GDPR, Facebook, Congress and the Monetization of Consumer Data

On the heels of recent comments from Facebook’s COO suggesting Facebook might one day charge users a fee, Zuckerberg left the door open to the possibility of charging consumers for use of its social media platform.  During his April 11, 2018 House testimony, Zuckerberg again denied that Facebook sells its user data, saying: “That’s not how advertising works.”  A day earlier Zuckerberg repeated numerous times that Facebook did not sell consumer data, prodding Sen. John Cornyn (R-Texas) to exclaim:  “You clearly rent it!”  No matter how Mr. Zuckerberg perceives advertising as working or whether Facebook actually “sells” consumer data, one takeaway from these hearings is that perception can quickly morph into reality.

Not surprisingly, California is not waiting for the federal government to act and has its own mini-GDPR percolating.  The proposed California Consumer Privacy Act of 2018 ballot initiative would give consumers the right to ask businesses what personal data of theirs is collected and how it is being used.  It will be voted on in November 2018 and already faces opposition from Facebook and other California companies standing to lose significant revenue, in part because there is a private right of action under the proposed law.  Given there is no “opt-in” requirement in this ballot initiative, GDPR will remain the gold standard when it comes to protecting consumer data from unregulated monetization.

Apple’s Tim Cook claimed the higher ground during Zuckerberg’s testimony and publicly said that Apple, unlike Facebook, does not monetize its customers and would welcome legislative solutions.  Specifically, Cook said:  “The truth is, we could make a ton of money if we monetized our customer — if our customer was our product. We’ve elected not to do that.”

Apple’s perspective is either surprisingly narrow or deliberately pinched.  Obviously, the smartphones that are the backbone of Apple’s success thrive in a social media environment where Facebook does exactly what it wants, namely provide “free” services that are habitually accessed throughout the day.  Accordingly, if Facebook loses revenue due to legislative intervention, Apple will likely not be far behind.

There is hope for both platform providers and device manufacturers even if that happens.  As recognized by the Project Director at the Georgetown Center for Business and Public Policy, “If the [internet’s] grand bargain unravels, entrepreneurs will no doubt innovate new ways to make money and continue developing disruptive products and services.”

Unbridled data consumption and privacy protection can successfully coexist when immutable and transparent data is bound by a secure and continuous unequivocal affirmative consent.  In essence, user data must be treated like a protected commodity that can actually benefit its owner.  Indeed, Congresswoman Debbie Dingell (D., Mich.) ended her April 11, 2018 questioning of Zuckerberg by opining that data protection was no less important than having “clean air and clear water”.  A company that is able to keep “pure” a user’s data while feeding such data into various digital media ecosystems and compensating the data owner in the process will have found the middle ground previously consciously avoided by existing billion-dollar platforms.

Sometimes all it takes is one door to close for another one to open.

Utility tokens are not a “bad idea”

In his February 8, 2018 opinion piece, Santander’s Julio Faura suggests that “utility tokens are a bad idea” because it would be a “lie to ourselves” to suggest ICOs were not actually selling securities.  Rather, in Mr. Faura’s opinion, “we should collectively work on a framework to build a clearly defined scheme for ICOs, recognizing from the very beginning that they are securities.”  And this “ICO process should be designed in collaboration with regulators to comply with securities law.”  Mr. Faura’s opinion piece does not exist in a vacuum.  In a report dated February 5, 2018, Goldman Sachs Group Inc.’s global head of investment research suggests that investors in ICOs could lose their entire investments, which ties into Mr. Faura’s underlying premise that ICOs should be regulated “to protect investors”.

It is not clear how his proposed hybrid solution would ever get implemented given that it requires complete buy-in from capital markets and regulators; it would be a non-starter from day one.  Why would existing financial institutions and regulators scuttle existing methods of raising capital, or attempt to squeeze ICOs under traditional securities law, even if a token sale is considered a sale of securities?  Answer:  They would not.  Ripple, a company partially funded by Santander InnoVentures, offers a glimpse of how traditional financial markets will compete using blockchain technology.

Mr. Faura paints all sales of cryptocurrencies with the same brush by claiming each one of them actually offers securities subject to SEC scrutiny.  That is simply not the case.  Indeed, does Mr. Faura wonder why the SEC has not knocked on Ripple’s XRP “digital asset” door even though it trades on numerous exchanges?  Even though there was no formal ICO to launch that centralized token, it now trades on 18 platforms where “individual purchases” of the XRP coin can be made.  After Ripple raised over $93 million by September 2016, no ICO was needed.

One ICO left untouched by the SEC, gatekept by Perkins Coie, involved a utility token that raised $35 million in under a minute.  This “BAT utility token” creates a digital advertising ecosystem tied to consumer attention, which is why it is called the “Basic Attention Token”.  Such an ecosystem would certainly be an upgrade from the current digital advertising scheme wedded to the Web ecosystem of 1995.

All told, it seems that the SEC and other regulatory bodies have actually taken a very measured approach in this area, aggressively focusing on obvious fraudsters first in order to deter others while letting the technology play out a bit in the wild.  Not surprisingly, the plaintiffs’ bar has been doing a good job picking up the slack in those instances where the SEC has not yet moved.  See Davy v. Paragon Coin, Inc., et al., Case No. 18-cv-00671 (N.D. Cal. January 30, 2018) and Paige v. Bitconnect Intern. PLC, et al., Case No. 3:18-CV-58-JHM (W.D. Ky. January 29, 2018).

Recent public SEC statements seem to back this interpretation of its ICO position.  On February 6, 2018, SEC Chairman Jay Clayton testified that the potential derived from blockchain was “very significant”; his co-witness, CFTC Chairman Christopher Giancarlo, went so far as to say there was “enormous potential” that “seems extraordinary” for blockchain-based businesses.  Yet, during his testimony, Chairman Clayton said the SEC would continue to “crack down hard” on fraud and manipulation involving ICOs offering an unregistered security.  This is consistent with prior messaging given that Chairman Clayton requested on December 11, 2017 that the SEC’s Enforcement Division “vigorously” enforce and recommend action against ICOs that may be in violation of the federal securities laws.  The fact that some 2017 ICOs raising hundreds of millions of dollars were not addressed by the SEC, however, provides a clear nudge-wink that not all ICOs come under SEC regulatory control.

As with BAT, there will likely be many more utility tokens built on disruptive blockchain initiatives that escape SEC scrutiny in the future given that they are not perceived as securities.  The fact that the SEC has not yet moved on them, despite moving against Munchee, Inc. weeks after the Munchee MUN offering, signals that the SEC will temper its enforcement activities when faced with a disruptive blockchain initiative that begets true intrinsic value.  In other words, utility tokens may very well be a good idea after all.