
Addressing COVID-19 Cybersecurity Threats

When implementing COVID-19 business continuity plans, companies should take into consideration security threats from cybercriminals looking to exploit fear, uncertainty and doubt – better known as FUD.  Fear can drive a thirst for the latest information and may lead employees to seek online information in a careless fashion – leaving best practices by the wayside.

According to Reinsurance News, there has already been “a surge of coronavirus-related cyber attacks”.  Many phishing attacks “have either claimed to have an attached list of people with the virus or have even asked the victim to make a bitcoin payment for it.” Not all employees are accustomed to the risks of a company-wide work-from-home (WFH) policy, given how little their work and personal computers previously intersected.

One cybersecurity firm released information outlining these WFH risks, and another security provider offers a common-sense refresher:  “If you get an email that looks like it is from the WHO (World Health Organization) and you don’t normally get emails from the WHO, you should be cautious.” In addition to recommendations made by security consultants, there are privacy-forward recommendations that will also mitigate phishing exploits.  For example, WFH employees should be steered towards privacy browsers such as Brave and Firefox to avoid fingerprinting, and towards search engines such as DuckDuckGo for private searches.  A comprehensive listing of privacy-forward online tools is found at PrivacyTools.IO.

Criminals have already exploited the current FUD by creating very convincing COVID-19-related links.  As reported by Brian Krebs, several Russian-language cybercrime forums now sell a “digital Coronavirus infection kit” that uses the Johns Hopkins interactive map of real-time infections as part of a Java-based malware deployment scheme. The kit costs only $200 if the buyer has a Java code signing certificate and $700 if the buyer uses the seller’s certificate.

At a very basic level, WFH employees should be reminded not to click on sources of information other than clean URLs such as CDC.gov, and not to open unsolicited attachments even if they appear to come from a known associate.  Now that banks, hotels, and health providers are sending emails alerting their clients to newly-implemented COVID-19 procedures, it is especially easy to succumb to spear-phishing exploits – the hallmark of state-sponsored groups.  As recently reported, government-backed hacking groups from China, North Korea, and Russia have begun using COVID-19-based phishing lures to infect victims with malware and gain infrastructure access.  These recent attacks primarily targeted users in countries outside the US, but there should be little doubt more groups will focus on the US in the coming weeks. Until ramped-up testing demonstrates that the COVID-19 risk has passed, companies are well advised to focus some of their security diligence on these targeted attacks.
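
For security teams that want to make the “clean URLs” guidance concrete, a simple domain allowlist check is one option. Below is a minimal Python sketch under stated assumptions – the two-entry allowlist is hypothetical, and a real deployment would curate its own list and pair it with mail-gateway filtering:

```python
from urllib.parse import urlparse

# Hypothetical allowlist for illustration; an organization would curate its own.
TRUSTED_DOMAINS = {"cdc.gov", "who.int"}

def is_trusted(url: str) -> bool:
    """Return True only if the link's host is a trusted domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(is_trusted("https://www.cdc.gov/coronavirus"))           # True
print(is_trusted("https://cdc.gov.covid-update.example.com"))  # False – lookalike host
```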

This does not mean employees need to be fed yet more FUD – this time regarding network security – without some good news. Employees can be reminded that a decade ago we survived another pandemic. Specifically, between April 2009 and April 2010, the Swine Flu caused 60.8 million cases, 274,304 hospitalizations, and 12,469 deaths in the United States. Globally, the Swine Flu infected between 700 million and 1.4 billion people, resulting in 150,000 to 575,000 deaths. Moreover, the young were a vector for the Swine Flu yet are not for COVID-19. And a large band of 25- to 35-year-olds recover from COVID-19 within two days – hardly a bad cold – whereas there was no such band for the Swine Flu. On the downside, COVID-19 has a more efficient transmission mechanism than the Swine Flu, and we are better suited to develop influenza vaccines than coronavirus vaccines.

UPDATE: April 23, 2020

The CDC reports in its latest published statistics that there have been 802,583 reported cases of COVID-19 and 44,575 associated deaths. Without a doubt, this pandemic is certainly much worse than the Swine Flu pandemic as previously reported by the CDC. Moreover, the current “panic pandemic” certainly shows no indication of subsiding.

Whether the governmental measures taken actually ratcheted up the body count or caused it to diminish is left for historians and clinicians to analyze. The hard fact remains that the body count keeps going up and the U.S. economy is still on lockdown as of April 23, 2020.

UPDATE: May 1, 2020

On April 30, 2020, it was reported Tonya Ugoretz, Deputy Assistant Director of the FBI Cyber Division, stated the FBI’s Internet Crime Complaint Center (IC3) is currently receiving between 3,000 and 4,000 cybersecurity complaints daily – IC3 normally averages 1,000 daily complaints.

UPDATE: May 6, 2020

On May 5, 2020, a joint alert from the United States Department of Homeland Security Cybersecurity and Infrastructure Security Agency and the United Kingdom’s National Cyber Security Centre warned of advanced persistent threat (APT) groups targeting healthcare and essential services.

The alert warned of “ongoing activity by APT groups against organizations involved in both national and international COVID-19 responses.”  This May 5, 2020 alert follows an April 8, 2020 Alert that warned in broader terms of malicious cyber actors exploiting COVID-19.

APT attacks are generally conducted by nation-state actors, given the level of resources and money needed to launch them.  Moreover, they generally take between eight and nine months to plan and coordinate before launching.  It is particularly disheartening that these recent attacks include those launched by the state-backed Chinese hackers known as APT41.  As one cybersecurity firm points out in a recently-released white paper:  “APT41’s involvement is impossible to deny.”

Distilled to its essence, the uncovered APT41 attacks mean that before COVID-19 was even on US shores, Chinese state-actors were planning attacks targeting the healthcare and pharmaceutical sectors.  One can only hope the cyberattacks were not coordinated alongside the spread of the virus – a virus that only became public months after a coordinated attack would have been first planned.

A Statutory “Right of Compensation”

The following was excerpted and modified from written testimony submitted to the New York Senate.

According to the World Economic Forum (“WEF”), “personal data represents an emerging asset class, potentially every bit as valuable as other assets such as traded goods, gold or oil.”  Rethinking Personal Data:  Strengthening Trust, at 7, World Economic Forum Report (May 2012).  A subsequent paper from the WEF suggests “claims of personal data being ‘a new asset class’ are the strongest for . . . the inferred data [Institutions] possess about individuals on the basis that they invested the time, energy and resources in creating it.” Rethinking Personal Data: A New Lens for Strengthening Trust, at 17, World Economic Forum Report (May 2014). 

Surveillance advertising prevents the privacy rights underlying these assets from being easily managed – it is not up to consumers whether they will be fed an ad based on previous website visits or purchases; it will just happen.  Indeed, according to a survey of 1,000 persons conducted by Ipsos Public Affairs and released by Microsoft in January 2013, forty-five percent of respondents felt they had little or no control over the personal information companies gather about them while they are browsing the Web or using online services.   And even before the prevalence of online surveillance advertising, consumers faced privacy diminution from widespread tracking efforts.  See Steve Bibas, A Contractual Approach to Data Privacy, 17 Harv. J. Law & Public Policy 591 (Spring 1994) (“Although the ready availability of information helps us to trust others and coordinate actions, it also lessens our privacy. George Orwell presciently expressed our fear of losing all privacy to an omniscient Big Brother. Computers today track our telephone calls, credit-card spending, plane flights, educational and employment records, medical histories, and more.  Someone with free access to this information could piece together a coherent picture of our actions.”).

Current law, however, does not prevent someone from collecting publicly available information to create a comprehensive consumer profile – nor should there be any such law.  Similarly, there should be no right to opt out of having publicly recorded information sold or shared.   The same rules, however, should not apply to personal information that is not collected solely from public sources.

This does not mean companies should have the unfettered right to use personal data. To address the market disparity between owners of personal data “assets” and the companies who monetize this data, Alastair Mactaggart launched in 2017 a California ballot initiative that led to the enactment of the California Consumer Privacy Act (CCPA).  Now that CCPA has come online – albeit in a weakened form due to the amendments signed into law – Mr. Mactaggart is pushing a new ballot initiative that presumably will strengthen CCPA.

While the EU, by way of its General Data Protection Regulation (GDPR), ultimately looks to protect the EU and its residents from countries with weaker data processing safeguards, the path begun by CCPA – and continuing with Mactaggart’s latest ballot initiative, The California Privacy Rights and Enforcement Act of 2020 (Mactaggart 2020 Ballot Initiative) – seeks to protect individuals on a more fundamental level by giving them novel statutory rights.   For example, so long as companies comply with GDPR’s processing requirements and have a legal basis to conduct such processing, data subjects are left without any real financial recourse.  Under GDPR, data subjects may be provided with specific rights, including the rights to erasure, restriction of processing, portability, access, and rectification, yet these rights are in no way transferable or of any value apart from GDPR’s regulatory scheme.

CCPA apparently has as its core mission bringing transparency to the use of consumer data.  Before the enactment of CCPA, California was like all other states in that its residents could not readily learn what personal information a business had collected about them, how such information was used, and how to prevent such use from taking place in the future.  Despite similar protections in GDPR, the fact that GDPR precludes processing of data unless certain requirements are met represents a fundamental difference between the US and EU approaches.  Under the EU approach, subject to certain exceptions, a company can continue processing data as long as the GDPR data processing regime is followed.  Under the CCPA approach, consumers have a say in whether processing takes place – no matter the level of compliance – so long as financial triggers for the use of the data exist.

State legislatures should study the Mactaggart 2020 Ballot Initiative given its correction of defects and weaknesses found in CCPA while still pushing forward CCPA’s consumer-first mandate.  This does not mean, however, the Mactaggart 2020 Ballot Initiative is without flaws.  The suggestion that a browser-based solution can easily address the “Do Not Sell” requirement is flawed given the 12-month wait period before a company can again request consent after a “Do Not Sell” request.  When taking into consideration VPN usage and the transient nature of a browser’s tracking tools, a future viable solution will require a much more comprehensive technology framework – all without requiring users to create a new account.  Cal. Civ. Code § 1798.135(a)(1)–(2) (precluding companies from requiring consumers to create a new account as a means of enforcing their right to bar sales of data).

Moreover, the CCPA’s Sisyphean task of creating a “Do Not Sell My Personal Information” link that is both “clear and conspicuous” and yet found on every page that collects personal information is not rectified in the Mactaggart 2020 Ballot Initiative.  On most compliant sites, links have simply been added to every footer for site visitors with a California IP address – even one generated by a VPN – or visitors are simply directed to a CCPA-specific page. A footer link is hardly “clear and conspicuous”, yet that link coupled with a linked page having a “Do Not Sell My Personal Information” button is now the regulatory-accepted compliance tool for this requirement.

CCPA recognizes that the consumer data problem relates to the vast amounts of information in the hands of few data merchants.  There are also Congressional efforts focused on combating large bad actors by instilling more transparency in the data collection process.  Specifically, U.S. Sens. Mark R. Warner (D-VA), Josh Hawley (R-MO) and Richard Blumenthal (D-CT) have introduced “the Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, bipartisan legislation that will encourage market-based competition to dominant social media platforms by requiring the largest companies to make user data portable – and their services interoperable – with other platforms, and to allow users to designate a trusted third-party service to manage their privacy and account settings, if they so choose.”  The bill expressly focuses on only those technology platforms with over 100 million monthly active users.  As stated in the press release, “Sens. Warner and Hawley have partnered on the DASHBOARD Act, legislation to require data harvesting companies such as social media platforms to disclose how they are monetizing consumer data, as well as the Do Not Track Act, which would allow users to opt out of non-essential data collection, modeled after the Federal Trade Commission’s (FTC) “Do Not Call” list.”

As with CCPA, the Mactaggart 2020 Ballot Initiative, and other state initiatives, federal initiatives generally focus on transparency – helping consumers understand what data is collected and how; control – allowing consumers to reject usage of personal data; and accountability – providing adequate data security and compliance coupled with consumer consents.  Ultimately, these are the three pillars of any privacy law worth enacting. There is one more pillar, however, that has not gotten much attention yet is equally important.

A Right of Compensation

Lacking in current federal and state privacy laws and bills is the statutory pronouncement that consumers have an actual property right derived from their personal data.  Providing for a specific statutory property right that is fixed and delineated in a privacy law will make transparency, control and accountability much easier to enforce. 

The privacy community has long toyed with ascribing property rights to personal data.  See Julie E. Cohen, Examined Lives: Informational Privacy and the Subject as Object, 52 Stan. L. Rev. 1373, 1379 (2000) (“One answer to the question “Why ownership?” then, is that it seems we simply cannot help ourselves. Property talk is just how we talk about matters of great importance. In particular, it is how we talk about the allocation of rights in things, and personally-identified information seems “thingified” (or detached from self) in ways that other sorts of private matters—intimate privacy, for example—are not. On this view, the “propertization” of the informational privacy debate is a matter of course; it merely testifies to the enormous power of property thinking in shaping the rules and patterns by which we live.”).

Some have voiced preemptive opposition to any data ownership approach.  See generally Sarah Jeong, We don’t allow people to sell their kidneys. We shouldn’t let them sell the details of their lives, either, The New York Times (July 5, 2019) (“Legally vesting ownership in data isn’t a new idea. It’s often been kicked around as a way to strengthen privacy. But the entire analogy of owning data, like owning a house or a car, falls apart with a little scrutiny. A property right is alienable — once you sell your house, it’s gone. But the most fundamental human rights are inalienable, often because the rights become meaningless once they are alienable. What’s the point of life and liberty if you can sell them?”); Mark MacCarthy, Privacy Is Not A Property Right In Personal Information, Forbes (November 2, 2018) (“Some commentators new to the privacy debate are quick to offer what they think is a clever idea: assign property rights over personal information to the user and let the marketplace decide what happens next. Whether this idea is meritorious has big implications for how we think about things like data portability and consent.  Turns out it’s wrong.”). 

As referenced in Mark MacCarthy’s opinion piece, the notion of personal data as property conflicts with the reality that, for example, medical information can potentially be owned simultaneously by patients, medical schools, pharmacies, doctors, pharmaceutical companies, EMR software vendors, advertising companies and Internet service providers. Opponents of data property rights also wonder how ownership rights – and the associated payments or veto power – would be allocated among the constituent owners.

It can also be argued that personal data continually percolates uncontrolled around the world and constitutes a “social good” that can never be owned by individuals.  For example, it is the underpinning of a good deal of medical research that ends up curing disease.  Information concerning a consumer’s interactions with others presumably also allows the participants in those interactions to claim ownership of the related inferred data.  Moreover, it is easy to argue the First Amendment should bar the creation of a data property regime given it might potentially stifle speech between parties.

The “social good” argument is likely the one with the strongest appeal.  For example, on November 11, 2019, The Wall Street Journal exposed Google’s “Project Nightingale” and its resulting company access to the health information maintained by Ascension – one of the nation’s leading health systems.  In a November 11, 2019 blog post, Google explained that this arrangement was to support Ascension “with technology that helps them to deliver better care to patients across the United States.”  What is noticeably absent from the blog post is whether Google will also obtain access to patient medical records in a deidentified or other manner.  This is noteworthy given that last year researchers at Google announced a way to predict a person’s blood pressure, age, and smoking status simply from an image of their retina.  In order to do so, however, Google first had to analyze retinal images from 284,335 patients.  Given that health research is obviously a “social good”, the use or sale of deidentified protected health information (PHI) has long been an accepted use of medical data.  Oregon’s failed Senate Bill 703 would have been the first in the nation to require specific consent for the sale of deidentified PHI – data that is currently sold for billions each year.

No matter how they are ultimately couched, all of the paternalistic arguments against individuals having property rights in their data still miss the mark.  First, the fact that a privacy right may be perceived as “inalienable” – as it is under the California Constitution – does not mean there cannot be transferable “compensation units” derived from such rights.  Indeed, certain inalienable rights, e.g., the right to freedom, the right to property, etc., are routinely suspended during a trial and after conviction based on the voluntary commission of a crime.  This unfortunately happens every day throughout the country.  There is no reason a person should be precluded from voluntarily transforming certain ascribed rights into fungible ownership interests for a set duration and upon a specific set of circumstances.  No one currently corrals persons living on the streets claiming they need to assert their right to privacy, despite the fact that sleeping outdoors is obviously a knowing waiver of a right to privacy.   Similarly, persons every day voluntarily join affinity clubs to obtain rewards while trading away unknown personal data in an unknown surveillance arrangement.  Such conduct certainly does not mean these individuals shredded and destroyed their inalienable “right to privacy”.

The fact that multiple parties may claim ownership rights in the same personal data also does not negate the fact an ownership regime can viably exist – only that it will require careful coordination and adequate technology to implement.  Moreover, any argument based on the First Amendment also misses the mark in the same way no one has a First Amendment right to produce a copyright-protected play without proper consent from the writer.

Providing consumers with the ability and “statutory right to trade one’s personal data” – even if the fair market value of such data might actually be quite minimal – is the specific ownership right that should be statutorily created.  Noted academics long ago suspected this might be the correct path to take.  See Julie E. Cohen, Examined Lives: Informational Privacy and the Subject as Object, 52 Stan. L. Rev. 1373, 1391 (2000) (“A successful data privacy regime is precisely one that guarantees individuals the right to trade their personal information for perceived benefits, and that places the lowest transaction cost barriers in the way of consensual trades. If individuals choose to trade their personal data away without placing restrictions on secondary or tertiary uses, surely it is their business. On this view, choice rather than ownership is (or should be) the engine of privacy policy. What matters most is that personal data is owned at the end of the day in the manner the parties have agreed.”) (emphasis added); Id. at 1383 (“A relational approach to personally-identified data might, but need not, assign “ownership” or control of exchange based on possession.”); Richard A. Posner, The Right of Privacy, 12 Ga. L. Rev. 393, 394 (Spring 1977) (“People invariably possess information, including facts about themselves and contents of communications, that they will incur costs to conceal. Sometimes such information is of value to others: that is, others will incur costs to discover it. Thus we have two economic goods, “privacy” and “prying.” . . . An alternative [economic analysis of privacy] is to regard privacy and prying as intermediate rather than final goods, instrumental rather than ultimate values. Under this approach, people are assumed not to desire or value privacy or prying in themselves but to use these goods as inputs into the production of income or some other broad measure of utility or welfare.”) (emphasis added).

Current efforts at creating a statutory privacy regime can actually be considered precursors to a statutory “transactional property” approach.  Under CCPA:  “A business may offer financial incentives, including payments to consumers as compensation, for the collection of personal information, the sale of personal information, or the deletion of personal information.”  Cal. Civ. Code § 1798.125(b)(1).   Indeed, the healthcare privacy regime of HIPAA has long understood the possibility that PHI might be sold by a covered entity.  See 45 CFR § 164.508(a)(4)(i) (“Notwithstanding any provision of this subpart, other than the transition provisions in § 164.532, a covered entity must obtain an authorization for any disclosure of protected health information which is a sale of protected health information, as defined in § 164.501 of this subpart. (ii) Such authorization must state that the disclosure will result in remuneration to the covered entity.”).  Moreover, HIPAA even anticipates state statutes having greater protections.  See 45 CFR § 160.203 (there is an express exemption under HIPAA for state law when that “State law relates to the privacy of health information and is more stringent than a standard, requirement, or implementation specification adopted” under HIPAA).

A transactional property approach empowers consumers without placing unnecessary barriers on the “social good” use of data – it is even the trigger for certain of CCPA’s consumer rights.  Consumers could either choose to accept certain new statutory protections, e.g., the right to delete, or lease their data based on an economic model that would allow for the transparency needed to determine whether the data is even able to be sold.  If data is not actually salable, consumers should be limited in how they can prevent companies from using their data given the countervailing social good inherent in the free exchange of consumer data.   If there is no existing viable market for the consumer data in question, there should not be any associated requirement that a company pay any set amount for such data or be precluded from using such data in a deidentified format.  In other words, the burdens claimed by opponents of a property approach would be mitigated – consumers would only be given a piece of the pie and not the whole pie, and any purported “veto power” would never really come into existence.  Moreover, a regulatory framework that allows market dynamics to dictate the applicability of protections afforded to consumers is likely the fairest approach to both consumers and data merchants alike.

Similar to the way the Mactaggart 2020 Ballot Initiative proposes the creation of a new California agency – the California Privacy Protection Agency (CPPA), which would cost $10 million to implement – it is suggested that a public benefit corporation ensure the necessary framework gets implemented.  In other words, unlike in California, where the proposed CPPA would only buttress the enforcement and regulatory work done by the California Attorney General’s Office, a public Data Protection Corporation (DPC) would coordinate with the private sector to ensure the requirements of a privacy law are viable and can come to life.  Simply put, the creation of the DPC would ensure the current compliance problems visited on those companies subject to CCPA are never repeated.  There is analogous precedent for the creation of the DPC found in the environmental arena.

No one can dispute that one primary purpose of an environmental law is either to prevent potential toxins from infiltrating land, water and air or to remove and properly dispose of pollutants already released.  Addressing improperly used consumer data similarly needs a massive cleanup effort and can take a page from how environmental concerns were previously addressed in New York.  To that end, in 1970 the New York State Environmental Facilities Corporation (EFC) was created by the New York State Environmental Facilities Corporation Act.

As a public benefit corporation of the State, EFC is a corporate entity separate and apart from the State.  State law empowers the EFC to provide financing for certain environmental projects as well as “render technical advice and assistance to private entities, state agencies and local government units on sewage treatment and collection, pollution control, recycling, hazardous waste abatement, solid waste disposal and other related subjects.”  Indeed, as stated by the EFC on its website, its mission is to provide “expert technical assistance for environmental projects in New York State. . . . We promote innovative environmental technologies and practices in all of our programs.”

Similarly, the DPC would provide technical assistance in conformance with the enacting law’s mandate to protect consumer data.  At a basic level, there is never a need to grant access to all data for all purposes to all companies interested in consumer data.   Whether by evaluating current zero-knowledge proof solutions – where a verifier has “zero knowledge of” information unnecessary for an actual verification – or by determining the feasibility of certain self-sovereign identity solutions, the DPC can ultimately provide the necessary “secret sauce” for a successful privacy law.    Statutory efforts to legislate on privacy will forever be hamstrung if implementation technology remains an afterthought that will presumably sort itself out after a law is passed.  The goal of the DPC would be to ensure there are adequate technical means available to execute on the legislation passed – not to pick technology sides or inadvertently delay private sector efforts at technology development.
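
To make the zero-knowledge idea concrete, consider the classic Schnorr identification protocol: a prover convinces a verifier it knows a secret without ever transmitting the secret. The Python sketch below is purely illustrative and uses deliberately tiny toy parameters; production systems rely on large standardized groups (or elliptic curves) and non-interactive variants:

```python
import secrets

# Toy parameters for illustration only; real deployments use large,
# standardized groups (e.g., 256-bit elliptic curves).
p = 2039   # safe prime: p = 2*q + 1
q = 1019   # prime order of the subgroup
g = 4      # generator of the order-q subgroup

# Prover's secret and the matching public key.
x = secrets.randbelow(q)   # the secret the prover never reveals
y = pow(g, x, p)           # public: y = g^x mod p

# Round 1: prover commits to a random nonce.
r = secrets.randbelow(q)
t = pow(g, r, p)

# Round 2: verifier issues a random challenge.
c = secrets.randbelow(q)

# Round 3: prover responds; the random r masks x, so s alone leaks nothing.
s = (r + c * x) % q

# Verifier checks g^s == t * y^c (mod p), learning only that the
# prover knows x – never x itself.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted without disclosing the secret")
```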

To sum up, the four major components needed in a successful privacy law should begin with the creation of a statutory “Right of Compensation” and end with the means to effectuate such a right: 

  1. Creation of a “transactional property right” in consumer data giving rise to a new Right of Compensation;
  2. Development of a compliance framework that would only apply to companies maintaining significant amounts of consumer data;
  3. Insertion of rights and obligations that focus on the three established privacy pillars of transparency, control and accountability; and
  4. Creation of a “Data Protection Corporation” – a public corporation largely tasked with ensuring that what is statutorily required is feasible from both a technological and market perspective.

Microsoft Occupies Throne on Data Privacy Day

As Data Privacy Day is celebrated today, BigTech companies continue shoving each other for the data privacy throne.  Apple would have you believe it is the one true leader of the data privacy rights movement.  Even though Amazon has recently been moving in the right direction, when it comes to “Data Privacy Day”, only Microsoft makes it a daily event. 

Obviously, all companies benefit by moving towards a better data privacy regime.  As recognized by Mastercard’s Chief Privacy Officer:   “Privacy and accountability are central to our data-driven innovation, and have become key differentiators for our brand. This research reinforces the fact that privacy is a critical investment for forward-looking companies.”

In one of the first studies to estimate privacy returns for companies on a global scale, Cisco’s 2020 Data Privacy Benchmark Study assessed the benefits companies see in areas such as “operational efficiency, fewer and less costly data breaches, reduced sales delays, [and] improved customer loyalty and trust”.   More than 70% of those surveyed indicated they saw  “significant” or “very significant” benefits in each of these areas based on their investments in data privacy initiatives.  As for the actual quantification of these benefits, for every $1 of investment, the average company purportedly received $2.70 of benefit. For Microsoft, this monetary lift does not appear to drive its privacy epiphany.

Seeking to erase years of insecure Windows development contributing to countless data incidents, Microsoft has focused anew on data privacy over the past five years, and that focus originates from the very top. Its privacy head recently testified before Congress exactly because she is a longstanding privacy steward now seeking Congressional help for consumers.  Microsoft CEO Satya Nadella went one step further at the 2020 World Economic Forum in Davos by suggesting that consumers obtain compensation for their data:  “Data that you contribute to the world has utility for you, utility for the business that may be giving you a service in return — and the world at large.  How do we account for that surplus being created around data? And who is in control around giving those rights?”  He recognized:  “What if the consumer benefited from their data as well as advertisers? More work needs to be done around data dignity – and new business models in the 2020s.”

This is not to say Microsoft is now rushing to compensate consumers for the use of private data.  Recently, it was uncovered that Microsoft built cancer algorithms using patient data obtained from Providence Health & Services in Renton, Washington.  No report exists of Microsoft compensating patients for this use of their data.  Nevertheless, when it comes to building the brightest path for data privacy, there remains no other BigTech company suggesting that consumers be compensated for their data or promoting the use of a decentralized identity for consumers – the likely precursor to any viable “right of compensation” statutory scheme. When it comes time to finally do the right thing, Microsoft will apparently be leading the way to ensure it gets correctly done.

UPDATE: March 5, 2020

According to the Verge Tech Survey 2020: “Microsoft leads big tech companies in the number of Americans who say they trust it, at 75 percent of survey respondents. Amazon is close behind, at 73 percent. Pulling up the rear is Facebook: just 41 percent of Americans say they trust the company to safeguard their personal information.”

Facebook still faces monster biometrics class action

On January 21, 2020, the United States Supreme Court denied Facebook’s Petition for a Writ of Certiorari in Patel v. Facebook (19-706) – leaving intact an affirmed lower court ruling against Facebook.  See Patel v. Facebook Inc., 290 F. Supp. 3d 948 (N.D. Cal. 2018), aff’d, Patel v. Facebook, Inc., 932 F. 3d 1264 (9th Cir. 2019), cert. denied, Patel v. Facebook, Inc., 589 U.S. __ (January 21, 2020).  Consumer advocates pushed throughout in favor of the plaintiffs.

Class counsel alleged in the complaint that Facebook’s “Tag Suggestions” program – a now-terminated program that scanned for and identified people in uploaded photographs for purposes of photo tagging – improperly collected and stored biometric data without prior notice or consent in violation of the Illinois Biometric Information Privacy Act (BIPA), 740 Ill. Comp. Stat. 14/1 et seq.  Specifically, Section 15(b) of BIPA provides that biometric data may not be obtained without (1) written notice that biometric data is at issue, (2) written notice of why and for how long the data is being collected and stored, and (3) written consent from the subject.

Facebook sought dismissal arguing the lack of Article III standing necessary for all federal lawsuits – in essence, arguing that a mere technical violation of BIPA’s statutory notice and consent provisions did not actually cause any real harm to the plaintiffs.  In rejecting that argument, the District Court found that actual and concrete harm sufficiently existed to create Article III standing.  Patel, supra, 290 F. Supp. 3d at 953 – 954 (“BIPA vested in Illinois residents the right to control their biometric information by requiring notice before collection and giving residents the power to say no by withholding consent. As the Illinois legislature found, these procedural protections are particularly crucial in our digital world because technology now permits the wholesale collection and storage of an individual’s unique biometric identifiers — identifiers that cannot be changed if compromised or misused. When an online service simply disregards the Illinois procedures, as Facebook is alleged to have done, the right of the individual to maintain her biometric privacy vanishes into thin air. The precise harm the Illinois legislature sought to prevent is then realized.”).

Even though this suit could still have been dismissed on other grounds – the standing issue was the only argument that actually percolated all the way up to the Supreme Court – standing was definitely Facebook’s strongest defense, so it now faces likely exposure in the billions.   A class composed of seven million potential members, with statutory damages based on only a single uploaded picture per person, could yield damages of between $7 billion for a negligence finding and $35 billion for an intentional or reckless finding.  In addition, this remains only one of several BIPA class actions against Facebook currently being litigated around the federal judiciary.  Despite its $5 billion mea culpa with the FTC, Facebook’s privacy exposures are certainly nowhere near its rearview mirror.
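
For readers checking the arithmetic, BIPA sets statutory damages at $1,000 per negligent violation and $5,000 per intentional or reckless violation (740 ILCS 14/20). A back-of-the-envelope Python calculation, assuming a single violation per class member:

```python
# Back-of-the-envelope BIPA exposure, assuming one violation per class member.
CLASS_SIZE = 7_000_000
NEGLIGENT_DAMAGES = 1_000   # per negligent violation (740 ILCS 14/20)
RECKLESS_DAMAGES = 5_000    # per intentional or reckless violation

print(f"Negligence exposure:   ${CLASS_SIZE * NEGLIGENT_DAMAGES:,}")  # $7,000,000,000
print(f"Recklessness exposure: ${CLASS_SIZE * RECKLESS_DAMAGES:,}")   # $35,000,000,000
```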

The Supreme Court may eventually take on a new privacy standing case but it will likely be a specific Google case that gets the nod – a case where the Supreme Court previously ruled: “Because there remain substantial questions about whether any of the named plaintiffs has standing to sue in light of our decision in Spokeo, Inc. v. Robins, 578 U. S. ___ (2016), we vacate the judgment of the Ninth Circuit and remand for further proceedings.”  And, if this Google “referrer headers” case does not get the nod, as states continue to push the boundaries of privacy rights, the Supreme Court will certainly revisit its Spokeo decision to determine whether the violation of some future privacy law merits federal standing – especially when only a “trifle of injury” is alleged. Ultimately, the question that may be answered by the Court is whether the mere alleged violation of a law addressing digital privacy rights sufficiently constitutes an Article III injury. See Patel, supra, 932 F. 3d at 1273.

UPDATE:  January 30, 2020

During its January 29, 2020 conference call, Facebook disclosed charges related to the $550 million settlement of this BIPA case.  Not wanting to roll the dice on a potential billion-dollar verdict, Facebook will pay what plaintiffs’ counsel described as “the largest cash settlement ever to resolve a privacy-related lawsuit.”

Senate Moves Towards A Comprehensive Privacy Law

On December 4, 2019, testimony given by Julie Brill, Maureen Ohlhausen, Laura Moy, Nuala O’Connor and Michelle Richardson helped move the ball forward for a new bipartisan federal privacy law.  Their testimony was right on the money – except for the natural corporate disdain for a private right of action – and the potential for a federal privacy law seems greater than ever. For a great overview, IAPP has released a comparison of the two most recent federal attempts to curb unbridled surveillance capitalism.

With any luck, there may be a new federal law on the books in 2020. Not waiting to see what happens in Washington, states like New York and New Jersey will likely follow the lead of California and pass their own very comprehensive privacy laws in 2020 – perhaps well exceeding what is found in California. Having such laws succumb to express preemption may end up being the most compelling legislative driver for certain federal lawmakers now on the fence.

University of Rochester Medical Center Gets Hit with a $3 Million HIPAA Fine

On November 5, 2019, the University of Rochester Medical Center (URMC) agreed to a corrective action plan and payment of $3 million due to the 2013 and 2017 loss of an unencrypted flash drive and theft of an unencrypted laptop, respectively.

The apparent reason for the large fine was the fact that “in 2010, [the Office for Civil Rights (OCR)] investigated URMC concerning a similar breach involving a lost unencrypted flash drive and provided technical assistance to URMC. Despite the previous OCR investigation, and URMC’s own identification of a lack of encryption as a high risk to ePHI, URMC permitted the continued use of unencrypted mobile devices.”

As with most OCR enforcement actions, there is typically an industry-wide message with each large fine – in this case there are two: the failure to encrypt will simply no longer be tolerated, and once given a pass by OCR, be sure not to waste it.
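
For covered entities taking the first message to heart, encrypting data at rest is no longer exotic. Below is a minimal Python sketch using the widely available `cryptography` package; the file name is hypothetical, and in practice the hard part is key management – keys belong in a managed secrets vault, never alongside the data they protect:

```python
from cryptography.fernet import Fernet

# Generate a symmetric key; in production, store it in a secrets vault,
# never on the same device as the encrypted data.
key = Fernet.generate_key()
f = Fernet(key)

# Hypothetical file name for illustration.
with open("patient_export.csv", "rb") as src:
    ciphertext = f.encrypt(src.read())

with open("patient_export.csv.enc", "wb") as dst:
    dst.write(ciphertext)  # only the encrypted copy should travel on portable media
```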

UPDATE:  December 3, 2019

In keeping with its apparent practice of announcing HIPAA violation resolutions in clusters, on November 7, 2019, OCR announced a $1.6 million penalty against the Texas Health and Human Services Commission for violations of the Privacy and Security Rules occurring between 2013 and 2017.  The primary breach occurred when “an internal application was moved from a private, secure server to a public server and a flaw in the software code allowed access to ePHI without access credentials.”  OCR also determined that in addition to the impermissible disclosure, there was a failure “to perform an accurate, thorough, and enterprise-wide risk analysis that meets the requirements of 45 C.F.R. § 164.308(a)(1)(ii)(A) [Security Rule].”  Interestingly, the OCR applied its new civil money penalty caps published in April.

And, on November 27, 2019, OCR revealed its enforcement settlement with a hospital network that sent bills to patients containing “the patient names, account numbers, and dates of service” of 577 other patients.  Sentara Hospitals – based in Virginia and North Carolina – did not think such information was protected health information (PHI) and only notified the eight patients whose treatment information was also disclosed.  Given that Sentara “persisted in its refusal to properly report the breach even after being explicitly advised of their duty to do so by OCR”, it was stuck with a $2.175 million penalty.  Given that PHI has been interpreted to include healthcare payment information linked to a specific individual, Sentara was obviously taking a chance when it ignored OCR’s advice. On the other hand, protected health information is expressly defined to mean “individually identifiable health information”, so there was at least a colorable argument that payment information – even if related to the provision of healthcare – is not “health information” in any direct sense. 45 CFR § 160.103.

Providing some year-end advice that should also not be disregarded, on December 2, 2019, OCR released its Fall 2019 Cybersecurity Newsletter focusing on ransomware and how covered entities and business associates should apply the Security Rule as a mitigation tool against this threat. 

These latest announcements were clustered to push one primary message: do not disregard explicit counsel from OCR, because OCR most certainly holds a grudge when ignored. In addition, CEs and BAs are well advised to deploy an enterprise-wide risk analysis that determines whether there are outward-facing vulnerabilities that should be patched. And finally, as shown by the significant amount assessed against the University of Rochester Medical Center, disregard of encryption as a risk mitigation tool will likely lead to enhanced penalties going forward.

Google and Facebook’s Privacy Long Game May Pay Off

On September 13, 2019, the California Legislature adjourned with numerous CCPA amendments ready for the signature of Gov. Gavin Newsom.  Two amendments that ultimately passed are particularly helpful for business owners: AB 25, which provides a one-year moratorium on CCPA’s application to employee, beneficiary and emergency contact information, and AB 1355, a broad-ranging amendment to the law. Other changes to CCPA, including AB 1146, AB 874, and AB 1564, either do not alter in any material way the spirit or intent of the law or are redundant to changes found in AB 1355. There was also one proposed amendment, AB 846, that was withdrawn from consideration until next year but would have greatly enhanced the protections found in CCPA by creating a private right of action for notification and data usage failures.

Three of the changes found in AB 1355 are noteworthy given that in some very real ways they chip away at the consumer-first thrust of CCPA.  First, by modifying the definition of “personal information” to mean information “reasonably capable of being associated with” a particular consumer or household, instead of just “capable of being [so] associated”, CCPA may get a reasonableness component that would give companies a strong new argument when defending a private action breach claim.  Moreover, the AB 1355 amendments explicitly state that deidentified and aggregate information are exempt from CCPA – in effect, potentially giving social media platforms a sought-after CCPA safety hatch.

And finally, the AB 1355 amendment states that the reasonableness of charging a different price or rate, or providing a different level or quality of goods or services, for the use of data should be measured in relation to the value of the personal information to the business – not to the consumer, as the law was initially drafted.  Given that most social media platforms and data brokers actually place very low values on specific consumer data, this change is of obvious great significance.  Not surprisingly, given the heavy lobbying, these and other changes actually benefit data merchants to the detriment of consumers.

AB 1355 is significant for other reasons.

On September 10, 2019, fifty-one CEOs wrote a letter to Congressional leaders asking them “to pass, as soon as possible, a comprehensive consumer data privacy law that strengthens protections for consumers and establishes a national privacy framework to enable continued innovation and growth in the digital economy.”  The signatories to this letter come from a broad range of industries, including retail (Walmart, Amazon, Target, Macy’s), banking (JPMorgan Chase, Bank of America, Citigroup), card brands (American Express, Visa, Mastercard), technology (Salesforce, SAP, SAS Institute, IBM, Dell, Qualcomm), as well as consumer goods and pharmaceutical (Bristol-Myers Squibb, Johnson & Johnson, Procter & Gamble), insurance (Chubb, New York Life Insurance, Principal, State Farm, USAA), and media-rich telecommunications (AT&T, Comcast). 

Conspicuously absent from this list of companies are the two largest beneficiaries of Business Roundtable’s privacy initiative – Facebook and Google. 

As set forth in their CEO letter:  “Business Roundtable has released a Framework for Consumer Privacy Legislation (attached to this letter), which provides a detailed roadmap of issues that a federal consumer privacy law should address.”  If one takes a look at this proposed Business Roundtable Framework, Facebook and Google’s sought-after end game comes better into focus – which is especially impressive given that neither company is even a current member of the Business Roundtable.   

Business Roundtable’s Framework proposes that a new federal law “establish a national standard for breach notification that preempts state laws” and prevents the “state-by-state approach to regulating consumer privacy.”  As well, the Business Roundtable Framework specifically also states that “[a] national consumer privacy law should not provide for a private right of action.”

Apparently, everything may fall into place for those who feast on consumer data.  First, CCPA may have been weakened sufficiently to make 2020 not nearly the onerous compliance year most companies expected – especially since the tabling of AB 846 and its creation of a new right of action for breach of CCPA’s consumer notification and use provisions.  Given California’s privacy statutes may very well end up being the model for a federal law, weakening CCPA before pushing for a federal law was the necessary initial step in this two-step dance.

And second, as shown by the September 10, 2019 CEO letter to Congressional leaders, there is a broad coalition of companies seeking both federal preemption and the express killing of a private right of action – the two requirements needed to push back consumer-friendly state initiatives and class action lawyers.  Class action lawyers and fiercely independent states – such as Maine and Vermont – are largely immune to lobbyists.

While others may have publicly taken up their fight, Google and Facebook are smoking cigars in a dark backroom somewhere laughing at how brilliantly their plan may ultimately play out. 

UPDATE:  October 16, 2019

Without any fanfare or even a mention on the California Governor’s website, Governor Newsom quietly signed into law all of the CCPA amendments put on his table. These include AB 1355, which amends § 1798.140(o)(2) of the CCPA to provide that personal information “does not include consumer information that is deidentified or aggregate consumer information” – making all social media platforms raise a toast to their victory – and which amends Cal. Civ. Code § 1798.150(a)(1) of the CCPA to reaffirm that class-action lawsuits may be brought only for data breaches where personal information is “nonencrypted and nonredacted”, thereby shutting out wide swaths of potential claims.

In addition, the Governor signed into law the following amendments – some of which further weakened CCPA’s reach:  AB 874, AB 25, AB 1146, AB 1564, AB 1130, and AB 1202.  Coupled with the Attorney General’s Office releasing, the day before, its twenty-four pages of guidance – which many have correctly interpreted as providing little real guidance – it is clear why all eyes should now be squarely focused on Alastair Mactaggart and his November 2020 Ballot Initiative.

Back to School for Ransomware

Even though the first significant uptick in ransomware attacks began over three years ago, a steady increase in frequency and severity has likely now made ransomware exploits the number one security threat faced by most businesses today.  McAfee places the ransomware growth rate for the last quarter at 118%.  Many smaller businesses were previously on notice but chose to ignore the warning signs. Thankfully, after the 2017 attacks unleashed by the WannaCry strain of ransomware, some companies did address ransomware risk by implementing better employee training while others decided to upgrade legacy software and initiate offsite backups.

Those who did not adequately address this risk, however, are now facing much larger extortion demands.  Also, the risk landscape has changed dramatically over the past several years, with ransomware becoming an equal-opportunity attack that will now target local governments as well as dental offices. Indeed, even first grade students are now being impacted by network security intrusions that not too long ago targeted only large universities.

Despite the recent public trend of paying these extortion demands, the FBI has long advocated not paying a ransom in response to a ransomware attack. Specifically, the FBI has said:  “Paying a ransom doesn’t guarantee an organization that it will get its data back—we’ve seen cases where organizations never got a decryption key after having paid the ransom. Paying a ransom not only emboldens current cyber criminals to target more organizations, it also offers an incentive for other criminals to get involved in this type of illegal activity. And finally, by paying a ransom, an organization might inadvertently be funding other illicit activity associated with criminals.”

Another result of this increase in activity has been an increase in insurance purchased to cover an extortion demand as well as the related expenses incurred during a ransomware attack.  For example, the City of Baltimore may soon approve spending $835,000 for $20 million in coverage – but only because it previously sustained a ransomware attack that set it back over $18 million.

In fact, some have argued that by insuring this exposure the insurance industry itself is actually at the root of increased ransomware activity.  Those in the security industry correctly point out that what drives these actors turns more on quick conversion rates than on whether an insurer stands behind a victim.  To suggest the insurance industry is the cause of this problem gives threat actors way too much credit while completely ignoring the benefits derived from the cyber insurance underwriting process.

In the same way it is never too late to go back to school, it is never too late to begin importing a more robust security and privacy profile into an organization – which is the only real way to diminish the risk of a ransomware attack.  As suggested in 2016:  “Given the serious threat of ransomware, businesses large and small are reminded to at least do the basics – train staff regarding email and social media policies, implement minimum IT security protocols, regularly backup data, plan for disaster, and regularly test your plans.” 

Facebook Dodges Potential FTC Bullet

On July 24, 2019, the FTC filed its Stipulated Order requiring that Facebook comply with newly-imposed privacy requirements for a period of twenty years.  The most noteworthy aspect of this Order, however, does not relate to the specifics of this compliance framework – which can easily be addressed with the right counsel. Rather, the requirement that is more challenging for Facebook is the one creating an “Independent Privacy Committee” within Facebook’s Board of Directors “consisting of Independent Directors, all of whom” have “(1) the ability to understand corporate compliance and accountability programs and to read and understand data protection and privacy policies and procedures, and (2) such other relevant privacy and compliance experience reasonably necessary to exercise his or her duties on the Independent Privacy Committee.” 

Such specific requirements regarding the capabilities of a Board member are more than a bit unusual.    Given the fiduciary responsibilities of Board members as well as the reputations of those willing to become members of this “Independent Privacy Committee”, this novel requirement may actually do something to curtail future privacy transgressions.

There is no doubt the FTC resolution was Facebook’s well-orchestrated attempt at rehabilitating its tattered reputation.  As stated in Facebook’s blog response:  “Billions of people around the world use our products to make their lives richer and to help their organizations thrive. That makes it especially important that the people who use our platform can trust that their information is protected. This agreement is an unambiguous commitment to do that.”  Indeed, this agreement may even be marketed as a way of bolstering dwindling user engagement.

It remains to be seen, however, whether or not the Stipulated Order provides an “unambiguous commitment” to do anything other than resolve specific violations of a prior FTC Decision and Order, In re Facebook, Inc., C-4365, 2012 FTC LEXIS 135 (F.T.C. July 27, 2012). Indeed, Commissioner Rohit Chopra – who assumed office on May 2, 2018, filed a forceful dissent objecting to the lax settlement of this violated Order: “Facebook flagrantly violated the FTC’s 2012 order by deceiving its users and allowing pay-for-play data harvesting by developers” and this settlement “imposes no meaningful changes to the company’s structure or financial incentives, which led to these violations.”

Facebook’s regulatory problems are far from over – the DOJ just announced a wide-ranging antitrust probe that includes Facebook.  Specifically, the Department of Justice’s Antitrust Division will review “whether and how market-leading online platforms have achieved market power and are engaging in practices that have reduced competition, stifled innovation, or otherwise harmed consumers.” This antitrust probe will likely end up being much more interesting and potentially damaging to Facebook than the recent FTC settlement – especially depending on what road is taken by its potential privacy-killing Calibra business unit.

Senate Banking Committee Focuses on Libra Privacy Issues

On July 16, 2019, a Senate Panel lobbed shots across the Libra bow when questioning David Marcus, the head of Facebook’s Calibra subsidiary.   As suggested by the title of the hearing – “Examining Facebook’s Proposed Digital Currency and Data Privacy Considerations” – the hearing was really all about Facebook and not about digital currencies or blockchain technologies in any broader context.

In a tone that permeated much of the hearing, Sen. John Kennedy ignored Facebook’s participation in a Swiss association that purportedly leaves Facebook with little control over Libra and instead mocked: “Facebook wants to control the monetary supply. What could possibly go wrong?” Sen. Sherrod Brown (D-OH) reinforced this lack of trust when he said that Facebook was dangerous because it did not “respect the power of the technologies they are playing with, like a toddler who has gotten his hands on a book of matches, Facebook has burned down the house over and over, and called every arson a ‘learning experience.'”

Sen. Brian Schatz summed up the mood nicely when he recognized: “You’re making an argument for cryptocurrencies generally. The question is not, ‘Should the U.S. lead in this?’ Why in the world, of all companies, given the last couple of years, should [Facebook] do this?” 

On a more substantive side, the hearing was driven by a concern for privacy rights. As reported in The Wall Street Journal,  Mr. Marcus suggested that Facebook would not monetize users’ data related to Libra because no financial or account data from the Libra network would be shared with Facebook:  “We’ve heard loud and clear from people, they don’t want those two types of data streams connected.”

Even though it did not garner much public analysis, Chairman Crapo’s Statement provides an important privacy perspective that may also set the table for future legislative action: “Individuals are the rightful owners of their data. They should be granted a certain set of privacy rights, and the ability to protect those rights through informed consent, including full disclosure of the data that is being gathered and how it is being used.”

And, despite all of his protestations to the contrary, in his own prepared testimony, Mr. Marcus actually provides a rough roadmap detailing how the financial and transactional data obtained by Calibra could directly bolster Facebook’s data surveillance revenue.

Specifically, Mr. Marcus states: “The Calibra wallet will let users send Libra to almost anyone with a smartphone, similar to how they might send a text message, and at low-to-no cost.  We expect that the Calibra wallet will ultimately be one of many services, and one of many digital wallets, available to consumers on the Libra network.   We do not expect Calibra to make money at the outset, and Calibra customers’ account and financial information will not be shared with Facebook, Inc., and as a result cannot be used for ad targeting. Our first goal is to create utility and adoption, enabling people around the world— especially the unbanked and underbanked—to take part in the financial ecosystem.  But we expect that the Calibra wallet will be immediately beneficial to Facebook more broadly because it will allow many of the 90 million small- and medium-sized businesses that use the Facebook platform to transact more directly with Facebook’s many users, which we hope will result in consumers and businesses using Facebook more. That increased usage is likely to yield greater advertising revenue for Facebook.”

To suggest that the mere ancillary use of Facebook’s platforms by Calibra users will alone cause an increase in advertising revenue makes little sense.  The only way Calibra will yield greater “advertising revenue” for Facebook is through the well-understood increase in the value of user data once transaction data is aligned with the other data obtained from Facebook’s platforms and services.  Indeed, advertisers have long recognized that personalization data is not nearly as useful as relevance data.

A long-term goal of Facebook’s Libra project, namely combining user data with associated financial and transactional data, should not be considered well-hidden. Mr. Marcus’ written testimony all but confirms Facebook will eventually harvest transactional and KYC data:  “Calibra will not share customers’ account information or financial data with Facebook unless people agree to permit such sharing.”  Indeed, Sen. Pat Toomey specifically asked Mr. Marcus whether Facebook intended to seek user consent to monetize Calibra-derived financial data and Mr. Marcus incredibly responded: “I can’t think of any reason right now for us to do this.” Really?

Facebook likely only has to ask and it will get whatever user permissions are necessary to satisfy existing regulatory and statutory requirements.  Depending on the ultimate success of Amazon’s recent $10 offer for tracking data, Facebook may not even need to give much in return for such consent. In other words, once this particular genie is let out of the bottle there will likely be no turning back, and any unencumbered launch of Libra might very well be the death knell for data privacy as we know it.

UPDATE: July 18, 2019

House Financial Services Committee Hearing of July 17, 2019

One major difference between the Senate hearing conducted on July 16, 2019 and the House Financial Services Committee hearing of July 17, 2019 was the sort of testimony provided by industry experts.  Even though the Senate smartly sought testimony from Wall Street and blockchain industry expert Caitlin Long, unlike the House, it heard from no one educating it on Calibra’s privacy issues.

For example, MIT Professor Gary Gensler’s prepared House testimony lays out a number of questions regarding privacy that Facebook should answer at some point:  “We know that many of the most intrusive privacy practices of concern to privacy regulators have actually been subject to some form of consumer consent. So, it will be essential to conduct a more thorough analysis of what uses of Libra data should be allowed and which uses should be prohibited. How would such restrictions be monitored and enforced? What are the limited exceptions and might Calibra broadly seek customer consent in the form of standard user agreements? It would be likely that Calibra would want to commercialize this data. At a minimum, without sharing the raw transaction data from customers’ Calibra Wallets, it would still likely analyze such data to earn money either through advertisements or by offering targeted services to wallet holders.”  

As well, in the prepared written testimony of Robert Weissman, President of Public Citizen, there is a long discussion explaining why Facebook is a “Corporate Surveillance Leviathan” that cannot be trusted with the proposed Calibra wallet.

The House Hearing also raised the issue of whether Facebook would be able to pick and choose users of the Calibra wallet – potentially forcing persons to conform their behavior to Facebook standards. In one highlight of the House Hearing, Congressman Sean Duffy waved a twenty-dollar bill in the air while making the point that anyone, including persons who say horrible things, can use a twenty-dollar bill, but: “Who can use Calibra?”  In response, Mr. Marcus pointed out anyone who could satisfy Calibra KYC requirements – which prompted the loaded follow-up question from Congressman Duffy:  “Could Milo Yiannopoulos and Louis Farrakhan use Calibra [given they are both banned from Facebook]?”  In response, Mr. Marcus said that an applicable policy had not yet been written but that it was “an important question that [Facebook] needed to be thoughtful about.”

Given Facebook’s poor track record – indeed, former Facebook executives readily acknowledge Facebook holds too much market power and should not be trusted going forward – these and other “important questions” must be answered as soon as possible.