All posts by Paul E. Paray

Ransomware Groups Declare War on US Hospitals

A recent phase of the ongoing two-pronged cyber war waged against the United States – by Russia, Iran, and North Korea on one front and China on the other – has taken an ugly turn.  The Russian faction has launched a series of sophisticated ransomware attacks against healthcare providers and hospital systems across the United States.

As stated in an October 28, 2020 Alert from the Cybersecurity & Infrastructure Security Agency (CISA), there is “credible information of an increased and imminent cybercrime threat to U.S. hospitals and healthcare providers.”  In addition to the CISA Alert, cybersecurity firms battling this latest threat have shared how these latest attacks are perpetrated.

Our current healthcare cyber battle is further complicated given an October 1, 2020 Advisory from the U.S. Department of the Treasury’s Office of Foreign Assets Control (OFAC) warning ransomware victims against conducting business with those on the OFAC list – including specific ransomware groups such as the Russia-based group behind the Dridex malware.  The OFAC Advisory is likely driven by the FBI – which has long advocated against victims making ransomware payments.  Whatever the motivation, however, OFAC has exacerbated the current crisis given that the Advisory warns the primary civilian combatants against making violative ransomware payments, namely companies “providing cyber insurance, digital forensics and incident response, and financial services that may involve processing ransom payments (including depository institutions and money services businesses).”

Over the past several years, the cybersecurity community has seen a tremendous uptick in the deployment of ransomware – even leading to board-level scrutiny.  No different from the SQL injection exploits that were commonly warned against so many years ago yet still remain an exposure for so many websites, ransomware will not go away anytime soon.  The necessary cyber defensive skillset is far from fully dispersed among potential victims.  For example, indicators of compromise (IOCs) shared with the cybersecurity community would likely be ignored by most IT staff given that they lack even the means of searching for those IOCs within their own networks.
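To make the point concrete, below is a minimal sketch of the sort of IOC sweep a defender might run after receiving an alert – assuming, purely for illustration, a short list of known-bad IP addresses and file hashes (the values shown are documentation-range placeholders, not real IOCs):

```python
import hashlib
import os

# Hypothetical IOCs of the kind published in a CISA alert.
BAD_IPS = {"203.0.113.15", "198.51.100.7"}
BAD_SHA256 = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}

def scan_log_for_ips(log_path: str) -> list:
    """Return log lines that reference a known-bad IP address."""
    hits = []
    with open(log_path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            if any(ip in line for ip in BAD_IPS):
                hits.append(line.rstrip())
    return hits

def scan_dir_for_hashes(root: str) -> list:
    """Return paths of files whose SHA-256 digest matches a known-bad hash."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                continue  # unreadable file; skip it
            if digest in BAD_SHA256:
                hits.append(path)
    return hits
```

Even a rudimentary sweep like this presumes centralized log access and the ability to hash files at scale – capabilities many victim organizations simply do not have.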

Taking into consideration the old adage:  “If you fail to plan, you plan to fail,” healthcare providers and hospital systems should immediately seek out specialized cybersecurity experts who are currently fighting this battle before it is too late.

Platform Immunity at Risk?

On September 23, 2020, the Department of Justice released its proposed changes to Section 230 of the Communications Decency Act (CDA) – the first serious attempt at reining in the immunity rights enjoyed by the duopoly of Facebook and Google.  In his cover letter, the Attorney General wrote:  “I am pleased to present for consideration by Congress a legislative proposal to modernize and clarify the immunity that 47 U.S.C. § 230 provides to online platforms that host and moderate content.”  Recognizing that “platforms have been allowed to invoke Section 230 to escape liability even when they knew their services were being used for criminal activity”, the Attorney General stressed that the initial purposes of the 1996 statute were served long ago.

Accordingly, the first tranche of changes is focused on ensuring editorial decisions are made objectively and in good faith – with a proposed definition of “good faith” actually baked into the proposed new Section 230.  Specifically, Section 230(c)(2) is amended to require that platforms have an “objectively reasonable belief” that the speech they are removing falls within certain enumerated categories.

The second area of changes addresses growing illicit online content by limiting publisher immunity when an online platform (1) purposefully promotes, facilitates, or solicits third-party content that would violate federal criminal law; (2) has actual knowledge that specific content it is hosting violates federal law; or (3) fails to remove unlawful content after receiving notice by way of a final court judgment.  See Proposed § 230(d).

And finally, the third major change amends Section 230(e) to expressly confirm that the immunity provided by Section 230 would not apply to civil enforcement actions brought by the federal government.  This change provides for an important federal enforcement tool against platforms should the need arise – just like with any other company in the United States.  See Proposed § 230(e).

A careful review of these changes reveals a long-overdue update that will hopefully beget bipartisan support despite the current schism between our two major political parties.  Indeed, given the lobbying might of Facebook, Google and other online platforms, any alteration of the immunities granted under Section 230 will require nothing less than true bipartisan support.

UPDATE: October 28, 2020

On October 28, 2020, the U.S. Senate held a hearing on the following topic: “Does Section 230’s Sweeping Immunity Enable Big Tech Bad Behavior?” The Hearing was to “examine whether Section 230 of the Communications Decency Act has outlived its usefulness in today’s digital age. It will also examine legislative proposals to modernize the decades-old law, increase transparency and accountability among big technology companies for their content moderation practices, and explore the impact of large ad-tech platforms on local journalism and consumer privacy.”

Other than highlighting a pretty wild lockdown beard, the session provided little real ammo for either side of this debate. Perhaps in 2021, that dynamic may change.

Alleged cover-up leads to criminal complaint against former Uber CSO

In filing its August 20, 2020 criminal complaint against the former Uber CSO, the US Attorney for the Northern District of California issued a wake-up call to every CISO responding to a federal investigation of a data incident.  And, by stating in its press release that “we hope companies stand up and take notice”, the Justice Department has definitely thrown down the gauntlet to CISOs across the country.

By way of background, Uber sustained a data breach in September of 2014 that was investigated by the FTC in 2016.  Uber designated its CSO, Joseph Sullivan, to provide testimony regarding the incident.  Within ten days of testifying before the FTC, Sullivan received word that Uber had been breached again – but rather than update his testimony, he allegedly went to great lengths to conceal the new incident from the FTC.  Indeed, Sullivan allegedly went so far as to concoct a bug bounty program cover story and ask the hackers to sign an NDA as a condition of their receiving $100,000 in bitcoin.

The Special Agent’s supporting affidavit swears that “there is probable cause to believe that the defendant engaged in a cover-up intended to obstruct the lawful functions and official proceedings of the Federal Trade Commission. . . . It is my belief that SULLIVAN further intended to spare Uber and SULLIVAN negative publicity and loss of users and drivers that would have stemmed from disclosure of the hack and data breach.”

In other words, a CSO allegedly spared his employer “negative publicity and loss of users” by inaccurately describing an incident and failing to disclose it in a timely manner.  Even though the alleged conduct of Uber’s former CSO may have pushed the needle into the red zone, there are also potential arguments in his favor.  In coming up with one such counterargument, several Forrester analysts suggest:  “Sullivan did not inform the FTC during the sworn investigative hearing because he couldn’t have:  Sullivan learned of the 2016 breach 10 days later. To inform the FTC, Sullivan would have needed to reach out and inform them about a separate, new, but similar breach. There’s also some confusion as to whether Sullivan was under any legal obligation to do so.”

Whatever happens in this particular case, the fact remains CISOs sometimes inadvertently play too close to the edge.  The underpinnings of an incident are whatever they are – no one can or should ever try to morph them into something different.  Good legal and IT counsel will mitigate loss and certain exposures, but only with the assistance of CISOs and CSOs who recount events rather than fabricate them.  Not surprisingly, given that no company is immune to a breach, it is only the cover-up that will ever hurt – not the incident itself.

Schrems-II, Facebook-0

On July 16, 2020, the EU Court of Justice decided “Schrems II” and invalidated the EU Commission’s Decision 2016/1250 regarding the adequacy of the EU-U.S. Privacy Shield (“the Privacy Shield Decision”).  As described in the Press Release issued by the Court:

[T]he limitations on the protection of personal data arising from the domestic law of the United States on the access and use by US public authorities of such data transferred from the European Union to that third country, which the Commission assessed in Decision 2016/1250, are not circumscribed in a way that satisfies requirements that are essentially equivalent to those required under EU law, by the principle of proportionality, in so far as the surveillance programmes based on those provisions are not limited to what is strictly necessary.

This case was the second one brought by Max Schrems against Facebook in its Irish domicile – which is why the case is now in the hands of the Irish Data Protection Commission. In rejecting the use of a Privacy Shield Ombudsperson purportedly independent from the Intelligence Community – the agreed-upon safeguard found in the Privacy Shield Decision – the Court of Justice ruled that such a mechanism “does not provide data subjects with any cause of action before a body which offers guarantees substantially equivalent to those required by EU law, such as to ensure both the independence of the Ombudsperson provided for by that mechanism and the existence of rules empowering the Ombudsperson to adopt decisions that are binding on the US intelligence services.”

Now that the Court has invalidated the European Commission’s adequacy decision for the EU-U.S. Privacy Shield, thousands of US companies relying on that mechanism will need to reevaluate their compliance efforts.  The US Commerce Department today echoed the same disappointment likely felt by these companies.  Reminding companies there is still a “US” component very much intact in the “EU-US Privacy Shield”, the Secretary of Commerce also stated that “today’s decision does not relieve participating organizations of their Privacy Shield obligations.”

CCPA Enforcement Begins Today

Beginning on July 1, 2020, the California Attorney General’s office may start sending out warnings of potential CCPA violations and give notified businesses 30 days to correct those violations before facing possible fines or lawsuits.

In rejecting numerous requests to delay CCPA enforcement, Attorney General Xavier Becerra reasoned: “As families continue to move their lives increasingly online, it is essential for Californians to know their privacy options. Our office is committed to enforcing the law starting July 1.”

In November 2020, California voters may take a swipe at the AG’s efforts by approving a new ballot initiative – the California Privacy Rights Act – that creates a privacy enforcement agency some may consider “a woefully underfunded paper tiger” yet one that will nevertheless have exclusive enforcement power over certain provisions of CCPA to the exclusion of the AG’s office.

Given the very long gestation period for the proposed CPRA – this ballot law would become effective January 1, 2023 and enforceable on July 1, 2023 – the jury is certainly still out on whether its passage would ever directly benefit consumers or just lead to more lobbyist-driven amendments pushed by the California duopoly of Google and Facebook. As of right now, the Tech Lords of Stanford certainly remain in complete control.

UPDATE:  November 4, 2020

On November 3, 2020 – despite a significant late push by data oligarchs such as Google, the CPRA ballot initiative won by 56% of the vote.  As stated by Alastair Mactaggart, Chair of Californians for Consumer Privacy and the Prop 24 sponsor:  “With tonight’s historic passage of Prop 24, the California Privacy Rights Act, we are at the beginning of a journey that will profoundly shape the fabric of our society by redefining who is in control of our most personal information and putting consumers back in charge of their own data.”  

Former Presidential candidate, Andrew Yang – who was the Chair of the Board of Advisors for Californians for Consumer Privacy, added:  “I look forward to ushering in a new era of consumer privacy rights with passage of Prop 24, the California Privacy Rights Act. . . . It will sweep the country and I’m grateful to Californians for setting a new higher standard for how our data is treated.”

There is no denying this was a momentous vote.  On the other hand, a lot can happen before the CPRA effective date of January 1, 2023 – including passage of a law via standard lobbying channels or a new ballot initiative launched by the data oligarchs, with either one trimming the gains made this last election cycle.

California AG Pushes New Global Opt-Out Privacy Setting

On June 2, 2020, the Office of the California Attorney General (“OAG”) submitted its final proposed regulations under the California Consumer Privacy Act (CCPA).  The OAG press release suggests these final regulations clarify “important transparency and accountability mechanisms for businesses subject to the law.”  A number of those reviewing these final regulations correctly point out that they have not changed much from the last draft.

The most striking feature of these proposed regulations, however, is actually found in the explanatory reasoning jointly filed by the AG. The OAG Statement of Reasons suggests the OAG may have, in effect, mandated more than what was expressly required under CCPA, namely an opt-out setting for the sale of personal information that can be managed by consumers on a global basis.

By way of background, consumers have long had the capability to send “Do Not Track” (DNT) header signals from their browsers – with privacy advocates long providing tutorials on how consumer-choice DNT tools could be implemented on browsers.  Given that a DNT signal is a machine-readable header and not an embedded cookie, i.e., a file placed by websites onto a consumer’s computer in order to store privacy preferences, consumers can delete installed cookies without disrupting their global DNT signal.  Some companies do not even respond to DNT signals; Apple, for example, claims it does not “track its customers over time and across third party websites to provide targeted advertising.”
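For illustration, the DNT signal is nothing more than a request header the browser attaches to every outbound request.  Below is a minimal sketch – using only Python’s standard library, with the handler name and port chosen arbitrarily – of a server inspecting that header:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class DNTAwareHandler(BaseHTTPRequestHandler):
    """Illustrative handler that inspects the Do Not Track request header."""

    def do_GET(self):
        # Browsers send "DNT: 1" when the user has enabled Do Not Track.
        opted_out = self.headers.get("DNT") == "1"
        body = f"Do Not Track enabled: {opted_out}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), DNTAwareHandler).serve_forever()
```

Because the signal travels with every request, honoring it requires no cookie and survives any cookie deletion – which is precisely the point made above.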

The OAG sets forth in § 999.315 the relevant “Requests to Opt-Out” language later interpreted by the OAG in its Statement of Reasons.

Section 999.315(c) of the OAG’s regulations reads: “A business’s methods for submitting requests to opt-out shall be easy for consumers to execute and shall require minimal steps to allow the consumer to opt-out. A business shall not utilize a method that is designed with the purpose or has the substantial effect of subverting or impairing a consumer’s decision to opt-out.” And, the final Subsection (d)(1) reads:  “Any privacy control developed in accordance with these regulations shall clearly communicate or signal that a consumer intends to opt-out of the sale of personal information.”

Previously, an EFF-led privacy coalition recommended the deletion of the following clause from § 999.315(d)(1):  “The privacy control shall require that the consumer affirmatively select their choice to opt-out and shall not be designed with any pre-selected settings.”  That recommendation was adopted by the OAG and the “affirmative selection” language was deleted – obviating the need for a potential website-by-website affirmative opt-out selection by consumers.

While the § 315(d)(1) recommendation was adopted, the OAG chose not to adopt the EFF coalition’s recommendation to add the following clause at the end of § 315(c):  “A business shall treat a “Do Not Track” browsing header as such a choice.”  By rejecting this suggested new language, the OAG chose not to limit the scope of any implementation technology. As reflected in the OAG’s Statement of Reasons, this rejection actually ends up being an even more meaningful nod in the direction of the EFF Coalition.

Specifically, the OAG recognized its goal was to impose clear regulatory parameters without imposing technological requirements that might be limiting on a company:

By requiring that a privacy control be designed to clearly communicate or signal that the consumer intends to opt-out of the sale of personal information, the regulation sets clear parameters for what the control must communicate so as to avoid any ambiguous signals.  It does not prescribe a particular mechanism or technology; rather, it is technology-neutral to support innovation in privacy services to facilitate consumers’ exercise of their right to opt-out.  The regulation benefits both businesses and innovators who will develop such controls by providing guidance on the parameters of what must be communicated.  And because the regulation mandates that the privacy control clearly communicate that the consumer intends to opt-out of the sale of personal information, the consumer’s use of the control is sufficient to demonstrate that they are choosing to exercise their CCPA right.

More to the point, the OAG also explains:

Subsection (d) requires a business that collects personal information online to treat user-enabled global privacy controls as a valid request to opt-out.  This subsection is forward-looking and intended to encourage innovation and the development of technological solutions to facilitate and govern the submission of requests to opt-out.  Given the ease and frequency by which personal information is collected and sold when a consumer visits a website, consumers should have a similarly easy ability to request to opt-out globally.  This regulation offers consumers a global choice to opt-out of the sale of personal information, as opposed to going website by website to make individual requests with each business each time they use a new browser or a new device. (emphasis added).
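Because the regulation is deliberately technology-neutral, it does not say which signals qualify as a “user-enabled global privacy control.”  As a sketch only – assuming a business elected to honor both the legacy DNT header and the “Sec-GPC” header proposed by the Global Privacy Control effort – the server-side logic might look like this:

```python
def is_global_opt_out(headers: dict) -> bool:
    """Return True if the request carries a user-enabled global privacy
    control that should be treated as a valid CCPA opt-out request."""
    # The Global Privacy Control proposal signals opt-out with "Sec-GPC: 1".
    # Treating DNT the same way is an assumption here, not an OAG mandate.
    return headers.get("Sec-GPC") == "1" or headers.get("DNT") == "1"

def handle_request(headers: dict, user_id: str, opt_out_store: set) -> None:
    # Under § 999.315(d), the signal itself is the opt-out request;
    # no further confirmation may be demanded of the consumer.
    if is_global_opt_out(headers):
        opt_out_store.add(user_id)  # stop selling this user's personal information

# Usage: a single header is enough to register the opt-out.
opted_out: set = set()
handle_request({"Sec-GPC": "1"}, user_id="abc123", opt_out_store=opted_out)
assert "abc123" in opted_out
```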

Perhaps anticipating some push back, the OAG goes into detail regarding its authority by referencing prior experience with DNT requirements under the California Online Privacy Protection Act (Bus. & Prof. Code, § 22575 et seq.) (CalOPPA).  To that end, on May 21, 2014, the OAG released a set of recommendations to assist with compliance with CalOPPA’s DNT disclosure requirements.

The OAG justifies its approach as follows:

As the primary enforcer of [CalOPPA], the OAG has reviewed numerous privacy policies for compliance with CalOPPA, which requires the operator of an online service to disclose, among other things, how it responds to “Do Not Track” signals or other mechanisms that provide consumers the ability to exercise choice regarding the collection of personally identifiable information about their online activities over time and across third-party websites or online services.  (Bus. & Prof. Code, § 22575, subd. (b)(5).)  The majority of businesses disclose that they do not comply with those signals, meaning that they do not respond to any mechanism that provides consumers with the ability to exercise choice over how their information is collected.  Accordingly, the OAG has concluded that businesses will very likely similarly ignore or reject a global privacy control if the regulation permits discretionary compliance.  The regulation is thus necessary to prevent businesses from subverting or ignoring consumer tools related to their CCPA rights and, specifically, the exercise of the consumer’s right to opt-out of the sale of personal information. Contrary to public comments that the user-enabled global privacy setting is outside of the scope of the OAG’s authority, subsection (d) is authorized by the CCPA because it furthers and is consistent with the language, intent, and purpose of the CCPA.  (emphasis added).

Not surprisingly, given its technology-neutral approach, the manner in which companies will comply with a global opt-out capability is not spelled out by the OAG.  Companies may address a consumer-controlled global opt-out setting either by adopting a third-party product or by developing a solution in-house. Any such feature, however, will likely be tested by the OAG and the courts. No matter how this new requirement is implemented, however, it is very likely the OAG will come out swinging given that the November 2020 ballot initiative spearheaded by Alastair Mactaggart – the California Privacy Rights Act – would create the “California Privacy Protection Agency” as a new enforcement arm and potential competition for the OAG.

Ransomware Has Officially Become a D&O Problem

On April 30, 2020, ZDNet reported that there have been more than 1,000 SEC filings over the past 12 months listing ransomware as a risk factor – with more than 700 in 2020 alone.  These filings include annual reports (Forms 10-K and 20-F), quarterly reports (Form 10-Q), and registration statements (Form S-1).

Even the most sophisticated technology companies now insert the word “ransomware” into their Risk Factors section. See Alphabet, Inc., Form 10-Q, dated April 28, 2020, at 50  (“The availability of our products and services and fulfillment of our customer contracts depend on the continuing operation of our information technology and communications systems. Our systems are vulnerable to damage, interference, or interruption from terrorist attacks, natural disasters or pandemics (including COVID-19), the effects of climate change (such as sea level rise, drought, flooding, wildfires, and increased storm severity), power loss, telecommunications failures, computer viruses, ransomware attacks, computer denial of service attacks, phishing schemes, or other attempts to harm or access our systems.”).   

As reported by ZDNet, companies as varied as American Airlines, McDonald’s, Tupperware, and Pluralsight also list ransomware as a potential risk to their business. 

By inserting the word “ransomware” into a Risk Factors section, reporting companies may have elevated the relevant standard for companies who do not reference ransomware.  By way of background, in October 2011, the SEC began planting cyber risk disclosure seeds when it issued non-binding disclosure guidance regarding cybersecurity risks and incidents.  Back in 2011, the SEC wrote:  “Although no existing disclosure requirement explicitly refers to cybersecurity risks and cyber incidents, a number of disclosure requirements may impose an obligation on registrants to disclose such risks and incidents.” Seven years later, this non-binding guidance became binding.

On February 26, 2018, the SEC issued binding guidance that recognizes:  “Companies face an evolving landscape of cybersecurity threats in which hackers use a complex array of means to perpetrate cyber-attacks, including the use of stolen access credentials, malware, ransomware, phishing, structured query language injection attacks, and distributed denial-of-service attacks, among other means.”   By expressly listing ransomware two years ago in its Statement, the SEC was making it quite clear that the current threat landscape includes the risk of ransomware and that directors and officers have to address this likely risk.

More to the point, the Statement and Guidance on Public Company Cybersecurity Disclosures instructs “that the development of effective disclosure controls and procedures is best achieved when a company’s directors, officers, and other persons responsible for developing and overseeing such controls and procedures are informed about the cybersecurity risks and incidents that the company has faced or is likely to face.” 

Not surprisingly, the failure to disclose a prior ransomware attack would also be actionable.  See SEC Statement at 14 (“In meeting their disclosure obligations, companies may need to disclose previous or ongoing cybersecurity incidents or other past events in order to place discussions of these risks in the appropriate context.  For example, if a company previously experienced a material cybersecurity incident involving denial-of-service, it likely would not be sufficient for the company to disclose that there is a risk that a denial-of-service incident may occur.”).

If ransomware incidents were avoided altogether, however, there would be no liability attached to associated filings no matter what was communicated to the market. Moreover, even when an attack was not avoided, little disclosure risk would exist if the company had applied best practices to avoid such an incident and provided an accurate accounting of what took place. To that end, deploying proactive approaches considered state-of-the-art when dealing with ransomware risk will naturally mitigate any potential SEC disclosure risk.

For example, there is at least one novel solution that can reduce ransomware attacks by anticipating when a compromised system’s ransomware package will be detonated and neutralizing the threat before any release actually takes place.  By evaluating and deploying such cutting-edge solutions, companies will be well positioned to neutralize potential shareholder claims – as well as satisfy the much more important task of protecting corporate data and other digital assets.  Thankfully, “it is never too late to begin importing a more robust security and privacy profile into an organization – which is the only real way to diminish the risk of a ransomware attack.”  As with most successful corporate endeavors, management buy-in will typically be the necessary first step.

Our Current Cyber Pandemic Will Also Subside

On April 17, 2020, it was reported that researchers at Finland’s Arctic Security found “the number of networks experiencing malicious activity was more than double in March in the United States and many European countries compared with January, soon after the virus was first reported in China.”

Lari Huttunen at Arctic Security astutely pointed out why previously safe networks were now exposed: “In many cases, corporate firewalls and security policies had protected machines that had been infected by viruses or targeted malware . . . . Outside of the office, that protection can fall off sharply, allowing the infected machines to communicate again with the original hackers.”

Tom Kellermann – a cybersecurity thought leader – distills it this way: “There is a digitally historic event occurring in the background of this pandemic, and that is there is a cybercrime pandemic that is occurring.”

While there are certain internal ways of addressing cybersecurity threats arising from a viral pandemic, the exposures now faced by corporations become doubly damaging when the outside resources absolutely necessary to combat active threats are considered off-budget or not a critical enough priority. Smart companies generally survive stressful times by prioritizing with some foresight. Network security during a Cyber Pandemic should be a top priority no matter the size of the business.

During our Cyber Pandemic, companies recognizing and properly addressing the potential damage caused by threat actors will not only survive minor short-term hits to their bottom line caused by paying outside resources, they will likely be the ones coming out on top after both pandemics subside. There is definitely a light at the end of the tunnel for those willing to take the ride – just continue using trusted vehicles to get there.

Addressing COVID-19 Cybersecurity Threats

When implementing COVID-19 business continuity plans, companies should take into consideration security threats from cybercriminals looking to exploit fear, uncertainty and doubt – better known as FUD.  Fear can drive a thirst for the latest information and may lead employees to seek online information in a careless fashion – leaving best practices by the wayside.

According to Reinsurance News, there has already been “a surge of coronavirus-related cyber attacks”.  Many phishing attacks “have either claimed to have an attached list of people with the virus or have even asked the victim to make a bitcoin payment for it.” Not all employees are accustomed to the risks of a corporate-wide work-from-home (WFH) policy given the previous lack of intersection between work and personal computers.

One cybersecurity firm released information outlining these WFH risks. And another security provider offers a common-sense refresher:  “If you get an email that looks like it is from the WHO (World Health Organization) and you don’t normally get emails from the WHO, you should be cautious.” In addition to recommendations made by security consultants, there are privacy-forward recommendations that will necessarily mitigate against phishing exploits.  For example, WFH employees should be steered towards privacy browsers such as Brave and Firefox to avoid fingerprinting, and towards search engines such as DuckDuckGo for private searches.  A comprehensive listing of privacy-forward online tools is found at PrivacyTools.IO.

Criminals have already exploited the current FUD by creating very convincing COVID-19-related links.  As reported by Brian Krebs, several Russian-language cybercrime forums now sell a “digital Coronavirus infection kit” that uses the Johns Hopkins interactive map of real-time infections as part of a Java-based malware deployment scheme. The kit costs only $200 if the buyer has a Java code signing certificate and $700 if the buyer uses the seller’s certificate.

At a very basic level, WFH employees should be reminded not to click on sources of information other than clean URLs such as CDC.gov, or open unsolicited attachments even if they appear to come from a known associate.  Now that banks, hotels, and health providers are sending emails alerting their clients to newly implemented COVID-19 procedures, it is especially easy to succumb to spear phishing exploits – the hallmark of state-sponsored groups.  As recently reported, government-backed hacking groups from China, North Korea, and Russia have begun using COVID-19-based phishing lures to infect victims with malware and gain infrastructure access.  These recent attacks have primarily targeted users in countries outside the US, but there should be little doubt more groups will focus on the US in the coming weeks. Until ramped-up testing demonstrates that the COVID-19 risk has passed, companies are well advised to focus some of their security diligence on these targeted attacks.
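One simple control worth noting: a link checker that treats anything outside an allowlist as suspect.  The sketch below – in which the allowlist itself is merely a hypothetical starting point – shows why exact-domain matching matters, since lookalike hosts that merely embed a trusted name fail the check:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of trusted information sources; expand as needed.
TRUSTED_DOMAINS = {"cdc.gov", "who.int", "coronavirus.gov"}

def is_trusted_url(url: str) -> bool:
    """Return True only when the link's host is a trusted domain or a
    subdomain of one -- lookalike hosts that embed a trusted name fail."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(is_trusted_url("https://www.cdc.gov/coronavirus"))        # True
print(is_trusted_url("http://cdc.gov.infection-map.example"))   # False
```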

This does not mean employees need to be fed yet more FUD – this time regarding network security – without some good news. Employees can be reminded that a decade ago we survived another pandemic. Specifically, between April 2009 and April 2010, the Swine Flu caused 60.8 million cases, 274,304 hospitalizations, and 12,469 deaths in the United States. Globally, the Swine Flu infected between 700 million and 1.4 billion people, resulting in 150,000 to 575,000 deaths. Moreover, the young were a vector for the Swine Flu yet are not for COVID-19. And a large band of 25- to 35-year-olds are better within two days of contracting COVID-19 – hardly a bad cold – whereas there was no such band for the Swine Flu. On the downside, COVID-19 has a more efficient transmission mechanism than the Swine Flu, and we are better suited to developing influenza vaccines than coronavirus vaccines.

UPDATE: April 23, 2020

The CDC reports in its latest published statistics that there were 802,583 reported cases of COVID-19 and 44,575 associated deaths. Without a doubt, this pandemic is certainly much worse than the Swine Flu pandemic as previously reported by the CDC. Moreover, the current “panic pandemic” certainly shows no indications of subsiding.

Whether the governmental measures taken actually ratcheted up the body count or caused it to diminish is left for historians and clinicians to analyze. The hard fact remains the body count keeps going up and the U.S. economy is still on lockdown as of April 23, 2020.

UPDATE: May 1, 2020

On April 30, 2020, it was reported that Tonya Ugoretz, Deputy Assistant Director of the FBI Cyber Division, stated the FBI’s Internet Crime Complaint Center (IC3) is currently receiving between 3,000 and 4,000 cybersecurity complaints daily – IC3 normally averages 1,000 daily complaints.

UPDATE: May 6, 2020

On May 5, 2020, a joint alert from the United States Department of Homeland Security Cybersecurity and Infrastructure Security Agency and the United Kingdom’s National Cyber Security Centre warned of APTs targeting healthcare and essential services.

The alert warned of “ongoing activity by APT groups against organizations involved in both national and international COVID-19 responses.”  This May 5, 2020 alert follows an April 8, 2020 Alert that warned in broader terms of malicious cyber actors exploiting COVID-19.

APTs are conducted by nation-state actors given the level of resources and money needed to launch such attacks.  Moreover, they generally take eight to nine months to plan and coordinate before launch.  It is particularly disheartening that these recent attacks include those launched by the state-backed Chinese hackers known as APT41.  As one cybersecurity firm points out in a recently released white paper:  “APT41’s involvement is impossible to deny.”

Distilled to its essence, the uncovered APT41 attacks mean that before COVID-19 was even on US shores, Chinese state-actors were planning attacks targeting the healthcare and pharmaceutical sectors.  One can only hope the cyberattacks were not coordinated alongside the spread of the virus – a virus that only became public months after a coordinated attack would have been first planned.

A Statutory “Right of Compensation”

The below was excerpted and modified from written testimony submitted to the New York Senate.

According to the World Economic Forum (“WEF”), “personal data represents an emerging asset class, potentially every bit as valuable as other assets such as traded goods, gold or oil.”  Rethinking Personal Data:  Strengthening Trust, at 7, World Economic Forum Report (May 2012).  A subsequent paper from the WEF suggests “claims of personal data being ‘a new asset class’ are the strongest for . . . the inferred data [Institutions] possess about individuals on the basis that they invested the time, energy and resources in creating it.” Rethinking Personal Data: A New Lens for Strengthening Trust, at 17, World Economic Forum Report (May 2014). 

Surveillance advertising prevents those privacy rights underlying these assets from being easily managed – it is not up to consumers whether they will be fed an ad based on previous website visits or purchases; it will just happen.  Indeed, according to a survey of 1,000 persons conducted by Ipsos Public Affairs and released by Microsoft in January 2013, forty-five percent of respondents felt they had little or no control over the personal information companies gather about them while they are browsing the Web or using online services.  Moreover, even before the prevalence of online surveillance advertising, consumers faced privacy diminution from widespread tracking efforts.  See Steven A. Bibas, A Contractual Approach to Data Privacy, 17 Harv. J. Law & Public Policy 591 (Spring 1994) (“Although the ready availability of information helps us to trust others and coordinate actions, it also lessens our privacy. George Orwell presciently expressed our fear of losing all privacy to an omniscient Big Brother. Computers today track our telephone calls, credit-card spending, plane flights, educational and employment records, medical histories, and more.  Someone with free access to this information could piece together a coherent picture of our actions.”).

Current law, however, does not prevent someone from collecting publicly available information to create a comprehensive consumer profile – nor should there be any such law.  Similarly, there should not be the right to opt out of having publicly recorded information sold or shared.   The same rules, however, should not apply to personal information that is not collected solely from public sources.

This does not mean companies should have the unfettered right to use personal data. To address the market disparity between owners of personal data “assets” and the companies who monetize this data, Alastair Mactaggart launched in 2017 a California ballot initiative that led to the enactment of the California Consumer Privacy Act (CCPA).  Now that CCPA has come online – albeit in a weakened form due to the amendments signed into law – Mr. Mactaggart is pushing a new ballot initiative that presumably will strengthen CCPA.

While the EU by way of its General Data Protection Regulation (GDPR) ultimately looks to protect the EU and its residents from countries with weaker data processing safeguards, the path begun by CCPA – and continuing with Mactaggart’s latest ballot initiative, the California Privacy Rights and Enforcement Act of 2020 (Mactaggart 2020 Ballot Initiative) – seeks to protect individuals on a more fundamental level by giving them novel statutory rights.  For example, so long as companies are compliant with GDPR’s processing requirements and have a legal basis to conduct such processing, data subjects are left without any real financial recourse.  Under GDPR, data subjects may be provided with specific rights, including the rights to erasure, processing, portability, access and correction, yet these rights are in no way transferable or of any value apart from GDPR’s regulatory scheme.

CCPA apparently has as its core mission bringing transparency to the use of consumer data.  Before the enactment of CCPA, California was like all other states in that its residents could not readily learn what personal information a business had collected about them, how such information was used, and how to prevent such use from taking place in the future.  Despite similar protections in GDPR, the fact that GDPR precludes processing of data unless certain requirements are met represents a fundamental difference between the US and EU approaches.  Under the EU approach, subject to certain exceptions, a company can continue processing data as long as the GDPR data processing regime is followed.  Under the CCPA approach, consumers have a say in whether processing takes place – no matter the level of compliance – as long as financial triggers exist for the use of the data.

State legislatures should study the Mactaggart 2020 Ballot Initiative given its correction of defects and weaknesses found in CCPA while still pushing forward CCPA’s consumer-first mandate.  This does not mean, however, that the Mactaggart 2020 Ballot Initiative is without flaws.  The suggestion that a browser-based solution can easily address the “Do Not Sell” requirement is flawed given the 12-month wait period before a company can again request consent after a “Do Not Sell” request.  When taking into consideration VPN usage and the transient nature of a browser’s tracking tools, a future viable solution will require a much more comprehensive technology framework – all without requiring users to create a new account.  See Cal. Civ. Code § 1798.135(a)(1)–(2) (precluding companies from requiring consumers to create a new account as a means of enforcing their right to bar sales of data).

Moreover, the CCPA’s Sisyphean task of creating a “Do Not Sell My Personal Information” link that is both “clear and conspicuous” and yet found on every page that collects personal information is not rectified in the Mactaggart 2020 Ballot Initiative.  Most compliant sites have simply added the link to every footer for site visitors with a California IP address – even one generated by a VPN – or direct visitors to a CCPA-specific page. A footer link is hardly “clear and conspicuous”, yet that, coupled with a linked page having a “Do Not Sell My Personal Information” button, is now the regulatory-accepted compliance tool for this requirement.
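For what it is worth, the footer-link pattern described above reduces to a few lines of templating.  The sketch below assumes a hypothetical geo-IP helper – real deployments query a commercial geolocation database, which, as noted, a VPN can defeat:

```python
def is_california_ip(ip: str) -> bool:
    # Placeholder for a real geo-IP lookup (e.g., a MaxMind-style database).
    # Stubbed with a documentation-range prefix so the sketch runs.
    return ip.startswith("192.0.2.")

def render_footer(client_ip: str) -> str:
    """Render a page footer, adding the CCPA link only for California visitors."""
    links = ['<a href="/privacy">Privacy Policy</a>']
    if is_california_ip(client_ip):
        links.append('<a href="/ccpa/do-not-sell">Do Not Sell My Personal Information</a>')
    return "<footer>" + " | ".join(links) + "</footer>"

print(render_footer("192.0.2.44"))    # California visitor: CCPA link shown
print(render_footer("198.51.100.9"))  # everyone else: standard footer
```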

CCPA recognizes that the consumer data problem relates to the vast amounts of information in the hands of few data merchants.  There are also Congressional efforts focused on combating large bad actors by instilling more transparency in the data collection process.  Specifically, U.S. Sens. Mark R. Warner (D-VA), Josh Hawley (R-MO) and Richard Blumenthal (D-CT) have introduced “the Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, bipartisan legislation that will encourage market-based competition to dominant social media platforms by requiring the largest companies to make user data portable – and their services interoperable – with other platforms, and to allow users to designate a trusted third-party service to manage their privacy and account settings, if they so choose.”  The bill expressly focuses on only those technology platforms with over 100 million monthly active users.  As stated in the press release, “Sens. Warner and Hawley have partnered on the DASHBOARD Act, legislation to require data harvesting companies such as social media platforms to disclose how they are monetizing consumer data, as well as the Do Not Track Act, which would allow users to opt out of non-essential data collection, modeled after the Federal Trade Commission’s (FTC) “Do Not Call” list.”

As with CCPA, the Mactaggart 2020 Ballot Initiative, and other state initiatives, federal initiatives generally focus on transparency – helping consumers understand what data is collected and how; control – allowing consumers to reject usage of personal data; and accountability – providing adequate data security and compliance coupled with consumer consents.  Ultimately, these are the three pillars of any privacy law worth enacting. There is one more pillar, however, that has not gotten much attention yet is equally important.

A Right of Compensation

Lacking in current federal and state privacy laws and bills is the statutory pronouncement that consumers have an actual property right derived from their personal data.  Providing for a specific statutory property right that is fixed and delineated in a privacy law will make transparency, control and accountability much easier to enforce. 

The privacy community has long toyed with ascribing property rights to personal data.  See Julie E. Cohen, Examined Lives: Informational Privacy and the Subject as Object, 52 Stan. L. Rev. 1373, 1379 (2000) (“One answer to the question “Why ownership?” then, is that it seems we simply cannot help ourselves. Property talk is just how we talk about matters of great importance. In particular, it is how we talk about the allocation of rights in things, and personally-identified information seems “thingified” (or detached from self) in ways that other sorts of private matters—intimate privacy, for example—are not. On this view, the “propertization” of the informational privacy debate is a matter of course; it merely testifies to the enormous power of property thinking in shaping the rules and patterns by which we live.”).

Some have voiced preemptive opposition to any data ownership approach.  See generally Sarah Jeong, We don’t allow people to sell their kidneys. We shouldn’t let them sell the details of their lives, either, The New York Times (July 5, 2019) (“Legally vesting ownership in data isn’t a new idea. It’s often been kicked around as a way to strengthen privacy. But the entire analogy of owning data, like owning a house or a car, falls apart with a little scrutiny. A property right is alienable — once you sell your house, it’s gone. But the most fundamental human rights are inalienable, often because the rights become meaningless once they are alienable. What’s the point of life and liberty if you can sell them?”); Mark MacCarthy, Privacy Is Not A Property Right In Personal Information, Forbes (November 2, 2018) (“Some commentators new to the privacy debate are quick to offer what they think is a clever idea: assign property rights over personal information to the user and let the marketplace decide what happens next. Whether this idea is meritorious has big implications for how we think about things like data portability and consent.  Turns out it’s wrong.”). 

As referenced in Mark MacCarthy’s opinion piece, the notion of personal data as property conflicts with the reality, for example, that medical information can simultaneously potentially be owned by patients, medical schools, pharmacies, doctors, pharmaceutical companies, EMR software vendors, advertising companies and Internet service providers. Opponents of data property rights also wonder how property ownership rights will be allocated – for associated payments or veto power, to the constituent owners.

It can also be argued that personal data continually percolates uncontrolled around the world and constitutes a “social good” that can never be owned by individuals.  For example, it is the underpinning of a good deal of medical research that ends up curing disease.  Information concerning a consumer’s interactions with others presumably also allows the participants in those interactions to claim ownership of the related inferred data.  Moreover, it is easy to argue the First Amendment should bar the creation of a data property regime given it might potentially stifle speech between parties.

The “social good” argument is likely the one with the strongest appeal.  For example, on November 11, 2019, The Wall Street Journal exposed Google’s “Project Nightingale” and its resulting company access to the health information maintained by Ascension – one of the nation’s leading health systems.  In a November 11, 2019 blog post, Google explained that this arrangement was to support Ascension “with technology that helps them to deliver better care to patients across the United States.”  What is noticeably absent from the blog post is whether Google will also obtain access to patient medical records in a deidentified or other manner.  This is noteworthy given that last year researchers at Google announced a way to predict a person’s blood pressure, age, and smoking status simply from an image of the retina.  In order to do so, however, Google first had to analyze retinal images from 284,335 patients.  Given health research is obviously a “social good”, the use or sale of deidentified protected health information (PHI) has long been an accepted use of medical data.  Oregon’s failed Senate Bill 703 would have been the first law in the nation to require specific consent for the sale of deidentified PHI – data that is currently sold each year for billions.

No matter how they are ultimately couched, the paternalistic arguments against individuals having property rights in their data all miss the mark.  First, simply because a privacy right may be perceived as “inalienable” – as it is under the California Constitution – does not mean there cannot be transferable “compensation units” derived from such rights.  Indeed, certain inalienable rights, e.g., the right to freedom and the right to property, are routinely suspended during a trial and after conviction based on the voluntary commission of a crime.  This unfortunately happens every day throughout the country.  There is no reason a person should be precluded from voluntarily transforming certain ascribed rights into fungible ownership interests for a set duration and upon a specific set of circumstances.  No one currently corrals persons living on the streets and insists they assert their right to privacy, despite the fact that sleeping outdoors is obviously a knowing waiver of that right.  Similarly, persons every day voluntarily join affinity clubs to obtain rewards while trading away unknown personal data in an unknown surveillance arrangement.  Such conduct certainly does not mean the inalienable “right to privacy” was shredded and destroyed by such individuals.

The fact that multiple parties may claim ownership rights in the same personal data also does not negate the fact an ownership regime can viably exist – only that it will require careful coordination and adequate technology to implement.  Moreover, any argument based on the First Amendment also misses the mark in the same way no one has a First Amendment right to produce a copyright-protected play without proper consent from the writer.

Providing consumers with the ability and “statutory right to trade one’s personal data” – even if the fair market value of such data might be actually quite minimal, is the actual specific ownership right that should be statutorily created.  Noted academics long ago suspected this might be the correct path to take.  See Julie E. Cohen, Examined Lives: Informational Privacy and the Subject as Object, 52 Stan. L. Rev. 1373, 1391 (2000) (“A successful data privacy regime is precisely one that guarantees individuals the right to trade their personal information for perceived benefits, and that places the lowest transaction cost barriers in the way of consensual trades. If individuals choose to trade their personal data away without placing restrictions on secondary or tertiary uses, surely it is their business. On this view, choice rather than ownership is (or should be) the engine of privacy policy. What matters most is that personal data is owned at the end of the day in the manner the parties have agreed.”) (emphasis added); Id. at 1383 (“A relational approach to personally-identified data might, but need not, assign “ownership” or control of exchange based on possession.”); Richard A. Posner, The Right of Privacy, 12 Ga. L. Rev. 393, 394 (Spring 1977) (“People invariably possess information, including facts about themselves and contents of communications, that they will incur costs to conceal. Sometimes such information is of value to others: that is, others will incur costs to discover it. Thus we have two economic goods, “privacy” and “prying.” . . . An alternative [economic analysis of privacy] is to regard privacy and prying as intermediate rather than final goods, instrumental rather than ultimate values. Under this approach, people are assumed not to desire or value privacy or prying in themselves but to use these goods as inputs into the production of income or some other broad measure of utility or welfare.”) (emphasis added).

Current efforts at creating a statutory privacy regime can actually be considered precursors to a statutory “transactional property” approach.  Under CCPA:  “A business may offer financial incentives, including payments to consumers as compensation, for the collection of personal information, the sale of personal information, or the deletion of personal information.”  Cal. Civ. Code § 1798.125(b)(1).   Indeed, the healthcare privacy regime of HIPAA long understood the possibility PHI might be sold by a covered entity.  See 45 CFR § 164.508(a)(4)(i) (“Notwithstanding any provision of this subpart, other than the transition provisions in § 164.532, a covered entity must obtain an authorization for any disclosure of protected health information which is a sale of protected health information, as defined in § 164.501 of this subpart. (ii) Such authorization must state that the disclosure will result in remuneration to the covered entity.”).  Moreover, HIPAA even anticipates state statutes having greater protections.  See 45 CFR § 160.203 (There is an express exemption under HIPAA for State law when that “State law relates to the privacy of health information and is more stringent than a standard, requirement, or implementation specification adopted” under HIPAA).

A transactional property approach empowers consumers without placing unnecessary barriers on the “social good” use of data – it is even the trigger for certain of CCPA’s consumer rights.  Consumers could either choose to accept certain new statutory protections, e.g., the right to delete, or lease their data based on an economic model that would allow for the transparency needed to determine whether the data is even able to be sold.  If data is not actually salable, consumers should be limited in how they can prevent companies from using their data given the countervailing social good inherent in the free exchange of consumer data.  If there is no existing viable market for the consumer data in question, there should not be any associated requirement that a company pay any set amount for such data or be precluded from using such data in a deidentified format.  In other words, the burdens claimed by opponents of a property approach would be mitigated – consumers would only be given a piece of the pie and not the whole pie, and any purported “veto power” would never really come into existence.  Moreover, a regulatory framework that allows market dynamics to dictate the applicability of protections afforded to consumers is likely the fairest approach to both consumers and data merchants alike.

Similar to the way the Mactaggart 2020 Ballot Initiative proposes the creation of a new California agency – the California Privacy Protection Agency (CPPA), which would cost $10 million to implement – it is suggested that a public benefit corporation ensure the necessary framework gets implemented.  In other words, unlike in California, where the CPPA would only buttress the enforcement and regulatory work done by the California Attorney General’s Office, a public Data Protection Corporation (DPC) would coordinate with the private sector to ensure the requirements of a privacy law are viable and can come to life.  Simply put, the creation of the DPC would ensure that the compliance problems currently visited on companies subject to CCPA are never repeated.  There is analogous precedent for the creation of the DPC in the environmental arena.

No one can dispute that a primary purpose of an environmental law is either to prevent potential toxins from infiltrating land, water and air or to remove and properly dispose of pollutants already released.  Addressing improperly used consumer data similarly requires a massive cleanup effort and can take a page from how environmental concerns were previously addressed in New York.  To that end, in 1970 the New York State Environmental Facilities Corporation (EFC) was created by the New York State Environmental Facilities Corporation Act.

As a public benefit corporation of the State, EFC is a corporate entity separate and apart from the State.  State law empowers the EFC to provide financing for certain environmental projects as well as “render technical advice and assistance to private entities, state agencies and local government units on sewage treatment and collection, pollution control, recycling, hazardous waste abatement, solid waste disposal and other related subjects.”  Indeed, as stated by the EFC on its website, its mission is to provide “expert technical assistance for environmental projects in New York State. . . . We promote innovative environmental technologies and practices in all of our programs.”

Similarly, the DPC would provide technical assistance in conformance with the enacting law’s mandate to protect consumer data.  At a basic level, there is never a need to grant access to all data for all purposes to every company interested in consumer data.  Whether by evaluating current zero-knowledge proof solutions – where a verifier has “zero knowledge” of information unnecessary for an actual verification – or determining the feasibility of certain self-sovereign identity solutions, the DPC can ultimately provide the necessary “secret sauce” for a successful privacy law.  Statutory efforts to legislate on privacy will forever be hamstrung if implementation technology remains an afterthought presumed to simply sort itself out after a law is passed.  The goal of the DPC would be to ensure there are adequate technical means available to execute on the legislation passed – not to pick technology sides or inadvertently delay private sector efforts at technology development.
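To give a flavor of what “zero knowledge” of unnecessary information means in practice, below is a toy version of the classic Schnorr identification protocol: the verifier confirms the prover knows a secret without ever seeing it.  The parameters are deliberately tiny for readability and offer no real security – this is a sketch of the concept, not a production scheme:

```python
import hashlib
import secrets

# Toy group parameters: p = 2q + 1 with p = 23 and q = 11; g = 2 generates
# the order-11 subgroup.  Real deployments use primes thousands of bits long.
p, q, g = 23, 11, 2

def prove(x: int) -> tuple:
    """Prover: demonstrate knowledge of x (where y = g^x mod p) without revealing x."""
    y = pow(g, x, p)          # public key
    r = secrets.randbelow(q)  # one-time nonce
    t = pow(g, r, p)          # commitment
    c = int(hashlib.sha256(f"{t}:{y}".encode()).hexdigest(), 16) % q  # Fiat-Shamir challenge
    s = (r + c * x) % q       # response
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Verifier: accepts or rejects the proof while learning nothing about x."""
    c = int(hashlib.sha256(f"{t}:{y}".encode()).hexdigest(), 16) % q
    return pow(g, s, p) == (t * pow(y, c, p)) % p

secret = secrets.randbelow(q)  # the secret never leaves the prover
assert verify(*prove(secret))
```

The same pattern – prove the fact, withhold the data – is what would let a verifier confirm, say, that a consumer is over 21 without ever receiving a birthdate.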

To sum up, the four major components needed in a successful privacy law should begin with the creation of a statutory “Right of Compensation” and end with the means to effectuate such a right: 

  1. Creation of a “transactional property right” in consumer data giving rise to a new Right of Compensation;
  2. Development of a compliance framework that would only apply to companies maintaining significant amounts of consumer data;
  3. Insertion of rights and obligations that focus on the three established privacy pillars of transparency, control and accountability; and
  4. Creation of a “Data Protection Corporation” – a public corporation largely tasked with ensuring that what is statutorily required is feasible from both a technological and market perspective.