All posts by Paul E. Paray

The DeFi End Game

A skilled chess player will tell you the best way to study chess at a high level is to first study endgames and truly learn the power of each piece.  Memorizing book openings generally comes last.  If one wants to learn about the insurance industry, first take a job in the claims department.  In a similar way, students of disruptive technologies benefit from first learning their “end game”.  

Blockchain is one disruptive technology that has still not fully found its business sea legs.  The purported proxy for blockchain – Bitcoin, recently hit all-time highs, so naturally on January 3, 2021 a forecaster placed a ten-year target of $1 million on this speculative asset.  Every good bubble requires inflating, and the very speculative Bitcoin bubble currently being massively inflated by hedge fund money is no different.

Bitcoin’s bubble ascension does not mean, however, that the seismic blockchain and distributed ledger technology (DLT) shifts taking place in the financial industry over the past five years have been illusory or should be ignored.  As previously recognized, “acceptance of blockchain technology by the financial industry will be indelible proof those mistakes of 1995 made by retail sales and marketing companies will not be repeated by the financial industry.”

Over the past several years, financial titans have reluctantly come out swinging in favor of convertible virtual currency (CVC) transactions.  For example, most US PayPal customers now have the ability to buy, sell, and hold four different cryptocurrencies – BTC, ETH, LTC, and BCH – and use them as a funding source with the company’s 26 million merchants.  Presently, PayPal’s maximum dollar amount for weekly CVC purchases is $20,000, but even that relatively high consumer cap will likely move upward as PayPal climbs the financial transaction food chain – with PayPal’s Venmo next in line.

The largest bank in the United States – J.P. Morgan Chase, launched its JPM Coin in 2019, and in October 2020 set up an entirely new business, Onyx, as an umbrella for its blockchain and CVC initiatives – including JPM Coin.  According to Jamie Dimon, Chairman and CEO of J.P. Morgan:  “Onyx is at the forefront of a major shift in the financial services industry. This new business unit reflects J.P. Morgan’s commitment to innovation as we continue to build cutting-edge technology that delivers a better, faster and more inclusive financial system.” On December 10, 2020, J.P. Morgan announced it completed a live, blockchain-based intraday repo transaction using JPM Coin.  And, Visa has filed a patent application for what may seem perfunctory, namely recording digital currencies on a blockchain.

Apart from these blockchain-based efforts, there is a whole category of blockchain initiatives that will forever fundamentally alter the broader financial sector – to the likely chagrin of PayPal, J.P. Morgan, and Visa. The banner name for these new blockchain and DLT initiatives is “DeFi”, or decentralized finance.

In December 2019, the Total Value Locked (TVL) in the DeFi market was less than $700 million; by the end of December 2020 it had grown to $14 billion; and as of January 5, 2021 it stood at over $19 billion and growing – a staggering trajectory.  The TVL figure covers all DeFi projects but is largely driven by the lending platform MakerDAO – a decentralized credit platform supporting Dai, a stablecoin pegged to the US dollar.  Decentralized exchanges (DEXes) such as Uniswap make up the remaining bulk of projects.  DEXes enforce trading rules and execute trades without charging the high fees normally associated with alternative investment trades.

A commitment of $19 billion to DeFi initiatives may seem minuscule compared to, for example, the over $6 trillion in foreign exchange trades conducted each day.  On the other hand, each DeFi transaction potentially empowers individuals while at the same time weakening the grip over the monetary system currently held by central banks and finance intermediaries – a true game changer by any measure.

Generally relying on the public Ethereum blockchain platform, most DeFi projects deploy smart contracts to automate what previously required human intervention – obviating the need for central authorities such as banks or intermediaries.  DeFi Pulse nicely showcases the benefits of DeFi by describing it as “money Legos” and giving the following example:

Compound is a money market or, in other words, a lending service on Ethereum. When you supply DAI to Compound, you receive cDAI tokens which represent both your DAI in Compound and any interest you’ve earned from lending. Since cDAI is a token, you can send, receive, or even use cDAI in other smart contracts. Money Legos in action: ETH into MakerDAO to mint DAI tokens, DAI being supplied to Compound, cDAI tokens can be used in other DApps.  For example, you can swap ETH for cDAI on a DEX and instantly start earning interest for just holding cDAI. And because you choose how you interact with smart contracts on the blockchain, you can use a DEX aggregator like DEX.AG to compare and trade at the best prices across all the popular DEXes, all within seconds.
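To make the “money Legos” mechanics concrete, here is a minimal sketch – plain Python, with purely hypothetical exchange rates and amounts rather than live Compound data – of the cToken accounting described above: supplying DAI mints cDAI at the current exchange rate, and because that rate rises as interest accrues, the same cDAI balance later redeems for more DAI.  Because cDAI is itself just a token balance, that balance is what other DApps can accept and build on.

```python
# Minimal, self-contained sketch of Compound-style cToken accounting.
# The exchange rates and amounts below are illustrative assumptions; a live
# integration would read the actual rate from the cDAI contract rather than
# hard-coding it.

def dai_to_cdai(dai_amount: float, exchange_rate: float) -> float:
    """Supplying DAI mints cDAI at the current exchange rate (DAI per cDAI)."""
    return dai_amount / exchange_rate

def cdai_to_dai(cdai_amount: float, exchange_rate: float) -> float:
    """Redeeming cDAI returns the underlying DAI at the (now higher) rate."""
    return cdai_amount * exchange_rate

# Day 0: supply 1,000 DAI when one cDAI is worth 0.0208 DAI (hypothetical rate).
rate_at_deposit = 0.0208
cdai_balance = dai_to_cdai(1_000, rate_at_deposit)   # ~48,076.92 cDAI

# Months later: interest has accrued, so each cDAI redeems for more DAI.
rate_at_redemption = 0.0213                          # hypothetical, higher rate
dai_back = cdai_to_dai(cdai_balance, rate_at_redemption)

print(f"cDAI held:      {cdai_balance:,.2f}")
print(f"DAI redeemable: {dai_back:,.2f}")            # ~1,024 DAI: principal plus interest
```

The same cDAI number could just as easily be supplied to another smart contract, which is the composability point DeFi Pulse is making.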

In 2021, crowdfunding will help fund some of the DeFi startups looking to eventually disintermediate the more traditional financial firms these startups would otherwise approach for financing.  As of November 2020, companies can raise up to $5 million in seed capital through online platforms in a State-preempted manner – with platforms previously facilitating hundreds of millions of dollars in raises under the prior SEC Regulation Crowdfunding cap of $1.07 million.  Even though a typical online crowdfunding platform itself breaks away from traditional centralized banking, the platform’s own success is beside the point for the DeFi initiatives potentially opened up by Regulation Crowdfunding.  What may be more relevant are the new ideas coming to market without the latent influence of legacy financing.

Before widespread adoption of any DeFi product is even feasible, however, regulatory scrutiny will be needed to protect consumers onboarding these new DeFi applications.  Given that a CVC wallet is the exit ramp for many DeFi initiatives, it is no surprise that wallets have become an area of regulatory interest.  For example, the US Treasury’s Financial Crimes Enforcement Network (“FinCEN”) recently proposed a rule that would require banks and money service businesses to file a report with FinCEN containing information related to a customer, their CVC transaction, and the counterparty (including name and physical address) “if a counterparty to the transaction is using an unhosted or otherwise covered wallet and the transaction is greater than $10,000.”  FinCEN is issuing regulations on transactions using digital currency wallets because the growth of individual CVC transactions will continue unabated.

While providing a suggested Token Safe Harbor Proposal, SEC Commissioner Hester M. Peirce offered an excellent analysis of the “regulatory Catch-22” faced by decentralized networks looking to comport with SEC regulatory law.  In addition to Commissioner Peirce’s forward thinking, the SEC also recently spun off its FinHub into a standalone office to assist blockchain and DLT innovators.

Despite these technology-forward initiatives, the SEC continues placing an exclamation point on its regulatory reach.  For example, the SEC last month shook the Ripple world by claiming in a lawsuit that Ripple’s XRP token – used by financial institutions around the globe – was an unregistered security.  It also ended the year by filing a Cease and Desist Order against ShipChain on similar grounds.  These sorts of efforts show US regulators are still corralling the blockchain stallion – albeit primarily through the Howey door.  Disruptive DeFi initiatives should remain undeterred.

More urgent concerns for the DeFi community are coding bugs, double-spend exploits, traditional hacks, and any number of faulty software implementations that result when smart contracts fail to undergo adequate audits.  Although DeFi lost only $50 million to such attacks in 2020, malicious actors will certainly begin seeing a larger target over DeFi’s head as its growth continues.  Moreover, given that most DeFi projects run on Ethereum, there are future threats not even widely discussed – such as those potentially arising from the miners who order transactions on a blockchain for a fee and who are no longer satisfied with just receiving those fees.

All of these potential risks – whether regulatory, technological, malicious, or competitive – remain dwarfed, however, by the potential upside found in a successful, widely adopted DeFi application or protocol.  One likely key to success is to replicate what companies such as PayPal chose to do – take a widely used existing tool and deploy within it a profitable new capability that allows for flexibility along with actual autonomy and consumer self-determination.  DeFi will ultimately go nowhere if it only brings into the fold insiders stuck in Moore’s early adopter phase.

Moreover, no open-source project can ascend until a large enough market believes the tradeoffs between ease of use, financial benefits, and utility ring strongly in its favor.  For example, despite Linux’s strong position in the web server market, the Linux desktop will never really threaten Microsoft’s foothold until the relevant commercial and consumer markets believe it truly meets all of their needs.

Similarly, DeFi will never gain a foothold reaching above the “PayPalJPMVisa” mountain peak until at least one DeFi application checks all the relevant boxes for a sizable enough market.  It may be a decade before a DeFi project reaches that vantage point – with the classic Amazon vs. Sears endgame likely being studied along the way. 

Apple’s Consumer Data Aspirations

In a November 19, 2020 letter to various non-profit groups, Apple reaffirmed its commitment to the App Tracking Transparency (ATT) permission feature first announced in June 2020:  “We developed ATT for a single reason:  because we share your concerns about users being tracked without their consent and the bundling and reselling of data by advertising networks and data brokers.”  Slated for release in 2021, the ATT feature requires an app to obtain the user’s permission before advertisers can access certain data, namely the Identifier for Advertisers (IDFA).  Using the ATT feature, consumers will allow or reject tracking on an app-by-app basis.

Apple uses the IDFA to group users with similar search or browsing activity into segments, in an effort to prevent advertisers from reverse-engineering personally identifiable information.  As described by Apple:  “We create segments, which are groups of people who share similar characteristics, and use these groups for delivering targeted ads. Information about you may be used to determine which segments you’re assigned to, and thus, which ads you receive. To protect your privacy, targeted ads are delivered only if more than 5,000 people meet the targeting criteria.”

When touting its alleged “privacy forward” ATT feature, Apple threw down yet another privacy gauntlet against Facebook:  “Facebook executives have made clear their intent is to collect as much data as possible across both first and third party products to develop and monetize detailed profiles of their users, and this disregard for user privacy continues to expand to include more of their products.”  Letter, dated November 19, 2020.

In a November 20, 2020 statement sent to Business Insider, Facebook counterpunched:  “The truth is Apple has expanded its business into advertising and through its upcoming iOS 14 changes is trying to move the free internet into paid apps and services where they profit. . . They claim it’s about privacy, but it’s about profit. . . This is all part of a transformation of Apple’s business away from innovative hardware products to data-driven software and media.”

In other words, Facebook suggested that Apple plans on using its dominant market position to prioritize its own data collection efforts while making it difficult for competitors to use the same data.   Two months earlier, Facebook informed its business partners that it would “not collect the identifier for advertisers (IDFA) on our own apps on iOS 14 devices. . . . We may revisit this decision as Apple offers more guidance.”

Surprisingly, Facebook may actually have a point or two regarding Apple’s aspirations.  On November 16, 2020, a group led by privacy activist Max Schrems filed complaints in Germany and Spain over Apple’s online tracking tool claiming a breach of the EU’s e-Privacy Directive.   

According to the German Complaint:

Apple defines the IDFA as “an alphanumeric string unique to each device, that you [the third party app developer] only use for advertising. Specific uses are for frequency capping, attribution, conversion events, estimating the number of unique users, advertising fraud detection, and debugging”.  The IDFA “is very similar to a cookie: Apple and third parties (e.g. applications providers) can access this piece of information stored on the users’ device to track their behaviour, elaborate consumption preferences and provide relevant advertising. . . In practice, the IDFA is like a “digital license plate”. Every action of the user can be linked to the “license plate” and used to build a rich profile about the user. Such profile can later be used to target personalised advertisements, in-app purchases, promotions etc. When compared to traditional internet tracking IDs, the IDFA is simply a “tracking ID in a mobile phone” instead of a tracking ID in a browser cookie.

According to Reuters, Apple immediately disputed these claims, stating they were “factually inaccurate”.  Apple curiously also told Reuters that it “does not access or use the IDFA on a user’s device for any purpose”.  The statement is curious only because, on its face, it means little once one considers that Apple allows “segmented” use of and access to this “license plate” data.  By creating an “identifier for advertisers” form of digital “license plate”, Apple most certainly uses the IDFA by proxy every time one of its ad partners uses it.

Moreover, days before its public Facebook spat, Apple was called out by a cybersecurity expert for perceived privacy shortcomings in Gatekeeper – the Apple system used for managing third-party application security.  Pointing to flaws in how Gatekeeper relays and stores unencrypted information, Jeffrey Paul concluded:  “Apple knows when you’re at home. When you’re at work. What apps you open there, and how often. . . . This data amounts to a tremendous trove of data about your life and habits, and allows someone possessing all of it to identify your movement and activity patterns.”

A November 15, 2020 editorial in Apple Insider dismissed these perceived risks as illusory.  According to the editorial, “there’s not really much utility in knowing just what app is being launched, realistically speaking.”  And to boot, “ISPs could have that data if they wanted to without the limited info that Apple’s Gatekeeper may provide.”

By claiming others could gather even more data and that the data in question does not have “much utility”, the editorial did not provide any real refutation of Jeffrey Paul’s basic concerns. Instead, the writer for Apple Insider hopes for the best:  “There’s not even the prospect of Apple pulling a Google and using this data, as Apple has been a voracious defender of user privacy for many years, and it is unlikely to make such a move.”  In other words, just trust Apple to do the right thing.

The very next day Apple actually did do the right thing and stopped collecting IP addresses related to Gatekeeper’s developer checks – likely in deference to Jeffrey Paul’s research.  The Apple Support Update released on November 16, 2020 states:  “To further protect privacy, we have stopped logging IP addresses associated with Developer ID certificate checks, and we will ensure that any collected IP addresses are removed from logs.  In addition, over the the [sic] next year we will introduce several changes to our security checks:   A new encrypted protocol for Developer ID certificate revocation checks; Strong protections against server failure; [and] A new preference for users to opt out of these security protections.”  These new safeguards address the exact issues raised by Jeffrey Paul in his blog.

Apple’s aspirations regarding consumer data control will likely cause it to continue butting heads with social media platforms guarding their data oligarchies and privacy advocates protecting consumers. As the world’s largest market cap company, however, Apple may be uniquely positioned to take on such challenges.  Unfortunately, governmental intervention may be the only viable check on Apple should the company ever fully stray from its prior data privacy commitments. Given the current dysfunctional political environment, Apple likely has a long runway should regulators ever come knocking.

Ransomware Groups Declare War on US Hospitals

A recent phase of the ongoing two-pronged cyber war waged against the United States – by Russia, Iran, and North Korea on one front and by China on the other – has taken an ugly turn.  The Russian faction has launched various sophisticated ransomware attacks against healthcare providers and hospital systems across the United States.

As stated in an October 28, 2020 Alert from the Cybersecurity & Infrastructure Security Agency (CISA), there is “credible information of an increased and imminent cybercrime threat to U.S. hospitals and healthcare providers.”  In addition to the CISA Alert, cybersecurity firms battling this latest threat have shared how these latest attacks are perpetrated.

Our current healthcare cyber battle is further complicated by an October 1, 2020 Advisory from the U.S. Department of the Treasury’s Office of Foreign Assets Control (OFAC) warning ransomware victims against conducting business with those on the OFAC list – including specific ransomware groups such as the Russia-based group behind the Dridex malware.  The OFAC Advisory is likely driven by the FBI – which has long advocated against victims making ransomware payments.  No matter the motivation, however, OFAC has exacerbated the current crisis given that the Advisory warns the primary civil combatants against making violative ransomware payments, namely companies “providing cyber insurance, digital forensics and incident response, and financial services that may involve processing ransom payments (including depository institutions and money services businesses).”

Over the past several years, the cybersecurity community has seen a tremendous uptick in the deployment of ransomware – even leading to board-level scrutiny.  No different from the SQL injection exploits that were commonly warned against so many years ago yet still remain an exposure for so many websites, ransomware will not go away anytime soon.  The necessary cyber defensive skillset is far from fully dispersed among potential victims.  For example, indicators of compromise (IOCs) shared with the cybersecurity community would likely be ignored by most IT staff given that they do not even have the means of searching their own networks for those IOCs.
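For readers unfamiliar with what “searching internally for IOCs” even entails, the following is a minimal illustrative sketch of checking local log files against a published indicator list.  The log paths, IP addresses, and hash value are placeholders rather than indicators drawn from any actual alert.

```python
# Minimal sketch: scan local log files for published indicators of compromise (IOCs).
# File paths and indicator values are hypothetical placeholders.
from pathlib import Path

IOC_IPS = {"203.0.113.45", "198.51.100.23"}        # documentation-range IPs as stand-ins
IOC_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}   # placeholder hash value

def scan_log(path: Path) -> list[str]:
    """Return log lines that mention any known-bad IP or hash."""
    hits = []
    for line in path.read_text(errors="ignore").splitlines():
        if any(ip in line for ip in IOC_IPS) or any(h in line.lower() for h in IOC_HASHES):
            hits.append(line)
    return hits

if __name__ == "__main__":
    for log_file in Path("/var/log").glob("*.log"):  # adjust to the environment in question
        matches = scan_log(log_file)
        if matches:
            print(f"{log_file}: {len(matches)} possible IOC hit(s)")
```

Even a rudimentary sweep like this presumes the staff knows where the relevant logs live and has the indicators in machine-readable form – which is exactly the capability gap described above.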

Taking into consideration the old adage:  “If you fail to plan, you plan to fail,” healthcare providers and hospital systems should immediately seek out specialized cybersecurity experts who are currently fighting this battle before it is too late.

Platform Immunity at Risk?

On September 23, 2020, the Department of Justice released its proposed changes to Section 230 of the Communications Decency Act (CDA) – the first serious attempt at reining in the immunity rights enjoyed by the duopoly of Facebook and Google.  In his cover letter, the Attorney General wrote:  “I am pleased to present for consideration by Congress a legislative proposal to modernize and clarify the immunity that 47 U.S.C. § 230 provides to online platforms that host and moderate content.”  Recognizing that “platforms have been allowed to invoke Section 230 to escape liability even when they knew their services were being used for criminal activity”, the Attorney General stressed that the initial purposes of the 1996 law have long since been served.

Accordingly, the first tranche of changes is focused on ensuring that editorial decisions are made objectively and in good faith – with a proposed definition of “good faith” actually baked into the proposed new Section 230.  Specifically, Section 230(c)(2) is amended to require that platforms have an “objectively reasonable belief” that the speech they are removing falls within certain enumerated categories.

The second area of changes addresses growing illicit online content by limiting publisher immunity when an online platform (1) purposefully promotes, facilitates, or solicits third-party content that would violate federal criminal law; (2) has actual knowledge that specific content it is hosting violates federal law; or (3) fails to remove unlawful content after receiving notice by way of a final court judgment.  See Proposed § 230(d).

And finally, the third major change amends Section 230(e) to expressly confirm that the immunity provided by Section 230 would not apply to civil enforcement actions brought by the federal government.  This change provides for an important federal enforcement tool against platforms should the need arise – just like with any other company in the United States.  See Proposed § 230(e).

A careful review of these changes evidences a long-overdue updating that hopefully begets bipartisan support despite the current schism between our two major political parties.   Indeed, given the lobbying might of Facebook, Google and other online platforms, any alteration of the immunities granted under Section 230 will require nothing less than true bipartisan support.

UPDATE: October 28, 2020

On October 28, 2020, the U.S. Senate held a hearing on the following topic: “Does Section 230’s Sweeping Immunity Enable Big Tech Bad Behavior?” The Hearing was to “examine whether Section 230 of the Communications Decency Act has outlived its usefulness in today’s digital age. It will also examine legislative proposals to modernize the decades-old law, increase transparency and accountability among big technology companies for their content moderation practices, and explore the impact of large ad-tech platforms on local journalism and consumer privacy.”

Other than highlighting a pretty wild lockdown beard, the session provided little real ammo for either side of this debate. Perhaps in 2021, that dynamic may change.

Alleged cover-up leads to criminal complaint against former Uber CSO

In filing its August 20, 2020 criminal complaint against the former Uber CSO, the US Attorney for the Northern District of California issued a wake-up call to every CISO responding to a federal investigation of a data incident.  And, by stating in its press release, “we hope companies stand up and take notice”, the Justice Department has definitely thrown down a gauntlet against CISOs across the country.  

By way of background, Uber sustained a data breach in September 2014 that was investigated by the FTC in 2016.  Uber designated its CSO – Joseph Sullivan, to provide testimony regarding the incident.  Within ten days of providing that testimony, Sullivan received word that Uber had been breached again, but rather than update his testimony before the FTC, he allegedly tried very hard to conceal the new incident from the agency.  Indeed, Sullivan allegedly went so far as to concoct a bug bounty program cover story and asked the hackers to sign an NDA as a condition of their getting $100,000 in bitcoin.

The Special Agent’s supporting affidavit states that “there is probable cause to believe that the defendant engaged in a cover-up intended to obstruct the lawful functions and official proceedings of the Federal Trade Commission. . . . It is my belief that SULLIVAN further intended to spare Uber and SULLIVAN negative publicity and loss of users and drivers that would have stemmed from disclosure of the hack and data breach.”

In other words, a CSO allegedly spared his employer “negative publicity and loss of users” by inaccurately describing an incident and failing to disclose it in timely manner.  Even though the alleged conduct of Uber’s former CSO may have pushed the needle into the red zone, there are also potential arguments in his favor.  In coming up with one such counterargument, several Forrester analysts suggest:  “Sullivan did not inform the FTC during the sworn investigative hearing because he couldn’t have:  Sullivan learned of the 2016 breach 10 days later. To inform the FTC, Sullivan would have needed to reach out and inform them about a separate, new, but similar breach. There’s also some confusion as to whether Sullivan was under any legal obligation to do so.”

Whatever happens in this particular case, the fact remains that CISOs sometimes inadvertently play too close to the edge.  The underpinnings of an incident are whatever they are – no one can or should ever try to morph them into something different.  Good legal and IT counsel will mitigate loss and certain exposures, but only with the assistance of CISOs and CSOs who recount events rather than fabricate them.  Given that no company is immune to a breach, it is the cover-up, not the incident itself, that will ever really hurt.

Schrems-II, Facebook-0

On July 16, 2020, the EU Court of Justice decided “Schrems II” and invalidated the EU Commission’s Decision 2016/1250 regarding the adequacy of the EU-U.S. Privacy Shield (“the Privacy Shield Decision”).  As described in the Press Release issued by the Court:

[T]he limitations on the protection of personal data arising from the domestic law of the United States on the access and use by US public authorities of such data transferred from the European Union to that third country, which the Commission assessed in Decision 2016/1250, are not circumscribed in a way that satisfies requirements that are essentially equivalent to those required under EU law, by the principle of proportionality, in so far as the surveillance programmes based on those provisions are not limited to what is strictly necessary.

This case was the second one brought by Max Schrems against Facebook in its Irish domicile – which is why the case is now in the hands of the Irish Data Protection Commission.  In rejecting the Privacy Shield Ombudsperson mechanism – the agreed-upon safeguard found in the Privacy Shield Decision, meant to be independent from the Intelligence Community – the Court of Justice ruled that such a mechanism “does not provide data subjects with any cause of action before a body which offers guarantees substantially equivalent to those required by EU law, such as to ensure both the independence of the Ombudsperson provided for by that mechanism and the existence of rules empowering the Ombudsperson to adopt decisions that are binding on the US intelligence services.”

Now that the Court has invalidated the European Commission’s adequacy decision for the EU-U.S. Privacy Shield, the thousands of US companies relying on that mechanism will need to reevaluate their compliance efforts.  The US Commerce Department echoed today the same disappointment likely felt by these companies.  Reminding companies that the “US” component of the “EU-US Privacy Shield” remains very much intact, the Secretary of Commerce also stated that “today’s decision does not relieve participating organizations of their Privacy Shield obligations.”

CCPA Enforcement Begins Today

Beginning on July 1, 2020, the California Attorney General’s office may start sending out warnings of potential CCPA violations and give notified businesses 30 days to correct those violations before facing possible fines or lawsuits.

In rejecting numerous requests to delay CCPA enforcement, Attorney General Xavier Becerra reasoned: “As families continue to move their lives increasingly online, it is essential for Californians to know their privacy options. Our office is committed to enforcing the law starting July 1.”

In November 2020, California voters may take a swipe at the AG’s efforts by approving a new ballot initiative – the California Privacy Rights Act – that would create a privacy enforcement agency some may consider “a woefully underfunded paper tiger” yet one that would nevertheless have exclusive enforcement power over certain provisions of the CCPA, to the exclusion of the AG’s office.

Given the very long gestation period for the proposed CPRA – the ballot law would become effective January 1, 2023 and enforceable on July 1, 2023 – the jury is certainly still out on whether its passage would ever directly benefit consumers or just lead to more lobbyist-driven amendments pushed by the California duopoly of Google and Facebook.  As of right now, the Tech Lords of Stanford certainly remain in complete control.

UPDATE:  November 4, 2020

On November 3, 2020 – despite a significant late push by data oligarchs such as Google, the CPRA ballot initiative won by 56% of the vote.  As stated by Alastair Mactaggart, Chair of Californians for Consumer Privacy and the Prop 24 sponsor:  “With tonight’s historic passage of Prop 24, the California Privacy Rights Act, we are at the beginning of a journey that will profoundly shape the fabric of our society by redefining who is in control of our most personal information and putting consumers back in charge of their own data.”  

Former Presidential candidate, Andrew Yang – who was the Chair of the Board of Advisors for Californians for Consumer Privacy, added:  “I look forward to ushering in a new era of consumer privacy rights with passage of Prop 24, the California Privacy Rights Act. . . . It will sweep the country and I’m grateful to Californians for setting a new higher standard for how our data is treated.”

There is no denying this was a momentous vote.  On the other hand, a lot can happen before the CPRA becomes operative on January 1, 2023 – including passage of a law via standard lobbying channels or a new ballot initiative launched by the data oligarchs, with either one trimming the gains made this last election cycle.

California AG Pushes New Global Opt-Out Privacy Setting

On June 2, 2020, the Office of the California Attorney General (“OAG”) submitted its final proposed regulations under the California Consumer Privacy Act (CCPA).  The OAG press release suggests these final regulations clarify “important transparency and accountability mechanisms for businesses subject to the law.”  A number of those reviewing these final regulations correctly point out that they have not changed much from the last draft.

The most striking feature of these proposed regulations, however, is actually found in the explanatory reasoning jointly filed by the AG. The OAG Statement of Reasons suggests the OAG may have, in effect, mandated more than what was expressly required under CCPA, namely an opt-out setting for the sale of personal information that can be managed by consumers on a global basis.

By way of background, consumers have long had the capability to send “Do Not Track” (DNT) header signals from their browsers – with privacy advocates long providing tutorials on how consumer-choice DNT tools could be implemented in browsers.  Given that a DNT signal is a machine-readable header and not an embedded cookie, i.e., a file placed by websites onto a consumer’s computer in order to store privacy preferences, consumers can delete installed cookies without disrupting their global DNT signal.  Some companies, such as Apple, do not even respond to DNT signals, claiming that they do not “track [their] customers over time and across third party websites to provide targeted advertising.”
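To illustrate the distinction, here is a short sketch – using Python’s requests library against a placeholder URL – showing that a DNT signal rides along as a header on every request and is unaffected by deleting cookies:

```python
# Sketch: a DNT signal is a request header, not a cookie stored on the device.
# The URL is a placeholder; the header is sent with every request regardless of
# whether the site's cookies are kept or deleted.
import requests

session = requests.Session()
session.headers.update({"DNT": "1"})         # browser-style "Do Not Track" header

resp = session.get("https://example.com/")   # placeholder URL
print(resp.request.headers.get("DNT"))       # "1" -- carried as a header

session.cookies.clear()                      # deleting cookies does not touch the signal
resp = session.get("https://example.com/")
print(resp.request.headers.get("DNT"))       # still "1"
```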

The OAG sets forth in § 999.315 the relevant “Requests to Opt-Out” language later interpreted by the OAG in its Statement of Reasons.

Section 999.315(c) of the OAG’s regulations reads: “A business’s methods for submitting requests to opt-out shall be easy for consumers to execute and shall require minimal steps to allow the consumer to opt-out. A business shall not utilize a method that is designed with the purpose or has the substantial effect of subverting or impairing a consumer’s decision to opt-out.” And, the final Subsection (d)(1) reads:  “Any privacy control developed in accordance with these regulations shall clearly communicate or signal that a consumer intends to opt-out of the sale of personal information.”

Previously, an EFF-led privacy coalition recommended the deletion of the following clause from § 999.315(d)(1):  “The privacy control shall require that the consumer affirmatively select their choice to opt-out and shall not be designed with any pre-selected settings.”  That recommendation was adopted by the OAG and the “affirmative selection” language was deleted – obviating the need for a potential website-by-website affirmative opt-out selection by consumers.

While the § 315(d)(1) recommendation was adopted, the OAG chose not to adopt the EFF coalition’s recommendation to add the following clause at the end of § 315(c):  “A business shall treat a “Do Not Track” browsing header as such a choice.”  By rejecting this suggested new language, the OAG chose not to limit the scope of any implementation technology. As reflected in the OAG’s Statement of Reasons, this rejection actually ends up being an even more meaningful nod in the direction of the EFF Coalition.

Specifically, the OAG recognized its goal was to impose clear regulatory parameters without imposing technological requirements that might be limiting on a company:

By requiring that a privacy control be designed to clearly communicate or signal that the consumer intends to opt-out of the sale of personal information, the regulation sets clear parameters for what the control must communicate so as to avoid any ambiguous signals.  It does not prescribe a particular mechanism or technology; rather, it is technology-neutral to support innovation in privacy services to facilitate consumers’ exercise of their right to opt-out.  The regulation benefits both businesses and innovators who will develop such controls by providing guidance on the parameters of what must be communicated.  And because the regulation mandates that the privacy control clearly communicate that the consumer intends to opt-out of the sale of personal information, the consumer’s use of the control is sufficient to demonstrate that they are choosing to exercise their CCPA right.

More to the point, the OAG also explains:

Subsection (d) requires a business that collects personal information online to treat user-enabled global privacy controls as a valid request to opt-out.  This subsection is forward-looking and intended to encourage innovation and the development of technological solutions to facilitate and govern the submission of requests to opt-out.  Given the ease and frequency by which personal information is collected and sold when a consumer visits a website, consumers should have a similarly easy ability to request to opt-out globally.  This regulation offers consumers a global choice to opt-out of the sale of personal information, as opposed to going website by website to make individual requests with each business each time they use a new browser or a new device. (emphasis added).
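As a purely illustrative sketch of what honoring such a user-enabled global control might look like on the server side – here treating the DNT header discussed above as the signal, with a hypothetical endpoint and an in-memory preference store standing in for a real system, not a compliance recipe – consider the following Flask handler:

```python
# Hedged sketch: treating a browser-level privacy signal (here, the DNT header)
# as a CCPA opt-out of sale. Endpoint name, identifier, and the in-memory
# "opt-out registry" are hypothetical illustrations.
from flask import Flask, request, jsonify

app = Flask(__name__)
OPTED_OUT = set()  # stand-in for a real consumer-preference store

def global_opt_out_signal() -> bool:
    """Return True if the request carries a user-enabled global privacy control."""
    return request.headers.get("DNT") == "1"

@app.route("/api/page-view")
def page_view():
    user_id = request.args.get("user_id", "anonymous")   # hypothetical identifier
    if global_opt_out_signal():
        OPTED_OUT.add(user_id)                           # record the opt-out of sale
    selling_allowed = user_id not in OPTED_OUT
    return jsonify({"user": user_id, "sale_of_personal_info_allowed": selling_allowed})

if __name__ == "__main__":
    app.run(debug=True)
```

Because the regulation is technology-neutral, the particular signal may evolve; what stays constant is the obligation to treat a clear browser-level signal as a valid opt-out request rather than ignoring it.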

Perhaps anticipating some pushback, the OAG goes into detail regarding its authority by referencing its prior experience with DNT requirements under the California Online Privacy Protection Act (Bus. & Prof. Code, § 22575 et seq.) (CalOPPA).  To that end, on May 21, 2014, the OAG released a set of recommendations to assist with compliance with CalOPPA’s DNT disclosures.

The OAG justifies its approach as follows:

As the primary enforcer of [CalOPPA], the OAG has reviewed numerous privacy policies for compliance with CalOPPA, which requires the operator of an online service to disclose, among other things, how it responds to “Do Not Track” signals or other mechanisms that provide consumers the ability to exercise choice regarding the collection of personally identifiable information about their online activities over time and across third-party websites or online services.  (Bus. & Prof. Code, § 22575, subd. (b)(5).)  The majority of businesses disclose that they do not comply with those signals, meaning that they do not respond to any mechanism that provides consumers with the ability to exercise choice over how their information is collected.  Accordingly, the OAG has concluded that businesses will very likely similarly ignore or reject a global privacy control if the regulation permits discretionary compliance.  The regulation is thus necessary to prevent businesses from subverting or ignoring consumer tools related to their CCPA rights and, specifically, the exercise of the consumer’s right to opt-out of the sale of personal information.  Contrary to public comments that the user-enabled global privacy setting is outside of the scope of the OAG’s authority, subsection (d) is authorized by the CCPA because it furthers and is consistent with the language, intent, and purpose of the CCPA.  (emphasis added).

Not surprisingly, given its technology-neutral approach, the OAG does not spell out the manner in which companies must comply with a global opt-out capability.  Companies may address a consumer-controlled global opt-out setting either by adopting a new third-party product or by investing internally in developing a solution.  Any such feature, however, will likely be tested by the OAG and the courts.  No matter how this new requirement is implemented, it is very likely the OAG will come out swinging, given that the November 2020 ballot initiative spearheaded by Alastair Mactaggart – the California Privacy Rights Act – would create the “California Privacy Protection Agency” as a new enforcement arm and potential competition for the OAG.

UPDATE:  November 4, 2020

On November 3, 2020 – despite a significant late push by data oligarchs such as Google, the CPRA ballot initiative won by 56% of the vote.  As stated by Alastair Mactaggart, Chair of Californians for Consumer Privacy and the Prop 24 sponsor:  “With tonight’s historic passage of Prop 24, the California Privacy Rights Act, we are at the beginning of a journey that will profoundly shape the fabric of our society by redefining who is in control of our most personal information and putting consumers back in charge of their own data.”  

Former Presidential candidate, Andrew Yang – who was the Chair of the Board of Advisors for Californians for Consumer Privacy, added:  “I look forward to ushering in a new era of consumer privacy rights with passage of Prop 24, the California Privacy Rights Act. . . . It will sweep the country and I’m grateful to Californians for setting a new higher standard for how our data is treated.”

There is no denying this was a momentous vote.  On the other hand, a lot can happen before the CPRA becomes operative on January 1, 2023 – including passage of a law via standard lobbying channels or a new ballot initiative launched by the data oligarchs, with either one trimming the gains made this last election cycle.

Ransomware Has Officially Become a D&O Problem

On April 30, 2020, ZDNet reported that there have been more than 1,000 SEC filings over the past 12 months listing ransomware as a risk factor – with more than 700 in 2020 alone.  These filings include annual reports (10K and 20F), quarterly reports (10Q), and registration forms (S1). 

Even the most sophisticated technology companies now insert the word “ransomware” into their Risk Factors section. See Alphabet, Inc., Form 10-Q, dated April 28, 2020, at 50  (“The availability of our products and services and fulfillment of our customer contracts depend on the continuing operation of our information technology and communications systems. Our systems are vulnerable to damage, interference, or interruption from terrorist attacks, natural disasters or pandemics (including COVID-19), the effects of climate change (such as sea level rise, drought, flooding, wildfires, and increased storm severity), power loss, telecommunications failures, computer viruses, ransomware attacks, computer denial of service attacks, phishing schemes, or other attempts to harm or access our systems.”).   

As reported by ZDNet, companies as varied as American Airlines, McDonald’s, Tupperware, and Pluralsight also list ransomware as a potential risk to their business. 

By inserting the word “ransomware” into a Risk Factors section, reporting companies may have elevated the relevant standard for companies that do not reference ransomware.  By way of background, in October 2011, the SEC began planting cyber risk disclosure seeds when it issued non-binding disclosure guidance regarding cybersecurity risks and incidents.  Back in 2011, the SEC wrote:  “Although no existing disclosure requirement explicitly refers to cybersecurity risks and cyber incidents, a number of disclosure requirements may impose an obligation on registrants to disclose such risks and incidents.”  Seven years later, this non-binding guidance became binding.

On February 26, 2018, the SEC issued binding guidance that recognizes:  “Companies face an evolving landscape of cybersecurity threats in which hackers use a complex array of means to perpetrate cyber-attacks, including the use of stolen access credentials, malware, ransomware, phishing, structured query language injection attacks, and distributed denial-of-service attacks, among other means.”   By expressly listing ransomware two years ago in its Statement, the SEC was making it quite clear that the current threat landscape includes the risk of ransomware and that directors and officers have to address this likely risk.

More to the point, the Statement and Guidance on Public Company Cybersecurity Disclosures instructs “that the development of effective disclosure controls and procedures is best achieved when a company’s directors, officers, and other persons responsible for developing and overseeing such controls and procedures are informed about the cybersecurity risks and incidents that the company has faced or is likely to face.” 

Not surprisingly, the failure to disclose a prior ransomware attack would also be actionable.  See SEC Statement at 14 (“In meeting their disclosure obligations, companies may need to disclose previous or ongoing cybersecurity incidents or other past events in order to place discussions of these risks in the appropriate context.  For example, if a company previously experienced a material cybersecurity incident involving denial-of-service, it likely would not be sufficient for the company to disclose that there is a risk that a denial-of-service incident may occur.”).

If ransomware incidents were avoided altogether, however, there would be no liability attached to associated filings no matter what was communicated to the market.  Moreover, even when attacks were not avoided, little disclosure risk would exist if the company applied best practices to avoid such an incident and provided an accurate accounting of what happened when an incident did take place.  To that end, deploying proactive approaches considered state-of-the-art when dealing with ransomware risk will naturally mitigate any potential SEC disclosure risk.

For example, there is at least one novel solution that can reduce ransomware attacks by anticipating when a compromised system’s ransomware package will be released and then neutralizing the threat before any release actually takes place.  By evaluating and deploying such cutting-edge solutions, companies will be well positioned to neutralize any potential shareholder claims – as well as satisfy the much more important task of protecting corporate data and other digital assets.  Thankfully, “it is never too late to begin importing a more robust security and privacy profile into an organization – which is the only real way to diminish the risk of a ransomware attack.”  As with most successful corporate endeavors, management buy-in will typically be the necessary first step.

Our Current Cyber Pandemic Will Also Subside

On April 17, 2020, it was reported that researchers at Finland’s Arctic Security found “the number of networks experiencing malicious activity was more than double in March in the United States and many European countries compared with January, soon after the virus was first reported in China.”

Lari Huttunen at Arctic Security astutely pointed out why previously safe networks were now exposed: “In many cases, corporate firewalls and security policies had protected machines that had been infected by viruses or targeted malware . . . . Outside of the office, that protection can fall off sharply, allowing the infected machines to communicate again with the original hackers.”

Tom Kellermann – a cybersecurity thought leader, distills it this way: “There is a digitally historic event occurring in the background of this pandemic, and that is there is a cybercrime pandemic that is occurring.”

While there are certain internal ways of addressing cybersecurity threats arising from a viral pandemic, the exposures now faced by corporations become doubly damaging when the outside resources absolutely necessary to combat active threats are considered off-budget or not a critical enough priority.  Smart companies generally survive stressful times by prioritizing with some foresight.  Network security during a Cyber Pandemic should be a top priority for a business of any size.

During our Cyber Pandemic, companies that recognize and properly address the potential damage caused by threat actors will not only survive minor short-term hits to their bottom line caused by paying outside resources, they will likely be the ones coming out on top after both Pandemics subside.  There is definitely a light at the end of the tunnel for those willing to take the ride – just continue using trusted vehicles to get you there.