On February 16, 2021, The Sedona Conference (TSC), a nonpartisan, nonprofit research and educational institute “dedicated to the advanced study of law and policy in the areas of antitrust law, complex litigation and intellectual property rights,” posted its “Commentary on a Reasonable Security Test.” TSC is well known for previously helping courts around the country determine the proper contours of e-discovery.
The Sedona Conference reasonable security test asks whether B₂ − B₁ < (P × H)₁ − (P × H)₂, where B represents the burden, P represents the probability of harm, and H represents the magnitude of harm; subscript 1 refers to the controls (or lack thereof) in place at the time the information steward allegedly had unreasonable security, and subscript 2 refers to the alternative or supplementary control. 22 SEDONA CONF. J. at 360.
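Expressed in code, the test is a straightforward comparison of marginal burden against marginal risk reduction. The sketch below is a minimal illustration, not part of TSC’s Commentary; the variable names and dollar figures are invented for demonstration only.

```python
def security_was_unreasonable(b1, b2, p1, h1, p2, h2):
    """Sedona Conference reasonable security test.

    Returns True when the marginal burden of the alternative control
    (B2 - B1) is less than the marginal reduction in expected harm
    ((P x H)1 - (P x H)2), i.e., the forgone safeguard was worth its cost.
    """
    marginal_burden = b2 - b1
    marginal_risk_reduction = (p1 * h1) - (p2 * h2)
    return marginal_burden < marginal_risk_reduction

# Hypothetical example: a $50,000 control (B2 - B1) against a breach risk
# falling from a 10% chance of a $2M loss to a 2% chance of the same loss.
# Expected harm drops by $160,000, which exceeds the $50,000 burden.
print(security_was_unreasonable(b1=0, b2=50_000,
                                p1=0.10, h1=2_000_000,
                                p2=0.02, h2=2_000_000))  # True
```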
TSC’s Commentary should be studied for numerous reasons, including the fact that TSC applies its test to recent enforcement actions and provides solid arguments for its judicial application. As with its highly cited e-discovery initiatives, this new TSC approach may very well be relied on by courts tackling the important question of what constitutes reasonable security in the context of data breach litigation or enforcement actions.
On January 28, 2021, the National Cybersecurity Alliance encouraged individuals this Data Privacy Day to “Own Your Privacy” by “holding organizations responsible for keeping individuals’ personal information safe from unauthorized access and ensuring fair, relevant and legitimate data collection and processing.” Indeed, the NCSA recognizes “[p]ersonal information, such as your purchase history, IP address, or location, has tremendous value to businesses – just like money.”
The NCSA “data as money” perspective is not a new concept. In fact, it was hoped that Data Privacy Day 2016 would usher in a system for consumers to easily monetize their private data – a hope that has yet to materialize five years later. Still, in the same way a bank protects money, there can be no adequate privacy without adequate security.
Richard Clarke, a security advisor to four U.S. presidents, aptly recognized in 2014: “Privacy and security are two sides of the same coin.” The ransomware epidemic of 2020 should inform everyone why Data Privacy Day 2021 solidly places privacy and security on the same level. There can be little respect for the privacy rights of consumers, whether monetized or not, without an adequate effort at securing their data. Some companies, such as Microsoft, last year’s champion of Data Privacy Day, recognize the need to continually push the security envelope in order to properly protect consumer privacy rights. Accordingly, these companies go the extra distance and often work hand in hand with law enforcement to take down online criminal enterprises such as Emotet.
Going forward in 2021, companies safeguarding consumer data must recognize that the lines have blurred between nation-state APT attacks, focused on the slow espionage of large companies, and criminal enterprises looking for quick financial hits. For example, the lateral movement hallmarks of an APT attack are now routinely used during Ryuk ransomware exploits. Moreover, the recent SolarWinds Orion Platform exploit highlights the need to focus on supply chains when protecting consumer data.
Focused security efforts would quickly stop being left on corporate “to do” lists if there were an applicable federal law in place for companies nationwide, not just the hybrid privacy/security state laws now applicable to only some companies. Unfortunately, despite high hopes in 2019, there was little bipartisan push for a federal privacy law these past few years. That dynamic might change in 2021.
Former California Attorney General Kamala Harris’s 2012 annual privacy report opens with the words: “California has the strongest consumer privacy laws in the country.” During her tenure, California enjoyed “a constitutionally guaranteed right to privacy, over seventy privacy-related laws on the books, and multiple regulatory agencies set up to enforce these laws.” As the new year progresses, the current Vice President may very well prod Congress toward the sort of California “privacy pride” she once enjoyed at the state level. Given the current one-party rule, there is certainly no longer any excuse available to politicians looking to continue kicking the “federal privacy law can” around Capitol Hill.
A skilled chess player will tell you the best way to study chess at a high level is to first study endgames and truly learn the power of each piece. Memorizing book openings generally comes last. If one wants to learn about the insurance industry, first take a job in the claims department. In a similar way, students of disruptive technologies benefit from first learning their “end game”.
Blockchain is one disruptive technology that has not yet fully found its business sea legs. Bitcoin, the purported proxy for blockchain, recently hit all-time highs, so naturally on January 3, 2021, a forecaster placed a ten-year target of $1 million on this speculative asset. Every good bubble requires inflating, and the very speculative Bitcoin bubble currently being massively inflated by hedge fund money is no different.
The largest bank in the United States, J.P. Morgan Chase, launched its JPM Coin in 2019, and in October 2020 set up an entirely new business, Onyx, as an umbrella for its blockchain and convertible virtual currency (CVC) initiatives, including JPM Coin. According to Jamie Dimon, Chairman and CEO of J.P. Morgan: “Onyx is at the forefront of a major shift in the financial services industry. This new business unit reflects J.P. Morgan’s commitment to innovation as we continue to build cutting-edge technology that delivers a better, faster and more inclusive financial system.” On December 10, 2020, J.P. Morgan announced it had completed a live, blockchain-based intraday repo transaction using JPM Coin. And Visa has filed a patent application for what may seem perfunctory, namely recording digital currencies on a blockchain.
Apart from these blockchain-based efforts, there is a whole category of blockchain initiatives that will fundamentally and forever alter the broader financial sector, to the likely chagrin of PayPal, J.P. Morgan, and Visa. The banner name for these new blockchain and distributed ledger technology (DLT) initiatives is “DeFi,” or decentralized finance.
In December 2019, the total value locked (TVL) in the DeFi market was worth less than $700 million; by the end of December 2020 it had grown to $14 billion; and as of January 5, 2021, total TVL stood at over $19 billion and growing, a staggering trajectory. TVL spans all DeFi projects but is largely driven by the lending platform MakerDAO, a decentralized credit platform supporting Dai, a stablecoin pegged to the US dollar. Decentralized exchanges (DEXes) such as Uniswap largely make up the remaining bulk of projects. DEXes enforce trading rules and execute trades without charging the high fees normally associated with alternative investment trades.
A commitment of $19 billion to DeFi initiatives may seem minuscule compared to, for example, the over $6 trillion in foreign exchange trades conducted each day. On the other hand, each DeFi transaction potentially empowers individuals while weakening the grip over the monetary system currently held by central banks and finance intermediaries, a true game changer by any measure.
Compound is a money market or, in other words, a lending service on Ethereum. When you supply DAI to Compound, you receive cDAI tokens, which represent both your DAI in Compound and any interest you have earned from lending. Since cDAI is a token, you can send, receive, or even use cDAI in other smart contracts. This is “money Legos” in action: ETH goes into MakerDAO to mint DAI tokens, that DAI is supplied to Compound, and the resulting cDAI tokens can be used in other DApps. For example, you can swap ETH for cDAI on a DEX and instantly start earning interest just for holding cDAI. And because you choose how you interact with smart contracts on the blockchain, you can use a DEX aggregator like DEX.AG to compare and trade at the best prices across all the popular DEXes, all within seconds.
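To make that composability concrete, here is a toy sketch of the ETH-to-DAI-to-cDAI flow. The classes, prices, collateral ratio, and interest rate are invented stand-ins, not the real MakerDAO or Compound contract interfaces.

```python
# A toy model of DeFi "money Legos." These classes are illustrative
# stand-ins only; every number below is an assumption for demonstration.

class MakerVault:
    """Mint DAI against ETH collateral (toy 150% collateral ratio)."""
    ETH_PRICE_USD = 1_000        # assumed ETH price, illustration only
    COLLATERAL_RATIO = 1.5

    def mint_dai(self, eth_deposited: float) -> float:
        return (eth_deposited * self.ETH_PRICE_USD) / self.COLLATERAL_RATIO

class CompoundMarket:
    """Supply DAI, receive interest-bearing cDAI (toy 4% APY)."""
    APY = 0.04

    def supply(self, dai: float) -> float:
        return dai               # cDAI received (simplified 1:1 at supply)

    def value_after_years(self, cdai: float, years: float) -> float:
        return cdai * (1 + self.APY) ** years

# Compose the Legos: ETH -> DAI -> cDAI, then earn interest just by holding.
dai = MakerVault().mint_dai(eth_deposited=3.0)
market = CompoundMarket()
cdai = market.supply(dai)
print(f"Minted {dai:.2f} DAI, worth {market.value_after_years(cdai, 1):.2f} after one year")
```

Because the cDAI produced at the end of this chain is itself just a token, it can be passed to yet another smart contract, which is the whole point of the Lego metaphor.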
Before widespread adoption of any DeFi product is even feasible, however, regulatory scrutiny will be needed to protect consumers onboarding these new DeFi applications. Given that a CVC wallet is the exit ramp for many DeFi initiatives, it is no surprise that wallets have become an area of regulatory interest. For example, the US Treasury’s Financial Crimes Enforcement Network (“FinCEN”) recently proposed a rule that would require banks and money service businesses to file a report with FinCEN containing information related to a customer, their CVC transaction, and any counterparty (including name and physical address) “if a counterparty to the transaction is using an unhosted or otherwise covered wallet and the transaction is greater than $10,000.” FinCEN is issuing regulations on transactions using digital currency wallets because the growth of individual CVC transactions shows no sign of abating.
All of these potential risks, whether regulatory, technological, malicious, or competitive, nevertheless remain dwarfed by the potential upside found in a successful, widely adopted DeFi application or protocol. One likely key to success is to replicate what companies such as PayPal chose to do: take a widely used existing tool and deploy it in a profitable new way that allows for flexibility with actual autonomy and consumer self-determination. DeFi will ultimately go nowhere if it only brings into the fold insiders stuck in Geoffrey Moore’s early adopter phase.
Moreover, no open-source project can ascend until a large enough market believes the tradeoffs between ease of use, financial benefits, and utility ring strongly in its favor. For example, despite Linux’s strong position in the web server market, the Linux desktop will never really threaten Microsoft’s foothold until the relevant commercial and consumer markets believe a Linux desktop truly meets all of their needs.
Similarly, DeFi will never gain a foothold reaching above the “PayPalJPMVisa” mountain peak until at least one DeFi application checks all the relevant boxes for a sizable enough market. It may be a decade before a DeFi project reaches that vantage point – with the classic Amazon vs. Sears endgame likely being studied along the way.
In a November 19, 2020 letter to various non-profit groups, Apple reaffirmed its commitment to the App Tracking Transparency (ATT) permission feature first announced in June 2020: “We developed ATT for a single reason: because we share your concerns about users being tracked without their consent and the bundling and reselling of data by advertising networks and data brokers.” Slated for release in 2021, the ATT feature requires permission before certain data is accessed by advertisers, namely the identifier for advertisers (IDFA). Using the ATT feature, consumers will allow or reject tracking on an app-by-app basis.
The IDFA groups different users by similar search or browsing activity in an effort to limit advertisers from reverse engineering personally identifiable information. As described by Apple: “We create segments, which are groups of people who share similar characteristics, and use these groups for delivering targeted ads. Information about you may be used to determine which segments you’re assigned to, and thus, which ads you receive. To protect your privacy, targeted ads are delivered only if more than 5,000 people meet the targeting criteria.”
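As a rough illustration of how that 5,000-person floor might operate, consider the sketch below. The function and data model are hypothetical reconstructions of the threshold gate Apple describes, not Apple’s actual implementation.

```python
# Hypothetical sketch of a segment-size gate: targeted ads are delivered
# only when a segment's population exceeds a minimum threshold, limiting
# how narrowly an advertiser can slice the audience. Not Apple's code.

MIN_SEGMENT_SIZE = 5_000

def deliverable_segments(segments: dict[str, set[str]]) -> list[str]:
    """Return the names of segments large enough to receive targeted ads."""
    return [name for name, members in segments.items()
            if len(members) > MIN_SEGMENT_SIZE]

segments = {
    "hiking_enthusiasts": {f"user{i}" for i in range(6_000)},
    "rare_book_collectors": {f"user{i}" for i in range(120)},
}
print(deliverable_segments(segments))  # ['hiking_enthusiasts']
```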
When touting its alleged “privacy forward” ATT feature, Apple threw down yet another privacy gauntlet against Facebook: “Facebook executives have made clear their intent is to collect as much data as possible across both first and third party products to develop and monetize detailed profiles of their users, and this disregard for user privacy continues to expand to include more of their products.” Letter, dated November 19, 2020.
In a November 20, 2020 statement sent to Business Insider, Facebook counterpunched: “The truth is Apple has expanded its business into advertising and through its upcoming iOS 14 changes is trying to move the free internet into paid apps and services where they profit. . . They claim it’s about privacy, but it’s about profit. . . This is all part of a transformation of Apple’s business away from innovative hardware products to data-driven software and media.”
In other words, Facebook suggested that Apple plans on using its dominant market position to prioritize its own data collection efforts while making it difficult for competitors to use the same data. Two months earlier, Facebook informed its business partners that it would “not collect the identifier for advertisers (IDFA) on our own apps on iOS 14 devices. . . . We may revisit this decision as Apple offers more guidance.”
Apple defines the IDFA as “an alphanumeric string unique to each device, that you [the third party app developer] only use for advertising. Specific uses are for frequency capping, attribution, conversion events, estimating the number of unique users, advertising fraud detection, and debugging”. The IDFA “is very similar to a cookie: Apple and third parties (e.g. applications providers) can access this piece of information stored on the users’ device to track their behaviour, elaborate consumption preferences and provide relevant advertising. . . In practice, the IDFA is like a ‘digital license plate.’ Every action of the user can be linked to the ‘license plate’ and used to build a rich profile about the user. Such a profile can later be used to target personalised advertisements, in-app purchases, promotions etc.” When compared to traditional internet tracking IDs, the IDFA is simply a “tracking ID in a mobile phone” instead of a tracking ID in a browser cookie.
According to Reuters, Apple immediately disputed these claims, stating they were “factually inaccurate”. Apple curiously also told Reuters that it “does not access or use the IDFA on a user’s device for any purpose”. Such a statement is curious only because on its face it means nothing once one considers that Apple allows “segmented” use of and access to this “license plate” data. By creating an “identifier for advertisers” form of digital “license plate,” Apple most certainly uses the IDFA by proxy every time one of its ad partners uses it.
Moreover, days before its public spat with Facebook, Apple was called out by a cybersecurity expert for perceived privacy shortcomings in Gatekeeper, the Apple system used for managing third-party application security. Pointing to flaws in how Gatekeeper relays and stores unencrypted information, Jeffrey Paul concluded: “Apple knows when you’re at home. When you’re at work. What apps you open there, and how often. . . . This data amounts to a tremendous trove of data about your life and habits, and allows someone possessing all of it to identify your movement and activity patterns.”
According to a November 15, 2020 editorial in Apple Insider, these perceived risks were illusory. According to the editorial, “there’s not really much utility in knowing just what app is being launched, realistically speaking.” And to boot, “ISPs could have that data if they wanted to without the limited info that Apple’s Gatekeeper may provide.”
By claiming others could gather even more data and that the data in question does not have “much utility”, the editorial did not provide any real refutation of Jeffrey Paul’s basic concerns. Instead, the writer for Apple Insider hopes for the best: “There’s not even the prospect of Apple pulling a Google and using this data, as Apple has been a voracious defender of user privacy for many years, and it is unlikely to make such a move.” In other words, just trust Apple to do the right thing.
The very next day Apple actually did do the right thing and stopped collecting IP addresses related to Gatekeeper’s developer checks, likely in deference to Jeffrey Paul’s research. The Apple Support Update released on November 16, 2020 states: “To further protect privacy, we have stopped logging IP addresses associated with Developer ID certificate checks, and we will ensure that any collected IP addresses are removed from logs. In addition, over the the [sic] next year we will introduce several changes to our security checks: A new encrypted protocol for Developer ID certificate revocation checks; Strong protections against server failure; [and] A new preference for users to opt out of these security protections.” These new safeguards address the exact issues raised by Jeffrey Paul in his blog.
Apple’s aspirations regarding consumer data control will likely cause it to continue butting heads with social media platforms guarding their data oligarchies and privacy advocates protecting consumers. As the world’s largest market cap company, however, Apple may be uniquely positioned to take on such challenges. Unfortunately, governmental intervention may be the only viable check on Apple should the company ever fully stray from its prior data privacy commitments. Given the current dysfunctional political environment, Apple likely has a long runway should regulators ever come knocking.
A recent phase of the ongoing two-pronged cyber war against the United States, waged on one front by Russia, Iran, and North Korea and on the other by China, has taken an ugly turn. The Russian faction has launched various sophisticated ransomware attacks against healthcare providers and hospital systems across the United States.
Taking into consideration the old adage: “If you fail to plan, you plan to fail,” healthcare providers and hospital systems should immediately seek out specialized cybersecurity experts who are currently fighting this battle before it is too late.
Accordingly, the first tranche of changes is focused on ensuring editorial decisions are made objectively and in good faith, with a proposed definition of “good faith” actually baked into the proposed new Section 230. Specifically, Section 230(c)(2) is amended to require that platforms have an “objectively reasonable belief” that the speech they are removing falls within certain enumerated categories.
The second area of changes addresses growing illicit online content by limiting publisher immunity when an online platform (1) purposefully promotes, facilitates, or solicits third party content that would violate federal criminal law; (2) has actual knowledge that specific content it is hosting violates federal law; or (3) fails to remove unlawful content after receiving notice by way of a final court judgment. See Proposed § 230(d).
And finally, the third major change amends Section 230(e) to expressly confirm that the immunity provided by Section 230 does not apply to civil enforcement actions brought by the federal government. This change provides an important federal enforcement tool against platforms should the need arise, just as with any other company in the United States. See Proposed § 230(e).
A careful review of these changes evidences a long-overdue update that will hopefully attract bipartisan support despite the current schism between our two major political parties. Indeed, given the lobbying might of Facebook, Google, and other online platforms, any alteration of the immunities granted under Section 230 will require nothing less than true bipartisan support.
UPDATE: October 28, 2020
On October 28, 2020, the U.S. Senate held a hearing on the following topic: “Does Section 230’s Sweeping Immunity Enable Big Tech Bad Behavior?” The Hearing was to “examine whether Section 230 of the Communications Decency Act has outlived its usefulness in today’s digital age. It will also examine legislative proposals to modernize the decades-old law, increase transparency and accountability among big technology companies for their content moderation practices, and explore the impact of large ad-tech platforms on local journalism and consumer privacy.”
Other than highlighting a pretty wild lockdown beard, the session provided little real ammo for either side of this debate. Perhaps in 2021, that dynamic may change.
By way of background, Uber sustained a data breach in September 2014 that was investigated by the FTC in 2016. Uber designated its CSO, Joseph Sullivan, to provide testimony regarding the incident. Within ten days of testifying before the FTC, Sullivan received word that Uber had been breached again; rather than update his testimony, he allegedly tried very hard to conceal the new incident from the agency. Indeed, Sullivan allegedly went so far as to concoct a bug bounty program cover story and asked the hackers to sign an NDA as a condition of their getting $100,000 in bitcoin.
The Special Agent’s supporting affidavit swears that “there is probable cause to believe that the defendant engaged in a cover-up intended to obstruct the lawful functions and official proceedings of the Federal Trade Commission. . . . It is my belief that SULLIVAN further intended to spare Uber and SULLIVAN negative publicity and loss of users and drivers that would have stemmed from disclosure of the hack and data breach.”
In other words, a CSO allegedly spared his employer “negative publicity and loss of users” by inaccurately describing an incident and failing to disclose it in a timely manner. Even though the alleged conduct of Uber’s former CSO may have pushed the needle into the red zone, there are also potential arguments in his favor. In coming up with one such counterargument, several Forrester analysts suggest: “Sullivan did not inform the FTC during the sworn investigative hearing because he couldn’t have: Sullivan learned of the 2016 breach 10 days later. To inform the FTC, Sullivan would have needed to reach out and inform them about a separate, new, but similar breach. There’s also some confusion as to whether Sullivan was under any legal obligation to do so.”
Whatever happens in this particular case, the fact remains that CISOs sometimes inadvertently play too close to the edge. The underpinnings of an incident are whatever they are; no one can or should ever try to morph them into something different. Good legal and IT counsel will mitigate loss and certain exposures, but only with the assistance of CISOs and CSOs who recount events rather than fabricate them. Not surprisingly, given that no company is immune to a breach, it is only the cover-up that will ever hurt, not the incident itself.
[T]he limitations on the protection of personal data arising from the domestic law of the United States on the access and use by US public authorities of such data transferred from the European Union to that third country, which the Commission assessed in Decision 2016/1250, are not circumscribed in a way that satisfies requirements that are essentially equivalent to those required under EU law, by the principle of proportionality, in so far as the surveillance programmes based on those provisions are not limited to what is strictly necessary.
This case was the second one brought by Max Schrems against Facebook in its Irish domicile, which is why the matter is now in the hands of the Irish Data Protection Commission. In rejecting the adequacy of the Privacy Shield Ombudsperson, the agreed-upon safeguard in the Privacy Shield Decision touted as independent from the Intelligence Community, the Court of Justice ruled that such a mechanism “does not provide data subjects with any cause of action before a body which offers guarantees substantially equivalent to those required by EU law, such as to ensure both the independence of the Ombudsperson provided for by that mechanism and the existence of rules empowering the Ombudsperson to adopt decisions that are binding on the US intelligence services.”
Now that the Court has invalidated the European Commission’s adequacy decision for the EU-U.S. Privacy Shield, thousands of US companies relying on that mechanism will need to reevaluate their compliance efforts. The US Commerce Department today echoed the same disappointment likely felt by these companies. Reminding companies that the “US” component of the “EU-US Privacy Shield” remains very much intact, the Secretary of Commerce also stated that “today’s decision does not relieve participating organizations of their Privacy Shield obligations.”
Beginning on July 1, 2020, the California Attorney General’s office may start sending out warnings of potential CCPA violations and give notified businesses 30 days to correct those violations before facing possible fines or lawsuits.
In rejecting numerous requests to delay CCPA enforcement, Attorney General Xavier Becerra reasoned: “As families continue to move their lives increasingly online, it is essential for Californians to know their privacy options. Our office is committed to enforcing the law starting July 1.”
In November 2020, California voters may take a swipe at the AG’s efforts by approving a new ballot initiative, the California Privacy Rights Act (CPRA), which creates a privacy enforcement agency that some may consider “a woefully underfunded paper tiger” but that will nevertheless have exclusive enforcement power over certain provisions of the CCPA, to the exclusion of the AG’s office.
Given the very long gestation period for the proposed CPRA (this ballot law would become effective January 1, 2023 and enforceable on July 1, 2023), the jury is certainly still out on whether its passage would ever directly benefit consumers or just lead to more lobbyist-driven amendments by the California duopoly of Google and Facebook. As of right now, the Tech Lords of Stanford certainly remain in complete control.
UPDATE: November 4, 2020
On November 3, 2020 – despite a significant late push by data oligarchs such as Google, the CPRA ballot initiative won by 56% of the vote. As stated by Alastair Mactaggart, Chair of Californians for Consumer Privacy and the Prop 24 sponsor: “With tonight’s historic passage of Prop 24, the California Privacy Rights Act, we are at the beginning of a journey that will profoundly shape the fabric of our society by redefining who is in control of our most personal information and putting consumers back in charge of their own data.”
Former Presidential candidate, Andrew Yang – who was the Chair of the Board of Advisors for Californians for Consumer Privacy, added: “I look forward to ushering in a new era of consumer privacy rights with passage of Prop 24, the California Privacy Rights Act. . . . It will sweep the country and I’m grateful to Californians for setting a new higher standard for how our data is treated.”
There is no denying this was a momentous vote. On the other hand, a lot can happen before the CPRA effective date of January 1, 2023, including passage of a law via standard lobbying channels or a new ballot initiative launched by the data oligarchs, with either one trimming the gains made this past election cycle.
The most striking feature of these proposed regulations, however, is actually found in the explanatory reasoning jointly filed by the AG. The OAG Statement of Reasons suggests the OAG may have, in effect, mandated more than what was expressly required under CCPA, namely an opt-out setting for the sale of personal information that can be managed by consumers on a global basis.
By way of background, consumers have long had the capability to send “Do Not Track” (DNT) header signals from their browsers, and privacy advocates have long provided tutorials on how consumer-choice DNT tools can be implemented. Given that a DNT signal is a machine-readable header and not an embedded cookie, i.e., a file placed by websites onto a consumer’s computer in order to store privacy preferences, consumers can delete installed cookies without disrupting their global DNT signal. Some companies do not even respond to DNT signals; Apple, for one, claims it does not “track its customers over time and across third party websites to provide targeted advertising.”
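For the technically curious, this is why a DNT signal survives cookie deletion: it is simply an HTTP request header sent with every request. A minimal sketch, using the third-party requests library and a placeholder URL:

```python
# Minimal sketch of sending a "Do Not Track" signal. DNT is an HTTP
# request header, not a file stored on the consumer's machine, so it is
# unaffected by clearing cookies. Whether a site honors it is up to the
# site; the URL below is a placeholder.
import requests

response = requests.get("https://example.com", headers={"DNT": "1"})
print(response.status_code)
```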
The OAG sets forth in § 999.315 the relevant “Requests to Opt-Out” language later interpreted by the OAG in its Statement of Reasons.
Section 999.315(c) of the OAG’s regulations reads: “A business’s methods for submitting requests to opt-out shall be easy for consumers to execute and shall require minimal steps to allow the consumer to opt-out. A business shall not utilize a method that is designed with the purpose or has the substantial effect of subverting or impairing a consumer’s decision to opt-out.” And, the final Subsection (d)(1) reads: “Any privacy control developed in accordance with these regulations shall clearly communicate or signal that a consumer intends to opt-out of the sale of personal information.”
Previously, an EFF-led privacy coalition recommended the deletion of the following clause from § 999.315(d)(1): “The privacy control shall require that the consumer affirmatively select their choice to opt-out and shall not be designed with any pre-selected settings.” That recommendation was adopted by the OAG and the “affirmative selection” language was deleted – obviating the need for a potential website-by-website affirmative opt-out selection by consumers.
While the § 315(d)(1) recommendation was adopted, the OAG chose not to adopt the EFF coalition’s recommendation to add the following clause at the end of § 315(c): “A business shall treat a “Do Not Track” browsing header as such a choice.” By rejecting this suggested new language, the OAG chose not to limit the scope of any implementation technology. As reflected in the OAG’s Statement of Reasons, this rejection actually ends up being an even more meaningful nod in the direction of the EFF Coalition.
Specifically, the OAG recognized that its goal was to impose clear regulatory parameters without imposing technological requirements that might constrain companies:
By requiring that a privacy control be designed to clearly communicate or signal that the consumer intends to opt-out of the sale of personal information, the regulation sets clear parameters for what the control must communicate so as to avoid any ambiguous signals. It does not prescribe a particular mechanism or technology; rather, it is technology-neutral to support innovation in privacy services to facilitate consumers’ exercise of their right to opt-out. The regulation benefits both businesses and innovators who will develop such controls by providing guidance on the parameters of what must be communicated. And because the regulation mandates that the privacy control clearly communicate that the consumer intends to opt-out of the sale of personal information, the consumer’s use of the control is sufficient to demonstrate that they are choosing to exercise their CCPA right.
Subsection (d) requires a business that collects personal information online to treat user-enabled global privacy controls as a valid request to opt-out. This subsection is forward-looking and intended to encourage innovation and the development of technological solutions to facilitate and govern the submission of requests to opt-out. Given the ease and frequency by which personal information is collected and sold when a consumer visits a website, consumers should have a similarly easy ability to request to opt-out globally. This regulation offers consumers a global choice to opt-out of the sale of personal information, as opposed to going website by website to make individual requests with each business each time they use a new browser or a new device. (emphasis added).
As the primary enforcer of [CalOPPA], the OAG has reviewed numerous privacy policies for compliance with CalOPPA, which requires the operator of an online service to disclose, among other things, how it responds to “Do Not Track” signals or other mechanisms that provide consumers the ability to exercise choice regarding the collection of personally identifiable information about their online activities over time and across third-party websites or online services. (Bus. & Prof. Code, § 22575, subd. (b)(5).) The majority of businesses disclose that they do not comply with those signals, meaning that they do not respond to any mechanism that provides consumers with the ability to exercise choice over how their information is collected. Accordingly, the OAG has concluded that businesses will very likely similarly ignore or reject a global privacy control if the regulation permits discretionary compliance. The regulation is thus necessary to prevent businesses from subverting or ignoring consumer tools related to their CCPA rights and, specifically, the exercise of the consumer’s right to opt-out of the sale of personal information. Contrary to public comments that the user-enabled global privacy setting is outside of the scope of the OAG’s authority, subsection (d) is authorized by the CCPA because it furthers and is consistent with the language, intent, and purpose of the CCPA. (emphasis added).
Not surprisingly, given its technology-neutral approach, the manner in which companies will comply with a global opt-out capability is not spelled out by the OAG. Companies may address a consumer-controlled global opt-out setting either by adopting a new third-party product or by investing internally in developing their own solution. Any such feature, however, will likely be tested by the OAG and the courts. No matter how this new requirement is implemented, it is very likely the OAG will come out swinging, given that the November 2020 ballot initiative spearheaded by Alastair Mactaggart, the California Privacy Rights Act, would create the “California Privacy Protection Agency” as a new enforcement arm and potential competition for the OAG.
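One plausible, purely illustrative implementation path is honoring a browser-sent global privacy signal server-side. The sketch below uses Flask and the “Sec-GPC” header proposed by the Global Privacy Control effort; because the regulation is technology-neutral, this is an assumption about one possible mechanism, not a mandated design.

```python
# Illustrative sketch of treating a user-enabled global privacy control
# as a valid request to opt out of the sale of personal information.
# Uses Flask and the "Sec-GPC" header proposed by the Global Privacy
# Control effort; this is one hypothetical mechanism, not the OAG's.
from flask import Flask, request, jsonify

app = Flask(__name__)

def opt_out_signal_present() -> bool:
    """Treat a Sec-GPC: 1 request header as a global opt-out of sale."""
    return request.headers.get("Sec-GPC") == "1"

@app.route("/")
def index():
    if opt_out_signal_present():
        # Record the opt-out and suppress any "sale" of personal information.
        return jsonify(sale_of_personal_information=False)
    return jsonify(sale_of_personal_information=True)

if __name__ == "__main__":
    app.run()
```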