
Proposed New York Privacy Law Making Progress

On May 24, 2021, Senator Thomas’ S6701 – the proposed New York Privacy Act, had its third reading before the Senate.  As recounted in its Legislative Intent section:  “Algorithms quietly make decisions with critical consequences for New York consumers, often with no human accountability.  Behavioral advertising generates profits by turning people into products and their activity into assets. New York consumers deserve more notice and more control over their data and their digital privacy.”  

To that end, the proposed law will provide New York consumers with certain new rights, including  “clear notice of how their data is being used, processed and shared; the ability  to  access  and obtain a copy of their data in a commonly used electronic format, with the ability to transfer  it  between  services;  the ability  to  correct  inaccurate  data and to delete their data; and the ability to challenge certain automated decisions.”  

If passed, this bill will become one of the strongest, if not the strongest, consumer privacy laws in the country and deserves to be carefully watched.  Even though this bill may still lack a progressive Right of Compensation, the proposed law includes a private right of action coupled with a consumer agency enforcement mechanism – a groundbreaking backstop that will protect consumers far better than the few currently enacted consumer privacy laws that lack a private right of action.

Facebook’s Dominance in India May End in 2021

On April 19, 2021, arguments will be heard in a 2019 New Delhi action brought by World Phone Internet Services Private Limited against Facebook, WhatsApp, the Government of India, and the regulator tasked with enforcing Internet telephony regulations in India.  World Phone is a licensed Indian provider of Internet telephony services.

India currently holds the honor of having the most Facebook and WhatsApp users worldwide.  Specifically, Facebook reached nearly 400 million users in India several months ago – which accounts for 28.4% of the entire country’s population.  And WhatsApp is well beyond 400 million users given it last publicly disclosed that number three years ago.  Indeed, according to the Ministry of Electronics and Information Technology, WhatsApp now has 530 million users in the country.

World Phone’s 2019 Petition alleges that Facebook Messenger and WhatsApp are illegal services given they provide Indians with VoIP services without having the requisite underlying licenses or paying the required license fees and service taxes.  According to the Petition, licensed providers “have to adhere to various statutory regulations such as Quality of Service Regulations, Tariff Regulations and Consumer Protection Regulations. They also need to ensure emergency services, confidentiality of customer, privacy of communication, undergo regular audits and ensure proper lawful monitoring and interception.”  Facebook and WhatsApp comply with none of these regulatory requirements despite providing regulated services.

Moreover, the Petition references the pertinent regulations that provide for “an amount of up to Rs. 50 Crore as penalty for any security breach caused due to any inadvertent inadequacy in the precautions taken by the licensee. If the security breach is caused as a result of a deliberate fault on the part of the licensee, then the penalty is an amount of Rs. 50 Crores for each breach. Besides penalty, criminal proceedings may also be initiated against the licensee. These measures keep the TSPs on their toes and ensure they adhere to the security and privacy requirements while providing Internet Telephony.”  Despite breaches that would have triggered these provisions, Facebook and WhatsApp have seen no regulatory enforcement actions filed against them.  

World Phone previously filed a similar legal action against Microsoft, alleging that its Skype product, India’s then-dominant unlicensed VoIP service, caused World Phone harm by improperly competing without a license.  That action, however, was filed in the United States.  In a decision by Chief Judge Freda Wolfson of the United States District Court for the District of New Jersey, the action was dismissed in May 2014 with World Phone explicitly directed to seek relief in India.  TI Investment Services, LLC, and World Phone Internet Services, Pvt. Ltd v. Microsoft Corporation, 23 F. Supp. 3d 451, 472 (D.N.J. 2014) (“If Plaintiffs wish to renew their suit, they should do so in the jurisdiction where they are alleged to have competed with Defendant, to have complied with regulatory laws, and to have suffered injury, and that is India.”).

World Phone never needed to file suit in India given the subsequent appeal was settled between the parties.  Thereafter, Microsoft voluntarily chose to withdraw its unlicensed Skype services in India.  See NeoWin (October 6, 2014) (“Skype is either changing, or being forced to change, its strategy in India. The Microsoft service will no longer offer landline and mobile calls for Indian residents starting November 10th. This change came pretty much out of the blue and was announced by Skype on one of their support channels. . . Neither Microsoft nor Skype has offered any reason for this weird change but the company has offered to refund users who will be affected by this announcement.”); PC World (October 6, 2014) (“Skype appears to bow to Indian rules, ends in-country calls to local networks”); SIP Trunking Report (October 6, 2014) (“Some might argue the change has something to do with regulations that actually prohibit the use of VoIP services such as Skype to make calls on phones using the Internet.  . . . Since the law does not appear to have changed, some other consideration is at play.”).

An Affidavit filed on July 20, 2020 made two arguments in opposition to World Phone’s application.  The first argument was that the Petition could not be decided because it had been transferred to the Indian Supreme Court with other petitions involving Facebook and WhatsApp.  On its face, this argument made no sense given that the Transfer Order attached to the Affidavit did not list the World Phone Petition, so the action was clearly not transferred.  Also, the transferred actions solely involve privacy issues.  Although those other matters also demonstrate the “digital colonialism” of Facebook and WhatsApp given they show how Indian users are treated differently from Europeans, they remain inapplicable to the World Phone Petition.

The second argument relied on a 2017 affidavit previously filed that claims the current regulatory body is “currently examining” over-the-top (OTT) services.  First, the services subject to the World Phone Petition are Internet telephony services and not mere OTT services.  Second, despite it now being 2021, the agency has still failed to address even the OTT issues raised.  In fact, exploiting this longstanding lack of enforcement, WhatsApp is now moving aggressively to leverage its Indian market dominance in Internet telephony by expanding into the desktop market.

To defend against the World Phone Petition, Facebook and WhatsApp hired two of the top attorneys in India – Mukul Rohatgi and Kapil Sibal.  Mukul Rohatgi – who is Facebook’s counsel, was in 2010 considered one of India’s top 10 lawyers.  He also served as the 14th Attorney General for India.  Kapil Sibal – who represents WhatsApp, formerly served as the head of various ministries over the years – beginning with the Ministry of Science & Technology, then the Ministry of Human Resource Development, followed by the Ministry of Communications & IT and the Ministry of Law & Justice.  To date, neither attorney has formally filed any papers with the Court.

No matter what is eventually filed by Facebook or WhatsApp, World Phone’s argument could not be simpler – there are no “checks and balances” available to protect Indian citizens from the digital colonization pursued by Facebook and WhatsApp, so its Petition is likely all that stands between those companies executing their apparent digital colonization plan and ultimate “data oligarch” control of the Indian population.

If successful, World Phone would cause the cessation of unlicensed Facebook Messenger and WhatsApp services in India as well as the imposition of penalties for prior non-compliance.  To the extent Facebook chooses not to play regulatory ball, it may end up doing what it has done in China since 2009, namely just go dark.

UPDATE: April 22, 2021

On April 22, 2021, Justice Navin Chawla – the Justice who previously was hearing the World Phone case, ruled against Facebook and WhatsApp and dismissed their pleas challenging an Order from the Competition Commission of India (CCI) directing a probe into WhatsApp’s new privacy policy. Justice Chawla previously reserved judgment on the case.

A new Justice in the World Phone case – Justice Prathiba M. Singh, ruled on April 19, 2021 that Facebook and WhatsApp were required to provide a responsive affidavit within six weeks and World Phone had four weeks thereafter to respond. Moreover, a new hearing date of August 26, 2021 was set by the Court. For the very first time, Facebook and WhatsApp will now be required to articulate a defense to a case that on its face is indefensible.

Data Privacy Day 2021

On January 28, 2021, the National Cybersecurity Alliance encouraged individuals this Data Privacy Day to “Own Your Privacy” by “holding organizations responsible for keeping individuals’ personal information safe from unauthorized access and ensuring fair, relevant and legitimate data collection and processing.”  Indeed, the NCSA recognizes “[p]ersonal information, such as your purchase history, IP address, or location, has tremendous value to businesses – just like money.”

The NCSA “data as money” perspective is not a new concept.  In fact, it was hoped that Data Privacy Day 2016 would usher in a system for consumers to easily monetize their private data – a hope that has yet to materialize five years later.   Still, in the same way a bank protects money, there can be no adequate privacy without adequate security.

Richard Clarke – a security advisor to four U.S. presidents, properly recognized in 2014:  “Privacy and security are two sides of the same coin.”  The ransomware epidemic of 2020 should inform everyone why Data Privacy Day 2021 solidly places privacy and security on the same level. There can be little respect for the privacy rights of consumers – whether monetized or not, without an adequate effort at securing such data.  Some companies such as Microsoft – last year’s champion of Data Privacy Day, recognize the need to continually push the security envelope in order to properly protect consumer privacy rights. Accordingly, these companies go the extra distance and often work hand-in-hand with law enforcement to take down online criminal enterprises such as Emotet.

Going forward in 2021, companies safeguarding consumer data must recognize that the lines have blurred between nation state APT attacks – focused on the slow espionage of large companies, and criminal enterprises looking for quick financial hits.  For example, the lateral movement hallmarks of an APT attack are now routinely used during Ryuk ransomware exploits.  Moreover, the recent SolarWinds Orion Platform exploit highlights the need to focus on supply chains when protecting consumer data.

Focused security efforts would quickly stop being left on corporate “to do” lists if there were an applicable federal law in place for companies nationwide – not just the hybrid privacy/security state laws now applicable to only some companies.  Unfortunately, despite high hopes in 2019, there was little bipartisan push for a federal privacy law these past few years.  That dynamic might change in 2021.

Former California Attorney General Kamala Harris’s 2012 annual privacy report opens with the words:  “California has the strongest consumer privacy laws in the country.”  During her tenure, California enjoyed “a constitutionally guaranteed right to privacy, over seventy privacy-related laws on the books, and multiple regulatory agencies set up to enforce these laws.”   As the new year progresses, the current Vice President may very well prod Congress for the sort of California “privacy pride” she once enjoyed on a state level. Given the current one-party rule, there is certainly no longer any excuse available to politicians looking to continue kicking the “federal privacy law can” around Capitol Hill.

Apple’s Consumer Data Aspirations

In a November 19, 2020 letter to various non-profit groups, Apple reaffirmed its commitment to the App Tracking Transparency (ATT) permission feature first announced in June 2020:   “We developed ATT for a single reason:  because we share your concerns about users being tracked without their consent and the bundling and reselling of data by advertising networks and data brokers.”  Slated for release in 2021, the ATT feature requires permission before certain data is accessed by advertisers, namely the identifier for advertisers (IDFA).  Using the ATT feature, consumers will allow or reject tracking on an app-by-app basis.

Using the IDFA, Apple groups users with similar search or browsing activity into segments in an effort to keep advertisers from reverse engineering personally identifiable information. As described by Apple:   “We create segments, which are groups of people who share similar characteristics, and use these groups for delivering targeted ads. Information about you may be used to determine which segments you’re assigned to, and thus, which ads you receive. To protect your privacy, targeted ads are delivered only if more than 5,000 people meet the targeting criteria.”
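
To make the mechanics of that minimum-audience rule concrete, here is a minimal Python sketch of a segment-size gate.  The 5,000-person figure comes from Apple’s description above, but the helper names and data structures are hypothetical illustrations and do not reflect Apple’s actual implementation.

    # Hypothetical illustration of a minimum-segment-size gate before
    # serving a targeted ad; not Apple's actual implementation.

    MIN_SEGMENT_SIZE = 5000  # "more than 5,000 people", per the description above


    def eligible_for_targeting(segment_members):
        """Return True only when more than MIN_SEGMENT_SIZE people share the segment."""
        return len(segment_members) > MIN_SEGMENT_SIZE


    def choose_ad(segment_members, targeted_ad, generic_ad):
        """Fall back to a non-targeted ad when the audience is too small."""
        return targeted_ad if eligible_for_targeting(segment_members) else generic_ad


    if __name__ == "__main__":
        small_segment = {f"user-{i}" for i in range(1200)}
        print(choose_ad(small_segment, "targeted ad", "generic ad"))  # prints "generic ad"
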

When touting its alleged “privacy forward” ATT feature, Apple threw down yet another privacy gauntlet against Facebook:  “Facebook executives have made clear their intent is to collect as much data as possible across both first and third party products to develop and monetize detailed profiles of their users, and this disregard for user privacy continues to expand to include more of their products.”  Letter, dated November 19, 2020.

In a November 20, 2020 statement sent to Business Insider, Facebook counterpunched:  “The truth is Apple has expanded its business into advertising and through its upcoming iOS 14 changes is trying to move the free internet into paid apps and services where they profit. . . They claim it’s about privacy, but it’s about profit. . . This is all part of a transformation of Apple’s business away from innovative hardware products to data-driven software and media.”

In other words, Facebook suggested that Apple plans on using its dominant market position to prioritize its own data collection efforts while making it difficult for competitors to use the same data.   Two months earlier, Facebook informed its business partners that it would “not collect the identifier for advertisers (IDFA) on our own apps on iOS 14 devices. . . . We may revisit this decision as Apple offers more guidance.”

Surprisingly, Facebook may actually have a point or two regarding Apple’s aspirations.  On November 16, 2020, a group led by privacy activist Max Schrems filed complaints in Germany and Spain over Apple’s online tracking tool claiming a breach of the EU’s e-Privacy Directive.   

According to the German Complaint:

Apple defines the IDFA as “an alphanumeric string unique to each device, that you [the third party app developer] only use for advertising. Specific uses are for frequency capping, attribution, conversion events, estimating the number of unique users, advertising fraud detection, and debugging”.  [This IDFA] “is very similar to a cookie: Apple and third parties (e.g. applications providers) can access this piece of information stored on the users’ device to track their behaviour, elaborate consumption preferences and provide relevant advertising. . . In practice, the IDFA is like a “digital license plate”. Every action of the user can be linked to the “license plate” and used to build a rich profile about the user. Such profile can later be used to target personalised advertisements, in-app purchases, promotions etc. When compared to traditional internet tracking IDs, the IDFA is simply a “tracking ID in a mobile phone” instead of a tracking ID in a browser cookie.

According to Reuters, Apple immediately disputed these claims, stating they were “factually inaccurate”.   Apple curiously also said to Reuters that it “does not access or use the IDFA on a user’s device for any purpose”.  Such a statement is curious only because, on its face, it means nothing once one considers that Apple allows “segmented” use of and access to this “license plate” data.   By creating an “identifier for advertisers” form of digital “license plate”, Apple most certainly uses the IDFA by proxy every time one of its ad partners uses it.

Moreover, days before its public Facebook spat, Apple was called out by a cybersecurity expert for perceived privacy shortcomings in Gatekeeper – the Apple system used for managing third-party application security.  Pointing to flaws in how Gatekeeper relays and stores unencrypted information, Jeffrey Paul concluded:  “Apple knows when you’re at home. When you’re at work. What apps you open there, and how often. . . . This data amounts to a tremendous trove of data about your life and habits, and allows someone possessing all of it to identify your movement and activity patterns.”

According to a November 15, 2020 editorial in Apple Insider, these perceived risks were illusory.  In the editorial’s words, “there’s not really much utility in knowing just what app is being launched, realistically speaking.”  And to boot, “ISPs could have that data if they wanted to without the limited info that Apple’s Gatekeeper may provide.”

By claiming others could gather even more data and that the data in question does not have “much utility”, the editorial did not provide any real refutation of Jeffrey Paul’s basic concerns. Instead, the writer for Apple Insider hopes for the best:  “There’s not even the prospect of Apple pulling a Google and using this data, as Apple has been a voracious defender of user privacy for many years, and it is unlikely to make such a move.”  In other words, just trust Apple to do the right thing.

The very next day Apple actually did do the right thing and stopped collecting IP addresses related to Gatekeeper’s developer checks – likely in deference to Jeffrey Paul’s research.  The Apple Support Update released on November 16, 2020 states:  “To further protect privacy, we have stopped logging IP addresses associated with Developer ID certificate checks, and we will ensure that any collected IP addresses are removed from logs.  In addition, over the the [sic] next year we will introduce several changes to our security checks:   A new encrypted protocol for Developer ID certificate revocation checks; Strong protections against server failure; [and] A new preference for users to opt out of these security protections.”  These new safeguards address the exact issues raised by Jeffrey Paul in his blog.

Apple’s aspirations regarding consumer data control will likely cause it to continue butting heads with social media platforms guarding their data oligarchies and privacy advocates protecting consumers. As the world’s largest market cap company, however, Apple may be uniquely positioned to take on such challenges.  Unfortunately, governmental intervention may be the only viable check on Apple should the company ever fully stray from its prior data privacy commitments. Given the current dysfunctional political environment, Apple likely has a long runway should regulators ever come knocking.

Platform Immunity at Risk?

On September 23, 2020, the Department of Justice released its proposed changes to Section 230 of the Communications Decency Act (CDA) – the first serious attempt at reining in the immunity rights enjoyed by the duopoly of Facebook and Google.  In his cover letter, the Attorney General wrote:  “I am pleased to present for consideration by Congress a legislative proposal to modernize and clarify the immunity that 47 U.S.C. § 230 provides to online platforms that host and moderate content.”  Recognizing that “platforms have been allowed to invoke Section 230 to escape liability even when they knew their services were being used for criminal activity”, the Attorney General stressed that the initial purposes of the 1996 Communications Decency Act have long ago been served.

Accordingly, the first tranche of changes is focused on ensuring editorial decisions are made objectively and in good faith – with a proposed definition of “good faith” actually baked into the proposed new Section 230.  Specifically, Section 230(c)(2) would be amended to require that platforms have an “objectively reasonable belief” that the speech they are removing falls within certain enumerated categories.

The second area of changes addresses growing illicit online content by limiting publisher immunity when an online platform (1) purposefully promotes, facilitates, or solicits third-party content that would violate federal criminal law; (2) has actual knowledge that specific content it is hosting violates federal law; or (3) fails to remove unlawful content after receiving notice by way of a final court judgment.  See Proposed § 230(d).

And finally, the third major change amends Section 230(e) to expressly confirm that the immunity provided by Section 230 would not apply to civil enforcement actions brought by the federal government.  This change provides for an important federal enforcement tool against platforms should the need arise – just like with any other company in the United States.  See Proposed § 230(e).

A careful review of these changes evidences a long-overdue updating that hopefully begets bipartisan support despite the current schism between our two major political parties.   Indeed, given the lobbying might of Facebook, Google and other online platforms, any alteration of the immunities granted under Section 230 will require nothing less than true bipartisan support.

UPDATE: October 28, 2020

On October 28, 2020, the U.S. Senate held a hearing on the following topic: “Does Section 230’s Sweeping Immunity Enable Big Tech Bad Behavior?” The Hearing was to “examine whether Section 230 of the Communications Decency Act has outlived its usefulness in today’s digital age. It will also examine legislative proposals to modernize the decades-old law, increase transparency and accountability among big technology companies for their content moderation practices, and explore the impact of large ad-tech platforms on local journalism and consumer privacy.”

Other than highlighting a pretty wild lockdown beard, the session provided little real ammo for either side of this debate. Perhaps in 2021, that dynamic may change.

Schrems-II, Facebook-0

On July 16, 2020, the EU Court of Justice decided “Schrems II” and invalidated the EU Commission’s Decision 2016/1250 regarding the adequacy of the EU-U.S. Privacy Shield (“the Privacy Shield Decision”).  As described in the Press Release issued by the Court:

[T]he limitations on the protection of personal data arising from the domestic law of the United States on the access and use by US public authorities of such data transferred from the European Union to that third country, which the Commission assessed in Decision 2016/1250, are not circumscribed in a way that satisfies requirements that are essentially equivalent to those required under EU law, by the principle of proportionality, in so far as the surveillance programmes based on those provisions are not limited to what is strictly necessary.

This case was the second one brought by Max Schrems against Facebook in its Irish domicile – which is why the case is now in the hands of the Irish Data Protection Commission. In rejecting the adequacy of the Privacy Shield Ombudsperson – the agreed-upon safeguard found in the Privacy Shield Decision, the Court of Justice ruled that such a mechanism “does not provide data subjects with any cause of action before a body which offers guarantees substantially equivalent to those required by EU law, such as to ensure both the independence of the Ombudsperson provided for by that mechanism and the existence of rules empowering the Ombudsperson to adopt decisions that are binding on the US intelligence services.”

Now that the Court has invalidated the European Commission’s adequacy decision for the EU-U.S. Privacy Shield, thousands of US companies relying on such a mechanism will need to reevaluate their compliance efforts.  The US Commerce Department echoed today the same disappointment likely felt by these companies.  Reminding companies that there is still a “US” component very much intact in the “EU-US Privacy Shield”, the Secretary of Commerce also stated that “today’s decision does not relieve participating organizations of their Privacy Shield obligations.”

CCPA Enforcement Begins Today

Beginning on July 1, 2020, the California Attorney General’s office may start sending out warnings of potential CCPA violations and give notified businesses 30 days to correct those violations before facing possible fines or lawsuits.

In rejecting numerous requests to delay CCPA enforcement, Attorney General Xavier Becerra reasoned: “As families continue to move their lives increasingly online, it is essential for Californians to know their privacy options. Our office is committed to enforcing the law starting July 1.”

In November 2020, California voters may take a swipe at the AG’s efforts by approving a new ballot initiative – the California Privacy Rights Act, which creates a privacy enforcement agency some may consider “a woefully underfunded paper tiger” but that will nevertheless have exclusive enforcement power over certain provisions of the CCPA, to the exclusion of the AG’s office.

Given the very long gestation period for the proposed CPRA – this ballot law would become effective January 1, 2023 and enforceable on July 1, 2023, the jury is still certainly out on whether its passage would ever directly benefit consumers or just lead to more lobbyist driven amendments by the California duopoly of Google and Facebook. As of right now, the Tech Lords of Stanford certainly remain in complete control.

UPDATE:  November 4, 2020

On November 3, 2020 – despite a significant late push by data oligarchs such as Google, the CPRA ballot initiative won by 56% of the vote.  As stated by Alastair Mactaggart, Chair of Californians for Consumer Privacy and the Prop 24 sponsor:  “With tonight’s historic passage of Prop 24, the California Privacy Rights Act, we are at the beginning of a journey that will profoundly shape the fabric of our society by redefining who is in control of our most personal information and putting consumers back in charge of their own data.”  

Former Presidential candidate, Andrew Yang – who was the Chair of the Board of Advisors for Californians for Consumer Privacy, added:  “I look forward to ushering in a new era of consumer privacy rights with passage of Prop 24, the California Privacy Rights Act. . . . It will sweep the country and I’m grateful to Californians for setting a new higher standard for how our data is treated.”

There is no denying this was a momentous vote.  On the other hand, a lot can happen by the CPRA effective date of January 1, 2023 – including passage of a law via standard lobbying channels or a new ballot initiative launched by the data oligarchs, with either one trimming the gains made this last election cycle.

California AG Pushes New Global Opt-Out Privacy Setting

On June 2, 2020, the Office of the California Attorney General (“OAG”) submitted its final proposed regulations under the California Consumer Privacy Act (CCPA).  The OAG press release suggests these final regulations clarify “important transparency and accountability mechanisms for businesses subject to the law.” A number of those reviewing these final regulations correctly point out that they have not changed much from the last draft.

The most striking feature of these proposed regulations, however, is actually found in the explanatory reasoning jointly filed by the AG. The OAG Statement of Reasons suggests the OAG may have, in effect, mandated more than what was expressly required under CCPA, namely an opt-out setting for the sale of personal information that can be managed by consumers on a global basis.

By way of background, consumers have long had the capability to send “Do Not Track” (DNT) header signals from their browsers – with privacy advocates long providing tutorials on how consumer-choice DNT tools could be implemented on browsers.  Given that a DNT signal is a machine-readable header and not an embedded cookie, i.e., a file placed by websites into a consumer’s computer in order to store privacy preferences, consumers can delete installed cookies without disrupting their global DNT signal.  Some companies such as Apple actually do not even respond to DNT signals because they claim that they do not “track its customers over time and across third party websites to provide targeted advertising.”
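
For readers unfamiliar with how a DNT signal actually travels, the short Python sketch below shows how a server might detect it.  The helper is a generic illustration that assumes a plain dictionary of request headers; it is not tied to any particular web framework’s API.

    # Minimal sketch of detecting a "Do Not Track" signal server-side.  The
    # DNT preference arrives as a request header ("DNT: 1"), not as a cookie,
    # so clearing cookies does not disturb it.

    def do_not_track_requested(headers):
        """Return True when the browser sent a DNT: 1 request header."""
        return headers.get("DNT", "").strip() == "1"


    if __name__ == "__main__":
        request_headers = {"Host": "example.com", "DNT": "1"}
        if do_not_track_requested(request_headers):
            print("Honor the signal: skip cross-site tracking for this request.")
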

The OAG sets forth in § 999.315 the relevant “Requests to Opt-Out” language later interpreted by the OAG in its Statement of Reasons.

Section 999.315(c) of the OAG’s regulations reads: “A business’s methods for submitting requests to opt-out shall be easy for consumers to execute and shall require minimal steps to allow the consumer to opt-out. A business shall not utilize a method that is designed with the purpose or has the substantial effect of subverting or impairing a consumer’s decision to opt-out.” And, the final Subsection (d)(1) reads:  “Any privacy control developed in accordance with these regulations shall clearly communicate or signal that a consumer intends to opt-out of the sale of personal information.”

Previously, an EFF-led privacy coalition recommended the deletion of the following clause from § 999.315(d)(1):  “The privacy control shall require that the consumer affirmatively select their choice to opt-out and shall not be designed with any pre-selected settings.”  That recommendation was adopted by the OAG and the “affirmative selection” language was deleted – obviating the need for a potential website-by-website affirmative opt-out selection by consumers.

While the § 315(d)(1) recommendation was adopted, the OAG chose not to adopt the EFF coalition’s recommendation to add the following clause at the end of § 315(c):  “A business shall treat a “Do Not Track” browsing header as such a choice.”  By rejecting this suggested new language, the OAG chose not to limit the scope of any implementation technology. As reflected in the OAG’s Statement of Reasons, this rejection actually ends up being an even more meaningful nod in the direction of the EFF Coalition.

Specifically, the OAG recognized its goal was to set clear regulatory parameters without imposing technological requirements that might limit a company:

By requiring that a privacy control be designed to clearly communicate or signal that the consumer intends to opt-out of the sale of personal information, the regulation sets clear parameters for what the control must communicate so as to avoid any ambiguous signals.  It does not prescribe a particular mechanism or technology; rather, it is technology-neutral to support innovation in privacy services to facilitate consumers’ exercise of their right to opt-out.  The regulation benefits both businesses and innovators who will develop such controls by providing guidance on the parameters of what must be communicated.  And because the regulation mandates that the privacy control clearly communicate that the consumer intends to opt-out of the sale of personal information, the consumer’s use of the control is sufficient to demonstrate that they are choosing to exercise their CCPA right.

More to the point, the OAG also explains:

Subsection (d) requires a business that collects personal information online to treat user-enabled global privacy controls as a valid request to opt-out.  This subsection is forward-looking and intended to encourage innovation and the development of technological solutions to facilitate and govern the submission of requests to opt-out.  Given the ease and frequency by which personal information is collected and sold when a consumer visits a website, consumers should have a similarly easy ability to request to opt-out globally.  This regulation offers consumers a global choice to opt-out of the sale of personal information, as opposed to going website by website to make individual requests with each business each time they use a new browser or a new device. (emphasis added).

Perhaps anticipating some pushback, the OAG goes into detail regarding its authority by referencing prior experience with DNT requirements under the California Online Privacy Protection Act (Bus. & Prof. Code, § 22575 et seq.) (CalOPPA).  To that end, on May 21, 2014, the OAG released a set of recommendations to assist compliance with CalOPPA’s DNT disclosure requirements.

The OAG justifies its approach as follows:

As the primary enforcer of [CalOPPA], the OAG has reviewed numerous privacy policies for compliance with CalOPPA, which requires the operator of an online service to disclose, among other things, how it responds to “Do Not Track” signals or other mechanisms that provide consumers the ability to exercise choice regarding the collection of personally identifiable information about their online activities over time and across third-party websites or online services.  (Bus. & Prof. Code, § 22757, subd. (b)(5).)  The majority of businesses disclose that they do not comply with those signals, meaning that they do not respond to any mechanism that provides consumers with the ability to exercise choice over how their information is collected.  Accordingly, the OAG has concluded that businesses will very likely similarly ignore or reject a global privacy control if the regulation permits discretionary compliance.  The regulation is thus necessary to prevent businesses from subverting or ignoring consumer tools related to their CCPA rights and, specifically, the exercise of the consumer’s right to opt-out of the sale of personal information. Contrary to public comments that the user-enabled global privacy setting is outside of the scope of the OAG’s authority, subsection (d) is authorized by the CCPA because it furthers and is consistent with the language, intent, and purpose of the CCPA.  (emphasis added).

Not surprisingly, given its technology-neutral approach, the manner in which companies will comply with a global opt-out capability is not spelled out by the OAG.  Companies may address a consumer-controlled global opt-out setting either by utilizing a new product or by investing internally in developing a solution. Any such feature, however, will likely be tested by the OAG and the courts. No matter how this new requirement is implemented, it is very likely the OAG will come out swinging given that the November 2020 ballot initiative spearheaded by Alastair Mactaggart, the California Privacy Rights Act, would create the “California Privacy Protection Agency” as a new enforcement arm and potential competition for the OAG.
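
Because the regulation is technology-neutral, any implementation sketch is necessarily speculative.  The short Python example below assumes a header-based signal such as the “Sec-GPC” header used by the Global Privacy Control project, plus a hypothetical record_opt_out() hook into a business’s own consent store; it shows one possible way, not the prescribed way, to treat a user-enabled global control as a valid request to opt out.

    # Hedged sketch of honoring a user-enabled global privacy control as a
    # CCPA opt-out request.  The "Sec-GPC" header is only one candidate
    # signal, and record_opt_out() stands in for a business's consent store.

    OPT_OUT_SIGNALS = ("Sec-GPC", "DNT")  # candidate header names, illustrative only


    def global_opt_out_present(headers):
        """Return True if any recognized global privacy control header is set to 1."""
        return any(headers.get(name, "").strip() == "1" for name in OPT_OUT_SIGNALS)


    def record_opt_out(consumer_id):
        """Hypothetical hook: flag 'do not sell' for this consumer."""
        print(f"Consumer {consumer_id}: opted out of the sale of personal information.")


    def handle_request(headers, consumer_id):
        """Treat a user-enabled global privacy control as a valid opt-out request."""
        if global_opt_out_present(headers):
            record_opt_out(consumer_id)


    if __name__ == "__main__":
        handle_request({"Sec-GPC": "1"}, "consumer-123")
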

UPDATE:  November 4, 2020

On November 3, 2020 – despite a significant late push by data oligarchs such as Google, the CPRA ballot initiative won by 56% of the vote.  As stated by Alastair Mactaggart, Chair of Californians for Consumer Privacy and the Prop 24 sponsor:  “With tonight’s historic passage of Prop 24, the California Privacy Rights Act, we are at the beginning of a journey that will profoundly shape the fabric of our society by redefining who is in control of our most personal information and putting consumers back in charge of their own data.”  

Former Presidential candidate, Andrew Yang – who was the Chair of the Board of Advisors for Californians for Consumer Privacy, added:  “I look forward to ushering in a new era of consumer privacy rights with passage of Prop 24, the California Privacy Rights Act. . . . It will sweep the country and I’m grateful to Californians for setting a new higher standard for how our data is treated.”

There is no denying this was a momentous vote.  On the other hand, a lot can happen by the CPRA effective date of January 1, 2023 – including passage of a law via standard lobbying channels or a new ballot initiative launched by the data oligarchs, with either one trimming the gains made this last election cycle.

Our Current Cyber Pandemic Will Also Subside

On April 17, 2020, it was reported that researchers at Finland’s Arctic Security found “the number of networks experiencing malicious activity was more than double in March in the United States and many European countries compared with January, soon after the virus was first reported in China. ”

Lari Huttunen at Arctic Security astutely pointed out why previously safe networks were now exposed: “In many cases, corporate firewalls and security policies had protected machines that had been infected by viruses or targeted malware . . . . Outside of the office, that protection can fall off sharply, allowing the infected machines to communicate again with the original hackers. “

Tom Kellerman – a cybersecurity thought leader, distills it this way: “There is a digitally historic event occurring in the background of this pandemic, and that is there is a cybercrime pandemic that is occurring.”

While there are certain internal ways of addressing cybersecurity threats arising from a viral pandemic, the exposures now faced by corporations become doubly damaging when the outside resources absolutely necessary to combat active threats are considered off-budget or not a critical enough priority. Smart companies generally survive stressful times by prioritizing with some foresight. Network security during a Cyber Pandemic should be a top priority no matter the size of the business.

During our Cyber Pandemic, companies recognizing and properly addressing the potential damage caused by threat actors will not only survive minor short-term hits to their bottom line caused by paying outside resources, they will likely be the ones coming out on top after both Pandemics subside. There is definitely a light at the end of the tunnel for those willing to take the ride – just continue using trusted vehicles to get you there.

Addressing COVID-19 Cybersecurity Threats

When implementing COVID-19 business continuity plans, companies should take into consideration security threats from cybercriminals looking to exploit fear, uncertainty and doubt – better known as FUD.  Fear can drive a thirst for the latest information and may lead employees to seek online information in a careless fashion – leaving best practices by the wayside.

According to Reinsurance News, there has already been “a surge of coronavirus-related cyber attacks”.  Many phishing attacks “have either claimed to have an attached list of people with the virus or have even asked the victim to make a bitcoin payment for it.” Not all employees are accustomed to the risks from a corporate-wide work from home (WFH) policy given the previous lack of intersection between work and personal computers. 

One cybersecurity firm released information outlining these WFH risks. And another security provider offers a common-sense refresher:  “If you get an email that looks like it is from the WHO (World Health Organization) and you don’t normally get emails from the WHO, you should be cautious.” In addition to recommendations made by security consultants, there are privacy-forward recommendations that will necessarily mitigate phishing exploits.  For example, WFH employees should be steered towards privacy browsers such as Brave and Firefox to avoid fingerprinting, and search engines such as DuckDuckGo for private searches.  A comprehensive listing of privacy-forward online tools is found at PrivacyTools.IO.

Criminals have already exploited the current FUD by creating very convincing COVID-19-related links.   As reported by Brian Krebs, several Russian-language cybercrime forums now sell a “digital Coronavirus infection kit” that uses the Johns Hopkins interactive map of real-time infections as part of a Java-based malware deployment scheme. The kit costs only $200 if the buyer has a Java code signing certificate and $700 if the buyer uses the seller’s certificate.

At a very basic level, WFH employees should be reminded not to click on sources of information other than clean URLs such as CDC.Gov or open unsolicited attachments even if they appear to come from a known associate.  Now that banks, hotels, and health providers are sending emails alerting their clients of newly-implemented COVID-19 procedures, it is especially easy to succumb to spear-phishing exploits – a hallmark of state-sponsored groups.  As recently reported, government-backed hacking groups from China, North Korea, and Russia have begun using COVID-19-based phishing lures to infect victims with malware and gain infrastructure access.  These recent attacks have primarily targeted users in countries outside the US, but there should be little doubt more groups will focus on the US in the coming weeks. Until ramped-up testing demonstrates that the COVID-19 risk has passed, companies are well advised to focus some of their security diligence on these targeted attacks.
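
As a simple illustration of the “clean URL” advice above, the Python sketch below checks a link’s host against a short allowlist.  The domains and helper names are examples only; a real email-security tool would rely on much richer signals than a static list.

    # Illustrative link check against a short allowlist of known-good domains.
    # Real mail-filtering tools use far richer signals (sender authentication,
    # reputation scoring, attachment sandboxing); this only shows the idea.

    from urllib.parse import urlparse

    TRUSTED_DOMAINS = {"cdc.gov", "who.int"}  # example allowlist


    def is_trusted_link(url):
        """Return True if the URL's host is a trusted domain or a subdomain of one."""
        host = (urlparse(url).hostname or "").lower()
        return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)


    if __name__ == "__main__":
        print(is_trusted_link("https://www.cdc.gov/coronavirus/2019-ncov/index.html"))  # True
        print(is_trusted_link("https://cdc.gov.covid19-update.example.com/map"))        # False (lookalike)
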

This does not mean employees need to be fed yet more FUD – this time regarding network security, without some good news. Employees can be reminded that a decade ago we survived another pandemic. Specifically, between April 2009 and April 2010, there were 60.8 million cases, 274,304 hospitalizations, and 12,469 deaths in the United States caused by the Swine Flu. Globally, the Swine Flu infected between 700 million and 1.4 billion people, resulting in 150,000 to 575,000 deaths. Moreover, the young were a vector for Swine Flu yet are not for COVID-19. And a large band of 25- to 35-year-olds recover from COVID-19 in two days – hardly a bad cold, whereas there was no such band for the Swine Flu. On the downside, COVID-19 has a more efficient transmission mechanism than Swine Flu, and we are better suited to develop influenza vaccines than coronavirus vaccines.

UPDATE: April 23, 2020

The CDC reports in its latest published statistics that there were 802,583 reported cases of COVID-19 and 44,575 associated deaths. Without a doubt, this pandemic is certainly much worse than the Swine Flu pandemic as previously reported by the CDC. Moreover, the current “panic pandemic” certainly shows no indications of subsiding.

Whether the governmental measures taken actually ratcheted up the body count or caused it to diminish is left for historians and clinicians to analyze. The hard fact remains the body count keeps going up and the U.S. economy is still on lockdown as of April 23, 2020.

UPDATE: May 1, 2020

On April 30, 2020, it was reported Tonya Ugoretz, deputy Assistant Director of the FBI Cyber Division, stated the FBI’s Internet Crime Complaint Center (IC3) is currently receiving between 3,000 and 4,000 cybersecurity complaints daily – IC3 normally averages 1,000 daily complaints.

UPDATE: May 6, 2020

On May 5, 2020, a joint alert from the United States Department of Homeland Security Cybersecurity and Infrastructure Security Agency and the United Kingdom’s National Cyber Security Centre warned of APTs targeting healthcare and essential services.

The alert warned of “ongoing activity by APT groups against organizations involved in both national and international COVID-19 responses.”  This May 5, 2020 alert follows an April 8, 2020 Alert that warned in broader terms of malicious cyber actors exploiting COVID-19.

APTs are conducted by nation-state actors given the level of resources and money needed to launch such an attack.  Moreover, they generally take between eight and nine months to plan and coordinate before launching.  It is particularly disheartening that these recent attacks include those launched by state-backed Chinese hackers known as APT41.  As one cybersecurity firm points out in a recently-released white paper:  “APT41’s involvement is impossible to deny.”

Distilled to its essence, the uncovered APT41 attacks mean that before COVID-19 was even on US shores, Chinese state-actors were planning attacks targeting the healthcare and pharmaceutical sectors.  One can only hope the cyberattacks were not coordinated alongside the spread of the virus – a virus that only became public months after a coordinated attack would have been first planned.