All posts by Paul E. Paray

Sins of our Marketers: SMS, the Telephone Consumer Protection Act, and Strict Liability

In continuing a trend that took hold nearly four years ago in Satterfield v. Simon & Schuster, Inc., 569 F. 3d 946 (9th Cir. 2009), a putative class action was filed on January 25, 2013 alleging that unsolicited SMS texts give rise to statutory damages under the Telephone Consumer Protection Act (TCPA).  Although the suit names a big box retailer as the defendant, the allegations are based on the conduct of “a mobile technology company whose identity is currently unknown.”

Under the TCPA, it is unlawful to make “any call (other than a call made for emergency purposes or made with the prior express consent of the called party) using any automatic telephone dialing system [ATDS] . . . [to any] cellular telephone service.” 47 U.S.C. Sec. 227(b)(1)(A).  Although the TCPA was enacted years before SMS was a reality, the FCC and courts in California and Chicago have interpreted the undefined term “any call” to include SMS texts so long as the SMS text was sent using an ATDS.

Courts have already ruled that “the TCPA is essentially a strict liability statute which imposes liability for erroneous unsolicited faxes.” Alea London Ltd. v. Am. Home Services, 638 F.3d 768, 776 (11th Cir. 2011) (citation omitted).  See also Universal Underwriters Ins. Co. v. Lou Fusz Auto. Network, Inc., 401 F.3d 876, 882 (8th Cir. 2005) (“The Act makes no exception for senders who mistakenly believe that recipients’ permission or invitation existed.”).  This means that class action counsel need only demonstrate that the SMS messages went out unsolicited via an ATDS and statutory damages will likely follow.

As now being pressed in the Hill putative class action filed on January 25, 2013, this strict liability for unsolicited SMS messages may also extend from the actual sender, i.e., marketer, to the retailer.  Several years ago, the FTC responded to a request for public comments filed by the FCC regarding the following two questions:   “First, does a call placed by an entity that markets a seller’s goods and services qualify as a call made on behalf of, and initiated by, the seller, even if the seller does not physically place the call?; and second, what should determine whether a telemarketing call is made “on behalf of” a seller, thus triggering liability under the TCPA?”

The FTC answered with a vigorous defense of its view that “the plain meaning of the law and its regulations supports holding sellers liable for calls made for the seller’s benefit.”  Given the FTC was merely responding to the FCC’s request for comments and given the FCC has yet to release its final ruling, it remains to be seen whether the courts will ultimately side with the FTC view.  Indeed, several courts have explicitly rejected the FTC position regarding vicarious strict liability.  See, e.g., Mey v. Pinnacle Security, LLC, No. 5:11CV47, slip op. (N.D. W. Va. Sept. 12, 2012) (“In the Spring of 2011, the FCC released a public notice requesting comment on the issue of strict “on behalf of” liability under §227(b)(3), and this Court has not received information that a ruling has yet been issued on the matter.  26 FCC Rcd 5040. . . . Accordingly, this Court finds that the TCPA does not provide strict “on behalf of” liability under § 227(b)(3).”) (citing Thomas v. Taco Bell Corp., 2012 U.S. Dist. LEXIS 107097, No. SACV 09-01097-CJC (C.D. Cal. June 25, 2012)).

Whether or not the FTC is ultimately vindicated by the courts on this issue, it is clear that the FTC is not oblivious to the mechanics of mobile marketing.   For example, the FTC has found that a one-time text message confirming a consumer’s request that no further text messages be sent was not violative of the TCPA.  Notwithstanding any current temporary safe harbor that may exist, the takeaway remains that firms may be on the hook for what their marketing, promotional, and advertising firms are doing when it comes to SMS campaigns.

Given the FTC’s stated desire to visit on innocent retailers the sins of their marketers and the difficulty of insuring against this risk, it is obviously more important than ever for those who rely on SMS campaigns to always verify appropriate consent and obtain suitable contractual indemnifications.

First Amendment Does Not Save NJ Teacher Fired Over Facebook Postings

In a January 11, 2013 ruling, the New Jersey Appellate Division upheld the administrative dismissal of a first grade teacher.  She had argued that the First Amendment precluded her firing — which was based on two Facebook postings.  In the Matter of the Tenure Hearing of Jennifer O’Brien (NJ App. Div. January 11, 2013).  One of her statements was, “I’m not a teacher — I’m a warden for future criminals!”

O’Brien said she posted the statement that her students were “future criminals” because of “their behaviors, not because of their race or ethnicity.”  She also stated that “six or seven of her students had behavioral problems, which had an adverse impact on the classroom environment.”  Id. at 4 – 5.

In finding that she failed to establish her Facebook postings were protected speech, the Appellate Division found that “even if O’Brien’s comments were on a matter of public concern, her right to express those comments was outweighed by the district’s interest in the efficient operation of its schools.”  Id. at 11.

This ruling sits in contrast to the NLRB’s frequent warnings regarding the sanctity of worker postings — especially when the postings pertain to workplace conditions.  The cringe-worthy nature of these postings, the fact that they were directed at first graders, and the deference accorded administrative proceedings certainly all made it easy for the Appellate Division to rule as it did.  Accordingly, employers should not take great comfort in this ruling when evaluating whether to discipline employees for inflammatory postings.

New Jersey Fast Tracks Employer Social Media Bill

New Jersey is on the verge of having the harshest law aimed at preventing employers from delving into the social media postings of employees.  In what is considered lightning speed for New Jersey legislative action, the New Jersey Assembly fast-tracked a bill in May; it was approved by the Assembly in June by a 76-1 vote and by the Senate in October by a 38-0 margin.  The bill – A2878 – is now poised for signature by Governor Christie by the end of the year.

If signed by the Governor, it will be tougher than the similar laws on the books in Maryland, California and Illinois.  All of these laws are aimed primarily at prohibiting employers from asking for social media passwords.  If enacted, New Jersey’s law would also preclude employers from asking whether an employee or prospective employee even has a social media account.  And, any agreement to waive this protection would be deemed void under the law.  There are also civil penalties for violations, beginning at $1,000 for an initial violation and increasing to $2,500 for each additional violation.

The New Jersey law would obviously generate issues for an employer looking to comply while still ensuring a secure work environment for its employees.  To that end, the new law would not bar company policies curtailing the use of employer-issued electronic communications devices during work hours.  Not surprisingly, it is the blurring of private vs. public social media usage that promises to be a major driver of any future civil litigation.  What may end up being the most important factor in how much litigation this new law generates, however, is the fact that reasonable attorneys’ fees may also be recoverable under the statute.  Without the financial incentive of a class action or statutory fees, there would be few attorneys willing to bring actions based on $1,000 violations.

UPDATE – February 21, 2013

The bill has still not been signed into law — so much for being fast tracked!  Rather than agree to several Senate changes to the bill and then pass it along to the Governor for signature, the Assembly has chosen to sit on the bill.  A good discussion of the latest status of this proposed law can be found in Law360.

UPDATE – March 25, 2013

On March 21, 2013, the bill passed the Assembly by a whopping 75 – 2 vote and is now on the Governor’s desk.

UPDATE – May 7, 2013

On May 6, 2013, Governor Christie conditionally vetoed the bill.  In his statement, he suggested that the bill would have been overbroad in reach and gave the following example of an unintended consequence of such breadth:

[U]nder this bill, an employer interviewing a candidate for a marketing job would be prohibited from asking about the candidate’s use of social networking so as to gauge the candidate’s technological skills and media savvy. Such a relevant and innocuous inquiry would, under this bill, subject an employer to protracted litigation.

The Governor also vetoed the part of the bill that would have allowed for a private right of action.  He felt any dispute would be better resolved by the state labor commissioner.  According to the bill’s sponsor, the Assembly will likely adopt Governor Christie’s suggestions in order to have the bill signed into law.  In effect, the most controversial aspect of the bill was just removed.  While some New Jersey businesses may be breathing a sigh of relief, the plaintiffs’ bar is certainly no longer excited about this bill.

October is National Cyber Security Awareness Month

National Cyber Security Awareness Month is being sponsored by the Department of Homeland Security as well as the National Cyber Security Alliance and the Multi-State Information Sharing and Analysis Center.  In a Presidential Proclamation, President Obama called “upon the people of the United States to recognize the importance of cybersecurity and to observe this month with activities, events, and trainings that will enhance our national security and resilience.”  Many of the same corporations and universities who promote Privacy Day in January also promote NCSAM in October.

According to the FBI, since the first NCSAM was celebrated nine years ago, the network security threat has continued to grow even more complex and sophisticated — “Just 12 days ago, in fact, FBI Director Robert Mueller said that ‘cyber security may well become our highest priority in the years to come.'”

There is no denying the obvious good in promoting security awareness and diligence.  It is hoped, however, that a month devoted to “cyber security awareness” does not inadvertently dilute the more important message that security diligence is something that should be done every day of the year.   On the other hand, to the extent NCSAM’s “Stop.Think.Connect.” message touches even one small business owner in Des Moines and makes her less likely to fall victim to a phishing exploit in the future, NCSAM will be a success.

The Privacy Tug of War

According to the World Economic Forum, “personal data represents an emerging asset class, potentially every bit as valuable as other assets such as traded goods, gold or oil.”  Given the inherent value of this new asset class, it’s no surprise there has been an ongoing tug of war regarding how consumers should be compensated for access to their personal data.

In a March 2003 Wired article titled “Who’s Winning Privacy Tug of War?“, the author suggests that “[c]onsumers appear to have become weary of the advertising bombardment, no matter how targeted to their tastes those ads may be.”  And, the “tit-for-tat tactic on the Web” that requires users to provide certain personal information in exchange for product or other information may be much less than a perfect marketing model given these marketing preference databases “are polluted with lies.”

Fast forward a decade or so and companies are still trying to figure out the Privacy Tug of War rules of engagement.  On September 19, 2012, UK think tank Demos released a report it considered “the most in-depth research to date on the public’s attitudes toward the sharing of information.”   Not surprisingly, Demos found that in order to maximize the potential value of customer data, there needs to be “a certain level of trust established and a fair value exchange.”   The firm also found that only 19 percent of those surveyed understand the value of their data and the benefits of sharing it.

The surveys, workshops and other research tools referenced in the Demos report all point towards a “crisis of confidence” which may “lead to people sharing less information and data, which would have detrimental results for individuals, companies and the economy.”   Demos offers up a possible solution to this potential crisis:

The solution is to ensure individuals have more control over what, when and how they share information. Privacy is not easily defined. It is a negotiated concept that changes with technology and culture. It needs continually updating as circumstances and values change, which in turn requires democratic deliberation and a dialogue between the parties involved.

It is hard to have any meaningful deliberations when no one is charting a clear path to victory in the Privacy Tug of War — nor is there any consensus regarding whether it is preferable to even have such a path.   Some on the privacy circuit have suggested we must create better privacy metrics and offer tools to use those metrics to measure whether a company’s privacy protections are “satisfactory”.   Consumers right now can rely on sites such as Clickwrapped to score the online privacy policies of major online brands.   Certification services such as TRUSTe provide insight regarding the online privacy standards of thousands of websites.   If they don’t like what they see, consumers can always “opt out” and use services such as that of start-up Safe Shepherd to remove “your family’s personal info from websites that sell it.”

Unfortunately, no commercially available privacy safeguard, testing service or certification can ever move fast enough to address technological advances that erode consumer privacy, given that such advances will always launch unabated — and undetected — for a period of time.  Not unlike Moore’s Law and its prediction that transistor counts double roughly every two years, it appears that consumer privacy diminishes in some direct proportion to new technological advances.  Consumer privacy expectations should obviously be guided accordingly.  Unlike with Moore’s Law, however, there is no uniform technology, product, or privacy metric that can be benchmarked as there is in the computer industry.

This does not mean we are powerless to follow technology trends and quantify an associated privacy impact.  For example, the Philip Dick/Steven Spielberg Minority Report vision of the future, where public iris scanning offers up customized advertisements to people walking around a mall, has already taken root in at least one issued iris-scanning patent that is jointly owned by the federal government and a start-up looking to serve ads using facial recognition techniques.  In direct reaction to EU criticism of Facebook’s own facial recognition initiative, Facebook temporarily suspended its “tag-suggest” feature.  This automatic facial recognition system recognized and suggested names for those people included in photographs uploaded to Facebook – without first obtaining the consent of those so recognized and tagged.

Closely monitoring technological advances that may impact privacy rights — whether the body diagnostics of Mc10, ingested medical sensors from Proteus, the latest in Big Data analytics, or a new EHR system that seamlessly ties such innovations together — becomes the necessary first step towards understanding how to partake in the Privacy Tug of War.

Unlike the PC industry, which is tied to Moore’s Law, our government brings essentially unbounded funding to the development of privacy-curtailing technological advances.  For example, the FBI is currently undertaking a billion-dollar upgrade to create its Next Generation Identification Program, which will deploy the latest in facial recognition technologies.  As recognized by CMU Professor Alessandro Acquisti, this “combination of face recognition, social networks data and data mining can significantly undermine our current notions and expectations of privacy and anonymity.”

Not surprisingly, there has been some pushback on such government initiatives.  For example, on September 25, 2012, the ACLU filed suit against several government agencies under the Freedom of Information Act seeking records on their use and funding of automatic license plate readers (ALPRs).  According to the Complaint, “ALPRs are cameras mounted on stationary objects (e.g., telephone poles and the underside of bridges) or on patrol cars [and] photograph the license plate of each vehicle that passes, capturing information on up to thousands of cars per minute.”   The ACLU suggests that ALPRs “pose a serious threat to innocent Americans’ privacy.”

The imminent unleashing of unmanned aircraft systems – commonly known as “drones” – sets in motion another technological advance that should raise serious concerns for just about anyone.  Signed by President Obama in February 2012, the FAA Modernization and Reform Act of 2012, among other things, requires that the Federal Aviation Administration accelerate the use of drone flights:

Not later than 270 days after the date of enactment of this Act, the Secretary of Transportation, in consultation with representatives of the aviation industry, Federal agencies that employ unmanned aircraft systems technology in the national airspace system, and the unmanned aircraft systems industry, shall develop a comprehensive plan to safely accelerate the integration of civil unmanned aircraft systems into the national airspace system.

As recognized by the Government Accountability Office in a September 14, 2012 Report, even though “[m]any [privacy] stakeholders believe that there should be federal regulations” to protect the privacy of individuals from drone usage, “it is not clear what entity should be responsible for addressing privacy concerns across the federal government.”

This is not an insignificant failing given that, according to this same report, commercial and government drone expenditures could top $89.1 billion over the next decade ($28.5 billion for R&D and $60.6 billion for procurement).  Interestingly, the required comprehensive plan to accelerate integration of civil drones into our national airspace system will be due on November 10, 2012 – right after the elections.   According to an Associated Press-National Constitution Center poll, 36 percent of those polled say they “strongly oppose” or “somewhat oppose” police use of drones.   This somewhat muted response is likely driven by the fact that most of those polled just do not understand the capabilities of these drones and just how pervasive they will become in the coming years.

The technology advance that may have the greatest impact on privacy rights does not take to the skies but is actually found in most pockets and purses.   The same survey referenced above found that 43 percent of those polled (the highest percentage) primarily use a mobile device alone rather than a landline or a combination of mobile device and landline — with 34 percent of those polled not even having a landline in their home.   Not surprisingly, companies have been aggressively tapping into the Big Data treasure trove available from mobile device usage.   Some politicians have taken notice and are already drawing lines in the digital sand.

Under the Mobile Device Privacy Act introduced by Congressman Edward J. Markey, anyone who sells a mobile service, device, or app must inform customers if their product contains monitoring software — with statutory penalties ranging from $1,000 per unintentional violation to $3,000 per intentional violation.   This new bill addresses only a single transgression in the personal-data orgy now being enjoyed by so many different companies up and down the mobile device communication and tech food chain.   As evidenced by the current patent landscape — including an issued Google patent that involves serving ads based on a mobile device’s environmental sounds — and the now well-known GPS capabilities of mobile devices, the privacy Battle of Midway will likely be fought around mobile devices. Companies with a stake in the Privacy Tug of War — as well as those professionals who advise such companies — will only be adequately prepared if they recognize that this battle may ultimately have no clear winners or losers — only willing participants.

World Intellectual Property Day

Happy World Intellectual Property Day!

To increase IP awareness around the world, member states of the World Intellectual Property Organization (WIPO) chose April 26, the day the WIPO Convention came into force in 1970, as World IP Day.  According to WIPO, World IP Day celebrates innovation and creativity and how intellectual property fosters and encourages them.  To celebrate this day, what follows is a discussion of four significant US court rulings decided in April 2012, each involving one of the major IP domains: patent, trademark, copyright and trade secret.

Communications Involving Patent Settlements are Discoverable

On April 9, 2012, the United States Court of Appeals for the Federal Circuit ruled that communications involving reasonable royalty rates and damage calculations were discoverable.   Specifically, the Federal Circuit ruled that such communications that may underlie settlement agreements were not worthy of creating a new federal privilege.   In re  MSTG, Inc., No. 996 (Fed. Cir. April 9, 2012).   There was previously an open question as to whether settlement discussions were privileged and not subject to disclosure.  The Sixth Circuit in Goodyear Tire & Rubber Co. v. Chiles Power Supply, Inc., 332 F.3d 976, 979-83 (6th Cir. 2003) adopted a settlement privilege while such a privilege was rejected by the Seventh Circuit in In re General Motors Corp. Engine Interchange Litigation, 594 F.2d 1106, 1124 n.20 (7th Cir. 1979).

In rejecting MSTG’s request to create a settlement privilege that would protect the reasonable royalty rate discussions had with other defendants, the Federal Circuit distinguished Fed. R. Evid. 408.  According to the court, Fed. R. Evid. 408 only addresses the inadmissibility of settlement discussions (for purposes of showing the validity or amount of a claim) and does not expressly prohibit the discovery of such material.   Id. at 11 – 12.   Finding there was no good reason to create a new privilege under the circumstances, the Federal Circuit found communications underlying settlement discussions to be fair game, at least so long as the requests otherwise comport with the rules of discovery.

Given the In re MSTG, Inc. decision, future patent plaintiffs will now have to contend with the possibility of disclosures being made on sensitive settlement discussions. This decision is noteworthy given that settlements are sometimes done for strategic reasons that may not be directly tied to the relative worth of the settled patents – one settlement against a competitor may yield very different results as against another competitor.  Moreover, it may make it more difficult to settle patent disputes if a patent holder feels it needs to establish a certain record it can use in future disputes. This is further complicated by the fact that patent litigation may eventually reach new heights with the September 2011 passage of the Leahy-Smith America Invents Act and the current status of patent portfolios as a competitive currency for very large corporations.   Microsoft’s $1.1 billion purchase of 925 AOL patents and Facebook’s subsequent purchase of 650 of these Microsoft/AOL patents for $550 million are illustrative of this competitive currency approach to patents.  No matter how the patent litigation landscape changes down the road, plaintiffs now need to take a structured and strategic approach to settlement discussions given that what is said in one case can very well impact the results of future litigation.

Keyword Trademark Cases Remain Viable

In this latest of a long line of cases against Google for keyword trademark infringement, a surprise appellate decision was handed down on April 9, 2012.   Rosetta Stone Ltd. v. Google, Inc., No. 10-2007 (4th Cir. April 9, 2012), reversing, Rosetta Stone Ltd. v. Google Inc., 730 F. Supp. 2d 531 (E.D. Va. 2010).   In reversing portions of the lower court’s summary judgment grant in favor of Google, the Fourth Circuit reinstated plaintiff’s direct infringement, contributory infringement and dilution trademark claims.   In reviving the direct infringement claim, which only involved a likelihood of confusion analysis, the court ruled that even well-educated, seasoned Internet consumers are confused by the nature of Google’s sponsored links and are sometimes even unaware that sponsored links are, in actuality, advertisements: “At the summary judgment stage, we cannot say on this record that the consumer sophistication factor favors Google as a matter of law.”   Id. at 24 – 25.   In fact, the Court noted, such uncertainty may constitute “quintessential actual confusion evidence.”  Id. at 22.  The Fourth Circuit relied on various internal Google studies analyzing consumer confusion in connection with sponsored links, including studies that concluded “the likelihood of confusion remains high when trademark terms are used in the title or body of a sponsored link appearing on a search results page and 94% of consumers were confused at least once.”  Id. at 21.

This decision stands in sharp contrast to other decisions that have ruled on this particular likelihood of confusion issue. Previously, courts have found that in an age of sophisticated Internet users, it makes little sense to continue with the notion that users will be confused between sponsored results with trademark-protected keywords and standard search results or even by domain names containing trademarked words.  See Network Automation, Inc., v. Advanced System Concepts, Inc., 638 F.3d 1137, 1152 (9th Cir. 2011).

The contributory infringement claim was revived given that Rosetta Stone provided Google with approximately 200 instances of counterfeit products found on sponsored links.  This was deemed sufficient to raise a question of fact regarding Google’s knowledge of identified individuals using sponsored links to infringe Rosetta Stone’s marks.  Rosetta Stone Ltd. v. Google, Inc., Slip Op. at 30.  The Fourth Circuit also reversed summary judgment on the dilution claim given that the lower court applied the wrong standard when applying available defenses to a dilution claim under the Lanham Act. Id. at 39 – 41.  This and other technical errors made by the lower court may yield only a short-term victory for Rosetta Stone given that, on remand, the court will ultimately determine whether Rosetta Stone’s brand was famous in 2004 – if it was not, the dilution claim is lost.  Id. at 47.  This may be a difficult burden for Rosetta Stone since the court recognized the brand actually became more famous in the years after 2004.  Given the dilution reversal was based largely on technical deficiencies in how the lower court interpreted the fair use defense, the Fourth Circuit missed an opportunity to opine on the more interesting question of whether Rosetta Stone could even bring a dilution claim against Google given there is a very real question as to whether Google sufficiently used the Rosetta Stone marks in commerce.  Id. at 39-40.

The ultimate significance of this case may eventually pivot outside of the search engine context.  For example, despite the solid body of law that continues to sanction keyword marketing, contextual advertisers may benefit from reevaluating their use of keyword triggers associated with famous marks.   And, likelihood of confusion inquiries may reach a new realm with augmented reality devices such as Google’s Project Glass, given that advertisers may be able to physically guide users towards products and services based on verbal commands and trademark usage, all without a single trademark ever being displayed.

DMCA Safe Harbor Provisions Raise Copyright Infringement Questions of Fact

On April 5, 2012, the Second Circuit reinstated Viacom’s long-running copyright infringement action against YouTube.  Viacom Intl., Inc. v. YouTube, Inc., Nos. 10-3270-cv, 10-3342-cv (2nd Cir. April 5, 2012).   In its ruling, the court offered an analysis regarding the complete safe harbor framework available to online service providers under the Digital Millennium Copyright Act (DMCA), 17 U.S.C. § 512.  It also reaffirmed that the DMCA safe harbor provisions can protect a defendant from all affirmative claims for copyright infringement, including claims for direct infringement, vicarious liability, and contributory liability.

At its most basic, the Second Circuit found that existing questions of fact regarding YouTube’s level of knowledge precluded summary judgment.  Viacom’s five-year suit for direct and secondary copyright infringement previously came to a halt when the trial court found that YouTube was protected by the DMCA’s safe harbor provision given it had insufficient notice of the particular infringements in suit.  Viacom Intl., Inc. v. YouTube, Inc., 718 F. Supp. 2d 514, 529 (S.D.N.Y. 2010).  Under § 512(c)(1)(A), safe harbor protection is available only if the service provider:

(i) does not have actual knowledge that the material or an activity using the material on the system or network is infringing;

(ii) in the absence of such actual knowledge, is not aware of facts or circumstances from which infringing activity is apparent; or

(iii) upon obtaining such knowledge or awareness, acts expeditiously to remove, or disable access to, the material

Viacom Intl., Inc. v. YouTube, Inc., Slip Op. at 15 (citing 17 U.S.C. § 512(c)(1)(A)).  The lower court held that the actual knowledge and the “facts and circumstances” requirements both refer to knowledge of specific and identifiable infringements and not mere general awareness of infringing activity.  Viacom Intl., Inc. v. YouTube, Inc., 718 F. Supp. 2d at 523.  Although it affirmed this ruling, the Second Circuit further distinguished as follows:

The difference between actual and red flag knowledge is thus not between specific and generalized knowledge, but instead between a subjective and an objective standard. In other words, the actual knowledge provision turns on whether the provider actually or subjectively knew of specific infringement, while the red flag provision turns on whether the provider was subjectively aware of facts that would have made the specific infringement objectively obvious to a reasonable person.

Viacom Intl., Inc. v. YouTube, Inc., Slip Op. at 17.  Parting company with the lower court, the Second Circuit found that the current state of facts raised triable questions of fact regarding these two tests.  Id. at 20 – 22.  The remand was to determine specific instances of knowledge or awareness and whether such instances mirror the actual clips-in-suit.  Id. at 22.

The Second Circuit also offered the doctrine of “willful blindness,” a concept not referenced in the DMCA, as yet another means of demonstrating actual knowledge or awareness of specific instances of infringement.   To that end, it remanded for further fact-finding and resolution regarding whether YouTube made a “deliberate effort to avoid guilty knowledge.”  Id. at 24.

In addition to the above DMCA knowledge provisions, the DMCA provides that an eligible service provider must “not receive a financial benefit directly attributable to the infringing activity, in a case in which the service provider has the right and ability to control such activity.” Id. at 24 (citing 17 U.S.C. § 512(c)(1)(B)).  After reviewing this “right and ability to control” test, the Second Circuit rejected the lower court’s view that a service provider must actually know of a particular case of infringement before it can control it.  Id. at 25.   Rather, the Second Circuit chose to agree with other courts that have determined a finding of liability only requires something more than the ability to remove or block access to materials posted on a service provider’s website.  Id. at 27 (citations omitted).   And, this “something more” involves “exerting substantial influence on the activities of users,” so a remand was necessitated to flesh out this standard and determine whether YouTube satisfied it.  Id. at 28 – 29.

Although the Second Circuit’s decision provides solid authority on a wide range of DMCA safe harbor interpretive issues, it may ultimately create problems for both content owners and online service providers to the extent it leaves the summary judgment door unpredictably ajar for future litigants.

Theft of Trade Secrets Not Necessarily a Federal Offense

On April 11, 2012, the Second Circuit overturned the eight-year sentence imposed on a computer programmer for the theft of trade secrets under the Economic Espionage Act of 1996, 18 U.S.C. § 1832(a)(2) & (4) (EEA), and transportation of stolen property in interstate commerce under the National Stolen Property Act, 18 U.S.C. § 2314 (NSPA).  United States v. Aleynikov, No. 11-1126 (2d Cir. April 11, 2012).   The NSPA makes it a crime to “transport, transmit, or transfer in interstate or foreign commerce any goods, wares, merchandise, securities or money, of the value of $5,000 or more, knowing the same to have been stolen, converted or taken by fraud.”  18 U.S.C. § 2314.  The statute does not define the terms “goods, wares, or merchandise.”

The EEA makes it a crime for someone to “convert a trade secret, that is related to or included in a product that is produced for or placed in interstate or foreign commerce, to the economic benefit of anyone other than the owner thereof, and intending or knowing that the offense will, injure any owner of that trade secret, knowingly. . . steals, or without authorization appropriates, takes, carries away, or conceals, or by fraud, artifice, or deception obtains such information…” 18 U.S.C. § 1832(a).

Although the defendant computer programmer was convicted of stealing computer source code from his former employer, the Second Circuit strictly construed both federal statutes when tossing the convictions.  Id. at 10.  First, the court determined the defendant was wrongly charged under the NSPA because the intangible source code did not qualify as a physical object.  Id. at 14 – 15.  Declining “to stretch or update statutory words of plain and ordinary meaning in order to better accommodate the digital age”, the Second Circuit held that because the defendant did not “assume physical control” over anything when he took the source code, and because “he did not thereby deprive [his employer] of its use, [defendant] did not violate the [NSPA].”  Id. at 18.  And, given that the stolen code was neither “produced for nor placed in interstate or foreign commerce given the employer had no intention of selling its HFT system or licensing it to anyone”, the EEA was not violated. Id. at 27.

The failure of the EEA to address the defendant’s conduct here is problematic given the EEA was “passed after the Supreme Court and the Tenth Circuit said the NSPA did not cover intellectual property.”   Id. at 2 (Calabresi, J., concurring) (citations omitted).  The statute was apparently meant to pick up the theft of intellectual property such as proprietary source code.   The concurrence by Judge Calabresi suggests that Congress should jump in to rectify this apparently significant hole in the EEA:  “While the legislative history can be read to create some ambiguity as to how broad a reach the EEA was designed to have, it is hard for me to conclude that Congress, in this law, actually meant to exempt the kind of behavior in which Aleynikov engaged. . . . I wish to express the hope that Congress will return to the issue and state, in appropriate language, what I believe they meant to make criminal in the EEA.”  Id. at 2 (Calabresi, J., concurring).

If nothing else, this decision reaffirms the need for companies to be proactive in the defense of their trade secrets.  Until Congress fixes the EEA, it is just not enough to assume that conduct such as the theft of source code will rise to the level of a federal offense.

Basketball, Julius Caesar, and Privacy

March Madness and murdered dictators aside, next month may be memorable for significant new privacy policies and obligations coming online — especially those for vendors holding sensitive information of Massachusetts residents.  Given the expiration of a two-year grace period, Massachusetts will require, effective March 1, 2012, that all service provider contracts include provisions requiring the service provider to implement and maintain security measures for personal information consistent with the Standards for the Protection of Personal Information of Residents of the Commonwealth, 201 CMR 17.00.

A service provider must comply with this regulation if it “receives, stores, maintains, processes, or otherwise has access to personal information” of Massachusetts residents, e.g., social security numbers, driver license numbers, and financial account information, in connection with the provision of goods or services or in connection with employment.  For compliance purposes, it does not matter whether the service provider actually maintains a place of business in Massachusetts.    In addition, those companies that are subject to the regulation must oversee service providers by taking reasonable steps to select and retain service providers who are compliant.  Penalties for non-compliance can be enforced through the Massachusetts Consumer Protection Statute and include penalties under that law as well as a possible civil penalty of up to $5,000 for each violation, plus reasonable costs of investigation and attorney’s fees.

On the consumer side, starting March 1, 2012, Google’s new privacy policy will bring together its various privacy documents into a single umbrella privacy policy.  Once implemented, the policy will treat logged-in users as a single user across all Google products.   Concern over the way Google’s new policy would grant the data aggregator control over user data and allegedly “hold hostage” consumer personal information has caused attorneys general from around the country to reach out to Google.    Not one to miss out on the fun, one EU regulator has chimed in claiming that it is “deeply concerned” about the new Google policy.  And, EPIC even filed suit to enforce an FTC settlement in its effort to stop the March privacy change — a lawsuit that was dismissed on February 24, 2012.   Given the policy will likely be implemented in a few days, consumers wanting to avoid some of the potential privacy sting of these changes can heed some advice from the EFF.

Finally, on March 7, 2012, HHS is scheduled to publish in the Federal Register its final proposed rule regarding what constitutes “meaningful use” of EHR sufficient to trigger incentive payments under the HITECH Act.  A draft of the proposed rule is currently available.   It remains to be seen whether this push for EHR usage will ultimately add to or subtract from healthcare data breaches.

As it stands, a HIPAA covered entity must provide notice to the HHS Secretary “without unreasonable delay and in no case later than 60 days from discovery of the breach” impacting 500 or more individuals.  To assist in reporting, there is even an online means of disclosing breaches.  The current list of all such disclosed breaches is publicly available; and not surprisingly, incidents have been steadily increasing as per an analysis done by OCR of breaches occurring in 2009 and 2010. 

The annual OCR report indicates that larger breaches occurred “as a result of theft, error, or a failure to take adequate care of protected health information.”  OCR Report at 9.  It is not difficult to imagine that efforts to obtain governmental incentive payments by achieving meaningful EHR usage — as the term will be further refined in March — may actually cause an uptick in breaches.    Despite the requirement that every EHR Module be certified to “privacy and security” certification criteria, which will ultimately be determined by the HHS Secretary, these incentive payments will continue to be tied to usage and not necessarily verifiable compliance with a security standard.   Given that HITECH’s financial incentives remain based on usage and not protection, “sticks” such as reductions in Medicare payments and stiff HITECH fines will continue to be the only real governmental incentive to maintain adequate protection.   It would be nice if HHS, instead, developed a financial incentive or reward program for those firms who go the extra distance (as per NIST standards) when providing security.  Maybe such a program will make the agenda after the OCR releases a few more breach reports.

Data Privacy Day 2012

Deserving of a fairly large yawn, International Data Privacy Day fell on a Saturday this year.  The US sponsors — who are basically large tech companies — can hardly be faulted for failing to elevate the day to true holiday status.  In Europe, the festivities are equally lame.  Last year was not much different.

Why was January 28th even chosen to celebrate privacy?  Well, because it is generally recognized that the first stab at a statutory privacy scheme came into being on 28 January 1981 when the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data was passed by the Council of Europe.  The purpose of this convention was to secure for residents respect for “rights and fundamental freedoms, and in particular his right to privacy, with regard to automatic processing of personal data relating to him.”

It was actually in 1965 — 16 years earlier — when the US Supreme Court, in Griswold v. Connecticut, 381 U.S. 479 (1965), formally recognized that every US citizen enjoys a constitutional “zone of privacy” by way of the Bill of Rights. Indeed, probably the best known judicial wording on the subject was written in 1928 when Justice Brandeis wrote in a dissent:

The protection guaranteed by the Amendments is much broader in scope. The makers of our Constitution undertook to secure conditions favorable to the pursuit of happiness. They recognized the significance of man’s spiritual nature, of his feelings, and of his intellect. They knew that only a part of the pain, pleasure and satisfactions of life are to be found in material things. They sought to protect Americans in their beliefs, their thoughts, their emotions and their sensations. They conferred, as against the Government, the right to be let alone — the most comprehensive of rights, and the right most valued by civilized men. To protect that right, every unjustifiable intrusion by the Government upon the privacy of the individual, whatever the means employed, must be deemed a violation of the Fourth Amendment.

Olmstead v. United States, 277 U.S. 438 (1928) (Brandeis, J., dissenting).

Fast forward to January 23, 2012 and the case of United States v. Jones is decided by the Supreme Court.  It is the Court’s first look at how the Fourth Amendment applies to police use of GPS technology.  This fractured decision — in which a majority agreed only that the defendant’s Fourth Amendment rights were violated when a GPS device was attached to his Jeep for 28 days — does provide an interesting glimpse into future rulings even though many relevant questions were left unanswered by the Court.

For example, Justice Sotomayor asks rhetorically:

it may be necessary to reconsider the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties.  This approach is ill suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks…Perhaps, as JUSTICE ALITO notes, some people may find the tradeoff of privacy for convenience worthwhile, or come to accept this diminution of privacy as inevitable, post, at 10, and perhaps not.

Justice Sotomayor may one day get the opportunity to expand on her dicta.  Although it is uncertain when that may happen, what is certain is that the privacy landscape will be quite different by the time Data Privacy Day 2013 rolls around.

EU Data Breach Notification in 24 Hours?

On January 25, 2012, the European Union will announce a comprehensive reform of its data protection rules.  This proposed shift will likely toughen existing data-protection requirements and, according to one published report, will include a new rule requiring companies to disclose data breaches within 24 hours of the breach – in effect leapfrogging the toughest existing breach notification laws of the United States.   The EU’s initial Data Protection Directive does not even have a breach notification requirement.

The proposed retooling of the 1995 Directive will also likely prod national data-protection authorities within the 27-member EU to assess administrative sanctions and fines.  Interestingly, an EU conference will be held in Washington, D.C. on March 19, 2012 to obtain feedback from US stakeholders.  One issue that will likely be aired at this D.C. conference is a potential new EU privacy right “to be forgotten” — a hot topic at the most recent International Conference of Data Protection and Privacy Commissioners.   Viviane Reding, Vice-President of the European Commission and EU Justice Commissioner, has recently publicly called for such a right:  “I also want to create a right to be forgotten, which will build on existing rules to better cope with privacy risks online. If an individual no longer wants their personal data to be processed or stored by a data controller, and if there is no legitimate reason for keeping it, the data should be removed from their system.”

Although the proposed new directive framework to be announced on January 25, 2012 will take some time to be implemented by EU member countries and then enforced by the respective member Data Protection and Privacy Commissioners, it is clear that the EU privacy world will soon be changing in a dramatic way.  Those firms processing personal data within the EU are well advised to take notice and prepare for potential new obligations and privacy requirements.

Update:  January 25, 2012
The new proposed set of rules will indeed morph into “big news” if ultimately passed.  First of all, the 1995 Directive will be repealed in favor of a consistent approach for all member states.   In fact, these new rules might also impact US businesses to the extent they process EU protected data and have an EU presence.  In a nod to harmonization hawks, only one member state would have authority to regulate a particular business even if the data was processed among several member states — jurisdiction will ultimately be determined by domicile or where the bulk of processing takes place.

As reported, the proposed new rules do indeed include a notification provision that requires notification “without undue delay and, where feasible, not later than 24 hours of becoming aware of [the breach]”.   And, a new “right to be forgotten” is also created via these new rules.  Fines for non-compliance with these rules can reach up to 2% of a firm’s gross worldwide turnover (“revenue”).

There are other noteworthy changes so it is definitely worth taking the time to fully review this proposed comprehensive reform of EU data protection rules found at the European Commission website.

Third Circuit Agrees Standing is Lacking in Breach Case

The United States Court of Appeals for the Third Circuit, in Reilly v. Ceridian Corporation, 2011 U.S. App. LEXIS 24561, 3 (3d Cir., December 12, 2011), found that “allegations of an increased risk of identity theft resulting from a security breach” were insufficient to secure Article III standing.  In so doing, the court affirmed the dismissal of claims brought by former employees of a NJ law firm after the firm’s payroll processor was breached.

Recognizing that “a number of courts have had occasion to decide whether the ‘risk of future harm’ posed by data security breaches confers standing on persons whose information may have been accessed”, the Third Circuit sided with those courts finding that plaintiffs lack standing because the harm caused is too speculative.   Specifically, the court did not consider an intrusion that penetrated a firewall and potentially allowed access to employee payroll data sufficient to meet the Article III requirement of an “actual or imminent” injury.  No misuse was alleged so no harm was found.

As well, the Third Circuit rejected the notion that time and money expenditures to monitor financial information conferred plaintiffs with standing.  Id. at 5 (“That a plaintiff has willingly incurred costs to protect against an alleged increased risk of identity theft is not enough to demonstrate a ‘concrete and particularized’ or ‘actual or imminent’ injury.”).  See also In re Michaels Stores PIN Pad Litigation, Slip Op. at 14 (N.D. Ill November 23, 2011) (reasoning that “individuals cannot create standing by voluntarily incurring costs in response to a defendant’s act.  Accordingly, Plaintiffs cannot rely on the increased risk of identity theft or the costs of credit monitoring services to satisfy the ICFA’s injury requirement.”).

The Third Circuit’s decision stands in sharp contrast to those decisions that stretched hard to find a cognizable harm sufficient to trigger constitutional standing as well as a recent ruling from the First Circuit reversing a dismissal because costs associated with credit card reissuance fees and ID theft insurance were deemed sufficient to constitute an injury.

There is now a growing body of law that has sprung from public data breaches and that can be used by either side of the class action table.  The key question will be how such decisions can be tooled by plaintiffs’ counsel to defer dismissal.   Given the potential use of cy pres settlements, defense counsel need to cut off the discovery beast before it grows out of control and gives rise to such settlement discussions.  All plaintiffs’ counsel needs to do is hope for a sympathetic judge before the wheel is spun.