Privacy and Civil Liberties Oversight Board will conduct a public hearing on July 9, 2013

Announced in a public notice, the Privacy and Civil Liberties Oversight Board (“the Board”) will conduct a public hearing on July 9, 2013.  According to this notice, “invited experts, academics and advocacy organizations” will discuss “surveillance programs operated pursuant to Section 215 of the USA PATRIOT Act and Section 702 of the Foreign Intelligence Surveillance Act.”  Members of the public are invited to participate.  The Washington, D.C. location of the event has yet to be determined.

By way of background, the Board consists of five members appointed by the President and confirmed by the Senate.  The Board was first created in 2004 within the executive branch but became an independent agency several years later.  Until the current NSA surveillance leak, the Board was quiet to say the least.  Apparently, there is no record of the Board members meeting more than once, and President Obama met with the Board for the first time only days ago.  Notwithstanding the Board’s relative inexperience working as a team, this hearing will be of great interest if for no other reason than that the recently unveiled NSA surveillance programs may be only the tip of the proverbial iceberg.

As reported in the UK press — the very same paper that broke and has continued to explore the Snowden leak, “Britain’s spy agency GCHQ has secretly gained access to the network of cables which carry the world’s phone calls and internet traffic and has started to process vast streams of sensitive personal information which it is sharing with its American partner, the National Security Agency (NSA).”  It would be nice if the American press were interested in taking a ride on this UK investigative bandwagon.  Maybe after July 9, 2013, it finally will.

Update:  July 15, 2013

Here’s the transcript of this very lively public workshop.

California’s Right to Know Law Put on Hold

As reported by the LA Times, “a powerful coalition of technology companies and business lobbies that included Facebook, Inc., Google, Inc., the California Chamber of Commerce, insurers, bankers and cable television companies as well as direct marketers and data brokers” were able to stop a California bill aimed at giving consumers greater insight as to the use of their personal data.

First introduced in February by Assemblywoman Bonnie Lowenthal (D-Long Beach), the proposed Right to Know Law (AB 1291) would have implemented major revisions to existing law and created new rights for consumers.  Specifically, the proposed law would require

any business that has a customer’s personal information, as defined, to provide at no charge, within 30 days of the customer’s specified request, a copy of that information to the customer as well as the names and contact information for all 3rd parties with which the business has shared the information during the previous 12 months, regardless of any business relationship with the customer.

This new level of transparency might have helped soothe consumer concerns.  According to a 2012 USC Dornsife/Los Angeles Times poll, “82 percent of Californians said they are ‘very concerned’ or ‘somewhat concerned’ about Internet and smartphone companies collecting their personal information.”  On the other hand, providing a full and accurate accounting of who had access to a consumer’s data – even to only the small percentage of consumers who would actually take the time to request it – would have been a major undertaking for a wide range of companies.  It is not surprising that the companies that fought so hard to pull the plug on this bill represent a very diverse coalition of businesses.

Even if this bill does not get revived in a new form sometime in the future, the prospect of what it might have brought to the table should serve as a wake-up call to those businesses deep into online behavioral advertising.  It may be time to better understand just who has access to what information – and it may not ultimately matter whether that information belongs to a current client or consumer, or whether it was anonymized.  As usual, staying in front of the regulatory curve remains a sound business practice.

Financial Correlation of Privacy Rights

In Letting Down Our Guard With Web Privacy, published on March 30, 2013, the author details ongoing research being conducted by Alessandro Acquisti, a behavioral economist at Carnegie Mellon University.  Mr. Acquisti’s research is cutting edge when it comes to online behavioral advertising (OBA)  and associated consumer behavior.  Indeed, he’s the academic who famously announced in 2011 that one might be able to discover portions of someone’s social security number simply by virtue of a posted photograph.   His research often distills to one major premise – consumers may not always act in their best interests when it comes to online privacy decisions.

It appears consumers and merchants alike may be missing out on fully cultivating a very valuable commodity.  According to the World Economic Forum, “personal data represents an emerging asset class, potentially every bit as valuable as other assets such as traded goods, gold or oil.”  Rethinking Personal Data:  Strengthening Trust, at 7, World Economic Forum Report (May 2012).  Before this asset class can ever be completely exploited and fully commercialized, however, its constituent value components must be correlated by all in the privacy food chain.

Over three decades ago, it was recognized that the three pillars of privacy – the very foundation of personal data – secrecy, anonymity, and solitude, were distinct yet interrelated.  See Gavison, Ruth, Privacy and the Limits of Law, 89 The Yale Law Journal 421, 428-429 (1980) (“A loss of privacy occurs as others obtain information about an individual, pay attention to him, or gain access to him. These three elements of secrecy, anonymity, and solitude are distinct and independent, but interrelated, and the complex concept of privacy is richer than any definition centered around only one of them.”).

Current OBA has made these three privacy pillars difficult for consumers to value, manage, and isolate online – it is generally not up to consumers whether they will be served an ad based on previous website visits or purchases; it simply happens.  Indeed, according to a survey of 1,000 persons conducted by Ipsos Public Affairs and released by Microsoft in January 2013, forty-five percent of respondents felt they had little or no control over the personal information companies gather about them while they are browsing the Web or using online services.  This view may not be unfounded given that data routinely gathered online, e.g., operating system, browser, IP address, persistent cookies, last used server, can be used to divulge the activity of individual devices, as the sketch below illustrates.
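
By way of a rough, hypothetical illustration only (real tracking systems rely on far richer signals and probabilistic matching), a handful of routinely collected attributes can be combined into a stable identifier that links separate visits back to the same device:

import hashlib

def device_fingerprint(operating_system, browser, ip_address, cookie_id=None):
    # If a persistent cookie is available, it already identifies the device;
    # otherwise, hash the remaining attributes into a pseudonymous fingerprint.
    if cookie_id:
        return cookie_id
    raw = "|".join([operating_system, browser, ip_address])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()[:16]

# Two visits presenting the same attributes map to the same identifier,
# which is what allows activity to be tied back to an individual device.
visit_1 = device_fingerprint("Windows 7", "Chrome 24", "203.0.113.7")
visit_2 = device_fingerprint("Windows 7", "Chrome 24", "203.0.113.7")
assert visit_1 == visit_2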

The privacy trade-offs being researched by Mr. Acquisti and others offer insight into the true value of these data constituents.  Consumers who try to “shut off” or render anonymous access to their device’s data or settings would not only likely fail in their attempt at anonymity, they would also lose access to most social media and other websites requiring browsers to accept cookies, as well as to product offers that are presumably of interest.  Indeed, this coordinated tracking of consumers is not even unique to the Internet.  See generally Bibas, Steve, A Contractual Approach to Data Privacy, 17 Harv. J. Law & Public Policy 591 (Spring 1994) (“Although the ready availability of information helps us to trust others and coordinate actions, it also lessens our privacy. George Orwell presciently expressed our fear of losing all privacy to an omniscient Big Brother.  Computers today track our telephone calls, credit-card spending, plane flights, educational and employment records, medical histories, and more.  Someone with free access to this information could piece together a coherent picture of our actions.”).  There are even companies that bridge the gap between offline and online activities by taking in-store point-of-sale purchases and converting such data to an anonymous online cookie ID that will eventually be used online by clients – a process sketched below.  Such use of in-store data is generally permissible under a retailer’s loyalty program.
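
A minimal, purely hypothetical sketch of how such offline-to-online matching might work follows (actual data-onboarding vendors use their own proprietary matching and syndication processes):

import hashlib

def offline_to_online_key(loyalty_email):
    # Normalize and hash the identifier from an in-store loyalty record so the
    # raw email address never has to be shared with online ad platforms.
    normalized = loyalty_email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# When the same email is later supplied on a website (e.g., at login), hashing
# it again produces the same key, allowing the in-store purchase history to be
# joined to an online cookie ID without exchanging the email itself.
pos_record = {"email": "Shopper@example.com", "purchase": "running shoes"}
cookie_sync_key = offline_to_online_key(pos_record["email"])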

Current law does not generally prevent someone from collecting public information to create consumer profiles – nor is there a right to opt out of having your public record information sold or shared.  And, when one wants to self-determine whether data will be disclosed or whether he or she will be “untraceable”, “anonymous” or “left alone”, there may not always exist the ability to easily prevent these rights from being exploited – there is certainly no way to obtain a direct financial gain in return for the relinquishment of such privacy rights.  Instead, there has generally been a “privacy for services” marketing/advertising arrangement accepted by consumers – which, in fact, has helped pay for and fuel the growth of the commercial Internet.

The current OBA ecosystem does not posit a “loss of privacy” so much as it offers a bartering system in which one party merely feels the value of what is being bartered away while the other party quantifies that value through cascading, monetized transactions.  In other words, it is not a financial transaction for the consumer.  Those who are able to find an entertaining online video or locate a product using a search engine do not really mind that an ad will later be served to them while visiting some other website, given they feel this loss of privacy is worth the value of the services being provided.

Ironically, the interactive advertising industry itself may believe it is collecting too much sensitive consumer data.  According to a study conducted by the Ponemon Institute, 67 percent of responding online advertisers believe “limiting sensitive data collection for OBA purposes is key to improving consumer privacy and control when browsing or shopping online.” Leading Practices in Behavioral Advertising & Consumer Privacy:  A Study of Internet Marketers & Advertisers, at 2, The Ponemon Institute (February 2012).

As recognized by privacy researchers, “[e]mpirical evidence on the behavioral effects of privacy is rather scarce.”  Regner, Tobias; Riener, Gerhard, Voluntary Payments, Privacy and Social Pressure On The Internet: A Natural Field Experiment, DICE Discussion Paper, No. 82 (December 2012) at 6.  Although “some consumers are willing to pay a premium to purchase from privacy protective websites”, there is no measure of what that premium should be or how widespread a factor it is for consumers as a whole.  Id. at 7.

Time and again, consumers have proven “often willing to provide personal information for small or no rewards.”  Losses, Gains, and Hyperbolic Discounting: An Experimental Approach to Information Security Attitudes and Behavior, presented by Alessandro Acquisti and Jens Grossklags at the 2nd Annual Workshop on Economics and Information Security, College Park, Maryland, May 2003, at 4.

This does not mean researchers have not tried to quantify a “privacy valuation” model.  In 2002, a Jupiter Research study found 82% of online shoppers willing to give personal data to new shopping sites in exchange for the chance to win $100.  Cf. Tsai, Janice; Egelman, Serge; Cranor, Lorrie; Acquisti, Alessandro, The Effect of Online Privacy Information on Purchasing Behavior: An Experimental Study, Information Systems Research (February 2010) at 22 (describing survey results concluding that “people will tend to purchase from merchants that offer more privacy protection and even pay a premium to purchase from such merchants.”); Beresford, Alastair; Kübler, Dorothea; Preibusch, Sören, Unwillingness To Pay For Privacy: A Field Experiment, 117 Economics Letters 25 (2012) (“Thus, participants predominantly chose the firm with the lower price and the more sensitive data requirement, indicating that they are willing to provide information about their monthly income and date of birth for a 1 Euro discount.”).

In his 1994 paper, A Contractual Approach to Data Privacy, Steve Bibas suggests that individual contracts may provide the best solution to the privacy compensation dilemma:  “In the hands of the contracting parties, however, flexibility allows people to control their lives and efficiently tailor the law to meet their needs. Flexibility is the market’s forte; the pricing mechanism is extremely sensitive to variations in valuation and quickly adjusts to them.”  Bibas, 17 Harv. J. Law & Public Policy 591 (Spring 1994).  Mr. Bibas, however, recognized the limitations of privacy transactions that rely only on static privacy trades.  In other words, a model that might be effective is one that bases the financial rewards to consumers on a continuous exchange of information between the consumer and merchant.

One problem most consumers face when using commonly marketed solutions meant to safeguard their privacy is that those solutions fail to also create an acceptable value proposition for merchants.  As well, those recently formed companies promising a private web experience will not be able – nor should they even try – to prevent firms from using OBA to reach consumers.  For the foreseeable future, OBA will continue to drive the Internet and “pay” for a much richer and more rewarding consumer experience than would otherwise exist.  It may one day be determined, however, that an even more effective means of satisfying all constituent needs of the OBA ecosystem (consumer, merchant, publisher, agency, etc.) will be to directly correlate privacy rights, consumer data, and a merchant’s revenue.

Sins of our Marketers: SMS, the Telephone Consumer Protection Act, and Strict Liability

In continuing a trend that took hold nearly four years ago in Satterfield v. Simon & Schuster, Inc., 569 F. 3d 946 (9th Cir. 2009), a putative class action was filed on January 25, 2013 alleging that unsolicited SMS texts give rise to statutory damages under the Telephone Consumer Protection Act (TCPA).  Although the suit was brought against a big-box retailer, the allegations are based on the conduct of “a mobile technology company whose identity is currently unknown.”

Under the TCPA, it is unlawful to make “any call (other than a call made for emergency purposes or made with the prior express consent of the called party) using any automatic telephone dialing system [ATDS] . . . [to any] cellular telephone service.” 47 U.S.C. Sec. 227(b)(1)(A).  Although the TCPA was enacted years before SMS was a reality, the FCC, as well as courts in California and Illinois, have interpreted the undefined term “any call” to include SMS texts so long as the SMS text was sent using an ATDS.

Courts have already ruled that “the TCPA is essentially a strict liability statute which imposes liability for erroneous unsolicited faxes.” Alea London Ltd. v. Am. Home Services, 638 F.3d 768, 776 (11th Cir. 2011) (citation omitted).  See also Universal Underwriters Ins. Co. v. Lou Fusz Auto. Network, Inc., 401 F.3d 876, 882 (8th Cir. 2005) (“The Act makes no exception for senders who mistakenly believe that recipients’ permission or invitation existed.”).  This means that class action counsel need only demonstrate that the SMS messages went out unsolicited via an ATDS and statutory damages will likely follow.

As now being pressed in the Hill putative class action filed on January 25, 2013, this strict liability for unsolicited SMS messages may also extend from the actual sender, i.e., marketer, to the retailer.  Several years ago, the FTC responded to a request for public comments issued by the FCC regarding the following two questions:   “First, does a call placed by an entity that markets a seller’s goods and services qualify as a call made on behalf of, and initiated by, the seller, even if the seller does not physically place the call?; and second, what should determine whether a telemarketing call is made “on behalf of” a seller, thus triggering liability under the TCPA?”

The FTC answered with a vigorous defense of its view that “the plain meaning of the law and its regulations supports holding sellers liable for calls made for the seller’s benefit.”  Given the FTC was merely responding to the FCC’s request for comments, and given the FCC has yet to release its final ruling, it remains to be seen whether the courts will ultimately side with the FTC’s view.  Indeed, several courts have explicitly rejected the FTC position regarding vicarious strict liability.  See, e.g., Mey v. Pinnacle Security, LLC, No. 5:11CV47, slip op. (N.D. W. Va. Sept. 12, 2012) (“In the Spring of 2011, the FCC released a public notice requesting comment on the issue of strict “on behalf of” liability under §227(b)(3), and this Court has not received information that a ruling has yet been issued on the matter.  26 FCC Rcd 5040. . . . Accordingly, this Court finds that the TCPA does not provide strict “on behalf of” liability under § 277(b)(3).”) (citing Thomas v. Taco Bell Corp., 2012 U.S. Dist. LEXIS 107097, No. SACV 09-01097-CJC (C.D. Cal. June 25, 2012)).

Whether or not the FTC is ultimately vindicated by the courts on this issue, it is clear that the FTC is not oblivious to the mechanics of mobile marketing.   For example, the FTC has found that a one-time text message confirming a consumer’s request that no further text messages be sent was not violative of the TCPA.  Notwithstanding any current temporary safe harbor that may exist, the takeaway remains that firms may be on the hook for what their marketing, promotional, and advertising firms are doing when it comes to SMS campaigns.

Given the FTC’s stated desire to visit on innocent retailers the sins of their marketers, and the difficulty of insuring against this risk, it is obviously more important than ever for those who rely on SMS campaigns to always verify appropriate consent and obtain suitable contractual indemnifications.

First Amendment Does Not Save NJ Teacher from Postings Firing

In a January 11, 2013 ruling, the New Jersey Appellate Division upheld the administrative dismissal of a first grade teacher.  She had argued that the First Amendment precluded her firing — which was based on two Facebook postings.  In the Matter of the Tenure Hearing of Jennifer O’Brien (N.J. App. Div. January 11, 2013).  One of her statements was, “I’m not a teacher — I’m a warden for future criminals!”

O’Brien said she posted the statement that her students were “future criminals” because of “their behaviors, not because of their race or ethnicity.”  She also stated that “six or seven of her students had behavioral problems, which had an adverse impact on the classroom environment.”  Id. at 4 – 5.

In finding that she failed to establish her Facebook postings were protected speech, the Appellate Division found that “even if O’Brien’s comments were on a matter of public concern, her right to express those comments was outweighed by the district’s interest in the efficient operation of its schools.”  Id. at 11.

This ruling sits in contrast to the NLRB’s frequent warnings regarding the sanctity of worker postings — especially when the postings pertain to workplace conditions.  The cringe-worthy nature of these postings, the fact they were directed at first graders, and the deference accorded administrative proceedings certainly all made it easy for the Appellate Division to rule as it did.  Accordingly, employers should not take great comfort in this ruling when evaluating whether to discipline employees for inflammatory postings.

New Jersey Fast Tracks Employer Social Media Bill

New Jersey is poised to enact the harshest law yet aimed at preventing employers from delving into the social media postings of employees.  In what is considered lightning speed for New Jersey legislative action, the New Jersey Assembly fast-tracked a bill in May that was approved in June by the Assembly 76-1 and by the Senate in October by a 38-0 margin.  The bill – A2878 – is now headed to Governor Christie for signature by the end of the year.

If it is signed by the Governor, it will be the toughest of the similar laws on the books in Maryland, California and Illinois.  All of these laws are aimed primarily at prohibiting employers from asking for social media passwords.  If enacted, New Jersey’s law would also preclude employers from asking whether an employee or prospective employee even has a social media account.  And, any agreement to waive this protection would be deemed void pursuant to the law.  There are also civil penalties for violations, beginning at $1,000 for an initial violation and increasing to $2,500 for each additional violation.

The New Jersey law would obviously generate issues for an employer looking to comply while still ensuring a secure work environment for its employees.  To that end, the new law would not bar company policies curtailing the use of employer-issued electronic communications devices during work hours.  Not surprisingly, it is the blurring of private vs. public social media usage that promises to be a major driver of any future civil litigation.  What may end up being the most important factor in how much litigation this new law generates, however, is that reasonable attorney’s fees may also be recoverable under the statute.  Without the financial incentive of a class action or statutory fees, there would be few attorneys willing to bring actions based on $1,000 violations.

UPDATE – February 21, 2013

The bill has still not been signed into law — so much for being fast tracked!  Rather than agree to several Senate changes to the bill and then pass along to the Governor for signature, the Assembly has chosen to sit on the bill.  A good discussion regarding the latest status of this proposed law can be found in Law360.

UPDATE – March 25, 2013

On March 21, 2013, the bill passed the Assembly by a whopping 75 – 2 vote and is now on the Governor’s desk.

UPDATE – May 7, 2013

On May 6, 2013, Governor Christie conditionally vetoed the bill.  In his statement, he suggested that the bill would have been overbroad in reach and gave the following example of an unintended consequence of such breadth:

[U]nder this bill, an employer interviewing a candidate for a marketing job would be prohibited from asking about the candidate’s use of social networking so as to gauge the candidate’s technological skills and media savvy. Such a relevant and innocuous inquiry would, under this bill, subject an employer to protracted litigation.

The Governor also vetoed that part of the bill that would have allowed for a private right of action.  He felt any dispute was better resolved by the state labor commissioner.  According to the bill’s sponsor, the Assembly will likely adopt Governor Christie’s suggestions in order to have the bill signed into law.  In effect, the most controversial aspect of the bill was just removed.  While some New Jersey businesses may be breathing a sigh of relief, the plaintiff’s bar is certainly no longer excited about this bill.

October is National Cyber Security Awareness Month

National Cyber Security Awareness Month is being sponsored by the Department of Homeland Security as well as the National Cyber Security Alliance and the Multi-State Information Sharing and Analysis Center.  In a Presidential Proclamation, President Obama called “upon the people of the United States to recognize the importance of cybersecurity and to observe this month with activities, events, and trainings that will enhance our national security and resilience.”  Many of the same corporations and universities that promote Data Privacy Day in January also promote NCSAM in October.

According to the FBI, since the first NCSAM was celebrated nine years ago, the network security threat has continued to grow even more complex and sophisticated — “Just 12 days ago, in fact, FBI Director Robert Mueller said that ‘cyber security may well become our highest priority in the years to come.'”

There is no denying the obvious good in promoting security awareness and diligence.  It is hoped, however, that a month devoted to “cyber security awareness” does not inadvertently dilute the more important message that security diligence is something that should be done every day of the year.   On the other hand, to the extent NCSAM’s “Stop.Think.Connect.” message touches even one small business owner in Des Moines and makes her less likely to fall victim to a phishing exploit in the future, NCSAM will be a success.

The Privacy Tug of War

According to the World Economic Forum, “personal data represents an emerging asset class, potentially every bit as valuable as other assets such as traded goods, gold or oil.”  Given the inherent value of this new asset class, it’s no surprise there has been an ongoing tug of war regarding how consumers should be compensated for access to their personal data.

In a March 2003 Wired article titled, “Who’s Winning Privacy Tug of War?“, the author suggests that “[c]onsumers appear to have become weary of the advertising bombardment, no matter how targeted to their tastes those ads may be.”  And, the “tit-for-tat tactic on the Web” that requires users to provide certain personal information in exchange for product or other information may be much less than a perfect marketing model given these marketing preference databases “are polluted with lies.”

Fast forward a decade or so and companies are still trying to figure out the Privacy Tug of War rules of engagement.  On September 19, 2012, UK think tank Demos released a report it considered “the most in-depth research to date on the public’s attitudes toward the sharing of information.”  Not surprisingly, Demos found that in order to maximize the potential value of customer data, there needs to be “a certain level of trust established and a fair value exchange.”  The think tank found that only 19 percent of those surveyed understand the value of their data and the benefits of sharing it.

The surveys, workshops and other research tools referenced in the Demos report all point towards a “crisis of confidence” which may “lead to people sharing less information and data, which would have detrimental results for individuals, companies and the economy.”   Demos offers up a possible solution to this potential crisis:

The solution is to ensure individuals have more control over what, when and how they share information. Privacy is not easily defined. It is a negotiated concept that changes with technology and culture. It needs continually updating as circumstances and values change, which in turn requires democratic deliberation and a dialogue between the parties involved.

It is hard to have any meaningful deliberations when no one is charting a clear path to victory in the Privacy Tug of War — nor is there any consensus regarding whether it is preferable to even have such a path.   Some on the privacy circuit have suggested we must create better privacy metrics and offer tools to use those metrics to measure whether a company’s privacy protections are “satisfactory”.   Consumers right now can rely on sites such as Clickwrapped to score the online privacy policies of major online brands.   Certification services such as TRUSTe provide insight regarding the online privacy standards of thousands of websites.   If they don’t like what they see, consumers can always “opt out” and use services such as that of start-up Safe Shepherd to remove “your family’s personal info from websites that sell it.”

Unfortunately, no commercially available privacy safeguard, testing service or certification can ever move fast enough to address technological advances that erode consumer privacy, given such advances will always launch unabated — and undetected — for a period of time.  Not unlike Moore’s Law, which holds that transistor counts roughly double every two years, it appears that consumer privacy diminishes in some direct proportion to new technological advances.  Consumer privacy expectations should obviously be guided accordingly.  Unlike with Moore’s Law, however, there is no uniform technology, product, or privacy metric that can be benchmarked as it is in the computer industry.

This does not mean we are powerless to follow technology trends and quantify an associated privacy impact.  For example, the Philip Dick/Steven Spielberg Minority Report vision of the future, where public iris scanning offers up customized advertisements to people walking around a mall, has already taken root in at least one issued iris-scanning patent jointly owned by the federal government and a start-up looking to serve ads using facial recognition techniques.  In direct reaction to EU criticism of Facebook’s facial recognition initiative, Facebook temporarily suspended its “tag-suggest” feature.  This automatic facial recognition system recognized and suggested names for people included in photographs uploaded to Facebook – without first obtaining the consent of those so recognized and tagged.

Closely monitoring technological advances that may impact privacy rights — whether the body diagnostics of Mc10 and ingested medical sensors from Proteus, the latest in Big Data analytics, or a new EHR system that seamlessly ties such innovations together — becomes the necessary first step towards understanding how to partake in the Privacy Tug of War.

Unlike the PC industry, which is tethered to Moore’s Law, our government, with its seemingly unbounded funding, is an active participant in developing privacy-curtailing technological advances.  For example, the FBI is currently undertaking a billion-dollar upgrade to create its Next Generation Identification program, which will deploy the latest in facial recognition technologies.  As recognized by CMU Professor Alessandro Acquisti, this “combination of face recognition, social networks data and data mining can significantly undermine our current notions and expectations of privacy and anonymity.”

Not surprisingly, there has been some push back on such government initiatives.  For example, on September 25, 2012, the ACLU filed suit against several government agencies under the Freedom of Information Act seeking records on their use and funding of automatic license plate readers (ALPRs).  According to the Complaint, “ALPRs are cameras mounted on stationary objects (e.g., telephone poles and the underside of bridges) or on patrol cars [and] photograph the license plate of each vehicle that passes, capturing information on up to thousands of cars per minute.”  The ACLU suggests that ALPRs “pose a serious threat to innocent Americans’ privacy.”

The imminent unleashing of unmanned aircraft systems – commonly known as “drones” – sets in motion another technological advance that should raise serious concerns for just about anyone.  Signed by President Obama in February 2012, The FAA Modernization and Reform Act of 2012, among other things, requires that the Federal Aviation Administration accelerate the use of drone flights:

Not later than 270 days after the date of enactment of this Act, the Secretary of Transportation, in consultation with representatives of the aviation industry, Federal agencies that employ unmanned aircraft systems technology in the national airspace system, and the unmanned aircraft systems industry, shall develop a comprehensive plan to safely accelerate the integration of civil unmanned aircraft systems into the national airspace system.

As recognized by the Government Accountability Office in a September 14, 2012 Report, even though “[m]any [privacy] stakeholders believe that there should be federal regulations” to protect the privacy of individuals from drone usage, “it is not clear what entity should be responsible for addressing privacy concerns across the federal government.”

This is not an insignificant failing given that, according to this same report, commercial and government drone expenditures could top $89.1 billion over the next decade ($28.5 billion for R&D and $60.6 billion for procurement).  Interestingly, the required comprehensive plan to accelerate integration of civil drones into our national airspace system will be due on November 10, 2012 – right after the elections.  According to an Associated Press-National Constitution Center poll, 36 percent of those polled say they “strongly oppose” or “somewhat oppose” police use of drones.  This somewhat muted response is likely driven by the fact that most of those polled simply do not understand the capabilities of these drones and just how pervasive they will become in the coming years.

The technology advance that may have the greatest impact on privacy rights does not take to the skies but is actually found in most pockets and purses.   The same survey referenced above found that 43 percent of those polled (the highest percentage) primarily use a mobile device alone rather than a landline or a combination of mobile device and landline — with 34 percent of those polled not even having a landline in their home.   Not surprisingly, companies have been aggressively tapping into the Big Data treasure trove available from mobile device usage.   Some politicians have taken notice and are already drawing lines in the digital sand.

Under the Mobile Device Privacy Act introduced by Congressman Edward J. Markey, anyone who sells a mobile service, device, or app must inform customers if their product contains monitoring software — with statutory penalties ranging from $1,000 per unintentional violation to $3,000 per intentional violation.   This new bill addresses only a single transgression of the personal-data-orgy now being enjoyed by so many different companies up and down the mobile device communication and tech food chain.   As evidenced by the current patent landscape — including an issued Google patent that involves serving ads based on a mobile device’s environmental sounds — and the now well-known GPS capabilities of mobile devices, the privacy Battle of Midway will likely be fought around mobile devices. Companies with a stake in the Privacy Tug of War — as well as those professionals who advise such companies — will only be adequately prepared if they recognize that this battle may ultimately have no clear winners or losers — only willing participants.

World Intellectual Property Day

Happy World Intellectual Property Day!

To increase IP awareness around the world, member states of the World Intellectual Property Organization (WIPO) chose April 26, the day the WIPO Convention came into force in 1970, as World IP Day.  According to WIPO, World IP Day celebrates innovation and creativity and how intellectual property fosters and encourages them.  To celebrate this day, what follows is a discussion of four significant US court rulings decided in April 2012, each involving one of the major IP domains:  patent, trademark, copyright and trade secret.

Communications Involving Patent Settlements are Discoverable

On April 9, 2012, the United States Court of Appeals for the Federal Circuit ruled that communications involving reasonable royalty rates and damage calculations were discoverable.   Specifically, the Federal Circuit ruled that such communications that may underlie settlement agreements were not worthy of creating a new federal privilege.   In re  MSTG, Inc., No. 996 (Fed. Cir. April 9, 2012).   There was previously an open question as to whether settlement discussions were privileged and not subject to disclosure.  The Sixth Circuit in Goodyear Tire & Rubber Co. v. Chiles Power Supply, Inc., 332 F.3d 976, 979-83 (6th Cir. 2003) adopted a settlement privilege while such a privilege was rejected by the Seventh Circuit in In re General Motors Corp. Engine Interchange Litigation, 594 F.2d 1106, 1124 n.20 (7th Cir. 1979).

In rejecting MSTG’s request to create a settlement privilege that would protect the reasonable royalty rate discussions had with other defendants, the Federal Circuit distinguished Fed. R. Evid. 408.  According to the court, Fed. R. Evid. 408 only addresses the inadmissibility of settlement discussions (for purposes of showing the validity or amount of a claim) and does not expressly prohibit the discovery of such material.  Id. at 11 – 12.  Finding there was no good reason to create a new privilege under the circumstances, the Federal Circuit found communications underlying settlement discussions to be fair game, at least so long as the requests otherwise comport with the rules of discovery.

Given the In re MSTG, Inc. decision, future patent plaintiffs will now have to contend with the possibility of disclosures being made on sensitive settlement discussions.  This decision is noteworthy given that settlements are sometimes entered into for strategic reasons that may not be directly tied to the relative worth of the settled patents – one settlement against a competitor may yield very different results than a settlement against another competitor.  Moreover, it may make it more difficult to settle patent disputes if a patent holder feels it needs to establish a certain record it can use in future disputes.  This is further complicated by the fact that patent litigation may eventually reach new heights with the September 2011 passage of the Leahy-Smith America Invents Act and the current status of patent portfolios as a competitive currency for very large corporations.  Microsoft’s $1.1 billion purchase of 925 AOL patents and Facebook’s subsequent purchase of 650 of these Microsoft/AOL patents for $550 million are illustrative of this competitive currency approach to patents.  No matter how the patent litigation landscape changes down the road, plaintiffs now need to take a structured and strategic approach to settlement discussions given that what is said in one case can very well impact the results of future litigation.

Keyword Trademark Cases Remain Viable

In this latest of a long line of cases against Google for keyword trademark infringement, a surprise appellate decision was handed down on April 9, 2012.  Rosetta Stone Ltd. v. Google, Inc., No. 10-2007 (4th Cir. April 9, 2012), reversing Rosetta Stone Ltd. v. Google Inc., 730 F. Supp. 2d 531 (E.D. Va. 2010).  In reversing portions of the lower court’s summary judgment grant in favor of Google, the Fourth Circuit reinstated plaintiff’s direct infringement, contributory infringement and dilution trademark claims.  In reviving the direct infringement claim, which turned on a likelihood of confusion analysis, the court ruled that even well-educated, seasoned Internet consumers are confused by the nature of Google’s sponsored links and are sometimes even unaware that sponsored links are, in actuality, advertisements.  The court concluded: “At the summary judgment stage, we cannot say on this record that the consumer sophistication factor favors Google as a matter of law.”  Id. at 24 – 25.  In fact, the Court noted, such uncertainty may constitute “quintessential actual confusion evidence.”  Id. at 22.  The Fourth Circuit relied on various internal Google studies analyzing consumer confusion in connection with sponsored links, including studies that concluded “the likelihood of confusion remains high when trademark terms are used in the title or body of a sponsored link appearing on a search results page and 94% of consumers were confused at least once.”  Id. at 21.

This decision stands in sharp contrast to other decisions that have ruled on this particular likelihood of confusion issue. Previously, courts have found that in an age of sophisticated Internet users, it makes little sense to continue with the notion that users will be confused between sponsored results with trademark-protected keywords and standard search results or even by domain names containing trademarked words.  See Network Automation, Inc., v. Advanced System Concepts, Inc., 638 F.3d 1137, 1152 (9th Cir. 2011).

The contributory infringement claim was revived given Rosetta Stone provided Google with approximately 200 instances of counterfeit products found on sponsored links.  This was deemed sufficient to raise a question of fact regarding Google’s knowledge of identified individuals using sponsored links to infringe Rosetta Stone’s marks.  Rosetta Stone Ltd. v. Google, Inc., Slip Op. at 30.  The Fourth Circuit also reversed summary judgment on the dilution claim given the lower court applied the wrong standard when evaluating available defenses to a dilution claim under the Lanham Act.  Id. at 39 – 41.  This and other technical errors made by the lower court may end up being only a short-term victory for Rosetta Stone given that, on remand, the court will ultimately determine whether Rosetta Stone’s brand was famous in 2004 – if it was not, the dilution claim is lost.  Id. at 47.  This may be a difficult burden for Rosetta Stone since the court recognized the brand actually became more famous in the years after 2004.  Given the dilution reversal was based largely on technical deficiencies in how the lower court interpreted the fair use defense, the Fourth Circuit missed an opportunity to opine on the more interesting question of whether Rosetta Stone could even bring a dilution claim against Google, given there is a very real question as to whether Google sufficiently used the Rosetta Stone marks in commerce.  Id. at 39-40.

The ultimate significance of this case may eventually pivot outside of the search engine context.  For example, despite the solid body of law that continues to sanction keyword marketing, contextual advertisers may benefit from reevaluating their use of keyword triggers associated with famous marks.  And, likelihood of confusion inquiries may reach a new realm with augmented reality devices such as Google’s Project Glass, given advertisers may be able to physically guide users toward products and services based on verbal commands and trademark usage, all without a single trademark being displayed.

DMCA Safe Harbor Provisions Raise Copyright Infringement Questions of Fact

On April 5, 2012, the Second Circuit reinstated Viacom’s long-running copyright infringement action against YouTube.  Viacom Intl., Inc. v. YouTube, Inc., Nos. 10-3270-cv, 10-3342-cv (2nd Cir. April 5, 2012).  In its ruling, the court offered an analysis regarding the complete safe harbor framework available to online service providers under the Digital Millennium Copyright Act (DMCA), 17 U.S.C. § 512.  It also reaffirmed that the DMCA safe harbor provisions can protect a defendant from all affirmative claims for copyright infringement, including claims for direct infringement, vicarious liability, and contributory liability.

At its most basic, the Second Circuit found that existing questions of fact regarding YouTube’s level of knowledge precluded summary judgment.  Viacom’s five-year suit for direct and secondary copyright infringement previously came to a halt when the trial court found that YouTube was protected by the DMCA’s safe harbor provision given it had insufficient notice of the particular infringements in suit.  Viacom Intl., Inc. v. YouTube, Inc., 718 F. Supp. 2d 514, 529 (S.D.N.Y. 2010).  Under § 512(c)(1)(A), safe harbor protection is available only if the service provider:

(i) does not have actual knowledge that the material or an activity using the material on the system or network is infringing;

(ii) in the absence of such actual knowledge, is not aware of facts or circumstances from which infringing activity is apparent; or

(iii) upon obtaining such knowledge or awareness, acts expeditiously to remove, or disable access to, the material

Viacom Intl., Inc. v. YouTube, Inc., Slip Op. at 15 (citing 17 U.S.C. § 512(c)(1)(A)).  The lower court held that the actual knowledge and the “facts and circumstances” requirements both refer to knowledge of specific and identifiable infringements and not mere general awareness of infringing activity.  Viacom Intl., Inc. v. YouTube, Inc., 718 F. Supp. 2d at 523.  Although it affirmed this ruling, the Second Circuit further distinguished as follows:

The difference between actual and red flag knowledge is thus not between specific and generalized knowledge, but instead between a subjective and an objective standard. In other words, the actual knowledge provision turns on whether the provider actually or subjectively knew of specific infringement, while the red flag provision turns on whether the provider was subjectively aware of facts that would have made the specific infringement objectively obvious to a reasonable person.

Viacom Intl., Inc. v. YouTube, Inc., Slip Op. at 17.  Parting company with the lower court, the Second Circuit found that the record raised triable questions of fact under both of these tests.  Id. at 20 – 22.  The remand was to determine specific instances of knowledge or awareness and whether such instances mirror the actual clips-in-suit.  Id. at 22.

The Second Circuit also offered the doctrine of “willful blindness” (a concept not referenced in the DMCA) as yet another means of demonstrating actual knowledge or awareness of specific instances of infringement.  To that end, it remanded for further fact-finding and resolution regarding whether YouTube made a “deliberate effort to avoid guilty knowledge.”  Id. at 24.

In addition to the above DMCA knowledge provisions, the DMCA provides that an eligible service provider must “not receive a financial benefit directly attributable to the infringing activity, in a case in which the service provider has the right and ability to control such activity.”  Id. at 24 (citing 17 U.S.C. § 512(c)(1)(B)).  After reviewing this “right and ability to control” test, the Second Circuit rejected the lower court’s view that a service provider must actually know of a particular case of infringement before it can control it.  Id. at 25.  Rather, the Second Circuit agreed with other courts that have determined a finding of liability requires something more than the mere ability to remove or block access to materials posted on a service provider’s website.  Id. at 27 (citations omitted).  And, because this “something more” involves “exerting substantial influence on the activities of users”, a remand was necessary to flesh out this standard and determine whether YouTube satisfied it.  Id. at 28 – 29.

Although the Second Circuit has provided solid authority on a wide range of DMCA safe harbor interpretive issues, the decision may ultimately create problems for content owners and online service providers alike to the extent it leaves the summary judgment door unpredictably ajar for future litigants.

Theft of Trade Secrets Not Necessarily a Federal Offense

On April 11, 2012, the Second Circuit overturned the eight-year sentence imposed on a computer programmer for the theft of trade secrets under the Economic Espionage Act of 1996, 18 U.S.C. § 1832(a)(2) & (4) (EEA), and transportation of stolen property in interstate commerce under the National Stolen Property Act, 18 U.S.C. § 2314 (NSPA).  United States v. Aleynikov, No. 11-1126 (2d Cir. April 11, 2012).  The NSPA makes it a crime to “transport, transmit, or transfer in interstate or foreign commerce any goods, wares, merchandise, securities or money, of the value of $5,000 or more, knowing the same to have been stolen, converted or taken by fraud.”  18 U.S.C. § 2314.  The statute does not define the terms “goods, wares, or merchandise.”

The EEA makes it a crime for someone to “convert a trade secret, that is related to or included in a product that is produced for or placed in interstate or foreign commerce, to the economic benefit of anyone other than the owner thereof, and intending or knowing that the offense will injure any owner of that trade secret, knowingly . . . steals, or without authorization appropriates, takes, carries away, or conceals, or by fraud, artifice, or deception obtains such information…” 18 U.S.C. § 1832(a).

Although the defendant computer programmer was convicted of stealing computer source code from his former employer, the Second Circuit strictly construed both of these federal laws when tossing the convictions.  Id. at 10.  First, the court determined the defendant was wrongly charged under the NSPA because the intangible source code did not qualify as a physical object within the meaning of that statute.  Id. at 14 – 15.  Declining “to stretch or update statutory words of plain and ordinary meaning in order to better accommodate the digital age”, the Second Circuit held that because the defendant did not “assume physical control” over anything when he took the source code, and because “he did not thereby deprive [his employer] of its use, [defendant] did not violate the [NSPA].”  Id. at 18.  And, given that the stolen code was neither “produced for nor placed in interstate or foreign commerce given the employer had no intention of selling its HFT system or licensing it to anyone”, the EEA was not violated.  Id. at 27.

The failure of the EEA to address defendant’s conduct here is problematic given the EEA was “passed after the Supreme Court and the Tenth Circuit said the NSPA did not cover intellectual property.”   Id. at 2 (Calabresi, J., concurring) (citations omitted).  The statute was apparently expressly meant to pick up the theft of intellectual property such as proprietary source code.   The concurrence by Judge Calabresi suggests that Congress should jump in to rectify this apparently significant hole in the EEA:  “While the legislative history can be read to create some ambiguity as to how broad a reach the EEA was designed to have, it is hard for me to conclude that Congress, in this law, actually meant to exempt the kind of behavior in which Aleynikov engaged. . . . I wish to express the hope that Congress will return to the issue and state, in appropriate language, what I believe they meant to make criminal in the EEA.”  Id. at 2 (Calabresi, J., concurring).

If nothing else, this decision reaffirms the need for companies to be proactive in the defense of their trade secrets.  Until Congress fixes the EEA, it is just not enough to assume that criminal conduct such as the theft of source code will rise to the level of a federal offense.

Basketball, Julius Caesar, and Privacy

March Madness and murdered dictators aside, next month may be memorable for significant new privacy policies and obligations coming online — especially those for vendors holding sensitive information of Massachusetts residents.  Given the expiration of a two-year grace period, Massachusetts will require, effective March 1, 2012, that all service provider contracts include provisions requiring that the service provider implement and maintain security measures for personal information consistent with the Standards for the Protection of Personal Information of Residents of the Commonwealth, 201 CMR 17.00.

A service provider must comply with this regulation if it “receives, stores, maintains, processes, or otherwise has access to personal information” of Massachusetts residents, e.g., social security numbers, driver’s license numbers, and financial account information, in connection with the provision of goods or services or in connection with employment.  For compliance purposes, it does not matter whether the service provider actually maintains a place of business in Massachusetts.  In addition, companies subject to the regulation must oversee service providers by taking reasonable steps to select and retain providers that are compliant.  Penalties for non-compliance can be enforced through the Massachusetts Consumer Protection Statute and include penalties under that law as well as a possible civil penalty of up to $5,000 for each violation, plus reasonable costs of investigation and attorney’s fees.

On the consumer side, starting March 1, 2012, Google’s new privacy policy will bring together its various privacy documents into a single umbrella privacy policy.  Once implemented, the policy will treat logged-in users as a single user across all Google products.  Concern over the way Google’s new policy would grant the data aggregator control over user data and allegedly “hold hostage” consumer personal information has caused attorneys general from around the country to reach out to Google.  Not one to miss out on the fun, one EU regulator has chimed in claiming that it is “deeply concerned” about the new Google policy.  And, EPIC even filed suit to enforce an FTC settlement in its effort to stop the March privacy change — a lawsuit that was dismissed on February 24, 2012.  Given the policy will likely be implemented in a few days, consumers wanting to avoid some of the potential privacy sting of these changes can heed some advice from the EFF.

Finally, on March 7, 2012, HHS is scheduled to publish in the Federal Register its proposed rule regarding what constitutes “meaningful use” of EHR sufficient to trigger incentive payments under the HITECH Act.  A draft of the proposed rule is currently available.  It remains to be seen whether this push for EHR usage will ultimately add to or subtract from healthcare data breaches.

As it stands, a HIPAA covered entity must provide notice to the HHS Secretary “without unreasonable delay and in no case later than 60 days from discovery of the breach” when the breach impacts 500 or more individuals (a timing rule sketched below).  To assist in reporting, there is even an online means of disclosing breaches.  The current list of all such disclosed breaches is publicly available; and not surprisingly, incidents have been steadily increasing as per an analysis done by OCR of breaches occurring in 2009 and 2010.
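
For illustration only, the 60-day outer limit and 500-individual threshold can be expressed as a simple check (a hypothetical sketch; the actual rule also requires notice “without unreasonable delay,” which may fall well before day 60):

from datetime import date, timedelta

def hhs_notice_deadline(discovery_date, individuals_affected):
    # Breaches affecting 500 or more individuals must be reported to the HHS
    # Secretary no later than 60 days after discovery; smaller breaches are
    # instead recorded and reported on an annual basis.
    if individuals_affected >= 500:
        return discovery_date + timedelta(days=60)
    return None  # handled through the annual log

print(hhs_notice_deadline(date(2012, 3, 7), 1200))  # prints 2012-05-06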

The annual OCR report indicates that larger breaches occurred “as a result of theft, error, or a failure to take adequate care of protected health information.”  OCR Report at 9.  It is not difficult to imagine that efforts to obtain governmental incentive payments by achieving meaningful EHR usage — as the term will be further refined in March — may actually cause an uptick in breaches.  Despite the requirement that every EHR Module be certified to “privacy and security” certification criteria (which will ultimately be determined by the HHS Secretary), these incentive payments will continue to be tied to usage and not necessarily to verifiable compliance with a security standard.  Given that HITECH’s financial incentives remain based on usage and not protection, “sticks” such as reductions in Medicare payments and stiff HITECH fines will continue to be the only real governmental incentive to maintain adequate protection.  It would be nice if HHS, instead, developed a financial incentive or reward program for those firms who go the extra distance (as per NIST standards) when providing security.  Maybe such a program will make the agenda after OCR releases a few more breach reports.
