Category Archives: Social Media

Is New Jersey Seeking to Become the New California When it comes to Privacy?

By way of a recent opinion of its Supreme Court, New Jersey became the first state to establish a constitutional right to cell-phone location information – thereby precluding law enforcement from retrieving such information without a warrant or exigent circumstances.  See State v. Earls, No. A-53-11, slip op. (N.J. July 18, 2013) (unanimous opinion).

Recognizing that its decision “creates a new rule of law that would disrupt the administration of justice if applied retroactively”, the Court limited its ruling to the subject defendant and prospective cases only.  Interestingly, the Court did not make even a passing reference to a 2011 New Jersey appellate decision holding that no privacy tort existed for the surreptitious use of a location-tracking device on a car.  Earls is the first appellate case to build on the United States Supreme Court’s GPS decision in United States v. Jones or to address in great detail the proliferation and use of location-based information.

The Court in Earls recognized that “[w]ith increasing accuracy, cell phones can now trace our daily movements and disclose not only where individuals are located at a point in time but also which shops, doctors, religious services, and political events they go to, and with whom they choose to associate.”  Not surprisingly, the Court also realized “that cell-phone location information can be a powerful tool to fight crime.”

Relying on the New Jersey Constitution, however, the Court reasoned that individuals expect that information provided to a third party in order to procure services will only be used by the recipient – in this case a telephone company – to provide the services in question.  In addition to this affirmative expectation of privacy, there is a concomitant expectation that this information will not be passed along to the government.

New Jersey’s landmark decision comes on the heels of one state legislator’s proposal of an amendment to the New Jersey Constitution stating that “people have a right to privacy from government intrusion, unless the government follows the due process of law.”  In addition to a proposed Constitutional amendment, Assemblywoman Handlin is also the sponsor of six bills and another resolution that address a person’s right to privacy as well as the freedom of the press:

A-4305:  prohibits the improper release of photographs or videos captured by security cameras or other recording devices operated by public entities.

A-4306:  prohibits a governmental entity from obtaining a biometric identifier of an individual without that individual’s consent. The bill does not prohibit any law enforcement agency from obtaining biometric identifiers of someone who has been placed under arrest. A “biometric identifier” is a retina or iris scan, fingerprint, voiceprint or DNA.

A-4307:  makes it a crime of the third degree to knowingly obtain or disclose personally identifiable health information in violation of the federal health privacy rule.

A-4308:  increases the penalties for the unlawful disclosure or use of taxpayer information by State tax officials, in order to provide enhanced deterrence against violations of taxpayer confidentiality.

A-4309:  requires a Superior Court judge to approve the installation of any video camera by a public entity.

A-4310:  requires an administrative agency to include a privacy impact statement when adopting, amending, or repealing a rule.

ACR-201:  requests the President and Congress enact a federal shield law for journalists. A shield law would grant journalists notice and an opportunity to be heard in federal court in order to challenge a federal subpoena seeking phone records or other information identifying a source. Federal bills S.987 and H.R.1962, both titled the “Free Flow of Information Act of 2013,” were introduced in May 2013. The bills would establish the federal shield law.

Sandwiched between these privacy-protective efforts is a bill aimed at safeguarding the social media accounts of employees.  Before it was conditionally vetoed by Governor Christie in May 2013, New Jersey was on the verge of passing the most onerous law in the country regarding employee social media protections – one allowing for a private right of action and seeking to bar employers from even asking if an employee has a social media account.  As it currently stands, the bill – if it is ever signed by the Governor – will still be among the stronger such laws.

Despite its recent efforts, New Jersey still has a great deal of heavy lifting to do before it can catch up with the land of SB 1386 – California already has a constitutionally guaranteed right to privacy, over seventy privacy-related laws on the books, and multiple regulatory agencies set up to enforce these laws.  It is no surprise that Attorney General Kamala Harris’s recent report opens with the words:  “California has the strongest consumer privacy laws in the country.”

California’s Right to Know Law Put on Hold

As reported by the LA Times, “a powerful coalition of technology companies and business lobbies that included Facebook, Inc., Google, Inc., the California Chamber of Commerce, insurers, bankers and cable television companies as well as direct marketers and data brokers” were able to stop a California bill aimed at giving consumers greater insight as to the use of their personal data.

First introduced in February by Assemblywoman Bonnie Lowenthal (D-Long Beach), the proposed Right to Know Law (AB 1291) would have implemented major revisions to existing law and created new rights for consumers.  Specifically, the proposed law would require

any business that has a customer’s personal information, as defined, to provide at no charge, within 30 days of the customer’s specified request, a copy of that information to the customer as well as the names and contact information for all 3rd parties with which the business has shared the information during the previous 12 months, regardless of any business relationship with the customer.

This new level of transparency might have helped soothe consumer concerns.  According to a 2012 USC Dornsife/Los Angeles Times poll, 82 percent of Californians said they are “very concerned” or “somewhat concerned” about Internet and smartphone companies collecting their personal information.  On the other hand, providing a full and accurate accounting of who had access to a consumer’s data – even to only the small percentage of consumers who would actually take the time to request it – would have been a major undertaking for a wide range of companies.  It is not surprising that the companies that fought so hard to pull the plug on this bill represent such a diverse coalition of businesses.

Even if this bill is not revived in a new form sometime in the future, the prospect of what it might have brought to the table should serve as a wake-up call to those businesses deep into online behavioral advertising.  It may be time to better understand just who has access to what information – and it may eventually not matter whether that information belongs to a current client or consumer or whether it was anonymized.  As usual, staying in front of the regulatory curve remains a sound business practice.

Financial Correlation of Privacy Rights

In Letting Down Our Guard With Web Privacy, published on March 30, 2013, the author details ongoing research being conducted by Alessandro Acquisti, a behavioral economist at Carnegie Mellon University.  Mr. Acquisti’s research is cutting-edge when it comes to online behavioral advertising (OBA) and associated consumer behavior.  Indeed, he is the academic who famously announced in 2011 that one might be able to discover portions of someone’s Social Security number simply from a posted photograph.  His research often distills to one major premise – consumers may not always act in their best interests when it comes to online privacy decisions.

It appears consumers and merchants alike may be missing out on fully cultivating a very valuable commodity.  According to the World Economic Forum, “personal data represents an emerging asset class, potentially every bit as valuable as other assets such as traded goods, gold or oil.”  Rethinking Personal Data:  Strengthening Trust, at 7, World Economic Forum Report (May 2012).  Before this asset class can ever be completely exploited and fully commercialized, however, its constituent value components must be correlated by all in the privacy food chain.

Over three decades ago, it was recognized that the three pillars of privacy – the very foundation of personal data – secrecy, anonymity, and solitude, were distinct yet interrelated.  See Gavison, Ruth, Privacy and the Limits of Law, 89 The Yale Law Journal 421, 428-429 (1980) (“A loss of privacy occurs as others obtain information about an individual, pay attention to him, or gain access to him. These three elements of secrecy, anonymity, and solitude are distinct and independent, but interrelated, and the complex concept of privacy is richer than any definition centered around only one of them.”).

Current OBA has made these three privacy pillars difficult for consumers to value, manage, and isolate online – it is generally not up to consumers whether they will be fed an ad based on previous website visits or purchases; it simply happens.  Indeed, according to a survey of 1,000 persons conducted by Ipsos Public Affairs and released by Microsoft in January 2013, forty-five percent of respondents felt they had little or no control over the personal information companies gather about them while they are browsing the Web or using online services.  This view may not be unfounded given that data routinely gathered online, e.g., operating system, browser, IP address, persistent cookies, and last used server, can be used to divulge the activity of individual devices.
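For readers curious how a handful of routinely gathered attributes can single out a device, here is a toy sketch (the attribute names and values below are hypothetical illustrations, not any actual ad network’s method): concatenating a few passively collected attributes and hashing them yields a quasi-stable identifier for a device, no name or login required.

```python
import hashlib

def device_fingerprint(attrs: dict) -> str:
    """Combine routinely collected attributes into a quasi-stable identifier."""
    # Sort keys so the same attributes always produce the same string.
    canonical = "|".join(f"{key}={attrs[key]}" for key in sorted(attrs))
    # Hash the canonical string; the first 16 hex chars serve as the ID.
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Hypothetical visitor attributes of the kind a site can observe passively.
visitor = {
    "os": "Windows 10",
    "browser": "Chrome 113",
    "ip": "203.0.113.7",   # example address from the RFC 5737 test range
    "cookie_id": "a1b2c3",
}
print(device_fingerprint(visitor))
```

The same visitor attributes always yield the same identifier, which is what allows activity from one device to be linked across visits even without personally identifying information.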

The privacy trade-offs being researched by Mr. Acquisti and others offer insight into the true value of these data constituents.  Consumers who try to “shut off” or render anonymous access to their device’s data or settings would not only likely fail in their attempt at anonymity, they would also lose access to most social media and other websites requiring browsers to accept cookies, as well as to product offers that are presumably of interest.  Indeed, this coordinated tracking of consumers is not even unique to the Internet.  See generally Bibas, Steve, A Contractual Approach to Data Privacy, 17 Harv. J. Law & Public Policy 591 (Spring 1994) (“Although the ready availability of information helps us to trust others and coordinate actions, it also lessens our privacy. George Orwell presciently expressed our fear of losing all privacy to an omniscient Big Brother.  Computers today track our telephone calls, credit-card spending, plane flights, educational and employment records, medical histories, and more.  Someone with free access to this information could piece together a coherent picture of our actions.”).  There are even companies that bridge the gap between offline and online activities by taking in-store point-of-sale purchases and converting such data to an anonymous online cookie ID that will eventually be used online by clients.  Such use of in-store data is generally permissible under a retailer’s loyalty program.

Current law does not generally prevent someone from collecting public information to create consumer profiles – nor is there a right to opt out of having your public record information sold or shared.  And, when one wants to self-determine whether data will be disclosed, or whether he or she will be “untraceable”, “anonymous” or “left alone”, there may not always exist the ability to easily prevent these rights from being exploited – and there is certainly no way to obtain a direct financial gain in return for the relinquishment of such privacy rights.  Instead, there has generally been a “privacy for services” marketing/advertising arrangement that consumers have accepted – one which, in fact, has helped pay for and fuel the growth of the commercial Internet.

The current OBA ecosystem does not posit a “loss of privacy” so much as it offers a bartering system in which one party merely feels the value of what is being bartered away while the other party quantifies, and monetizes through cascading transactions, exactly that value.  In other words, it is not a financial transaction for the consumer.  Those who are able to find an entertaining online video or locate a product online using a search engine don’t really mind that an ad will be served to them while visiting some other website, given they feel this loss of privacy is worth the value of the services being provided.

Ironically, the interactive advertising industry itself may believe it is collecting too much sensitive consumer data.  According to a study conducted by the Ponemon Institute, 67 percent of responding online advertisers believe “limiting sensitive data collection for OBA purposes is key to improving consumer privacy and control when browsing or shopping online.” Leading Practices in Behavioral Advertising & Consumer Privacy:  A Study of Internet Marketers & Advertisers, at 2, The Ponemon Institute (February 2012).

As recognized by privacy researchers, “[e]mpirical evidence on the behavioral effects of privacy is rather scarce.”  Regner, Tobias; Riener, Gerhard, Voluntary Payments, Privacy and Social Pressure On The Internet: A Natural Field Experiment, DICE Discussion Paper, No. 82 (December 2012) at 6.  Although “some consumers are willing to pay a premium to purchase from privacy protective websites”, there is no measure of what that premium should be or how widespread a factor it is for consumers as a whole.  Id. at 7.

Indeed, consumers are “often willing to provide personal information for small or no rewards.”  Losses, Gains, and Hyperbolic Discounting: An Experimental Approach to Information Security Attitudes and Behavior, presented by Alessandro Acquisti and Jens Grossklags at the 2nd Annual Workshop on Economics and Information Security, College Park, Maryland, May 2003, at 4.

This does not mean researchers have not tried to quantify a “privacy valuation” model.  In 2002, a Jupiter Research study found 82% of online shoppers willing to give personal data to new shopping sites in exchange for the chance to win $100.  Cf. Tsai, Janice; Egelman, Serge; Cranor, Lorrie; Acquisti, Alessandro, The Effect of Online Privacy Information on Purchasing Behavior: An Experimental Study, Information Systems Research (February 2010) at 22 (describing survey results concluding that “people will tend to purchase from merchants that offer more privacy protection and even pay a premium to purchase from such merchants.”); Beresford, Alastair; Kübler, Dorothea; Preibusch, Sören, Unwillingness To Pay For Privacy: A Field Experiment, 117 Economics Letters 25 (2012) (“Thus, participants predominantly chose the firm with the lower price and the more sensitive data requirement, indicating that they are willing to provide information about their monthly income and date of birth for a 1 Euro discount.”).

In his 1994 paper, A Contractual Approach to Data Privacy, Steve Bibas suggests that individual contracts may provide the best solution to the privacy compensation dilemma:  “In the hands of the contracting parties, however, flexibility allows people to control their lives and efficiently tailor the law to meet their needs. Flexibility is the market’s forte; the pricing mechanism is extremely sensitive to variations in valuation and quickly adjusts to them.”  Bibas, 17 Harv. J. Law & Public Policy 591 (Spring 1994).  Mr. Bibas, however, recognized the limitations of privacy transactions that relied only on static privacy trades.  In other words, a model that might be effective is one that bases customized financial rewards to consumers on a continuous exchange of information between the consumer and merchant.

One problem most consumers face when using commonly marketed privacy-safeguarding solutions is that those solutions fail to create an acceptable value proposition for merchants.  As well, those recently formed companies promising a private web experience will not be able – nor should they even try – to curtail firms from using OBA to reach consumers.  For the foreseeable future, OBA will continue to drive the Internet and “pay” for a much richer and more rewarding consumer experience than would otherwise exist.  It may one day be determined, however, that an even more effective means of satisfying all constituent needs of the OBA ecosystem (consumer, merchant, publisher, agency, etc.) will be to directly correlate privacy rights, consumer data, and a merchant’s revenue.

Sins of our Marketers: SMS, the Telephone Consumer Protection Act, and Strict Liability

In continuing a trend that took hold nearly four years ago in Satterfield v. Simon & Schuster, Inc., 569 F.3d 946 (9th Cir. 2009), a putative class action was filed on January 25, 2013 alleging that unsolicited SMS texts give rise to statutory damages under the Telephone Consumer Protection Act (TCPA).  Although the suit was brought against a big box retailer, the allegations are based on the conduct of “a mobile technology company whose identity is currently unknown.”

Under the TCPA, it is unlawful to make “any call (other than a call made for emergency purposes or made with the prior express consent of the called party) using any automatic telephone dialing system [ATDS] . . . [to any] cellular telephone service.” 47 U.S.C. Sec. 227(b)(1)(A).  Although the TCPA was enacted years before SMS was a reality, the FCC, as well as courts in California and Chicago, have interpreted the undefined term “any call” to include SMS texts so long as the SMS text was sent using an ATDS.

Courts have already ruled that “the TCPA is essentially a strict liability statute which imposes liability for erroneous unsolicited faxes.” Alea London Ltd. v. Am. Home Services, 638 F.3d 768, 776 (11th Cir. 2011) (citation omitted).  See also Universal Underwriters Ins. Co. v. Lou Fusz Auto. Network, Inc., 401 F.3d 876, 882 (8th Cir. 2005) (“The Act makes no exception for senders who mistakenly believe that recipients’ permission or invitation existed.”).  This means that class action counsel need only demonstrate that the SMS messages went out unsolicited via an ATDS and statutory damages will likely follow.

As now being pressed in the Hill putative class action filed on January 25, 2013, this strict liability for unsolicited SMS messages may also extend from the actual sender, i.e., marketer, to the retailer.  Several years ago, the FTC responded to a request for public comments filed by the FCC regarding the following two questions:   “First, does a call placed by an entity that markets a seller’s goods and services qualify as a call made on behalf of, and initiated by, the seller, even if the seller does not physically place the call?; and second, what should determine whether a telemarketing call is made “on behalf of” a seller, thus triggering liability under the TCPA?”

The FTC answered with a vigorous defense of its view that “the plain meaning of the law and its regulations supports holding sellers liable for calls made for the seller’s benefit.”  Given the FTC was merely responding to the FCC’s request for comments, and given the FCC has yet to release its final ruling, it remains to be seen whether the courts will ultimately side with the FTC view.  Indeed, several courts have explicitly rejected the FTC position regarding vicarious strict liability.  See, e.g., Mey v. Pinnacle Security, LLC, No. 5:11CV47, slip op. (N.D. W. Va. Sept. 12, 2012) (“In the Spring of 2011, the FCC released a public notice requesting comment on the issue of strict “on behalf of” liability under §227(b)(3), and this Court has not received information that a ruling has yet been issued on the matter.  26 FCC Rcd 5040. . . . Accordingly, this Court finds that the TCPA does not provide strict “on behalf of” liability under § 227(b)(3).”) (citing Thomas v. Taco Bell Corp., 2012 U.S. Dist. LEXIS 107097, No. SACV 09-01097-CJC (C.D. Cal. June 25, 2012)).

Whether or not the FTC is ultimately vindicated by the courts on this issue, it is clear that regulators are not oblivious to the mechanics of mobile marketing.  For example, the FCC has found that a one-time text message confirming a consumer’s request that no further text messages be sent did not violate the TCPA.  Notwithstanding any temporary safe harbor that may currently exist, the takeaway remains that firms may be on the hook for what their marketing, promotional, and advertising firms are doing when it comes to SMS campaigns.

Given the FTC’s stated desire to visit on innocent retailers the sins of their marketers, and the difficulty of insuring against this risk, it is more important than ever for those who rely on SMS campaigns to always verify appropriate consent and obtain suitable contractual indemnifications.

First Amendment Does Not Save NJ Teacher from Postings Firing

In a January 11, 2013 ruling, the New Jersey Appellate Division upheld the administrative dismissal of a first grade teacher.  She had argued that the First Amendment precluded her firing — which was based on two Facebook postings.  In the Matter of the Tenure Hearing of Jennifer O’Brien (N.J. App. Div. Jan. 11, 2013).  One of her statements was, “I’m not a teacher — I’m a warden for future criminals!”

O’Brien said she posted the statement that her students were “future criminals” because of “their behaviors, not because of their race or ethnicity.”  She also stated that “six or seven of her students had behavioral problems, which had an adverse impact on the classroom environment.”  Id. at 4-5.

In finding that she failed to establish her Facebook postings were protected speech, the Appellate Division found that “even if O’Brien’s comments were on a matter of public concern, her right to express those comments was outweighed by the district’s interest in the efficient operation of its schools.”  Id. at 11.

This ruling sits in contrast to the NLRB’s frequent warnings regarding the sanctity of worker postings — especially when the postings pertain to workplace conditions.  The cringe-worthy nature of these postings, the fact they were directed at first graders, and the deference accorded administrative proceedings certainly all made it easy for the Appellate Division to rule as it did.  Accordingly, employers should not take too much comfort in this ruling when evaluating whether to discipline employees for inflammatory postings.

New Jersey Fast Tracks Employer Social Media Bill

New Jersey is set to have the harshest law in the country aimed at preventing employers from delving into the social media postings of employees.  In what is considered lightning speed for New Jersey legislative action, the Assembly fast-tracked a bill in May that it approved 76-1 in June; the Senate followed in October with a 38-0 vote.  The bill – A2878 – is now poised for signature by Governor Christie by the end of the year.

If signed by the Governor, it will be tougher than the similar laws on the books in Maryland, California and Illinois.  All of those laws are aimed primarily at prohibiting employers from asking for social media passwords.  If enacted, New Jersey’s law would also preclude employers from asking if an employee or prospective employee even has a social media account.  And, any agreement to waive this protection would be deemed void under the law.  There are also civil penalties for any violation, beginning at $1,000 for an initial violation and increasing to $2,500 for each additional violation.

The New Jersey law would obviously generate compliance issues for an employer looking to maintain a secure work environment for its employees.  To that end, the new law would not bar company policies curtailing the use of employer-issued electronic communications devices during work hours.  Not surprisingly, it is the blurring of private vs. public social media usage that promises to be a major driver of any future civil litigation.  What may end up being the most important factor in how much litigation this new law generates, however, is the fact that reasonable attorney fees may also be recoverable under the statute.  Without the financial incentive of a class action or statutory fees, few attorneys would be willing to bring actions based on $1,000 violations.

UPDATE – February 21, 2013

The bill has still not been signed into law — so much for being fast tracked!  Rather than agree to several Senate changes to the bill and then pass along to the Governor for signature, the Assembly has chosen to sit on the bill.  A good discussion regarding the latest status of this proposed law can be found in Law360.

UPDATE – March 25, 2013

On March 21, 2013, the bill passed the Assembly by a whopping 75 – 2 vote and is now on the Governor’s desk.

UPDATE – May 7, 2013

On May 6, 2013, Governor Christie conditionally vetoed the bill.  In his statement, he suggested that the bill would have been overbroad in reach and gave the following example of an unintended consequence of such breadth:

[U]nder this bill, an employer interviewing a candidate for a marketing job would be prohibited from asking about the candidate’s use of social networking so as to gauge the candidate’s technological skills and media savvy. Such a relevant and innocuous inquiry would, under this bill, subject an employer to protracted litigation.

The Governor also vetoed that part of the bill that would have allowed for a private right of action.  He felt any dispute was better resolved by the state labor commissioner.  According to the bill’s sponsor, the Assembly will likely adopt Governor Christie’s suggestions in order to have the bill signed into law.  In effect, the most controversial aspect of the bill was just removed.  While some New Jersey businesses may be breathing a sigh of relief, the plaintiffs’ bar is certainly no longer excited about this bill.

The Privacy Tug of War

According to the World Economic Forum, “personal data represents an emerging asset class, potentially every bit as valuable as other assets such as traded goods, gold or oil.”  Given the inherent value of this new asset class, it’s no surprise there has been an ongoing tug of war regarding how consumers should be compensated for access to their personal data.

In a March 2003 Wired article titled, “Who’s Winning Privacy Tug of War?“, the author suggests that “[c]onsumers appear to have become weary of the advertising bombardment, no matter how targeted to their tastes those ads may be.”  And, the “tit-for-tat tactic on the Web” that requires users to provide certain personal information in exchange for product or other information may be much less than a perfect marketing model given these marketing preference databases “are polluted with lies.”

Fast forward a decade or so and companies are still trying to figure out the rules of engagement for the Privacy Tug of War.  On September 19, 2012, UK think tank Demos released what it considered “the most in-depth research to date on the public’s attitudes toward the sharing of information.”  Not surprisingly, Demos found that in order to maximize the potential value of customer data, there needs to be “a certain level of trust established and a fair value exchange.”  The think tank found that only 19 percent of those surveyed understand the value of their data and the benefits of sharing it.

The surveys, workshops and other research tools referenced in the Demos report all point towards a “crisis of confidence” which may “lead to people sharing less information and data, which would have detrimental results for individuals, companies and the economy.”   Demos offers up a possible solution to this potential crisis:

The solution is to ensure individuals have more control over what, when and how they share information. Privacy is not easily defined. It is a negotiated concept that changes with technology and culture. It needs continually updating as circumstances and values change, which in turn requires democratic deliberation and a dialogue between the parties involved.

It is hard to have any meaningful deliberations when no one is charting a clear path to victory in the Privacy Tug of War — nor is there any consensus regarding whether it is preferable to even have such a path.   Some on the privacy circuit have suggested we must create better privacy metrics and offer tools to use those metrics to measure whether a company’s privacy protections are “satisfactory”.   Consumers right now can rely on sites such as Clickwrapped to score the online privacy policies of major online brands.   Certification services such as TRUSTe provide insight regarding the online privacy standards of thousands of websites.   If they don’t like what they see, consumers can always “opt out” and use services such as that of start-up Safe Shepherd to remove “your family’s personal info from websites that sell it.”

Unfortunately, no commercially available privacy safeguard, testing service, or certification can ever move fast enough to address technological advances that erode consumer privacy, given such advances will always launch unabated — and undetected — for a period of time.  Not unlike Moore’s Law regarding the doubling of transistor counts roughly every two years, it appears that consumer privacy diminishes in some direct proportion to new technological advances.  Consumer privacy expectations should be calibrated accordingly.  Unlike with Moore’s Law, however, there is no uniform technology, product, or privacy metric that can be benchmarked as there is in the computer industry.

This does not mean we are powerless to follow technology trends and quantify an associated privacy impact.  For example, the Philip Dick/Steven Spielberg Minority Report vision of the future — where public iris scanning offers up customized advertisements to people walking around a mall — has already taken root in at least one issued iris-scanning patent jointly owned by the federal government and a start-up looking to serve ads using facial recognition techniques.  In direct reaction to EU criticism of Facebook’s own facial recognition initiative, Facebook temporarily suspended its “tag-suggest” feature.  This automatic facial recognition system recognized and suggested names for people included in photographs uploaded to Facebook — without first obtaining the consent of those so recognized and tagged.

Closely monitoring technological advances that may impact privacy rights — whether the body diagnostics of Mc10 and ingested medical sensors from Proteus, the latest in Big Data analytics, or a new EHR system that seamlessly ties such innovations together — becomes the necessary first step towards understanding how to partake in the Privacy Tug of War.

Unlike the PC industry, which is tied to Moore’s Law, our government, with its unbounded funding, is an active participant in developing privacy-curtailing technological advances.  For example, the FBI is currently undergoing a billion-dollar upgrade to create its Next Generation Identification Program, which will deploy the latest in facial recognition technologies.  As recognized by CMU Professor Alessandro Acquisti, this “combination of face recognition, social networks data and data mining can significantly undermine our current notions and expectations of privacy and anonymity.”

Not surprisingly, there has been some pushback on such government initiatives.  For example, on September 25, 2012, the ACLU filed suit against several government agencies under the Freedom of Information Act seeking records on their use and funding of automatic license plate readers (ALPRs).  According to the Complaint, “ALPRs are cameras mounted on stationary objects (e.g., telephone poles and the underside of bridges) or on patrol cars [and] photograph the license plate of each vehicle that passes, capturing information on up to thousands of cars per minute.”  The ACLU suggests that ALPRs “pose a serious threat to innocent Americans’ privacy.”

The imminent unleashing of unmanned aircraft systems – commonly known as “drones” – sets in motion another technological advance that should raise serious concerns for just about anyone.  Signed by President Obama in February 2012, the FAA Modernization and Reform Act of 2012, among other things, requires that the Federal Aviation Administration accelerate the use of drone flights:

Not later than 270 days after the date of enactment of this Act, the Secretary of Transportation, in consultation with representatives of the aviation industry, Federal agencies that employ unmanned aircraft systems technology in the national airspace system, and the unmanned aircraft systems industry, shall develop a comprehensive plan to safely accelerate the integration of civil unmanned aircraft systems into the national airspace system.

As recognized by the Government Accountability Office in a September 14, 2012 Report, even though “[m]any [privacy] stakeholders believe that there should be federal regulations” to protect the privacy of individuals from drone usage, “it is not clear what entity should be responsible for addressing privacy concerns across the federal government.”

This is not an insignificant failing given that, according to this same report, commercial and government drone expenditures could top $89.1 billion over the next decade ($28.5 billion for R&D and $60.6 billion for procurement).  Interestingly, the required comprehensive plan to accelerate integration of civil drones into our national airspace system will be due on November 10, 2012 – right after the elections.  According to an Associated Press-National Constitution Center poll, 36 percent of those polled say they “strongly oppose” or “somewhat oppose” police use of drones.  This somewhat muted response is likely driven by the fact that most of those polled do not understand the capabilities of these drones or just how pervasive they will become in the coming years.

The technology advance that may have the greatest impact on privacy rights does not take to the skies but is actually found in most pockets and purses.   The same survey referenced above found that 43 percent of those polled (the highest percentage) primarily use a mobile device alone rather than a landline or a combination of mobile device and landline — with 34 percent of those polled not even having a landline in their home.   Not surprisingly, companies have been aggressively tapping into the Big Data treasure trove available from mobile device usage.   Some politicians have taken notice and are already drawing lines in the digital sand.

Under the Mobile Device Privacy Act introduced by Congressman Edward J. Markey, anyone who sells a mobile service, device, or app must inform customers if their product contains monitoring software — with statutory penalties ranging from $1,000 per unintentional violation to $3,000 per intentional violation.   This new bill addresses only a single transgression in the personal-data orgy now being enjoyed by so many different companies up and down the mobile device communication and tech food chain.   As evidenced by the current patent landscape — including an issued Google patent that involves serving ads based on a mobile device’s environmental sounds — and the now well-known GPS capabilities of mobile devices, the privacy Battle of Midway will likely be fought around mobile devices. Companies with a stake in the Privacy Tug of War — as well as those professionals who advise such companies — will only be adequately prepared if they recognize that this battle may ultimately have no clear winners or losers — only willing participants.

Mexico City Redux: Conference of Data Protection and Privacy Commissioners

On November 2 – 3, 2011, about 600 persons from around the world attended the 33rd International Conference of Data Protection and Privacy Commissioners.   For those unable to make the trek to Mexico City, what follows is selected insight gained from several folks who attended and were kind enough to report back what was discussed in Mexico.

The event opened with an exposition of the “big data” concerns driving many large privacy programs.   Ken Cukier of The Economist used the example of how the Sumo wrestling scandal was uncovered using big data analytics, i.e., a complete analysis of 10 years’ worth of Sumo contests, to showcase the fast, ubiquitous, and distributed nature of big data.   A common big data thread turned on the data collection activities of Facebook and Google – with an obvious concern regarding their future usage of collected data.  It was pointed out that a browser configuration is now so customized that it can act as a fingerprint identifying its owner — leading to even more big data concerns.
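The browser-fingerprint observation is easy to make concrete.  Below is a minimal, hypothetical sketch — the attribute names and values are invented for illustration and are not drawn from any real tracking product — of how a handful of configuration details can be hashed into a stable identifier that follows a user even without cookies:

```python
# Hypothetical sketch of browser fingerprinting: the attributes below are
# illustrative stand-ins for the kinds of configuration details (user agent,
# screen size, fonts, plugins) a site can read and combine into an identifier.
import hashlib

def browser_fingerprint(attrs: dict) -> str:
    """Hash a sorted serialization of browser attributes into a short ID."""
    serialized = "|".join(f"{key}={attrs[key]}" for key in sorted(attrs))
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()[:16]

visitor = {
    "user_agent": "Mozilla/5.0 (Windows NT 6.1; rv:8.0)",
    "screen": "1920x1080x24",
    "timezone": "UTC-5",
    "fonts": "Arial,Calibri,Georgia,Verdana",
    "plugins": "Flash 11.0,QuickTime 7.7",
}

print(browser_fingerprint(visitor))
```

Change any single attribute and the identifier changes; keep the configuration stable and the identifier persists across every site that computes it — which is precisely the fingerprinting concern raised at the conference.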

Two other substantive topics covered were, not surprisingly, social media and mobile technologies.  Tied to social media was the purported “right to be forgotten.”  Building on prior conferences, it appears the commissioners in attendance believed future regulations will eventually create such a right in the EU.  The question of enforcement was not really deemed much of a concern – which is curious given that it would be wishful thinking to believe anyone can actually completely scrub the Internet of one’s personal data.   Moreover, do we really even want bad information regarding a professional such as a doctor or lawyer ever completely wiped clean?

As for mobile discussions, one session focused exclusively on the ramifications of having over five billion mobile users worldwide.  In ten years’ time, it was estimated, there would be 20 billion SIM cards in use connecting multiple devices to each other.  In effect, chips will be everywhere processing and collecting data — leading to ever-increasing privacy challenges.

Another area of discussion was the “interoperability” of privacy laws around the world.  The lofty notion of harmonization was abandoned in favor of the more workable interoperability concept.  This new perspective would entail better cooperation between the various commissioners, with perhaps an executive committee to assist in such coordination efforts.  The committee would deal with global issues requiring better cooperation, e.g., regulatory efforts involving multi-national corporations that potentially impact the privacy rights of persons in many countries.

An interesting sidebar on interoperability was the possibility of using common regulations instead of directives.  Such a change in course would take much longer to implement given the need, for example, to go through a Parliament to pass such regulations.  It was assumed this path would take 3 – 5 years to implement.  On the other hand, it would give an executive committee’s agenda much more in the way of teeth.

There was also an interesting debate among the commissioners regarding their perceived roles.  It was universally acknowledged that they are overwhelmed by the explosive privacy issues impacting their respective offices.  What was not universally acknowledged was how they should prioritize their time in meeting this challenge.  One school of thought (spearheaded by Chris Graham, the UK Information Commissioner) was that commissioners and their offices should be counselors assisting companies in reaching relevant privacy standards — a decidedly carrot-centric approach.  The competing school of thought (voiced strongly by Jacob Kohnstamm, Head of the Article 29 Working Group and Chairman of the Dutch Data Protection Authority) was that only enforcement sticks should be used.  Mr. Kohnstamm said that companies have had enough time to become compliant and it is now time to enforce existing laws.  He also apparently stated that even if he wanted to act as a counselor, he does not have sufficient advisory personnel on staff to act in that role.  Interestingly, this divide may also be attributable to a common law vs. civil law axis.  Given that Mr. Kohnstamm is up for election as head of the Article 29 Working Group, his election may end up being a referendum on this debate.

There was also interesting insight gained regarding the difference in styles between two newly installed commissioners; the newfound influence of Asia at the conference; the focus — for the first time — on privacy violations involving state actors; and a belief that the closed session resolutions may formalize the working relationships between the various commissioners and their respective offices.  

There is no doubt that the global privacy landscape is expanding at a rapid rate and that this conference will only grow over time – next year it will be at a resort in Uruguay.  Simon Davies, Director of Privacy International, even spoke about how countries such as Pakistan and Afghanistan are now starting a privacy dialogue.   The Dragon also took a privacy bow when Zhou Hanhua of the Chinese Academy of Social Sciences in Beijing gave a keynote address discussing the new revisions to China’s penal code regarding privacy infractions as well as its revisions to Identification and Telecommunications laws to better address privacy concerns.   And it was even mentioned that Korea will host the conference in a few years.

In other words, there can be no denying privacy is and will forever be a global issue.  In fact, that truism may very well be the reason this year’s Conference of Data Protection and Privacy Commissioners was titled “PRIVACY: The Global Age.”

Do Not Track Law Comes Closer to Reality

Apparently seeking to mimic the success of the “do not call” registry, on May 9, 2011, Sen. Jay Rockefeller (D-W.Va.) introduced an online “do not track” privacy bill that would give consumers the ability to block companies from tracking their online activities.  The proposed Do-Not-Track Online Act of 2011 comes on the heels of another consumer privacy bill proposed by Senators Kerry and McCain.  The competing Kerry bill does not have a “do not track” feature, excludes the possibility of a private right of action (Sec. 406), and was generally panned by privacy activists as potentially being too pro-business.   On the other hand, an ACLU spokesman described the Rockefeller bill as “a crucial civil liberties protection for the twenty-first century.”

Given the support being offered by the White House, the Rockefeller bill has a real chance of being passed into law.  What it will eventually mean to the cost of “free” applications sponsored by marketers and their clients remains to be seen.



Is it Time to Ditch Your Facebook Account?

A recently published study funded in part by the National Institutes of Health shows that the brain’s capacity to move back and forth from distractions diminishes with age.  The findings, which were reported in the online edition of the Proceedings of the National Academy of Sciences (April 11, 2011), ultimately suggest that multi-tasking may impact our working memory, i.e., the ability to hold and manipulate information in the mind.  According to one of the study’s authors, Adam Gazzaley, MD, PhD, director of the UCSF Neuroscience Imaging Center:

The impact of distractions and interruptions reveals the fragility of working memory.  This is an important fact to consider, given that we increasingly live in a more demanding, high-interference environment, with a dramatic increase in the accessibility and variety of electronic media and the devices that deliver them, many of which are portable.

Other researchers are more direct in pointing a finger at the potential cause of this problem.  According to Dr. Elias Aboujaoude, director of Stanford’s Impulse Control Disorders Clinic, “persons are suffering in terms of cognition and attention spans because of the time spent online.”  Interestingly, some studies have shown that students may be aware that technology is having a detrimental effect on their academic performance and are open to learning time-management and cognitive-workload strategies.

What exactly all of this research means for the average tech junkie remains unclear.  At the very least, it may be an early wake-up call to take a more measured approach to social media.  If the tweets number in the thousands and the blog posts number in the hundreds, it may not be healthy to continually jump on an iPad to use Bizzy or check a Facebook account.  In other words, give it a rest or the work product may ultimately suffer.

[Update:  June 14, 2011]
As per this article in the Daily Mail, Facebook fatigue may be catching on — six million US users apparently deactivated their accounts in May 2011.