EFF: “FREE from Chains!”: Eskinder Nega is Released from Jail

Eskinder Nega, one of Ethiopia’s most prominent online writers, winner of the Golden Pen of Freedom in 2014, the International Press Institute’s World Press Freedom Hero for 2017, and PEN International’s 2012 Freedom to Write Award, has finally been set free.

Eskinder is greeted by well-wishers on his release. Picture by Befekadu Hailu

Eskinder has been detained in Ethiopian jails since September 2011. He was accused and convicted of violating the country’s Anti-Terrorism Proclamation, primarily by virtue of his warnings in online articles that if Ethiopia’s government continued on its authoritarian path, it might face an Arab Spring-like revolt.

Ethiopia’s leaders refused to listen to Eskinder’s message. Instead, they decided the solution was to silence the messenger. Within the last few months, that refusal to engage with the challenges of democracy has led to the inevitable result. For two years, protests against the government have risen in frequency and size. Prime Minister Hailemariam Desalegn sought to reduce tensions by introducing reforms and releasing political prisoners like Eskinder. Despite thousands of prisoner releases, and the closure of one of the country’s more notorious detention facilities, the protests continue. A day after Eskinder’s release, Desalegn was forced to resign from his position. A day later, the government declared a new state of emergency.

Even as they came face-to-face with the consequences of suppressing critics like Eskinder, the Ethiopian authorities pushed back against the truth. Eskinder’s release was delayed for days after prison officials repeatedly demanded that he sign a confession falsely claiming he was a member of Ginbot 7, an opposition party banned as a terrorist organization within Ethiopia.

Eventually, following widespread international and domestic pressure, Eskinder was released without concession.

Eskinder, who was in jail for nearly seven years, joins a world whose politics and society have been transformed since his arrest. His predictions about the troubles Ethiopia would face if it silenced free expression may have come true, but his views were not perfect. He was, and will be again, an online writer, not a prophet. The promise of the Arab Spring that he identified has descended into its own authoritarian crackdowns. The technological tools he used to bypass Ethiopia’s censorship and speak to a wider public are now just as often used by dictators to silence critics. But that means we need more speakers like Eskinder, not fewer. And those speakers should be carefully listened to, not forced into imprisonment and exile.

Published February 17, 2018 at 02:13AM
Read more on eff.org

EFF: New National Academy of Sciences Report on Encryption Asks the Wrong Questions

The National Academy of Sciences (NAS) released a much-anticipated report yesterday that attempts to influence the encryption debate by proposing a “framework for decisionmakers.” At best, the report is unhelpful. At worst, its framing makes the task of defending encryption harder.

The report conflates the question of whether the government should mandate “exceptional access” to the contents of encrypted communications with the question of how the government could accomplish this mandate. We wish the report gave as much weight to the benefits of encryption, and to the risks that exceptional access poses to everyone’s civil liberties, as it does to the needs—real and professed—of law enforcement and the intelligence community.

From its outset two years ago, the NAS encryption study was not intended to reach any conclusions about the wisdom of exceptional access, but instead to “provide an authoritative analysis of options and trade-offs.” This would seem to be a fitting task for the National Academy of Sciences, which is a non-profit, non-governmental organization, chartered by Congress to provide “objective, science-based advice on critical issues affecting the nation.” The committee that authored the report included well-respected cryptographers and technologists, lawyers, members of law enforcement, and representatives from the tech industry. It also held two public meetings and solicited input from a range of outside stakeholders, EFF among them.

EFF’s Seth Schoen and Andrew Crocker presented at the committee’s meeting at Stanford University in January 2017. We described what we saw as “three truths” about the encryption debate: First, there is no substitute for “strong” encryption, i.e. encryption without any intentionally included method for any party (other than the intended recipient/device holder) to access the plaintext, such as decryption on demand by the government. Second, an exceptional access mandate will help law enforcement and intelligence investigations in certain cases. Third, “strong” encryption cannot be fully outlawed, given its proliferation, the fact that a large proportion of encryption systems are open-source, and the fact that U.S. law has limited reach on the global stage. We wish the report had made a concerted attempt to grapple with that first truth, instead of confining its analysis to the second and third.
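The first of those truths can be made concrete with a toy sketch. The one-time pad below is illustration only, not production cryptography, but it shows the defining property of “strong” encryption: without the key, no party (service provider or government) has any built-in path back to the plaintext.

```python
import secrets

# Toy illustration only (a one-time pad), NOT production cryptography.
# The point: "strong" encryption includes no intentional mechanism for
# anyone but the key holder to recover the plaintext.

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    assert len(key) == len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return encrypt(ciphertext, key)  # XOR is its own inverse

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # held only by the endpoints

ciphertext = encrypt(message, key)
assert decrypt(ciphertext, key) == message  # the key holder recovers the message
# Anyone without `key` (a service provider, or a government serving a
# demand on that provider) sees only random-looking bytes; there is no
# side door for a mandate to open without changing the design itself.
```

An “exceptional access” mandate is, by definition, a requirement to add such a side door, which is why it cannot be bolted onto a scheme like this one after the fact.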

We recognize that the NAS report was undertaken in good faith, but the trouble with the final product is twofold.

First, its framing is hopelessly slanted. Not only does the report studiously avoid taking a position on whether compromising encryption is a good idea, its “options and tradeoffs” are all centered around the stated government need of “ensuring access to plaintext.” To that end, the report examines four possible options: (1) taking no legislative action, (2) providing additional support for government hacking and other workarounds, (3) a legislative mandate requiring providers to give the government access to plaintext, and (4) mandating a particular technical method for providing access to plaintext.

But all of these options, including “no legislative action,” treat government agencies’ stated need for access to plaintext as the only goal worth study, with everything else as a tradeoff. For example, from EFF’s perspective, the adoption of encryption by default is one of the most positive developments in technology policy in recent years because it permits regular people to keep their data confidential from eavesdroppers, thieves, abusers, criminals, and repressive regimes around the world. By contrast, because of its framing, the report discusses these developments purely in terms of criminals “who may unknowingly benefit from default settings” and thereby evade law enforcement.

By approaching the question only as one of how to deliver plaintext to law enforcement, rather than approaching the debate more holistically, the NAS does us a disservice. The question of whether encryption should or shouldn’t be compromised for “exceptional access” should not be treated as one of several in the encryption debate: it is the question.

Second, although it attempts to recognize the downsides of exceptional access, the report’s discussion of the possible risks to civil liberties is notably brief. In the span of only three pages (out of nearly a hundred), it acknowledges the importance of encryption to supporting values such as privacy and free expression. Unlike the interests of law enforcement, which are represented in every section, the risks to civil liberties posed by exceptional access are discussed as just one more tradeoff, confined to a single stand-alone section.

To emphasize the report’s focus, the civil liberties section ends with the observation that criminals and terrorists use encryption to “take actions that negatively impact the security of law-abiding individuals.” This ignores the possibility that encryption can both enhance civil liberties and preserve individual safety. That’s why, for example, experts on domestic violence argue that smartphone encryption protects victims from their abusers, and that law enforcement should not seek to compromise smartphone encryption in order to prosecute these crimes.

Furthermore, the simple act of mandating that providers break encryption in their products is itself a significant civil liberties concern, quite apart from the privacy and security implications that would result. Specifically, EFF raised concerns that encryption does not just support free expression, it is free expression. Notably absent is any examination of the rights of developers of cryptographic software, particularly given the role played by free and open source software in the encryption ecosystem. The report also ignores the legal landscape in the United States—one that strongly protects the principle that code (including encryption) is speech, protected by the First Amendment.

The report also underplays the international implications of any U.S. government mandate for U.S.-based providers. Currently, companies resist demands for plaintext from regimes whose respect for the rule of law is dubious, but that will almost certainly change if they accede to similar demands from U.S. agencies. In a massive understatement, the report notes that this could have “global implications for human rights.” We wish that the NAS had given this crucial issue far more emphasis and delved more deeply into the question, for instance, of how Apple could plausibly say no to a Chinese demand to wiretap a Chinese user’s FaceTime conversations while providing that same capacity to the FBI.

In any tech policy debate, expert advice is valuable not only for informing how to implement a particular policy but also for deciding whether to undertake that policy in the first place. The NAS might believe that as the provider of “objective, science-based advice,” it isn’t equipped to weigh in on this sort of question. We disagree.

Published February 16, 2018 at 10:04PM
Read more on eff.org

EFF: EFF and MuckRock Are Filing a Thousand Public Records Requests About ALPR Data Sharing

EFF and MuckRock have launched a new public records campaign to reveal how much data law enforcement agencies have collected using automated license plate readers (ALPRs) and are sharing with each other.

Over the next few weeks, the two organizations are filing approximately 1,000 public records requests with agencies that have deals with Vigilant Solutions, one of the nation’s largest vendors of ALPR surveillance technology and software services. We’re seeking documentation showing who’s sharing ALPR data with whom. We are also requesting information on how many plates each agency scanned in 2016 and 2017 and how many of those plates were on predetermined “hot lists” of vehicles suspected of being connected to crimes.
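As a rough sketch of the mechanics at issue (plate numbers and locations are invented for illustration), a hot-list check is just a lookup against a predetermined set, while every scan, hit or not, is retained:

```python
# Minimal sketch of an ALPR "hot list" check. All plates and locations
# below are made up; real systems are far more elaborate.

hot_list = {"7ABC123", "4XYZ789"}  # plates flagged as connected to crimes

scans = [
    {"plate": "6DEF456", "location": "Main St & 1st Ave"},
    {"plate": "7ABC123", "location": "Elm St & 9th Ave"},
    {"plate": "2GHI999", "location": "Oak Blvd & 3rd St"},
]

# Every scan is stored, whether or not it matches the hot list. That
# retention is what builds the location database these requests target.
database = list(scans)

hits = [s for s in scans if s["plate"] in hot_list]
print(hits)  # only the scan of plate 7ABC123 matches
```

The ratio of hits to total scans is one of the things our records requests ask agencies to disclose, since it indicates how much of the database concerns people suspected of nothing.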

You can see the full list of agencies and track the progress of each request through the Street-Level Surveillance: ALPR Campaign page on MuckRock.

As Easy As Adding a Friend on Facebook

“Joining the largest law enforcement LPR sharing network is as easy as adding a friend on your favorite social media platform.”

That’s a direct quote from Vigilant Solutions in its promotional materials for its ALPR technology. Through its LEARN system, Vigilant Solutions has made it possible for government agencies—particularly sheriff’s offices and police departments—to grant 24-7, unrestricted database access to hundreds of other agencies around the country.

ALPRs are camera systems that scan every license plate that passes in order to create enormous databases of where people drive and park their cars both historically and in real time. Collected en masse by ALPRs mounted on roadways and vehicles, this data can reveal sensitive information about people, such as where they work, socialize, worship, shop, sleep at night, and seek medical care or other services. ALPR allows your license plate to be used as a tracking beacon and a way to map your social networks.

Here’s the question: who is on your local police department’s and sheriff’s office’s ALPR friend lists?

Perhaps you live in a “sanctuary city.” There’s a very real chance local police are sharing ALPR data with Immigration & Customs Enforcement, Customs & Border Protection, or one of their subdivisions.

Perhaps you live thousands of miles from the South. You’d be surprised to learn that scores of small towns in rural Georgia have round-the-clock access to your ALPR data. This includes towns like Meigs, which serves a population of 1,000 and did not even have full-time police officers until last fall.

In 2017, EFF and the Center for Human Rights and Privacy filed records requests with several dozen law enforcement agencies in California. We found that police departments were routinely sharing ALPR data with a wide variety of agencies in ways that may be difficult to justify. Police often shared with the DEA, FBI, and U.S. Marshals—but they also shared with federal agencies with a less clear interest, such as the U.S. Forest Service, the U.S. Department of Veterans Affairs, and the Air Force base at Fort Eustis. California agencies were also sharing with public universities on the East Coast, airports in Tennessee and Texas, and agencies that manage public assistance programs, like food stamps and indigent health care. In some cases, the records indicate the agencies were sharing with private actors.

Meanwhile, most agencies are connected to an additional network called the National Vehicle Locator System (NVLS), which shares sensitive information with more than 500 government agencies, the identities of which have never been publicly disclosed.

Here are the data sharing documents we obtained in 2017, which we are seeking to update with our new series of requests.

We hope to create a detailed snapshot of the ALPR mass surveillance network linking law enforcement and other government agencies nationwide. Currently, the only entity that has the definitive list is Vigilant Solutions, which, as a private company, is not subject to state or federal public record disclosure laws. So far, the company has not volunteered this information, despite reaping many millions in tax dollars.

Until it does, we’ll keep filing requests.

For more information on ALPRs, visit EFF’s Street-Level Surveillance hub.

Published February 16, 2018 at 07:28PM
Read more on eff.org

EFF: The False Teeth of Chrome’s Ad Filter.

Today Google launched a new version of its Chrome browser with what it calls an “ad filter”—which means that it sometimes blocks ads but is not an ad blocker. EFF welcomes the elimination of the worst ad formats. But Google’s approach here is a band-aid response to the crisis of trust in advertising that leaves massive user privacy issues unaddressed.

Last year, a new industry organization, the Coalition for Better Ads, published user research investigating ad formats responsible for bad ad experiences. The Coalition examined 55 ad formats, of which 12 were deemed unacceptable. These included various full-page takeovers (prestitial, postitial, rollover), autoplay videos with sound, pop-ups of all types, and ad density of more than 35% on mobile. Google is supposed to check sites for the forbidden formats and give offenders 30 days to reform or have all their ads blocked in Chrome. Censured sites can purge the offending ads and request reexamination.
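The mobile ad-density rule is the most mechanical of the standards, so it is easy to sketch. The page model and numbers below are invented; the Coalition’s actual measurement methodology is more involved.

```python
# Sketch of the kind of check implied by the Coalition's mobile standard:
# flag a page whose ad content exceeds 35% of its height. Numbers invented.

AD_DENSITY_LIMIT = 0.35

def ad_density(page_height_px, ad_heights_px):
    """Fraction of the page's vertical extent occupied by ads."""
    return sum(ad_heights_px) / page_height_px

page_height = 4000          # total page height in pixels
ads = [600, 500, 500]       # heights of the ad slots on the page

density = ad_density(page_height, ads)
print(f"{density:.0%}")            # 40%
print(density > AD_DENSITY_LIMIT)  # True -> this page would be censured
```

A site in this position would get 30 days to trim its ad load below the threshold before Chrome blocked all of its ads.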

The Coalition for Better Ads Lacks a Consumer Voice

The Coalition involves giants such as Google, Facebook, and Microsoft, along with ad trade organizations, adtech companies, and large advertisers. Criteo, a retargeter with a history of contested user privacy practices, is also involved, as is content marketer Taboola. Consumer and digital rights groups are not represented in the Coalition.

This industry membership explains the limited horizon of the group, which ignores the non-format factors that annoy and drive users to install content blockers. While people are alienated by aggressive ad formats, the problem has other dimensions. Whether it’s the use of ads as a vector for malware, the consumption of mobile data plans by bloated ads, or the monitoring of user behavior through tracking technologies, users have a lot of reasons to take action and defend themselves.

But these elements are ignored. Privacy, in particular, figured neither in the tests commissioned by the Coalition, nor in their three published reports that form the basis for the new standards. This is no surprise given that participating companies include the four biggest tracking companies: Google, Facebook, Twitter, and AppNexus.

Stopping the Biggest Boycott in History

Some commentators have interpreted ad blocking as the “biggest boycott in history” against the abusive and intrusive nature of online advertising. Now the Coalition aims to slow the adoption of blockers by enacting minimal reforms. PageFair, an adtech company that monitors ad blocker use, estimates 600 million active users of blockers. Some see no ads at all, but most users of the two largest blockers, AdBlock and Adblock Plus, see ads whitelisted under the Acceptable Ads program. These companies leverage their position as gatekeepers to the user’s eyeballs, obliging Google to buy back access to the blocked part of their user base through payments under Acceptable Ads. This is expensive (a German newspaper claims a figure as high as 25 million euros) and is viewed with disapproval by many advertisers and publishers.

Industry actors now understand that adblocking’s momentum is rooted in the industry’s own failures, and the Coalition is a belated response to this. While nominally an exercise in self-regulation, the enforcement of the standards through Chrome is a powerful stick. By eliminating the most obnoxious ads, they hope to slow the growth of independent blockers.

What Difference Will It Make?

Coverage of Chrome’s new feature has focused on the impact on publishers, and on doubts about the Internet’s biggest advertising company enforcing ad standards through its dominant browser. Google has sought to mollify publishers by stating that only 1% of sites tested have been found non-compliant, and has heralded the changed behavior of major publishers like the LA Times and Forbes as evidence of success. But if so few sites fall below the Coalition’s bar, it seems unlikely to be enough to dissuade users from installing a blocker. Eyeo, the company behind Adblock Plus, has a lot to lose should this strategy be successful. Eyeo argues that Chrome will only filter 17% of the 55 ad formats tested, whereas 94% are blocked by Adblock Plus.

User Protection or Monopoly Power?

The marginalization of egregious ad formats is positive, but should we be worried by this display of power by Google? In the past, browser companies such as Opera and Mozilla took the lead in combating nuisances such as pop-ups, which was widely applauded. Those browsers were not active in advertising themselves. The situation is different with Google, the dominant player in the ad and browser markets.

Google exploiting its browser dominance to shape the conditions of the advertising market raises some concerns. It is notable that the ads Google places on videos in YouTube (in-stream pre-roll) were not user-tested and are exempted from the prohibition on autoplay ads with sound. This risk of a conflict of interest distinguishes the Coalition for Better Ads from, for example, Chrome’s monitoring of sites associated with malware and related user protection notifications.

There is also the risk that Google may change position with regard to third-party extensions that give users more powerful options. Recent history justifies such concern: Disconnect and Ad Nauseam have been excluded from the Chrome Store for alleged violations of the Store’s rules. (Ironically, Adblock Plus has never experienced this problem.)

Chrome Falls Behind on User Privacy 

This move from Google will reduce the frequency with which users run into the most annoying ads. Regardless, it fails to address the larger problem of tracking and privacy violations. Indeed, many of the Coalition’s members were active opponents of Do Not Track at the W3C, which would have offered privacy-conscious users an easy opt-out. The resulting impression is that the ad filter is really about the industry trying to solve its ad blocking problem, not about addressing users’ concerns.

Chrome and Microsoft Edge are now the last major browsers not to offer integrated tracking protection. Firefox introduced this feature last November in Quantum, enabled by default in Private Browsing mode with the option to enable it universally. Meanwhile, Apple’s Safari browser has Intelligent Tracking Prevention, Opera ships with an ad/tracker blocker for users to activate, and Brave has user privacy at the center of its design. It is a shame that Chrome’s user security and safety team, widely admired in the industry, is empowered only to offer protection against outside attackers, but not against commercial surveillance conducted by Google itself and other advertisers. If you are using Chrome (1), you need EFF’s Privacy Badger or uBlock Origin to fill this gap.

(1) This article does not address other problematic aspects of Google services. When users sign into Gmail, for example, their activity across other Google products is logged. Worse yet, when users are signed into Chrome their full browser history is stored by Google and may be used for ad targeting. This account data can also be linked to DoubleClick’s cookies. The storage of browser history is part of Sync (enabling users to access their data across devices), which can also be disabled. If users want to use Sync but exclude the data from ad targeting by Google, this can be selected under ‘Web And App Activity’ in Activity controls. There is an additional opt-out from Ad Personalization in Privacy Settings.

Published February 16, 2018 at 03:00AM
Read more on eff.org

EFF: Federal Judge Says Embedding a Tweet Can Be Copyright Infringement

Rejecting years of settled precedent, a federal court in New York has ruled [PDF] that you could infringe copyright simply by embedding a tweet in a web page. Even worse, the logic of the ruling applies to all in-line linking, not just embedding tweets. If adopted by other courts, this legally and technically misguided decision would threaten millions of ordinary Internet users with infringement liability.

This case began when Justin Goldman accused online publications, including Breitbart, Time, Yahoo, Vox Media, and the Boston Globe, of copyright infringement for publishing articles that linked to a photo of NFL star Tom Brady. Goldman took the photo, someone else tweeted it, and the news organizations embedded a link to the tweet in their coverage (the photo was newsworthy because it showed Brady in the Hamptons while the Celtics were trying to recruit Kevin Durant). Goldman said those stories infringe his copyright.

Courts have long held that copyright liability rests with the entity that hosts the infringing content—not someone who simply links to it. The linker generally has no idea that it’s infringing, and isn’t ultimately in control of what content the server will provide when a browser contacts it. This “server test,” originally from a 2007 Ninth Circuit case called Perfect 10 v. Amazon, provides a clear and easy-to-administer rule. It has been a foundation of the modern Internet.
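The technical fact underlying the server test is easy to state in code: an embedding page merely points at a URL, and the browser fetches the image bytes from whichever server that URL names. A sketch with hypothetical URLs:

```python
from urllib.parse import urlparse

# Sketch of the fact the "server test" turns on: the page that embeds an
# image and the server that actually stores and transmits it are usually
# different parties. Both URLs below are hypothetical.

page_url = "https://example-news-site.com/article"
embedded_image_url = "https://pbs.twimg.com/media/photo.jpg"

page_host = urlparse(page_url).hostname
image_host = urlparse(embedded_image_url).hostname

# Under Perfect 10 v. Amazon, liability follows the host of the copy
# (image_host), not the page that merely links to it (page_host).
print(page_host == image_host)  # False: a third party serves the image
```

The embedding page never possesses or transmits a copy of the photo; the reader’s browser requests it directly from the third-party server, which is why the server test places liability with the host.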

Judge Katherine Forrest rejected the Ninth Circuit’s server test, based in part on a surprising approach to the process of embedding. The opinion describes the simple process of embedding a tweet or image—something done every day by millions of ordinary Internet users—as if it were a highly technical process done by “coders.” That process, she concluded, put publishers, not servers, in the driver’s seat:

[W]hen defendants caused the embedded Tweets to appear on their websites, their actions violated plaintiff’s exclusive display right; the fact that the image was hosted on a server owned and operated by an unrelated third party (Twitter) does not shield them from this result.

She also argued that Perfect 10 (which concerned Google’s image search) could be distinguished because in that case the “user made an active choice to click on an image before it was displayed.” But that was not a detail that the Ninth Circuit relied on in reaching its decision. The Ninth Circuit’s rule—which looks at who actually stores and serves the images for display—is far more sensible.

If this ruling is appealed (there would likely need to be further proceedings in the district court first), the Second Circuit will be asked to consider whether to follow Perfect 10 or Judge Forrest’s new rule. We hope that today’s ruling does not stand. If it did, it would threaten the ubiquitous practice of in-line linking that benefits millions of Internet users every day.

Published February 16, 2018 at 03:12AM
Read more on eff.org

EFF: Customs and Border Protection’s Biometric Data Snooping Goes Too Far

The U.S. Department of Homeland Security (DHS), Customs and Border Protection (CBP) Privacy Office, and Office of Field Operations recently invited privacy stakeholders—including EFF and the ACLU of Northern California—to participate in a briefing and update on how the CBP is implementing its Biometric Entry/Exit Program.

As we’ve written before, biometrics systems are designed to identify or verify the identity of people by using their intrinsic physical or behavioral characteristics. Because biometric identifiers are by definition unique to an individual person, government collection and storage of this data poses unique threats to the privacy and security of individual travelers.

EFF has many concerns about the government collecting and using biometric identifiers, and specifically, we object to the expansion of several DHS programs subjecting Americans and foreign citizens to facial recognition screening at international airports. EFF appreciated the opportunity to share these concerns directly with CBP officers and we hope to work with CBP to allow travelers to opt out of the program entirely.

You can read the full letter we sent to CBP here.

Published February 16, 2018 at 02:21AM
Read more on eff.org

EFF: The Revolution and Slack

The revolution will not be televised, but it may be hosted on Slack. Community groups, activists, and workers in the United States are increasingly gravitating toward the popular collaboration tool to communicate and coordinate efforts. But many of the people using Slack for political organizing and activism are not fully aware of the ways Slack falls short in serving their security needs. Slack has yet to support this community in its default settings or in its ongoing design.  

We urge Slack to recognize the community organizers and activists using its platform and take more steps to protect them. In the meantime, this post provides context and things to consider when choosing a platform for political organizing, as well as some tips about how to set Slack up to best protect your community.

The Mismatch

Slack is designed as an enterprise system built for business settings. That results in a sometimes dangerous mismatch between the needs of the audience the company aims to serve and the needs of the important, often targeted community groups and activists who are also using it.

Two things that EFF tends to recommend for digital organizing are 1) using encryption as extensively as possible, and 2) self-hosting, so that a governmental authority has to get a warrant for your premises in order to access your information. The central thing to understand about Slack (and many other online services) is that it provides neither of these protections. This means that if you use Slack as a central organizing tool, Slack stores and is able to read all of your communications, as well as identifying information for everyone in your workspace.

We know that for many, especially small organizations, self-hosting is not a viable option, and using strong encryption consistently is hard. Meanwhile, Slack is easy, convenient, and useful. Organizations have to balance their own risks and benefits. Regardless of your situation, it is important to understand the risks of organizing on Slack.

First, The Good News

Slack follows several best practices in standing up for users. Slack does require a warrant for content stored on its servers. Further, it promises not to voluntarily provide information to governments for surveillance purposes. Slack also promises to require the FBI to go to court to enforce gag orders issued with National Security Letters, a troubling form of subpoena. Additionally, federal law prohibits Slack from handing over content (but not metadata like membership lists) in response to civil subpoenas.

Slack also stores your data in encrypted form, which means that if it leaks or is stolen, it is not readable. This is excellent protection if you are worried about attacks and data breaches. It is not useful, however, if you are worried about governments or other entities putting pressure on Slack to hand over your information.
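The distinction can be sketched with toy code (XOR stands in for real ciphers here, and none of this reflects Slack’s actual implementation): encryption at rest protects stolen data, but because the provider holds the key, it does not protect against a compelled provider.

```python
import secrets

# Toy contrast (NOT real cryptography, and not Slack's actual design)
# between encryption at rest and end-to-end encryption.

def xor(data, key):
    return bytes(d ^ k for d, k in zip(data, key))

message = b"organizing notes"

# Encryption at rest: the provider generates and keeps the key. A thief
# who steals only the stored ciphertext learns nothing...
provider_key = secrets.token_bytes(len(message))
stored = xor(message, provider_key)
# ...but the provider can still decrypt on demand, e.g. for a warrant.
assert xor(stored, provider_key) == message

# End-to-end: only the endpoints hold the key. The provider stores
# ciphertext it has no way to read, so there is no plaintext to hand over.
user_key = secrets.token_bytes(len(message))  # never leaves the users' devices
stored_e2e = xor(message, user_key)
```

This is why encryption at rest is excellent against breaches but irrelevant to legal compulsion: the question is not whether the data is encrypted, but who holds the key.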

Risks With Slack In Particular

And now the downsides. These are things that Slack could change, and EFF has called on them to do so.

Slack can turn over content to law enforcement in response to a warrant. Slack’s servers store everything you do on its platform. Since Slack can read this information on its servers—that is, since it’s not end-to-end encrypted—Slack can be forced to hand it over in response to law enforcement requests. Slack does require warrants to turn over content, and can resist warrants it considers improper or overbroad. But if Slack complies with a warrant, users’ communications are readable on Slack’s servers and available for it to turn over to law enforcement.

Slack may fail to notify users of government information requests. When the government comes knocking on a website’s door for user data, that website should, at a minimum, provide users with timely, detailed notice of the request. Slack’s policy in this regard is lacking. Although it states that it will provide advance notice to users of government demands, it allows for a broad set of exceptions to that standard. This is something that Slack could and should fix, but it refuses to even explain why it has included these loopholes.

Slack content can make its way into your email inbox. Signing up for a Slack workspace also signs you up, by default, for email notifications when you are directly mentioned or receive a direct message. These email notifications can include the content of those mentions and messages. If you expect sensitive messages to stay in the Slack workspace where they were written and shared, this might be an unpleasant surprise. With these defaults in place, you have to trust not only Slack but also your email provider with your own and others’ private content.

Risks With Third-Party Platforms in General

Many of the risks that come with using Slack are also risks that come with using just about any third-party online platform. Most of these are problems with the law that we all must work on to fix together. Nevertheless, organizers must consider these risks when deciding whether Slack or any other online third-party platform is right for them.

Much of your sensitive information is not subject to a warrant requirement. While a warrant is required for content, some of the most sensitive information held by third-party platforms—including the identities and locations of the people in a Slack workspace—is considered “non-content” and not currently protected by the warrant requirement federally and in most states. If the identities of your organization’s membership are sensitive, consider whether Slack or any other online third party is right for you.

Companies can be legally prevented from giving users notice. While Slack and many other platforms have promised to require the FBI to justify controversial National Security Letter gags, these gags may still be enforced in many cases. In addition, many warrants and other forms of legal process come with different kinds of court-ordered gags, leaving companies with no ability to notify you that the government has seized your data.

Slack workspaces are subject to civil discovery. Government is not the only entity that could seek information from Slack or other third parties. Private companies and other litigants have sought, and obtained, information from hosts ranging from Google to Microsoft to Facebook and Twitter. While federal law prevents them from handing over customer content in civil discovery, it does not protect “non-content” records, such as membership identities and locations.

A group is only as trustworthy as its members. Any group environment is only as trustworthy as the people who participate in it. Group members can share and even screenshot content, so it is important to establish guidelines and expectations that all members agree on. Establishing trusted admins or moderators to facilitate these agreements can also be beneficial.

Making Slack as Secure as Possible

If using Slack is still right for you, you can take steps to harden your security settings and make your closed workspaces as private as possible.

The lowest-hanging privacy fruit is to change a workspace’s retention settings. By default, Slack retains all the messages in a workspace or channel (including direct messages) for as long as the workspace exists. The same goes for any files submitted to the workspace. Workspace admins have the ability to set shorter retention periods, which can mean less content available for government requests or legal inquiries.

Users can also address the email-leaking concern described above by minimizing email notification settings. This works best if all of the members of a group agree to do it, since email notifications can expose multiple users’ messages. 

The privacy of a Slack workspace also relies on the security of individual members’ accounts. Setting up two-factor authentication can add an extra layer of security to an account, and admins even have the option of making two-factor authentication mandatory for all the members of a workspace.

However, no settings tweak can completely mitigate the concerns described above. We strongly urge Slack to step up to protect the high-risk groups that use it alongside its enterprise customers. And all of us must stand together to push for changes to the law.

Technology should stand with those who wish to make change in our world. Slack has made a great tool that can help, and it’s time for Slack to step up with its policies.

Published February 14, 2018 at 06:44PM
Read more on eff.org

Law Enforcement Use of Face Recognition Systems Threatens Civil Liberties, Disproportionately Affects People of Color: EFF Report

Independent Oversight, Privacy Protections Are Needed

San Francisco, California—Face recognition—fast becoming law enforcement’s surveillance tool of choice—is being implemented with little oversight or privacy protections, leading to faulty systems that will disproportionately impact people of color and may implicate innocent people for crimes they didn’t commit, says an Electronic Frontier Foundation (EFF) report released today.

Face recognition is rapidly creeping into modern life, and face recognition systems may one day be capable of capturing the faces of people—often without their knowledge—as they walk down the street, enter stores, stand in line at the airport, attend sporting events, drive their cars, and use public spaces. Researchers at Georgetown Law estimated that one in every two American adults—117 million people—is already in a law enforcement face recognition system.

This kind of surveillance will have a chilling effect on Americans’ willingness to exercise their rights to speak out and be politically engaged, the report says. Law enforcement has already used face recognition at political protests, and may soon use face recognition with body-worn cameras, to identify people in the dark, and to project what someone might look like from a police sketch or even a small sample of DNA.

Face recognition employs computer algorithms to pick out details about a person’s face from a photo or video to form a template. As the report explains, police use face recognition to identify unknown suspects by comparing their photos to images stored in databases and to scan public spaces to try to find specific pre-identified targets.
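The compare-to-template step can be illustrated with a minimal sketch. This is not the algorithm any particular vendor uses (those are typically proprietary); it simply assumes each face has been reduced to a numeric feature vector (a “template”) and that a probe image is matched against a database by similarity, with a tunable threshold deciding what counts as a match:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(probe, database, threshold=0.9):
    """Return the ID of the most similar template at or above the
    threshold, or None if nothing in the database is close enough.

    `database` maps identity IDs to template vectors. Lowering the
    threshold finds more candidates but produces more false positives;
    raising it misses more true matches.
    """
    best_id, best_score = None, threshold
    for identity, template in database.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id
```

The threshold is where accuracy trade-offs enter: every deployed system must pick a cutoff, and no cutoff eliminates both false positives and false negatives.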

But no face recognition system is 100 percent accurate, and false positives—when a person’s face is incorrectly matched to a template image—are common. Research shows that face recognition misidentifies African Americans and ethnic minorities, young people, and women at higher rates than whites, older people, and men, respectively. And because of well-documented racially-biased police practices, all criminal databases—including mugshot databases—include a disproportionate number of African-Americans, Latinos, and immigrants.

For both reasons, inaccuracies in facial recognition systems will disproportionately affect people of color.
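The disproportionate-impact point is partly a base-rate effect, which a back-of-the-envelope calculation makes concrete. The numbers below are purely illustrative (they are not drawn from the report): even a small per-comparison false positive rate, multiplied across a large database, yields many false matches, and those matches concentrate in whichever group is over-represented in the database.

```python
def expected_false_positives(database_size, group_share, false_positive_rate):
    """Expected number of incorrect matches against members of one group
    when a single probe photo is searched against the whole database.

    group_share is that group's fraction of database entries;
    false_positive_rate is the per-comparison error rate.
    """
    return database_size * group_share * false_positive_rate

# Illustrative numbers only: a 1,000,000-template database in which one
# group makes up 40% of entries, searched at a 0.1% false positive rate,
# yields an expected 400 false matches against that group per search.
errors_in_group = expected_false_positives(1_000_000, 0.40, 0.001)
```

A group over-represented in the database absorbs proportionally more of these errors, before even accounting for the higher misidentification rates the research describes.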

“The FBI, which has access to at least 400 million images and is the central source for facial recognition identification for federal, state, and local law enforcement agencies, has failed to address the problem of false positives and inaccurate results,” said EFF Senior Staff Attorney Jennifer Lynch, author of the report. “It has conducted few tests to ensure accuracy and has done nothing to ensure its external partners—federal and state agencies—are not using face recognition in ways that allow innocent people to be identified as criminal suspects.”

Lawmakers, regulators, and policy makers should take steps now to limit face recognition collection and subject it to independent oversight, the report says. Legislation is needed to place meaningful checks on government use of face recognition, including rules limiting retention and sharing, requiring notification when face prints are collected, ensuring robust security procedures to prevent data breaches, and establishing legal processes governing when law enforcement may collect face images from the public without their knowledge, the report concludes.

“People should not have to worry that they may be falsely accused of a crime because an algorithm mistakenly matched their photo to a suspect. They shouldn’t have to worry that their data will end up in the hands of identity thieves because face recognition databases were breached. They shouldn’t have to fear that their every move will be tracked if face recognition is linked to the networks of surveillance cameras that blanket many cities,” said Lynch. “Without meaningful legal protections, this is where we may be headed.”

For the report:
https://www.eff.org/wp/law-enforcement-use-face-recognition

For more on face recognition:
https://www.eff.org/document/facial-recognition-one-pager

Contact: Jennifer Lynch, Senior Staff Attorney

Published February 15, 2018 at 04:45PM
Read more on eff.org