EFF: Attack on CCleaner Highlights the Importance of Securing Downloads and Maintaining User Trust

Some of the most worrying kinds of attacks are ones that exploit users’ trust in the systems and software they use every day. Yesterday, Cisco’s Talos security team uncovered just that kind of attack in the computer cleanup software CCleaner. Download servers at Avast, the company that owns CCleaner, had been compromised to distribute malware inside CCleaner 5.33 updates for at least a month. Avast estimates that over 2 million users downloaded the affected update. Even worse, CCleaner’s popularity with journalists and human rights activists means that particularly vulnerable users are almost certainly among that number. Avast has advised CCleaner Windows users to update their software immediately.

This is often called a “supply chain” attack, referring to all the steps software takes to get from its developers to its users. As more and more users get better at bread-and-butter personal security like enabling two-factor authentication and detecting phishing, malicious hackers are forced to stop targeting users and move “up” the supply chain to the companies and developers that make software. This means that developers need to get into the practice of “distrusting” their own infrastructure, ensuring safer software releases with reproducible builds that allow third parties to double-check whether released binary and source packages correspond. The goal should be to secure internal development and release infrastructure to the point that no hijacking, even by a malicious actor inside the company, can slip through unnoticed.
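
As an illustration of what reproducible-build verification can look like, the sketch below compares the SHA-256 digest of a vendor’s published binary against an independently rebuilt copy. This is a minimal example with hypothetical file names, not Avast’s actual process; any mismatch is simply a signal that the release pipeline deserves scrutiny.

```python
import hashlib

def sha256_digest(path):
    """Hash a file in chunks so large binaries don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical file names: the vendor's published installer and a binary
# rebuilt from the same source by an independent third party.
official = sha256_digest("ccleaner-5.33-official.exe")
rebuilt = sha256_digest("ccleaner-5.33-rebuilt.exe")

if official == rebuilt:
    print("Digests match: the release corresponds to the audited source.")
else:
    print("MISMATCH: the published binary differs from the independent build.")
```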

The harms of this hack extend far beyond the 2 million users who were directly affected. Supply chain attacks undermine users’ trust in official sources, and take advantage of the security safeguards that users and developers rely on. Software updates like the one Avast released for CCleaner are typically signed with the developer’s un-spoof-able cryptographic key. But the hackers appear to have penetrated Avast’s download servers before the software update was signed, essentially hijacking Avast’s update distribution process and punishing users for the security best practice of updating their software.
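
For context, client-side update verification normally looks something like the sketch below: the installer accepts an update only if it verifies against the developer’s public key. This is a minimal example using an Ed25519 key with the `cryptography` library, not Avast’s actual signing scheme. Because the CCleaner payload was tampered with before signing, a check like this would still have passed, which is what makes this class of attack so damaging.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def update_is_authentic(publisher_key_bytes: bytes,
                        update_bytes: bytes,
                        signature: bytes) -> bool:
    """Accept the update only if the publisher's signature verifies."""
    publisher_key = Ed25519PublicKey.from_public_key_bytes(publisher_key_bytes)
    try:
        publisher_key.verify(signature, update_bytes)
        return True
    except InvalidSignature:
        return False
```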

Despite observations that these kinds of attacks are on the rise, the reality is that they remain extremely rare when compared to other kinds of attacks users might encounter. This and other supply chain attacks should not deter users from updating their software. Like any security decision, this is a trade-off: for every attack that might take advantage of the supply chain, there are one hundred attacks that will take advantage of users not updating their software.

For users, sticking with trusted, official software sources and updating your software whenever prompted remains the best way to protect yourself from software attacks. For developers and software companies, the attack on CCleaner is a reminder of the importance of securing every link of the download supply chain.

Published September 19, 2017 at 09:16PM
Read more on eff.org

EFF: Live Blog: Senate Commerce Committee Discusses SESTA

There’s a bill in Congress that would be a disaster for free speech online. The Senate Committee on Commerce, Science, and Transportation is holding a hearing on that bill, and we’ll be blogging about it as it happens.

The Stop Enabling Sex Traffickers Act (SESTA) might sound virtuous, but it’s the wrong solution to a serious problem. The authors of SESTA say it’s designed to fight sex trafficking, but the bill wouldn’t punish traffickers. What it would do is threaten legitimate online speech.

Join us at 7:30 a.m. Pacific time (10:30 Eastern) on Tuesday, right here and on the @EFFLive Twitter account. We’ll let you know how to watch the hearing, and we’ll share our thoughts on it as it happens. In the meantime, please take a moment to tell your members of Congress to Stop SESTA.

Take Action

Tell Congress: Stop SESTA.

Published September 19, 2017 at 02:27AM
Read more on eff.org

EFF: The Cybercrime Convention’s New Protocol Needs to Uphold Human Rights

As part of an ongoing attempt to help law enforcement obtain data across international borders, the Council of Europe’s Cybercrime Convention—finalized in the weeks following 9/11, and ratified by the United States and over 50 countries around the world—is back on the global lawmaking agenda. This time, the Council’s Cybercrime Convention Committee (T-CY) has initiated a process to draft a second additional protocol to the Convention—a new text which could allow direct foreign law enforcement access to data stored in other countries’ territories. EFF has joined EDRi and a number of other organizations in a letter to the Council of Europe, highlighting some anticipated concerns with the upcoming process and seeking to ensure civil society concerns are considered in the new protocol. This new protocol needs to preserve the Council of Europe’s stated aim of upholding human rights, and must not undermine privacy or the integrity of our communication networks.

How the Long Arm of Law Reaches into Foreign Servers

Thanks to the internet, individuals and their data increasingly reside in different jurisdictions: your email might be stored on a Google server in the United States, while your shared Word documents might be stored by Microsoft in Ireland. Law enforcement agencies across the world have sought to gain access to this data, wherever it is held. That means police in one country frequently seek to extract personal, private data from servers in another.

Currently, the primary international mechanism for facilitating governmental cross border data access is the Mutual Legal Assistance Treaty (MLAT) process, a series of treaties between two or more states that create a formal basis for cooperation between designated authorities of signatories. These treaties typically include some safeguards for privacy and due process, most often the safeguards of the country that hosts the data.

The MLAT regime includes steps to protect privacy and due process, but frustrated agencies have increasingly sought to bypass it, either by cross-border hacking or by leaning on large service providers in foreign jurisdictions to hand over data voluntarily.

The legalities of cross-border hacking remain very murky, and its operation is the very opposite of transparent and proportionate. Meanwhile, voluntary cooperation between service providers and law enforcement occurs outside the MLAT process and without any clear accountability framework. The primary window of insight into its scope and operation is the annual Transparency Reports voluntarily issued by some companies such as Google and Twitter.

Hacking often blatantly ignores the laws and rights of a foreign state, but voluntary data handovers can be used to bypass domestic legal protections too.  In Canada, for example, the right to privacy includes rigorous safeguards for online anonymity: private Internet companies are not permitted to identify customers without prior judicial authorization. By identifying often sensitive anonymous online activity directly through the voluntary cooperation of a foreign company not bound by Canadian privacy law, law enforcement agents can effectively bypass this domestic privacy standard.

Faster, but not Better: Bypassing MLAT

The MLAT regime has been criticized as slow and inefficient. Law enforcement officers have claimed that they have to wait anywhere from 6 to 10 months—the reported average time frame for receiving data through an MLAT request—for data necessary to their local investigations. Much of this delay, however, is attributable to a lack of adequate resources, streamlining, and prioritization for the huge increase in MLAT requests for data held in the United States, plus the absence of adequate training for law enforcement officers seeking to rely on another state’s legal search and seizure powers.

Instead of just working to make the MLAT process more effective, the T-CY committee is seeking to create a parallel mechanism for cross-border cooperation. While the process is still in its earliest stages, many are concerned that the resulting proposals will replicate many of the problems in the existing regime, while adding new ones.

What the New Protocol Might Contain

The Terms of Reference for the drafting of this new second protocol reveal some areas that may be included in the final proposal.

Simplified mechanisms for cross border access

T-CY has flagged a number of new mechanisms it believes will streamline cross-border data access. The terms of reference mention a ‘simplified regime’ for legal assistance with respect to subscriber data. Such a regime could be highly controversial if it compelled companies to identify anonymous online activity without prior judicial authorization. The terms of reference also envision the creation of “international production orders.” Presumably these would be orders issued by a court in one jurisdiction under its own standards, which Internet companies in other jurisdictions would nonetheless have to respect. Such mechanisms could be problematic where they do not respect the privacy and due process rights of both jurisdictions.

Direct cooperation

The terms of reference also call for “provisions allowing for direct cooperation with service providers in other jurisdictions with regard to requests for [i] subscriber information, [ii] preservation requests, and [iii] emergency requests.” These mechanisms would be permissive, clearing the way for companies in one state to voluntarily cooperate with certain types of requests issued by another, even in the absence of any form of judicial authorization.

Each of the proposed direct cooperation mechanisms could be problematic. Preservation requests are not controversial per se. Companies often have standard retention periods for different types of data sets. Preservation orders are intended to extend these so that law enforcement have sufficient time to obtain proper legal authorization to access the preserved data. However, preservation should not be undertaken frivolously. It can carry an accompanying stigma, and exposes affected individuals’ data to greater risk if a security breach occurs during the preservation period. This is why some jurisdictions require reasonable suspicion and court orders as requirements for preservation orders.

Direct voluntary cooperation on emergency matters is challenging as well. While in such instances there is little time to engage the judicial apparatus, and most states recognize direct access to private customer data in emergency situations, such access can still be subject to controversial overreach. This potential for overreach, and even abuse, becomes far higher where there is a disconnect between standards in requesting and responding jurisdictions.

Direct cooperation in identifying customers can be equally controversial. Anonymity is critical to privacy in digital contexts. Some data protection laws (such as Canada’s federal privacy law) prevent Internet companies from voluntarily providing subscriber data to law enforcement.

Safeguards

The terms of reference also envision the adoption of “safeguards”. The scope and nature of these will be critical. Indeed, one of the strongest criticisms of the original Cybercrime Convention has been its lack of specific protections and safeguards for privacy and other human rights. The EDRi letter calls for adherence to the Council of Europe’s data protection regime, Convention 108, as a minimum prerequisite to participation in the envisioned regime for cross-border access, which would provide some basis for shared privacy protection. The letter also calls for detailed statistical reporting and other safeguards.

What’s next?

On 18 September, the T-CY Bureau will meet with European Digital Rights (EDRi) to discuss the protocol. The first meeting of the Drafting Group will be held on 19 and 20 September. The draft protocol will be prepared and finalized by the T-CY in closed session.

Law enforcement agencies are granted extraordinary powers to invade privacy in order to investigate crime. This proposed second protocol to the Cybercrime Convention must ensure that the highest privacy standards and due process protections adopted by signatory states remain intact.

We believe that the Council of Europe T-CY Committee — Netherlands, Romania, Canada, Dominican Republic, Estonia, Mauritius, Norway, Portugal, Sri Lanka, Switzerland, and Ukraine — should concentrate first on fixes to the existing MLAT process, and should ensure that this new initiative does not become an exercise in harmonization to the lowest common denominator of international privacy protection. We’ll be keeping track of what happens next.

Published September 19, 2017 at 01:10AM
Read more on eff.org

EFF: EFF to Court: The First Amendment Protects the Right to Record First Responders

The First Amendment protects the right of members of the public to record first responders addressing medical emergencies, EFF argued in an amicus brief filed in the federal trial court for the Northern District of Texas. The case, Adelman v. DART, concerns the arrest of a Dallas freelance press photographer for criminal trespass after he took photos of a man receiving emergency treatment in a public area.

EFF’s amicus brief argues that people frequently use electronic devices to record and share photos and videos. This often includes newsworthy recordings of on-duty police officers and emergency medical services (EMS) personnel interacting with members of the public. These recordings have informed the public’s understanding of emergencies and first responder misconduct.

EFF’s brief was joined by a broad coalition of media organizations: the Freedom of the Press Foundation, the National Press Photographers Association, the PEN American Center, the Radio and Television Digital News Association, Reporters Without Borders, the Society of Professional Journalists, the Texas Association of Broadcasters, and the Texas Press Association.

Our local counsel are Thomas Leatherbury and March Fuller of Vinson & Elkins L.L.P.

EFF’s new brief builds on our amicus brief filed last year before the Third Circuit Court of Appeals in Fields v. Philadelphia. There, we successfully argued that the First Amendment protects the right to use electronic devices to record on-duty police officers.

Adelman, a freelance journalist, has provided photographs to media outlets for nearly 30 years. He heard a call for paramedics to respond to a K2 overdose victim at a Dallas Area Rapid Transit (“DART”) station. When he arrived, he believed the incident might be of public interest and began photographing the scene. A DART police officer demanded that Adelman stop taking photos. Despite Adelman’s assertion that he was well within his constitutional rights, the DART officer, with approval from her supervisor, arrested Adelman for criminal trespass.

Adelman sued the officer and DART. EFF’s amicus brief supports his motion for summary judgment.

Published September 19, 2017 at 12:57AM
Read more on eff.org

EFF: Security Education: What’s New on Surveillance Self-Defense

Since 2014, our digital security guide, Surveillance Self-Defense (SSD), has taught thousands of Internet users how to protect themselves from surveillance, with practical tutorials and advice on the best tools and expert-approved best practices. After hearing growing concerns among activists following the 2016 US presidential election, we pledged to build, update, and expand SSD and our other security education materials to better advise people, both within and outside the United States, on how to protect their online digital privacy and security.

While there’s still work to be done, here’s what we’ve been up to over the past several months.

SSD Guide Audit

SSD is consistently updated based on evolving technology, current events, and user feedback, but this year our SSD guides are going through a more in-depth technical and legal review to ensure they’re still relevant and up-to-date. We’ve also put our guides through a “simple English” review in order to make them more usable for digital security novices and veterans alike. We’ve worked to make them a little less jargon-filled, and more straightforward. That helps everyone, whether English is their first language or not. It also makes translation and localization easier: that’s important for us, as SSD is maintained in eleven languages.

Many of these changes are based on reader feedback. We’d like to thank everyone for all the messages you’ve sent and encourage you to continue providing notes and suggestions, which helps us preserve SSD as a reliable resource for people all over the world. Please keep in mind that some feedback may take longer to incorporate than others, so if you’ve made a substantive suggestion, we may still be working on it!

As of today, we’ve updated the following guides and documents:

Assessing your Risks

Formerly known as “Threat Modeling,” our Assessing your Risks guide was updated to be less intimidating to those new to digital security. Threat modeling is the primary and most important thing we teach at our security trainings, and because it’s such a fundamental skill, we wanted to ensure all users were able to grasp the concept. This guide walks users through how to conduct their own personal threat modeling assessment. We hope users and trainers will find it useful.

SSD Glossary Updates

SSD hosts a glossary of technical terms that users may encounter when using the security guide. We’ve added new terms and intend to expand this resource over the coming months.

How to: Avoid Phishing Attacks

With new updates, this guide helps users identify phishing attacks when they encounter them and delves deeper into the types of phishing attacks that are out there. It also outlines five practical ways users can protect themselves against such attacks.

One new tip we added suggests using a password manager with autofill. Password managers that auto-fill passwords keep track of which sites those passwords belong to. While it’s easy for a human to be tricked by fake login pages, password managers are not tricked in the same way. Check out the guide for more details, and for other tips to help defend against phishing.
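
The core of that protection is that autofill is keyed to the origin where a credential was saved. Here is a minimal sketch of the idea, with hypothetical vault data; it is not the code of any particular password manager.

```python
from urllib.parse import urlsplit

# Hypothetical vault: each credential is stored with the hostname it was saved on.
VAULT = {"accounts.example.com": ("alice", "correct horse battery staple")}

def offer_autofill(current_url):
    """Only offer credentials whose stored hostname matches the page's hostname."""
    host = urlsplit(current_url).hostname or ""
    return VAULT.get(host)  # None for any non-matching (e.g. look-alike) domain

print(offer_autofill("https://accounts.example.com/login"))  # credentials offered
print(offer_autofill("https://accounts.examp1e.com/login"))  # look-alike domain: nothing to fill
```

Real password managers typically match on the registered domain rather than the exact hostname, but the principle is the same: a spoofed look-alike page never matches the stored entry, so there is nothing to hand over.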

How to: Use Tor

We updated How to: Use Tor for Windows and How to: use Tor for macOS and added a new How to: use Tor for Linux guide to SSD. These guides all include new screenshots and step-by-step instructions for how to install and use the Tor Browser—perfect for people who might need occasional anonymity and privacy when accessing websites.

How to: Install Tor Messenger (beta) for macOS

We’ve added two new guides on installing and using Tor Messenger for instant communications. In addition to routing your conversations over the Tor network, which hides your location and can protect your anonymity, Tor Messenger ensures messages are sent strictly with Off-the-Record (OTR) encryption. This means your chats with friends will only be readable by them—not a third party or service provider. Finally, we believe Tor Messenger employs best practices in security where other XMPP messaging apps fall short. We plan to add installation guides for Windows and Linux in the future.

Other guides we’ve updated include circumventing online censorship, and using two-factor authentication.

What’s coming up?

Continuation of our audit: This audit is ongoing, so stay tuned for more security guide updates over the coming months, as well as new additions to the SSD glossary.

Translations: As we continue to audit the guides, we’ll be updating our translated content. If you’re interested in volunteering as a translator, check out EFF’s Volunteer page.

Training materials: Nothing gratifies us more than hearing that someone used SSD to teach a friend or family member how to make stronger passwords, or how to encrypt their devices. While SSD was originally intended to be a self-teaching resource, we’re working towards expanding the guide with resources for users to lead their friends and neighbors in healthy security practices. We’re working hard to ensure this is done in coordination with the powerful efforts of similar initiatives, and we seek to support, complement, and add to that collective body of knowledge and practice.

Thus we’ve interviewed dozens of US-based and international trainers about what learners struggle with, their teaching techniques, the types of materials they use, and what kinds of educational content and resources they want. We’re also conducting frequent critical assessment of learners and trainers, with regular live-testing of our workshop content and user testing evaluations of the SSD website.

It’s been humbling to observe where beginners have difficulty learning concepts or tools, and to hear where trainers struggle using our materials. With their feedback fresh in mind, we continue to iterate on the materials and curriculum.

Over the next few months, we are rolling out new content for a teacher’s edition of SSD, intended for short awareness-raising sessions of one to four hours. If you’re interested in testing our early draft digital security educational materials and providing feedback on how they worked, please fill out this form by September 30. We can’t wait to share them with you.

 

Published September 18, 2017 at 10:36PM
Read more on eff.org

EFF: In A Win For Privacy, Uber Restores User Control Over Location-Sharing

Uber made an unfortunate change to its privacy settings last year, and we are glad to see that it has now reverted to settings that empower its users to make choices about sharing their location information.

Last December, an Uber update restricted users’ location-sharing choices to “Always” or “Never,” removing the more fine-grained “While Using” setting. This meant that, if someone wanted to use Uber, they had to agree to share their location information with the app at all times or surrender usability. In particular, this meant that riders would be tracked for five minutes after being dropped off.

Now, the “While Using” setting is back—and Uber says the post-ride tracking will end even for users who choose the “Always” setting. We are glad to see Uber reverting to giving users more control over their location privacy, and we hope it will stick this time. EFF recommends that all users manually check that their Uber location privacy setting is on “While Using” after they receive the update.

1. Open the Uber app, and press the three horizontal lines on the top left to open the sidebar.

2. Once the sidebar is open, press Settings.

3. Scroll to the bottom of the settings page to select Privacy Settings.

4. In your privacy settings, select Location.

5. In Location, check to see if it says “Always.” If it does, click to change it.

6. Here, change your location setting to “While Using” or “Never.” Note that “Never” will require you to manually enter your pickup address every time you call a ride.

Published September 15, 2017 at 06:32PM
Read more on eff.org

EFF: Azure Confidential Computing Heralds the Next Generation of Encryption in the Cloud

For years, EFF has commended companies who make cloud applications that encrypt data in transit. But soon, the new gold standard for cloud application encryption will be the cloud provider never having access to the user’s data—not even while performing computations on it.

Microsoft has become the first major cloud provider to offer developers the ability to build their applications on top of Intel’s Software Guard Extensions (SGX) technology, making Azure “the first SGX-capable servers in the public cloud.” Azure customers in Microsoft’s Early Access program can now begin to develop applications with the “confidential computing” technology.

Intel SGX uses protections baked into the hardware to ensure that data remains secure, even from the platform it’s running on. That means that an application that protects its secrets inside SGX is protecting them not just from other applications running on the system, but from the operating system, the hypervisor, and even Intel’s Management Engine, an extremely privileged coprocessor that we’ve previously warned about.

Cryptographic methods of computing on encrypted data are still an active body of research, with most methods still too inefficient or involving too much data leakage to see practical use in industry. Secure enclaves like SGX, also known as Trusted Execution Environments (TEEs), offer an alternative path for applications looking to compute over encrypted data. For example, a messaging service with a server that uses secure enclaves offers similar guarantees to end-to-end encrypted services. But whereas an end-to-end encrypted messaging service would have to use client-side search, or accept either side channel leakage or inefficiency to implement server-side search, a service using an enclave can provide server-side search functionality with always-encrypted guarantees at little additional computational cost. The same is true for the classic challenge of changing the key that a ciphertext is encrypted under without access to the key, known as proxy re-encryption. Many problems for which cryptographers have spent decades seeking efficient, leakage-free solutions become solvable with a sufficiently robust secure enclave ecosystem.
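
The trust boundary can be sketched in ordinary code. The toy below is an illustration only: it uses symmetric encryption from the `cryptography` library and provides none of SGX’s hardware isolation or remote attestation. The point is simply that the untrusted host stores and passes around only ciphertext, while decryption and the actual search happen solely inside the “enclave” object.

```python
from cryptography.fernet import Fernet

class ToyEnclave:
    """Stand-in for a trusted execution environment: the key and all
    plaintext exist only inside this object's methods."""
    def __init__(self):
        self._fernet = Fernet(Fernet.generate_key())  # key never leaves the "enclave"

    def seal(self, message: bytes) -> bytes:
        # Ciphertext is all the untrusted host ever stores.
        return self._fernet.encrypt(message)

    def search(self, ciphertexts, needle: bytes):
        # Decryption and matching happen only inside the trusted boundary;
        # the host learns nothing but which records matched.
        return [i for i, ct in enumerate(ciphertexts)
                if needle in self._fernet.decrypt(ct)]

enclave = ToyEnclave()
mailbox = [enclave.seal(m) for m in (b"meet at noon", b"ship the update", b"hello")]
print(enclave.search(mailbox, b"update"))  # -> [1]
```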

While there is great potential here, SGX is still a relatively new technology, meaning that security vulnerabilities are still being discovered as more research is done. Memory corruption vulnerabilities within enclaves can be exploited by classic attack mechanisms like return-oriented programming (ROP). Various side channel attacks have been discovered, some of which are mitigated by a growing host of protective techniques. Promisingly, Microsoft’s press release teases that they’re “working with Intel and other hardware and software partners to develop additional TEEs and will support them as they become available.” This could indicate that they’re working on developing something like Sanctum, which isolates caches by trusted application, reducing a major side channel attack surface. Until these issues are fully addressed, a dedicated attacker could recover some or all of the data protected by SGX, but it’s still a massive improvement over not using hardware protection at all.

The technology underlying Azure Confidential Computing is not yet perfect, but it’s efficient enough for practical usage, stops whole classes of attacks, and is available today. EFF applauds this giant step towards making encrypted applications in the cloud feasible, and we look forward to seeing cloud offerings from major providers like Amazon and Google follow suit. Secure enclaves have the potential to be a new frontier in offering users privacy in the cloud, and it will be exciting to see the applications that developers build now that this technology is becoming more widely available.

Published September 18, 2017 at 07:37PM
Read more on eff.org

EFF: An open letter to the W3C Director, CEO, team and membership

Dear Jeff, Tim, and colleagues,

In 2013, EFF was disappointed to learn that the W3C had taken on the project of standardizing “Encrypted Media Extensions,” an API whose sole function was to provide a first-class role for DRM within the Web browser ecosystem. By doing so, the organization offered the use of its patent pool, its staff support, and its moral authority to the idea that browsers can and should be designed to cede control over key aspects from users to remote parties.

When it became clear, following our formal objection, that the W3C’s largest corporate members and leadership were wedded to this project despite strong discontent from within the W3C membership and staff, their most important partners, and other supporters of the open Web, we proposed a compromise. We agreed to stand down regarding the EME standard, provided that the W3C extend its existing IPR policies to deter members from using DRM laws in connection with the EME (such as Section 1201 of the US Digital Millennium Copyright Act or European national implementations of Article 6 of the EUCD) except in combination with another cause of action.

This covenant would allow the W3C’s large corporate members to enforce their copyrights. Indeed, it kept intact every legal right to which entertainment companies, DRM vendors, and their business partners can otherwise lay claim. The compromise merely restricted their ability to use the W3C’s DRM to shut down legitimate activities, like research and modifications, that required circumvention of DRM. It would signal to the world that the W3C wanted to make a difference in how DRM was enforced: that it would use its authority to draw a line between DRM as an acceptable, optional technology and DRM as an excuse to undermine legitimate research and innovation.

More directly, such a covenant would have helped protect the key stakeholders, present and future, who both depend on the openness of the Web, and who actively work to protect its safety and universality. It would offer some legal clarity for those who bypass DRM to engage in security research to find defects that would endanger billions of web users; or who automate the creation of enhanced, accessible video for people with disabilities; or who archive the Web for posterity. It would help protect new market entrants intent on creating competitive, innovative products, unimagined by the vendors locking down web video.

Despite the support of W3C members from many sectors, the leadership of the W3C rejected this compromise. The W3C leadership countered with proposals — like the chartering of a nonbinding discussion group on the policy questions that was not scheduled to report in until long after the EME ship had sailed — that would have still left researchers, governments, archives, and security experts unprotected.

The W3C is a body that ostensibly operates on consensus. Nevertheless, as the coalition in support of a DRM compromise grew and grew — and the large corporate members continued to reject any meaningful compromise — the W3C leadership persisted in treating EME as a topic that could be decided by one side of the debate. In essence, a core of EME proponents was able to impose its will on the Consortium, over the wishes of a sizeable group of objectors — and every person who uses the web. The Director decided to personally override every single objection raised by the members, articulating several benefits that EME offered over the DRM that HTML5 had made impossible.

But those very benefits (such as improvements to accessibility and privacy) depend on the public being able to exercise rights they lose under DRM law — which meant that without the compromise the Director was overriding, none of those benefits could be realized, either. That rejection prompted the first appeal against the Director in W3C history.

In our campaigning on this issue, we have spoken to many, many members’ representatives who privately confided their belief that the EME was a terrible idea (generally they used stronger language) and their sincere desire that their employer wasn’t on the wrong side of this issue. This is unsurprising. You have to search long and hard to find an independent technologist who believes that DRM is possible, let alone a good idea. Yet, somewhere along the way, the business values of those outside the web got important enough, and the values of technologists who built it got disposable enough, that even the wise elders who make our standards voted for something they know to be a fool’s errand.

We believe they will regret that choice. Today, the W3C bequeaths a legally unauditable attack surface to browsers used by billions of people. They give media companies the power to sue or intimidate away those who might re-purpose video for people with disabilities. They side against the archivists who are scrambling to preserve the public record of our era. The W3C process has been abused by companies that made their fortunes by upsetting the established order, and now, thanks to EME, they’ll be able to ensure no one ever subjects them to the same innovative pressures.

So we’ll keep fighting to keep the web free and open. We’ll keep suing the US government to overturn the laws that make DRM so toxic, and we’ll keep bringing that fight to the world’s legislatures that are being misled by the US Trade Representative to instigate local equivalents to America’s legal mistakes.

We will renew our work to battle the media companies that fail to adapt videos for accessibility purposes, even though the W3C squandered the perfect moment to exact a promise to protect those who are doing that work for them.

We will defend those who are put in harm’s way for blowing the whistle on defects in EME implementations.

It is a tragedy that we will be doing that without our friends at the W3C, and with the world believing that the pioneers and creators of the web no longer care about these matters.

Effective today, EFF is resigning from the W3C.

Thank you,

Cory Doctorow
Advisory Committee Representative to the W3C for the Electronic Frontier Foundation

Published September 18, 2017 at 06:50PM
Read more on eff.org

EFF: California Legislature Sells Out Our Data to ISPs

In the dead of night, the California Legislature shelved legislation that would have protected every Internet user in the state from having their data collected and sold by ISPs without their permission. By failing to pass A.B. 375, the legislature demonstrated that they put the profits of Verizon, AT&T, and Comcast over the privacy rights of their constituents.

Earlier this year, the Republican majority in Congress repealed the strong privacy rules issued by the Federal Communications Commission in 2016, which required ISPs to get affirmative consent before selling our data.  But while Congressional Democrats fought to protect our personal data, the Democratic-controlled California legislature did not follow suit. Instead, they kowtowed to an aggressive lobbying campaign, from telecommunications corporations and Internet companies, which included spurious claims and false social media advertisements about cybersecurity. 

“It is extremely disappointing that the California legislature failed to restore broadband privacy rights for residents in this state in response to the Trump Administration and Congressional efforts to roll back consumer protection,” EFF Legislative Counsel Ernesto Falcon said. “Californians will continue to be denied the legal right to say no to their cable or telephone company using their personal data for enhancing already high profits. Perhaps the legislature needs to spend more time talking to the 80% of voters that support the goal of A.B. 375 and less time with Comcast, AT&T, and Google’s lobbyists in Sacramento.” 

All hope is not lost, because the bill is only stalled for the rest of the year. We can raise it again in 2018.

A.B. 375 was introduced late in the session; that it made it so far in the process so quickly demonstrates that there are many legislators who are all-in on privacy.  In January, EFF will build off this year’s momentum with a renewed push to move A.B. 375 to the governor’s desk. Mark your calendar and join us. 

Published September 16, 2017 at 04:10PM
Read more on eff.org

EFF: One Last Chance for Police Transparency in California

As the days wind down for the California legislature to pass bills, transparency advocates have seen landmark measures fall by the wayside. Without explanation, an Assembly committee shelved legislation that would have shined light on police use of surveillance technologies, including a requirement that police departments seek approval from their city councils. The legislature also gutted a key reform to the California Public Records Act (CPRA) that would’ve allowed courts to fine agencies that improperly thwart requests for government documents. 

But there is one last chance for California to improve the public’s right to access police records. S.B. 345 would require every law enforcement agency in the state to publish on its website all “current standards, policies, practices, operating procedures, and education and training materials” by January 1, 2019. The legislation would cover all materials that would be otherwise available through a CPRA request.

S.B. 345 is now on Gov. Jerry Brown’s desk, and he should sign it immediately. 

Take Action

Tell Gov. Brown to sign S.B. 345 into law

There are two main reasons EFF is supporting this bill. 

The first is obvious: in order to hold law enforcement accountable, we need to understand the rules that officers are playing by. For privacy advocates, access to materials about advanced surveillance technologies—such as automated license plate readers, facial recognition, drones, and social media monitoring—will lead to better and more informed debates over policy.  The bill also would strengthen the greater police accountability movement, by proactively releasing policies and training about use of force, deaths in custody, body-worn cameras, and myriad other controversial police tactics and procedures.  

The second reason is more philosophical: we believe that rather than putting the onus on the public to always file formal records requests, government agencies should automatically upload their records to the Internet whenever possible. S.B. 345 creates openness by default for hundreds of agencies across the state.

To think of it another way: S.B. 345 is akin to the legislature sending its own public records request to every law enforcement agency in the state. 

Unlike other measures EFF has supported this session, S.B. 345 has not drawn strong opposition from law enforcement. In fact, only the California State Sheriffs’ Association is in opposition, arguing that the bill could require the disclosure of potentially sensitive information. This is incorrect, since the bill would only require agencies to publish records that would already be available under the CPRA.  The claim is further undercut by the fact that eight organizations representing law enforcement have come out in support of the bill, including the California Narcotics Officers Association and the Association of Deputy District Attorneys. 

The bill isn’t perfect. As written, the enforcement mechanisms are vague, and it’s unclear what kind of consequences, if any, agencies may face if they fail to post these records in a little more than a year. In addition, agencies may improperly withhold or redact policies, as is often the case with responses to traditional public records requests. Nevertheless, EFF believes that even the incremental measure contained in the bill will help pave the way for long-term transparency reforms.

Join us in urging Gov. Jerry Brown to sign this important bill. 

Published September 15, 2017 at 05:52PM
Read more on eff.org