Limiting Javascript to secure origins in Firefox

I’m a Firefox user, but I was very interested to read Chris Palmer’s guide to privacy and security settings in Chrome. One thing he did that really intrigued me was enabling Javascript only on secure sites. It ends up being a pretty good default, not just because it prevents attacks that rely on Javascript injection—like the ads that Comcast and AT&T have inserted into pages accessed on their hotspots, or the massive man-on-the-side attack the government of China apparently conducted against GitHub—but also because a site going to the effort of authenticating itself is a reasonable proxy for the kind of stuff I’d allow anyway.

As far as I can tell, on Firefox that means installing NoScript, a powerful extension I’d previously disabled because manually turning on Javascript wherever I needed it was too much of a hassle. After a few hours of browsing with these settings, it seems to strike the right balance: not quite zero fiddling with permissions, but much less manual intervention, with a lot of unnecessary scripts getting blocked.

The option is in NoScript’s preferences, under Options > Advanced > HTTPS > Permissions. As long as the global block is on (which it is by default), I found that the drop-down menu, “Forbid active web content unless it comes from a secure HTTPS connection”, actually works best when set to “Never”—or, if you’re a frequent Tor user, to “When using a proxy”.1 Then the checkbox below it, “Allow HTTPS scripts globally on HTTPS documents”, should be checked.


Of course, this isn’t a perfect guarantee of privacy or security. If you don’t trust the Javascript being served from the authenticated site—because the site operators may be malicious or just incompetent—then this technique won’t help. But it does make browsing much faster across much of the web, and it preserves the rich interactivity you’re used to on pages your browser trusts.

  1. This setting is pretty counter-intuitive to me, but when it was set to “Always” I experienced some funny interactions with manual permission changes.

200 Deeplinks at EFF

The article I published about the (qualified) public domain victory in the Happy Birthday case this week was my 200th post on EFF’s blog Deeplinks, an event that—like the 100th post two years ago—calls for a little reflection. I’ve picked out some of my favorites in posts 101-200, below.

In Pushing For Perfect Forward Secrecy, an Important Web Privacy Protection, I worked with our tech folks to explain a property that actually really matters for cryptography, but at that point—in August 2013—hadn’t really been set out for lay people. Since then it’s been described even better, but for a while this post had the honor of being one of the more widely-shared explainers on the topic.

The post In the Silk Road Case, Don’t Blame the Technology was published the day the criminal complaint against Ross Ulbricht was released. It goes through some of the technologies that he was alleged to have used, talking about how essential they are for enabling important speech, and how they’ve been demonized before.

It’s more specialized, but Mobile Tracking Code of Conduct Falls Short of Protecting Consumers was a fun combination of legal and technical talk. One of the points I got to make in there was that the space of MAC address hashes can be brute-forced, alongside general concerns about people finding retail mobile tracking creepy.

Remembering Aaron was a difficult one to write, emotionally. It’s become an important thing for me to look over the work I’m doing, and see how it lines up with his ideals. This post was a chance to do that across the whole organization, and speculate what he might have thought about the developments in this space in the year since his death.

There’s maybe no post that I hope more will be vindicated by history than How a Bad Supreme Court Decision Could Have Good Constitutional Consequences for Copyright, which is basically a short look at a law review article by Neil Netanel arguing that the Golan decision could render 1201 a violation of the First Amendment.

A pair of articles about TPP written 18 months apart are favorites for different reasons. The TPP’s Attack on Artists’ Termination Rights was the first thing I co-wrote with Sarah Jeong, and was both obviously correct and intensely aggravating to people who set up camp on “the other side” of copyright issues from EFF. TPP’s Copyright Trap is a long look at copyright terms that is one of the most widely shared things I’ve written.

Finally, Who Really Owns Your Drones? is so far my most thorough look at one of The Big Questions about DRM and connected devices. It builds on earlier posts like How DRM Harms Our Computer Security and that weird news story from February about the drone downed on the White House lawn.

HOWTO: Diff PDFs pixel-by-pixel on the command line

There was a major order in the Uber class action case today: the class was certified, which means that the suit can proceed on behalf of 160,000 drivers, instead of just the handful putting their names on the documents. Big deal!

Then a few minutes later, the court issued an amended version of the order, but didn’t release a changelog. How is a reader to know which parts are worth looking at?

There are a lot of ways to solve this problem, but I wanted one that would work on the command line, that wouldn’t require much in the way of unusual software (or Adobe), and that wouldn’t depend on having the text embedded in the PDF. Court PDFs usually do have text, but it’s a little unreliable, and in this case the documents were so close that I could compare the pixels of one to the other.

I googled around a bit, and here’s the workflow I decided to follow, adapted from this Stack Overflow question’s answers. It assumes you have pdftk and imagemagick installed.

  • Put both PDFs, file1 and file2, in one directory by themselves, and make a subdirectory called out/ for temporary output.
  • Split, or “burst”, each PDF into its component pages with pdftk, and put those pages in the out/ directory.
  • Use a bash loop to run imagemagick’s compare feature over each pair of pages from file1 and file2, creating a new page for each that just contains the differences highlighted in red.
  • Again using pdftk, merge all of those diff pages back into one document that uses the original as the background.

In code, that looks like this:

pdftk file1.pdf burst output out/file1---page%03d.pdf
pdftk file2.pdf burst output out/file2---page%03d.pdf
for i in {001..###}; do compare out/file1---page$i.pdf out/file2---page$i.pdf -compose src out/file1--file2--diff---page$i.pdf; done
pdftk out/file1--file2--diff*.pdf cat output diff.pdf
pdftk diff.pdf multibackground file1.pdf output compositediff.pdf

The parts that need to be customized each time are the names of file1 and file2 for the first two lines and the very last line, and the ### needs to be replaced by the number of pages in each document. Other than that, you can let this one rip and end up with a visual diff in just a few seconds!

[Image: a diffed PDF page, with the changed text highlighted in red]

You can see how it looks above. This is the only page that changed, and it’s just one footnote.
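If you end up doing this a lot, the whole thing can be wrapped in a short script. Here’s a rough sketch—lightly adapted from the commands above, not battle-tested—that takes the two PDFs as arguments and scrapes the page count out of pdftk’s dump_data output, so you don’t have to fill in the ### by hand:

#!/usr/bin/env bash
# Rough sketch: ./pdfdiff.sh file1.pdf file2.pdf
# Assumes pdftk and imagemagick are installed, and writes its work files to out/.
f1="$1"
f2="$2"
mkdir -p out
# pdftk's metadata dump includes a "NumberOfPages:" line we can scrape.
pages=$(pdftk "$f1" dump_data | awk '/^NumberOfPages/ {print $2}')
pdftk "$f1" burst output out/file1---page%03d.pdf
pdftk "$f2" burst output out/file2---page%03d.pdf
# Note: compare exits non-zero when pages differ, so don't run this under set -e.
for i in $(seq -f "%03g" 1 "$pages"); do
  compare "out/file1---page$i.pdf" "out/file2---page$i.pdf" -compose src "out/file1--file2--diff---page$i.pdf"
done
pdftk out/file1--file2--diff*.pdf cat output diff.pdf
pdftk diff.pdf multibackground "$f1" output compositediff.pdf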

HOWTO: One big file from a YouTube playlist

In celebration of the 40th anniversary of the release of Born To Run, I decided to watch Cory Arcangel perform his classic Glockenspiel Addendum. He’s posted videos from a 2008 concert to YouTube, so it should be no problem, right?

Well, the version he posted is in eight parts. Fine for YouTube, but I don’t want to have to click play between each segment, and I don’t want to be interrupted if my Internet goes down. I solved the first problem by creating a YouTube playlist of the whole concert, but in order to solve the second problem I’d need a local copy.

The excellent (and public domain!) program youtube-dl can fetch a copy of each of the videos separately, and will even take a playlist link as input. I made myself a glockenspiel directory, and filled it with eight mp4s.
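For what it’s worth, the command was something along these lines—the playlist URL here is just a placeholder, and -f mp4 asks for mp4 files so they can be merged without transcoding later:

youtube-dl -f mp4 -o 'glockenspiel/%(playlist_index)s-%(title)s.%(ext)s' 'https://www.youtube.com/playlist?list=PLAYLIST_ID'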

That’s probably enough for most situations! mplayer (or your media player of choice) can take a list of files. But I wanted one big mp4, and I wanted to do that without transcoding.

In some cases, the ffmpeg concat demuxer would probably work. It’s one of three different concat features documented on the ffmpeg wiki, and it’s designed for merging file formats that cannot simply be concatenated but that shouldn’t be transcoded. It takes a list of files in the following format:

file 'path/to/file1.mp4'
file 'path/to/file2.mp4'

etc. You can generate that list with a little bash loop:

for f in ./*.mp4; do echo "file '$f'" >> list.txt; done

And then feed it into the concat demuxer with the following command:

ffmpeg -f concat -i list.txt -c copy output.mp4

If that works for you, great, you’re set. Unfortunately, I ran into a problem: the resulting mp4 file had some weird reference frame issues, which left some (but not all) of the video parts as garbled, flashing green frames.1 mplayer kept spitting out errors like: number of reference frames (0+5) exceeds max (3; probably corrupt input), discarding one.

I wasn’t going to be able to use the concat demuxer, but as I mentioned above, ffmpeg has three different concat options. This Q&A describes a way to place the mp4 files into a new transport stream container, which is one of the kinds of files that can be concatenated at the file level with the concat protocol. One by one, I made temporary mpeg transport stream files like this:

ffmpeg -i input1.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts temp1.ts
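(If you’d rather not type that eight times, a quick bash loop does the same thing—assuming, as above, that the downloaded parts are named input1.mp4 through input8.mp4:)

for n in $(seq 1 8); do ffmpeg -i "input$n.mp4" -c copy -bsf:v h264_mp4toannexb -f mpegts "temp$n.ts"; done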

And then I merged all those files, temp1.ts through temp8.ts, with the following unwieldy command:

ffmpeg -i "concat:temp1.ts|temp2.ts|temp3.ts|temp4.ts|temp5.ts|temp6.ts|temp7.ts|temp8.ts" -c copy -bsf:a aac_adtstoasc output.mp4

And that works like a charm. Not a totally painless process, but now I’ve got a nicely merged, untranscoded local copy and can watch me some glockenspiel.

  1. It’s outside the scope of this post, but the next thing I tried, mkvmerge, created a file with the exact same problem.

Computer Chronicles: Internet

Who says online users are a bunch of anti-social geeks?

That’s the Icon Byte Bar in San Francisco, one of the first six or eight “electronic cafes” to open in the mid-1990s, according to the show. And this is another episode of the PBS program Computer Chronicles, where today we’re talking about the Internet.

First off, John Markoff explains how “electronic mail” works, and lands some sweet brags in the process. Like, for example, here’s an email he just got from Steve Jobs. And oh yeah, when you’re in his position, you might need some fancy filtering tools, what with getting hundreds or even thousands of electronic mails a day.

Next we get a look at AnArchie, and a tool for browsing USENET. Also a discussion on security. “I’d be careful putting my password on the ‘Net, I’d pick a password that’s a safe password, and I wouldn’t put my credit card up until there’s security software that will protect the credit card.”

Next we talk to Severe Tire Damage, a group of weekend musicians with day jobs at Xerox, Apple, and Digital Equipment, who “upstaged the Rolling Stones by transmitting their own performance over the Internet” in November 1994.

“I think what we did was a kind of piracy, like in the early days of people flying airplanes, where you land in some farmer’s field ‘cos you had no place else to go, and it was okay because there weren’t very many airplanes around. There aren’t very many people now who can use the Internet in this way. And so anything goes for now, ‘cos we’re still explorers exploring brand new space and there’s very very few of us.”

Compuserve’s Charla Beaverson demos her company’s service, navigating through USENET and some selected popular FTP sites, like Book Stacks Unlimited. “We can go here and download entire copies of books!” Our host prods, “Assuming it’s public domain stuff—”

Ms. Beaverson assures him, “That’s correct.”

“Right now we’re looking at a copy of Air Mosaic.” We’re looking at the Pizza Hut homepage.

Next we get a tour of the Whole Earth Catalog’s business operations on the ‘Net. “We are as gods and might as well get good at it,” Stewart Brand reminds us. “To offer those electronic transactions, the Catalog’s web service provider had to supply a new level of security using data encryption.” The WELL’s Mark Graham explains: “What we’re seeing now is the integration of this encryption technology with the software people use to access the networks.”

Up next: activism online! Congressional scorecards for environmental policy. Wonder if we’ll ever hear from that Dodd fellow again.

But what if you want to make your own site? Good news: the San Francisco Digital Media Center offers classes for anybody who wants to tell their story online. “In our classes, we’re discussing what the aesthetics of interactivity are. … There is a very complex artistic question to be solved by the people working in this field, and all of it is so new.”

For those of you outside of San Francisco, this man will teach you how to use HoTMetaL.

“Alright, that’s our look at the Internet—in fact, just a glance at the tip of the cyberspace iceberg.” Thanks, Stewart Cheifet!

Don’t miss an episode! Subscribe today for just $32.50.