Some folks have filed a really silly lawsuit against the Internet Archive and a law firm (news report; complaint). Here's the story:
A couple of years ago, a law firm called McCarter & English, representing a New Jersey company called Healthcare Advocates, sued a Pennsylvania firm called Health Advocate for trademark infringement. Defendant's lawyers — a firm called Harding Earley — used the Internet Archive to pull up plaintiff's old web pages, to help in the defense. It appears that Healthcare Advocates had recently put up a robots.txt file with instructions to block public access to its old pages, but the folks at Harding Earley made a whole bunch of requests, and the pages sometimes displayed anyway.
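For readers who haven't seen one, a robots.txt of the sort described is just a short text file at the root of the site. A sketch of what Healthcare Advocates' file might have looked like (the exact contents aren't in the record here; "ia_archiver" is the user-agent token the Internet Archive's crawler has historically used):

```
# Hypothetical robots.txt asking the Internet Archive's crawler
# (and, in the second block, everyone else) to stay out.
User-agent: ia_archiver
Disallow: /

User-agent: *
Disallow: /
```

The Wayback Machine's policy at the time was to honor such a file retroactively, withholding already-archived pages from public display.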
Healthcare Advocates, represented by McCarter & English, is now suing both the Harding Earley firm — for copyright infringement, violations of the DMCA and the Computer Fraud and Abuse Act, and state-law torts — and the Internet Archive, for breach of contract, promissory estoppel, breach of fiduciary duty, negligence and misrepresentation.
This is silly. The copyright claim against Harding Earley is silly. Setting aside anything else, if there ever were a textbook example of fair use, reproducing a once-publicly available web page because its content was relevant to the proper disposition of a lawsuit would be it. The DMCA claim is, if not silly, at least wrong. It's hardly obvious that sticking a robots.txt file on your server counts as a technological protection measure within the meaning of the DMCA, since web crawlers are free to ignore such markers if they choose. If plaintiff's robots.txt file were a TPM, its instruction to the Internet Archive to withhold the file looks to me like copy protection rather than access protection, which puts defendants in the clear. And finally, as Bill Patry has noted, it's an unworkable reading of the DMCA to say that if you click on a link once and don't get anything, then you're illegally “circumventing” by clicking a bunch more times to see if your luck changes.
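The "crawlers are free to ignore such markers" point is easy to make concrete: robots.txt is purely advisory, and the check happens entirely on the client's side. A minimal sketch using Python's standard library (the URL is hypothetical):

```python
from urllib.robotparser import RobotFileParser

# A robots.txt like the one at issue: it asks every crawler to stay away.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /",
])

# A polite client consults the file first -- and is told no.
allowed = rp.can_fetch("*", "http://example.com/old-page.html")
print(allowed)  # False

# But nothing in the protocol enforces this. A client that skips the
# check can simply issue the HTTP request anyway; the server serves the
# page either way. The "measure" forbids nothing -- it requests compliance.
```

That voluntariness is what makes it a stretch to call robots.txt a technological measure that "effectively controls access" in the DMCA's sense.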
The silliest claims are the ones against the Internet Archive itself. Take it from me: The Internet Archive didn't have an obligation under the relevant laws to make sure that there were no glitches in its implementation of its decision to respect robots.txt.
Sigh …
I’ve written a post with some technical speculations:
Internet Archive DMCA Circumvention Lawsuit
http://sethf.com/infothought/blog/archives/000877.html
“it's an unworkable reading of the DMCA to say that if you click on a link once and don't get anything, then you're illegally ‘circumventing’ by clicking a bunch more times to see if your luck changes.”
I’m not so sure – it comes down to how the DMCA applies to *buggy* technological measures – the language about “ordinary course of its operation” seems problematic here.
Is an unreliable server then just a buggy protection measure, so that getting a bunch of “connection refused” or “not authorized” messages puts me on notice that I might go to jail for trying to look at the page again?
I’ve always thought that that line in the DMCA was uselessly vague, because the notion of an effective anticopying mechanism seems so deeply lodged in the eye of the beholder. Does loading a special driver unless the user holds down the shift key count? XOR encryption? Rot-13? Double Rot-13?
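The Double Rot-13 joke is worth spelling out: applying Rot-13 twice is the identity transformation, so "Double Rot-13" protects nothing at all. A toy illustration in Python:

```python
import codecs

plaintext = "a trade secret"

# Rot-13 shifts each letter 13 places; 13 + 13 = 26, a full trip
# around the alphabet, so encoding twice returns the original text.
once = codecs.encode(plaintext, "rot13")
twice = codecs.encode(once, "rot13")

print(once)                 # scrambled-looking text
print(twice == plaintext)   # True -- "double" Rot-13 is a no-op
```

If *that* can count as an effective measure, the term is doing no work.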
Groklaw has an article, “More on Silly Lawsuits – Internet Archive and the BBC Flap”: “But if you are seriously wanting to be 100% private, the Internet probably isn’t for you, and you should get off entirely or restrict access by membership or something like that and thus build a moat around your castle and lift the drawbridge. The Internet is what it is. And fair use still exists, so you need to factor that in too.”