Following Up for the Week Ending 2/21/2010

Following Up for the Week Ending 2/14/2010

Following Up for the Week Ending 2/7/2010

France to Digitize Its Own Books

France submitted comments expressing concern during the last go-round of revisions to the terms of the original Google Books settlement. Michael Geist links to a New York Times story that at first seems consistent with the French government’s earlier stance. French President Nicolas Sarkozy has pledged almost $1.1 billion to fund a new public-private partnership for the country to digitize its own literary works.

The article explains that the French national library originally approached Google about this effort, which is apparently what prompted the criticism filed as part of the ongoing class action against Google. That criticism was evidently not strenuous enough to prevent the library from once again approaching the search giant as a potential private partner under Sarkozy’s new proposal.

I can only guess, though the article doesn’t say, that the difference is that France would retain more control over the resulting data. The article mentions the need for capital beyond what the government can supply, which means there would have to be some consideration, in terms of privileged access or an exclusive opportunity, for whatever outfit steps up. So how would partnering now be any different from partnering with Google before?

There is another aspect to this that the NYT piece doesn’t explore, similar to the story upon which I remarked about the UK pushing civil services online: namely, that there is a troubling conflict implied by Sarkozy’s push to render France’s cultural heritage into digital format for preservation and access online while Sarkozy is, at the same time, one of the earliest and loudest proponents of three strikes as a measure for dealing with copyright infringement committed via online file sharing.

You cannot champion the ability of digital technology and networks to copy and share information on the one hand while simultaneously suggesting that permanent disconnection from the network is a valid response to a substantially similar activity. Worse, the movement of publicly funded goods and services online exacerbates the loss of access, often under rules that forgo due process and judicial oversight.

TCLP 2009-12-06 News

This is news cast 199, an episode of The Command Line Podcast.

This week’s security alerts: malware that may start affecting non-jailbroken iPhones, and testing whether Google’s new DNS resolver is secure. I wrote about Google’s new offering earlier in the week.
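If you want to kick the tires on the new resolver yourself, here is a minimal sketch in Node-flavored TypeScript; it is my own illustration rather than anything from the linked alert, and it simply points lookups at Google’s published public address, 8.8.8.8:

```typescript
import { promises as dns } from "dns";

async function main(): Promise<void> {
  // Send queries to Google's public resolver instead of the system default.
  const resolver = new dns.Resolver();
  resolver.setServers(["8.8.8.8"]);

  // Look up the A records for a hostname via the new resolver.
  const addresses = await resolver.resolve4("example.com");
  console.log(addresses);
}

main().catch(console.error);
```

A lookup like this only shows the resolver answering; the security questions in the linked alert are about how well it resists cache poisoning and similar attacks, which a single query does not test.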

In this week’s news: the latest ACTA leak seems to confirm the worst, exceeding EU and Canadian copyright policy and making better sense of KEI Director James Love’s encounter with USTR chief Ron Kirk; standalone JavaScript; historical antecedents to the current broadband regulation debate; and a new privacy reporting tool from the CDT, part of a larger trend to try to improve awareness around online privacy.

Following up this week: one British politician pushing back on the Digital Economy Bill, and evaluating the censorship risk of the current version of the Google Books settlement.


Grab the detailed show notes with time offsets and additional links either as PDF or OPML. You can also grab the flac encoded audio from the Internet Archive.

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.

TCLP 2009-11-29 News

This is news cast 198.

In the intro, a quick Philcon recap. See my earlier post for more.

This week’s security alerts: details on an exploitable flaw in older versions of MSIE, including news of an actual exploit, and the latest iPhone worm, which snags bank credentials.

In this week’s news: research suggesting that combining promising techniques in quantum computing could produce a viable system sooner (this builds on recent work by NIST and a design for a photon “machine gun” for producing larger registers of qubits); encoding shellcode as English text; a client-side file handling API for JavaScript and HTML5, already implemented in Firefox’s latest beta build; and a site for sharing information about rejections from Apple’s App Store.
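To give a concrete feel for the kind of client-side file handling that API enables, here is a minimal sketch in browser-flavored TypeScript. It is my own illustration rather than code from the linked coverage, and the input element’s id is hypothetical:

```typescript
// Assumes the page contains <input type="file" id="upload">; the id is made up for this example.
const input = document.querySelector<HTMLInputElement>("#upload");

input?.addEventListener("change", () => {
  const file = input.files?.[0];
  if (!file) return;

  // FileReader lets the page read the selected file locally,
  // without first uploading it to a server.
  const reader = new FileReader();
  reader.onload = () => {
    console.log(`${file.name}: ${file.size} bytes`);
    console.log(reader.result); // file contents as text
  };
  reader.readAsText(file);
});
```

The point of the API is exactly this sort of local access: the script can inspect and read the file the user picked without a round trip to the server.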

Following up this week: evaluating privacy in the revised Books settlement, and two senators questioning ACTA secrecy.


Grab the detailed show notes with time offsets and additional links either as PDF or OPML. You can also grab the flac encoded audio from the Internet Archive.

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.

Following Up for the Week of 11/22/2009

Notes from September’s CopyNight for Washington, DC

The topic for September’s CopyNight here in DC was the Google Books settlement. We had a decent turnout, not as large as the couple of summer gatherings but respectable. The discussion was excellent, covering many aspects of this story that hadn’t occurred to me, even as closely as I have been following it for my blog and my podcast.

We started by discussing the impact of the DoJ’s letter on the settlement. It is important to note this was just a letter, not a ruling. Some sort of broader antitrust investigation may be underway behind the scenes that prompted the letter. The net effect of the input from the DoJ may be to strip the settlement largely back to its original contours. It ultimately may have been sparked by the Copyright Office, though, as the bulk of the letter was consistent with a Copyright Office hearing from some time back. That the case brought against Google was a class action may also have added pressure on the DoJ to comment.

The unfortunate consequence of a scaling back is that libraries may lose out. The settlement would have set up considerable public access resources, which the American Library Association favored. The ALA would have preferred greater government oversight than what the settlement initially called for, but it’s a tough compromise to think through. Assessing the risks and costs of such oversight, in particular how they may have limited access, is difficult at best.

The impact on orphan works may be a little easier to appreciate. While the settlement wouldn’t have given perpetual consideration to future works, being limited to works up through January of this year, scaling back will cost us a useful registry for out-of-print, hard-to-attribute works. Adam Marcus clarified that under the original settlement, the registry may not have been as closed as has been represented; the board was supposed to be open. The sticking point, though, is that the registry would have been one of a kind: no other attempt to scan out-of-print and orphan works would have gotten a leg up in terms of protections or allowances, despite the potential for further public good.

The conversation then turned briefly to patents. There was some speculation about possible chilling effects on further development of OCR technologies, more specifically, I think, physical systems to make book scanning more cost effective. There was of course mention of one of my favorite projects, reCAPTCHA. Luis von Ahn was at least co-inventor of the original CAPTCHA and no doubt has some interesting IP bound up in his latest venture that directly impacts the field of book scanning. We wondered what further implications Google’s acquisition of reCAPTCHA may have other than beefing up their internal spam fighting efforts.

A couple of folks weighed in at this point with some predictions and observations about the possible ultimate outcome. Tim Vollmer of the ALA worried about the settlement being reduced to the least/worst of what we’ve seen so far. Gavin Baker, a regular with a background in open access in academia, commented that most of the NGOs currently favor what we’ve seen of the stripped-back, amended settlement. The only holdouts, notably, are commercial outfits that may fear Books becoming a toehold into the traditional publishing space.

The discussion moved on to orphan works, trying to understand why reform has moved so slowly. The degree of stalling seems to vary by medium, photography being perhaps the most contentious case. This may be a consequence of the difficulty of consistently carrying attribution. Digital photography may deal with this issue better than printed photographs, but it is still trivial for someone to destroy, even inadvertently, the metadata carrying proper attribution of a work. Gavin seemed to think the scope of the orphan works problem may have made it worth setting up Google as a benevolent dictator of a central registry, assuming its remit could be kept exclusively to identifying, registering and mediating orphan works.

Things took a more philosophical turn as we explored a tangent around reform more generally. It was noted that legislation is almost entirely an additive process; rarely are laws removed from the books to address the need for more suitable compromises. Someone, I believe either Adam or Kat Walsh, mentioned a recent Cato Institute event whose topic was the criminalization of everything. The idea seemed consistent with the solely additive nature of law making.

Gavin asked the group why the suit was pursued as a class action rather than some other kind of complaint. He offered his own theory: that basing it on a class was a form of preemption. He suggested it actually might be a form of carrot, in that if Google would settle, the terms would carry farther with a class than with an individual action. The implied threat is that if it weren’t a class, Google would remain open to a potentially unending string of individual actions.

We closed with another tangent, delving into a consideration of why copyright is viewed and expressed differently across multiple types of media. The consensus was that this was a consequence of the norms and expectations arising from the introduction and adoption of each subsequent new form of media rather than anything inherent in each distinct medium. It is tempting, almost a logical trap, to think there are inherent qualities of media that naturally lead to different legal considerations. Law is made without any such notion, though. Just ponder for a moment the average technical literacy of your typical Congress critter and you’ll understand why that is.


I have notes from the October CopyNight, too, and should be getting those posted soon, hopefully sooner than it took to get these notes out. In the meantime, I am thinking about dates for November’s DC CopyNight and will be sending out a note soliciting feedback on the question soon. As I am now the official coordinator, not just the unofficial secretary, of the group, feel free to email me through the contact info here if you have any thoughts.

Another Book Scanning Project, Whether New Technology Should Have New Patent Rules, and More

  • HP and UMich’s book scanning collaboration
    Jon Stokes has the details at Ars of a project similar in spirit to Google’s original Books effort but rather different in its particulars. While it will be making digital versions from the university’s rare books collection available online, for free, HP is also offering a novel print-on-demand service designed to work with high resolution scans that otherwise are not suitable for your run-of-the-mill PoD service.
  • Another online, anonymous speech case
    Jacqui Cheng at Ars describes the latest in a developing trend of cases testing the limits of online, anonymous free speech. A final ruling on unmasking an anonymous commenter is still pending and, if granted, could set precedents that narrow anonymous speech. Here’s hoping the proceeding uncovers clear and indisputable facts so the ruling can be more of a bright line, either defending the commenter as having reported the truth or finding the comments clearly defamatory.
  • New comment system goes live at the FCC
    At Ars, Matthew Lasar points out a lot to like about the new online system. In particular, I am hoping the enhanced transparency will make it easier for activists and advocates to keep pressure on the issues, rather than comments getting lost in the tubes. Lasar points out one procedural concern, about which comments will be tallied as formal comments. I think that slots into a larger concern about whether the much more usable system will result in better attention to the public discourse from the FCC.
  • More details on Canadian net neutrality ruling
    Professor Geist provides some excellent analysis of what the CRTC has committed to, especially in the realm of network management. This expands on the story as I picked it up last week, filling in much clearer detail on the further ramifications of this policy making. Geist also dwells on the remaining challenges in this area that he feels need regulatory attention.
  • New bill seeks to change patent rules for new technology
    The bill in question is apparently motivated by the market realities of much higher costs to develop biotechnology. One of the problems I have with this is whether it is reasonable to assume those costs will remain high, and hence worthwhile to ossify a response into law. I leave the parallel concerns for software patents to the reader.
  • More details on Mozilla’s Raindrop
    Ryan Paul at Ars gives a good walking tour of this project that was announced last week. His findings are pretty consistent with my own quick experimentation over the weekend.