Rant on the Failure of Programming to be Pragmatic

A listener sent me a link to this rant, The State of the Art is Terrible, by Zack Morris. If you can wade through the technical humbuggery, I think there is a useful point. Several decades after the advent of high level programming languages and well into the age of ubiquitous computing, it genuinely is time for computing technology to be more focused on outcomes.

I’m on the threshold now of rejecting this false idol, but for at least a little longer I have to cling to it to carry me through. I have a dream of starting some kind of open source movement for evolvable hardware and languages. The core philosophy would be that if your grandparents can’t use it out of the box to do something real (like do their taxes or call 911 when they fall down) then it fails. You should literally be able to tell it what you want it to do and it would do its darnedest to do a good job for you. Computers today are the opposite of that. They own you and bend you to their will. And I don’t think people fully realize how trapped we are within this aging infrastructure.

The post is rife with examples of how the status quo fails all but those of a very hackish bent. Morris touches on why this is so: the industrialization of software and the profit motive that followed. If you can get past the very down tone, I think there is a kernel of optimism, a call for a sea change in how computers work and how they work for us.

Morris isn’t alone in this view, keeping company with the likes of Jaron Lanier, and this is not likely to be the last rant in this vein. He is a bit more pragmatic than most, though, highlighting PHP as an example of a step in the right direction. His point isn’t that PHP has a natural-language syntax, or semantics that mirror concepts and idioms with which non-programmers are already familiar. Rather, he suggests it for its more productive failure modes: it makes a best effort on the easy stuff and doesn’t obscure the breakages that genuinely require more investigation.

Whether you agree about PHP specifically or not, it is worth considering as an example of the model he is proposing: languages and tools more focused on outcomes than on abstract design principles or idealized syntax. The emphasis on getting out of the way of doing useful things is ultimately what sets this rant apart from a crowd of voices raising many of the same critiques of the state of the art.

Are We Really Stuck with Plus-ified Google Reader?

There has been much furor over the deprecation of Google Reader’s built-in social tools, especially the ability to share feed items with comments.

The first problem with forcing Reader users to shift over to Plus is that it brings many more people directly into conflict with the much-debated real-name policy for the search giant’s shiny new social network. Feed reading and curation is often closely associated with blogging, an activity with a long and respected tradition (despite the occasional conspicuous failure) of anonymous and pseudonymous authorship. Until now, such users had an easier time following Google’s own advice not to use Plus if they are not in a position to use a real or common name.

This leads to the second problem with Google’s stance, not just on this change but on a couple of recent policies. Namely, the company has been espousing the view that if you don’t like how it runs its services, you can export your data and use some other tool. Richard MacManus at ReadWriteWeb takes a pretty dim view of that recommendation, reasoning that the popularity of Reader has killed off the alternatives.

I agree only insofar as this: if you want a feed reader that is accessible from multiple machines, remembers what you have and have not read, and offers the ability to curate items directly from the reader, as opposed to using a blog or tumblr, then Google’s stance is indeed incredibly disingenuous.

The optimist in me, however, hopes that Google’s ham-fisted handling of Reader shakes enough free software and open source developers loose from their complacency to quickly spin up some compelling alternatives. I think there is some serious low-hanging fruit here in the form of a bridge between the feed reading capabilities in Mozilla’s Thunderbird and their Sync service, a secure and extensible means of sharing state between multiple instances.
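To make that idea a little more concrete, here is a rough sketch, in TypeScript, of the kind of per-item read state such a Thunderbird-to-Sync bridge would need to carry and reconcile. The record shape and every name in it are my own guesses for illustration, not anything Mozilla has specified.

    // Hypothetical shape of a per-item record a feed reading client might
    // hand off to a sync service. None of these names come from any actual
    // Thunderbird or Sync API.
    interface FeedItemState {
      feedUrl: string;   // the feed the entry came from
      itemId: string;    // guid or permalink of the entry
      read: boolean;     // has this entry been read on any machine?
      curated: boolean;  // flagged for sharing or later comment
      updatedAt: number; // last state change, milliseconds since the epoch
    }

    // Reconciling state from two machines: the most recent change wins.
    function merge(local: FeedItemState, remote: FeedItemState): FeedItemState {
      return local.updatedAt >= remote.updatedAt ? local : remote;
    }

Even something this simple, synced securely, would cover the multi-machine, remember-my-read-state case that makes walking away from Reader so hard.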

Remembering the Contributions of Dennis Ritchie

I’ve been meaning to remark on the passing of Dennis Ritchie but have been incredibly busy at work. The irony is that, unlike Steve Jobs, who touched my career and my life only glancingly, Ritchie’s contributions in the form of the C programming language and the co-invention of Unix are pretty critical preconditions for most of what I’ve been doing, professionally and out of enthusiasm, for well over a decade.

When I was in college, access to the limited number of Unix workstations on campus took on mythic proportions. All of my friends and my co-workers within the school’s Technology Services enjoyed noodling around with PCs of different stripes. Gaining entrance to the access-limited Unix labs and sitting at the quietly humming machines, with displays that were remarkably large and high resolution for the time, was something else altogether.

Those machines in no way felt like toys. To a one, they were networked together and connected via fast links to the Internet. What you had to bash and cobble together on your own PC to get barely functioning, in terms of horsepower and network connectivity, was simply a given on these machines.

That sense of awe, the invitation to explore woven into my earliest experiences of Unix, deeply informs my relationship with Linux, its spiritual descendant. I still experience a subtle frisson of delight when exercising root privileges on any of my Linux boxen for the way it takes me back to those almost furtive trips into the Unix labs at school.

The C programming language holds a similar place in my personal pantheon. Almost every programming language in which I have more than a passing fluency can be described as C-like. I have worked directly with C only for limited stints over the years, experiences too few and far between to transform it from the mysterious into the quotidian. I realize that, rationally, it is a bit silly, but the sheer age and reach of C seem to command a certain veneration that few if any subsequent languages have achieved.

The contrast between the coverage of Jobs’ passing and Ritchie’s is pretty extreme. The temptation to read much into that difference is great, but I think it is easily explained. By all accounts Ritchie was a very quiet and private person. Unlike with Jobs, you don’t need a sense of Ritchie’s personality to appreciate his contribution to modern computing. The technical merits of C, of Unix and of his collaboration with Kernighan in the form of The C Programming Language, or simply K&R, speak for themselves.

If you are unfamiliar with Dennis Ritchie’s work, Joe “zonker” Brockmeier posted an excellent recollection at ReadWriteWeb.

Another Example of Why I Question Some of Google’s Technical Decisions

@gnat brought to my attention a Hacker News post by JavaScript creator Brendan Eich that tries to unpack the real motivations and possible outcomes of Google’s recently announced in-browser programming language, Dart. I’ll admit the day job has been keeping me so busy that while I saw the announcement, I didn’t have time to read through even the high-level details. Eich hits on the most salient points in his criticism of Google’s disingenuous move to “fix” what it deems “unfixable” in JavaScript while claiming to be advancing an open replacement.

We’re in a multi-browser market. Competitors try (some harder than others, pace Alex Russell’s latest blog post) to work together in standards bodies. This does not necessarily mean everything takes too long (Dart didn’t take a month or a year — it has been going longer than that, in secret).

[…]

Dart goes the wrong way and is likely to bounce off other browsers. It is therefore anti-open-web in my book. “The end justifies the means” slaves will say “but but but it’ll eventually force things to get better”. Maybe it will, but at high cost. More likely, it won’t, and we’ll have two problems (Dart and JS).

Honestly, I am a little sick of the hubris that accompanies decisions like this. I’ve explained my admiration for Mozilla repeatedly before as an increasingly necessary counterbalance to Google’s now-established pattern of eschewing community-developed open standards in favor of its own efforts: Chrome instead of Firefox, WebM instead of Theora, Plus instead of a federated social network approach using ActivityStreams, OStatus, etc.

In the interest of disclosure and fairness, I collaborate daily with folks at Google. They do much that is needful and even admirable. In this one area, however, I think questions need to be asked more forcefully and clearly each successive time Google charts its own way, often at the expense of the open web community.

Brendan Eich on Hacker News, via @gnat

Call for an App to Automatically Track Serendipitous Finds

One of the things I enjoy most about reading Clive Thompson’s writings, whether at one of the outlets to which he regularly contributes or in his less frequent posts to his own blog, is how he unpacks and examines many of the activities we take for granted in this post-network world.

In one of his most recent blog posts, he looks at how we as web surfers find the interesting flotsam that is so enjoyable to share.

Indeed, we clearly have an appetite for knowing how and where people found stuff. Every time someone creates a new tool for publishing online — blogs, status updates, social networks, you name it — users on a grassroots level immediately create conventions for elaborately backlinking and @crediting where they got stuff from. It’s partly reputational, but it also betrays the fact that we seriously enjoy associational thinking and finding.

I love how he dovetails this with some of the amazingly prescient work of Vannevar Bush. It also meshes well with Dan Gillmor’s recent call for those of us who aggregate and curate stories to dig deeper and expose the original sources and correct attribution for the work behind these fantastic nuggets of interest.

In his comments on Thompson’s post, Cory Doctorow teases out the call for help in enhancing or building tools to get at the trails we use to find and associate content. Personally, I rely almost exclusively on RSS, so I usually have little trouble clarifying the original sources of anything I read. Regardless, I understand the need and the challenge. Had I any time to spare, this would be a fantastic weekend project and a good excuse to build some stronger skills in web browser extension development.
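Just as a sketch of where such a weekend project might start, here is a toy WebExtensions-style background script in TypeScript that records, per tab, the chain of pages visited, so you could later answer “how did I end up here?”. It assumes the chrome.webNavigation API and the @types/chrome typings; a real tool would also need to handle links opened in new tabs, search queries, retention limits and, above all, privacy controls.

    // Keep an ordered list of top-level navigations for each tab.
    const trails = new Map<number, string[]>();

    chrome.webNavigation.onCommitted.addListener((details) => {
      if (details.frameId !== 0) return; // ignore iframes, track the page itself
      const trail = trails.get(details.tabId) ?? [];
      trail.push(details.url);
      trails.set(details.tabId, trail);
    });

    // When you bookmark or share a page, the last few hops are its "trail".
    function trailFor(tabId: number, hops = 5): string[] {
      return (trails.get(tabId) ?? []).slice(-hops);
    }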

Whether or not you can help answer the call to build better tools, or any tools at all, for this engrossing problem, the article is well worth a read.

“How did you find my site?” and Vannevar Bush’s memex, collision detection

Digital Dumpster Diving

I am uncertain whether Dumpster Drive, the creation of interaction designer Justin Blinder, is actually useful or even meant to be so. It strikes me much more as a sort of digital, networked art project. There might also be an interesting thought experiment around whether the intent and act of deleting some digital media affects in any way the legal analysis of whether the sharing done by the software constitutes piracy comparable to the activity on more traditional P2P networks.

Dumpster Drive is a file-sharing application that recycles digital files. Using dumpster diving as a model for recirculating unwanted objects, Dumpster Drive allows others to dig through files that you delete on your computer in a passive file-sharing network. Instead of simply erasing data from your computer, the software allows users to extend the lifecycle of their unwanted files and pass them on to others.

The application is only available for Mac. Reading around the site answered my only other question: it does not replace your existing Trash folder at all; rather, it provides an additional target. Otherwise I had nightmare images of all kinds of unintentional and embarrassing sharing taking place.
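That “additional target” design is easy to picture. Here is a minimal sketch in TypeScript for Node that watches a dedicated Dumpster folder and records metadata for anything dropped into it; the folder name and the idea of “publishing” the record are my own stand-ins, not how Dumpster Drive is actually implemented.

    import * as fs from "fs";
    import * as os from "os";
    import * as path from "path";

    // A dedicated drop folder, deliberately separate from the real Trash.
    const dumpster = path.join(os.homedir(), "Dumpster");
    fs.mkdirSync(dumpster, { recursive: true });

    fs.watch(dumpster, (_event, filename) => {
      if (!filename) return;
      const name = filename.toString();
      const full = path.join(dumpster, name);
      if (!fs.existsSync(full)) return; // the file was taken away, not dropped off
      const stats = fs.statSync(full);
      // A real client would announce this record to peers on the network.
      console.log({ name, bytes: stats.size, discardedAt: Date.now() });
    });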

Dumpster Drive, via Slashdot

MP3 Decoder Written in JavaScript

At first I was a little puzzled by this Geek.com story to which Slashdot linked last night. HTML5 natively includes capabilities for playing back audio, though not all formats are supported equally by all browsers, for reasons similar to the much more visible debates over video formats. A JavaScript-powered player, on cursory inspection, would seem to be yet another front end to this multimedia capability increasingly available in newer browsers.

jsmad isn’t a front end, though. Digging into the story a bit more, it is actually a full decoder for several variations of the MP3 format that runs entirely in JavaScript. It has more in common, then, with the recent x86 emulator and the several game emulators that have been ported or written from scratch to execute in the browser without using any plugins or any special multimedia capabilities.

Porting notes

Obviously, porting low-level C code to Javascript isn’t an easy task. Some things had to be adapted pretty heavily. jsmad is not the result of an automatic translation – all 15K+ lines of code were translated by hand by @nddrylliog and @jensnockert during MusicHackDay Berlin. Then, @mgeorgi helped us a lot with the debugging process, and @antoinem did the design of the demo during MusicHackDay Barcelona.

It performs well enough to decode and play MP3s in realtime on Firefox on modern computers, although if you do lots of things at once, Firefox might forget at all about scheduled tasks and let the soundcard underflow. There is a rescue mechanism for that in the demo, which works most of the time.

There is a fully capable demo, written in a very brief amount of time as part of the Music Hackday. I ran into a couple of issues with playback, but outside of that the experience is entirely comparable to the usual Flash players.

If it is possible to run a fixed-point, compute- and data-intensive decoder like this with nothing more than the browser’s JavaScript engine, I have to imagine it should also be possible to port decoders for many of the open formats, like Ogg Vorbis and FLAC. As a podcaster, I find the possibilities here very alluring. jsmad is free software, available under the GPL v2, so it isn’t unreasonable to expect that as interest increases, so will performance, accuracy and stability.
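For a sense of what “nothing more than the browser’s JavaScript engine” involves, here is a rough TypeScript sketch of one way a pure-JavaScript decoder can hand its output to the speakers without a plugin, using the Web Audio API. It is a generic illustration, not how jsmad itself schedules playback; decodeFrame stands in for whatever interface a real decoder would expose.

    // Hypothetical decoder hook: returns one frame of mono PCM, or null at EOF.
    declare function decodeFrame(): Float32Array | null;

    const ctx = new AudioContext();
    const sampleRate = 44100;

    // Wrap a chunk of decoded samples in an AudioBuffer and schedule it.
    function playChunk(samples: Float32Array, startTime: number): number {
      const buffer = ctx.createBuffer(1, samples.length, sampleRate);
      buffer.copyToChannel(samples, 0);
      const source = ctx.createBufferSource();
      source.buffer = buffer;
      source.connect(ctx.destination);
      source.start(startTime);
      return startTime + samples.length / sampleRate; // when the next chunk is due
    }

    // Decode ahead of the playback clock so the sound card never starves.
    let nextTime = ctx.currentTime + 0.1;
    for (let pcm = decodeFrame(); pcm !== null; pcm = decodeFrame()) {
      nextTime = playChunk(pcm, nextTime);
    }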

JavaScript decoder lets MP3s play in Firefox without Flash, Geek.com via Slashdot

Apple Patents Potential In-the-World DRM

If you are among a certain set, those who have truly grokked how brain-damaged DRM is, you’ve no doubt joked about how proponents of restricting digital technologies would love to extend that reach into the physical environment, beyond just the access and playback of digital files. That idea has taken a massive and disturbing step closer to reality.

On June 2, 2011, the US Patent & Trademark Office published a patent application from Apple that revealed various concepts behind a newly advanced next generation camera system that could employ infrared technology. On one side, the new system would go a long way in assisting the music and movie industries by automatically disabling camera functions when trying to photograph or film a movie or concert. On the other hand, the new system could turn your iOS device into a kind of automated tour guide for museums or cityscapes as well as eventually being an auto retail clerk providing customers with price, availability and product information. The technology behind Apple’s patent application holds a lot of potential.

Cory over at BoingBoing linked to some of the coverage of this Patently Apple scoop that emphasized the negative application. I think that emphasis is warranted, given that the positive scenario of providing context- or location-aware capabilities is already quite doable with existing, deployed technologies like GPS, Bluetooth, AR and, most recently, NFC. It hardly seems like we need an IR-based technology for that end, which leaves the more chilling implication: allowing venue owners and rights holders to reach into and affect the operation of your device, against your wishes.

Apple working on a Sophisticated Infrared System for iOS Cameras, Patently Apple (via BoingBoing)

Archos 43 More than 6 Months Later: Largely Fail

I purchased an Archos 43 notaphone a little over six months ago. I have little use for cell phones or expensive data plans, as I am usually within easy range of WiFi, and Google Voice neatly takes care of the few instances where I have to give someone a working cell number, even though I prefer just about any other means of communication. A few months ago I even popped for a pay-as-you-go mobile hot spot for those occasions when I am traveling or otherwise need connectivity and WiFi is uncertain or unavailable.

At first, the lack of the Android Market was my biggest complaint, followed by the crummy resistive touch screen. Over time, those two complaints have swapped places. A bit of hacking got the Market onto the device and only occasionally does it present problems, mostly around major firmware updates from Archos. The screen, however, has not worn well and continues to get worse and worse.

There is a broad strip down the right-hand side of the screen that no longer works reliably. If I re-calibrate the touch screen, it will work for a few minutes before it settles into its usual semi-functional state. If it were just an inoperable chunk of the screen, rotating would mostly overcome it at the expense of some small hassle. The problem is that the accuracy on the rest of the screen is absolutely abysmal. All the way over to the left, it is pretty much spot on, but the further to the right you touch, the worse it gets, registering touches as offset increasingly to the left. I am convinced the non-working portion of the display is part of this mis-registration, that the offset just gets so large you’d have to tap beyond the physical boundary of the screen to register successfully.

As you might imagine, typing on the soft keyboard with this idiosyncratic touch screen is an exercise in frustration. More often than not, after the third word of a message or update, I want to hurl the accursed device at the nearest hard surface as hard as I possibly can. I now try to avoid any applications that require typing, resigning myself to media consumption. You’d think that would alleviate the frustration with the damn thing a bit, but hardly.

Just hitting the play, pause and next buttons reliably is often an utter crap shoot. A miss can send me back to the home screen or bounce me to another podcast episode or track. Usually I have to rotate the thing around repeatedly to get the most reliable, leftmost edge to line up with the buttons I need. The amount of effort involved just to keep up with my podcasts and occasionally listen to some music while reading on my morning train ride is tiresome, to say the least.

To add insult to injury, I finally installed a firmware update from Archos that I had been avoiding for weeks. I was uncertain whether it would undo my Market hack, hence my hesitation. My (undeserved) hope that the update might improve the screen finally overcame my reluctance, and yesterday I installed the patch. Not only did it do absolutely nothing to alleviate my existing woes, it has now introduced a new glitch. Whenever the screen automatically shuts off to help manage battery life, media playback goes out the window. I have disabled the auto shut-off just so I can continue to listen to podcasts; otherwise the player app would be utterly unusable. I also realize this may be a worsening of an existing bug that was interfering with some music files that had previously been glitchy. Leaving the screen on while using the built-in music player actually seems to work better on files I thought were just mis-encoded or had some culpable metadata.

Heck of a workaround: either risk wrecking my battery life and triggering a weird series of app activations as the MID floats around my pocket with its screen on, or give up on the core reason I bought the stupid thing in the first place.

So what to do? The gadget is still within its warranty, but I am not optimistic about the vendor’s ability to address any of my complaints. I am also loath to give up even a brain-damaged media player for the duration it would take to get it repaired or replaced. I struggle enough to keep up with podcasts as it is.

I looked around a bit online today for a possible replacement. In short, there really are none. I could get a simpler, non-Android media player; there are several that work well with Linux. But even if I set aside how deeply habituated I am to having Internet access with me constantly, I cannot imagine going back to a device that has to be routinely synchronized with a computer. Of the other Android-powered devices that are not phones, the vast majority are full-sized tablets. For reasons I may discuss in some other post, I don’t want anything larger than my shirt pocket. Besides, judging by customer reviews of at least one WiFi-only version of a popular seven-inch tablet, device makers often hobble the tablets without cell modems as a subtle and irritating prod towards the more lucrative versions.

Samsung has released an interesting media player that bears some passing resemblance to its popular Galaxy line of phones. It has not reached the US though and reviews so far have been mixed. I am not convinced it would be a worthwhile purchase.

As a last resort, I’ve looked into unlocked smart phones. I could see carrying around a Nexus S or some Galaxy-based phone but haven’t been able to find any discussion of how reasonable it is to leave such a device unactivated. All the posts and forum threads I’ve found assume you’ll pop in a SIM from some carrier or another and start using it as a regular phone, voice-plus-data plan and all.

I even considered biting the bullet and getting an Android smartphone with a plan of some kind. I can’t get past the fact that any contract option still costs more each month than I am willing to pay, considering how lightly I would use the minutes and bandwidth; see my comments above on access to WiFi and my ingrained aversion to mobile telephony. There are now Android phones available with pay-as-you-go plans, which could be a reasonable upgrade to the 2G dumb phone I still carry for when I absolutely, positively have to make or receive a mobile call. Of course, none of the smart phones on offer with that option are ones for which I would actually pay good money.

Am I being unreasonable? Is there an option I haven’t considered for getting an Android-powered, small-form-factor media player and Internet device? If you have an answer to the latter, I sure would like to hear about it in the comments. Or if you can clarify how well an unactivated phone might work, I’d like to hear that too.

Understanding the Transition Problem with Bitcoin

I share Cory Doctorow’s ambivalence towards the increasingly popular digital currency, Bitcoin. I have liked the abstract idea since I first encountered rough forms of it in fiction. Reading up on Bitcoin, though, I have failed to find anything that convinces me it will either ultimately replace a large chunk of traditional currency or implode, perhaps dangerously so, due to some fatal design or implementation flaw. I am a bit mystified at why it has succeeded where so many other schemes, ones arguably better designed, haven’t managed to go anywhere.

I appreciate that Cory is drawing attention to some of the more carefully considered and researched discussions of Bitcoin, like this post by Edward Z. Yang. In it, he works through how the hardwiring of SHA-256 will at some point force a transition to a successor currency and how a decentralized scheme for doing so will falter compared to a centrally managed one.

At this point, we’ll take a short detour into the mooncake black market, a fascinating “currency” in China that has many similar properties to an obsolescing Bitcoin. The premise behind this market is that, while giving cash bribes are illegal, giving moon cake vouchers are not. Thus, someone looking to bribe someone can simply “gift” them a moon cake voucher, which is then sold on the black market to be converted back into cash.

The problem with mooncake vouchers, which must be converted into actual cakes by the Autumn Festival, is the same one facing a decentralized transition from Bitcoin to a notional successor. At some point, the bottom falls out of the market as fewer and fewer buyers remain willing to purchase the quickly obsolescing currency.

Yang admits this all assumes Bitcoin has the staying power to make it to the point where SHA-256 is broken and needs replacing. Given how quickly MD5 was thoroughly defeated and how soon practical attacks were demonstrated against SHA-1, it isn’t an unreasonable question to ponder, even if the currency ends up having a relatively short lifespan.
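To make concrete why the hash function cannot simply be swapped out in place, here is a toy TypeScript model of SHA-256-based proof of work. It is a deliberate simplification, not Bitcoin’s real block format (which double-hashes an 80-byte header against a compact difficulty target), but it shows how deeply the choice of hash is baked into what every client considers a valid block.

    import { createHash } from "crypto";

    // Mining, reduced to its essence: search for a nonce whose SHA-256 digest
    // falls under a difficulty target (here, a required number of leading zeros).
    function mine(blockData: string, leadingZeros: number): number {
      const prefix = "0".repeat(leadingZeros);
      for (let nonce = 0; ; nonce++) {
        const digest = createHash("sha256")
          .update(blockData + nonce)
          .digest("hex");
        if (digest.startsWith(prefix)) return nonce; // a "valid" block
      }
    }

    // Every client validates work the same way, so replacing SHA-256 means a new,
    // incompatible chain: exactly the transition problem Yang describes.
    console.log(mine("previous-hash|transactions|timestamp", 4));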

Bitcoin is not decentralized, Inside T5 via BoingBoing