Research like this continues to invite us to re-think how computing technologies can intersect with and enliven the world around us. It isn’t going to transform your PC or phone, but it will continue the trend of weaving computation into everything, even more so than the still relatively apparent spread of the so-called Internet of Things. This technology also reminds us to ask: when memory is so available that you can literally paint it on, what would you do with it? Environmental sensors are really just the tip of the iceberg.
John Pavlus has an interesting post at io9 about Mark Coleran, who has come up with quite a few of the most recognizable fantasy user interfaces seen on film in recent years. More often than not, tech enthusiasts find Hollywood’s imaginings of futuristic computers to be groan-inducingly bad. Coleran’s work, however, is grounded in real design principles.
“[The FUIs] look fantastic when you see them in the theater, but a lot of that is actually grounded in reality — stuff that’s not mainstream yet, that I’ve been researching and experimenting with.” Case in point: Coleran’s design for a near-future music player in Children of Men looks uncannily similar to iTunes’s “Coverflow” interface, which came out nine months later.
As it turns out, his experience includes both real-world design and the more fantastical work that has appeared on screen. That mixture of the visionary and the practical no doubt factored into his being hired by Bonfire Labs.
But he says Bonfire hired him to be more of a “visual concept designer” for their interactive and advertising clients — sort of a Syd Mead for UIs, “looking at the bigger picture rather than the detail of individual buttons,” says Coleran. “My background from the film work, plus my experience in engineering, electronics, and graphic design, sort of fits with these interactive projects. There’s an element of futurism, where you can play the ‘what ifs’ out to their logical conclusions. Not just for the sake of it, but if you know the rules, you can break them to get something better.”
Read on for his thoughts on the distinct challenges presented by movie UIs and by interfaces, however speculative, that are intended for the real world. If only Canonical or Red Hat had snapped him up. Can you imagine what a “Syd Mead for UIs” could do to revolutionize Linux on all kinds of devices?
Slashdot links to some discussion originating out of IBM. Yu-Ming Lin, a researcher from Big Blue, explained in an interview that graphene transistors cannot be switched off in the same way as silicon ones. Slashdot and bit-tech both quote Yu-Ming’s explanation as saying:
… there is an important distinction between the graphene transistors that we demonstrated, and the transistors used in a CPU. Unlike silicon, graphene does not have an energy gap, and therefore, graphene cannot be “switched off,” resulting in a small on/off ratio.
The quote is from an as-yet unpublished interview. It isn’t clear whether the lack of an energy gap is a property of graphene itself as a material or of the current way transistors are constructed from it. Given the direct comparisons to silicon, I infer the former. If it were the latter, then the possibility would remain that a different approach could overcome this critical obstacle.
The article goes on to share some more optimistic thoughts from Yu-Ming on plenty of interesting applications within computer chips for graphene. A further quote from Mike Mayberry, Intel’s director of component research, suggests this all may still be theoretical, that more experimentation may be required before we can so confidently declare the practical limits of the material.
Graphene offers considerable advantages over silicon; a few are mentioned in the bit-tech article. I’ve discussed many of them in past posts here and on the podcast. It is intriguing to imagine graphene’s further use in computing, even replacing many of the materials in use today. Mayberry’s quote reminds us how wide the gap is between such speculation and even tomorrow’s technology, just in terms of what we know about silicon and don’t know about graphene.
Slashdot links to a Gizmag piece covering research that addresses one of the things I’ve always wondered about the amazing material, graphene. Being just a single-atom-thick sheet of carbon, the material itself is environmentally sustainable, but I’ve wondered whether we could produce it without nasty chemicals being involved in the process or arising as byproducts.
Researchers at Rice University have made graphene even sweeter by developing a way to make pristine sheets of the one-atom-thick form of carbon from plain table sugar and other carbon-based substances. In another plus, the one-step process takes place at temperatures low enough to make the wonder material easy to manufacture.
The process they discovered is also versatile. The article doesn’t say whether this is true with sugar as a carbon source, but using the original material, plexiglass, they could variously produce single, double, and multilayer sheets. They could also alter the doping of the graphene. The addition of other materials is key to controlling the resulting sheet’s electrical and optical properties.
Graphene Can Be Made With Table Sugar, Slashdot
John Timmer at Ars Technica explains some fascinating new work on a type of processor that could build on the advantages of FPGAs while providing the speed of more conventional CPUs. Timmer explains the relationship between specialized chips, like DSPs, and traditional CPUs pretty well. In the course of doing so, he notes how a field programmable gate array in many ways represents the best of both, allowing such a chip to dedicate all of its silicon to specialized tasks while being able to change the type of task as needed. In reality, FPGAs have limits that make them useful only in certain circumstances, like prototyping new chip designs without dedicating fabrication capacity to building fixed chips.
The key to this new approach is harnessing chaos theory.
Those who think of chaos as completely unpredictable are likely to be wondering how unpredictable behavior can be used to perform logic operations. But chaos theory isn’t concerned with unpredictability; instead, it focuses on what are called nonlinear functions, ones where the ultimate output is very sensitive to the initial conditions. When you can control the initial conditions, you can still predict the output.
That ability is at the heart of a chaotic processor. The authors of a recent paper in Chaos describe what they call “chaogates,” which use simple, nonlinear functions to perform logic operations. The basic idea is that, ultimately, you want a logical output, a binary 1 or 0. It’s possible to convert the output of even a complex function into that sort of binary distinction using a strategically placed less than or equal to (<=) operation. If this sort of function is hardwired into the chip, then it’s simply a matter of knowing how to select your inputs so that you get the operation of your choice.
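The scheme described above can be sketched in a few lines of code. This is a minimal illustration, not the paper’s actual hardware design: I’m assuming the logistic map as the nonlinear function (a common choice in the chaos computing literature), and the bias values and threshold below are worked out purely for this example.

```python
# Illustrative "chaogate": one iteration of a nonlinear (chaotic) map,
# followed by a threshold comparison, yields a binary logic output.
# Which gate you get depends only on how you select the inputs (the
# bias x0) -- the hardwired function and threshold never change.

def logistic(x: float) -> float:
    """The nonlinear map at the heart of the gate (chaotic at r = 4)."""
    return 4.0 * x * (1.0 - x)

def chaogate(a: int, b: int, x0: float, delta: float = 0.25,
             threshold: float = 0.75) -> int:
    """Encode the two input bits as shifts of the initial condition,
    iterate the map once, and threshold the result into a 0 or 1
    using the <= comparison described in the article."""
    x = x0 + delta * (a + b)
    return 0 if logistic(x) <= threshold else 1

# Repurposing the gate is just a matter of choosing a different bias;
# these particular values are illustrative, derived for this map alone.
AND_BIAS, OR_BIAS, XOR_BIAS = 0.0, 0.125, 0.25

for a in (0, 1):
    for b in (0, 1):
        print(a, b,
              chaogate(a, b, AND_BIAS),   # behaves as AND
              chaogate(a, b, OR_BIAS),    # behaves as OR
              chaogate(a, b, XOR_BIAS))   # behaves as XOR
```

The point of the sketch is the repurposing: the same hardwired function and comparison produce AND, OR, or XOR depending only on the selected initial condition, which is why such a gate could, in principle, be reassigned on the fly.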
This is very early-stage work. While there is a working prototype, it is far from the scale that would make it comparable to existing FPGAs. Timmer notes one aspect of these “chaogates” that has already worked out well: they can be re-purposed in about a single clock cycle. If that holds as they are accelerated from the current 30MHz to useful speeds, it would be a considerable advantage.
The biggest barrier is that the existing hardware description languages, used in programming FPGAs, do not apply to these new chips. In addition to proving the theory and building workable prototypes, the researchers have to invent an entirely new, compilable language as well.
Researchers harness chaos theory for new class of CPUs, Ars Technica
- Kickstarter for a modern interactive fiction game
Jason Scott, the producer of the wonderful documentary, “Get Lamp”, tweeted about this. Andrew “Zarf” Plotkin has started a pledge drive to support his full-time effort to produce his next interactive fiction project. He has already sailed past his goal, doing so in a mere 13 hours. I imagine pledges will continue to push his funding up for a few more days yet, especially with an endorsement from Scott.
- Microsoft upset by open Kinect drivers, then relents
Ryan Paul at Ars Technica was one of a few folks to note the Redmond giant’s reaction to Adafruit’s bounty program and its winner. I really don’t understand their response, given that open drivers will add sales from folks who otherwise wouldn’t have bothered with the device. Dana Blankenhorn at ZDNet’s Open Source blog soon after had the story that Microsoft had relented on its threats.
- A working 8-bit computer in Minecraft
This is a different effort from the working ALU to which Cory linked before on BoingBoing, but definitely in that same vein. I’m not sure what architecture Lazcraft is building here, as the video lacks any kind of narration, unlike theInternetFTW’s.
- Enigma cipher machine up for auction, Christie’s, via Hacker News
- IBM offers glimpse of extreme, miniature super computers of tomorrow, BBC
- How to make art online without getting ripped off, BoingBoing
- Obama administration to appoint web privacy czar, Ars Technica
- Apple partnering with Oracle to bring OpenJDK to OS X, Ars Technica
Apologies for the second day of just links. I was in a rush to get to the local CopyNight here in DC last night. I took a sick day from work today to try to finally get over this cold and have been trying to keep blogging to a minimum, too, in order to maximize my rest.
Thankfully, tonight’s podcast is an interview I recorded last week, so it will go out with minimal effort as scheduled.
- Water may unlock a graphene-based transistor, io9
- Judge blocks overbroad child protection law in Massachusetts, Ars Technica
- Mozilla’s response to Firesheep, further urging to use SSL more broadly, Mozilla Security Blog
- B&N caught deleting customer’s files, blames user, Techdirt
- Mozilla delays Firefox 4 until early 2011, The Register
- New technology allows drawing, and erasing, wires on circuits
Tim Barribeau at io9 points to some research that initially seems similar to a story I discussed a while back, about using a heated atomic force microscope to etch conductive traces in graphene. The materials involved are a bit more complex, but unlike the graphene research, erasure is definitely doable here, where it was only a vague possibility with the oxidized carbon substrate.
- Eight epic failures of regulating cryptography
In the wake of the feds’ campaign to make surveilling the internet easier, EFF has some timely reminders of how legally mandating functions and aspects of cryptography is a problem. I am a fan of the constitutional argument and the final point in the list, the absence of proof of harm or risk.
- AP wants to become the ASCAP of news, Techdirt
- European court rules against indiscriminate copyright levies on blank media, Ars Technica
- Bicycle thief burdened with unusual computer-related restrictions as part of probation, Slashdot
This is news cast 225, an episode of The Command Line Podcast.
In the intro, thanks to Steve for his latest donation which also means he gets the signed copies of Wizzywig 1 & 2. Also, an announcement of audio and feed changes to go in effect on October 3rd.
This week’s security alert is a more in-depth look at the Stuxnet worm.
In this week’s news: Intel to use DRM to charge for processor features and why that is problematic; an Ubuntu designer shares his thoughts on a context-aware UI; a course on the anthropology of hackers (one I wish UMD’s MITH would offer); and the FCC finalizes rules for white space devices (including details on those rules), prompting one commissioner to speculate we no longer need net neutrality rules.
Following up this week: the MPAA wants to know if it can use ACTA to block WikiLeaks, and one judge quashes a US Copyright Group subpoena.
This work is licensed under a Creative Commons Attribution-Share Alike 3.0 United States License.
Slashdot embedded this video demo that is pretty compelling.
What is shown isn’t going to replace the fine selection and manipulation possible with touch interfaces but would make an excellent complement. The very first thing I thought of was in-car control, where you could easily gesture at your console without taking your eyes off the road, easily turning the system on and off and performing simple navigation. Of course, that’s the very example mentioned in the linked post, so it is clearly intentional in the video.
Although there isn’t any more detail in the press release, that didn’t dissuade me from my other impression, about the coarseness of control. I very much doubt you’ll be able to pull off any sophisticated gestures, like drawing shapes. All the same, even a chunky version would be immensely useful.
Touchless Gesture User Interfaces, Slashdot