2015-11-15 The Command Line Podcast

This is an episode of The Command Line Podcast.

This time, I chat about some recent news stories that caught my attention, including:

You can subscribe to a feed of articles I am reading for more. You can follow my random podcast items on HuffDuffer too.

You can directly download the MP3 or Ogg Vorbis audio files. You can grab additional formats and audio source files from the Internet Archive.

Creative Commons License

This work is licensed under a Creative Commons Attribution-Share Alike 3.0 United States License.

Watson and the Future of Machine Learning

I had been following the story of IBM’s Watson taking on and besting two long-standing Jeopardy champions only peripherally. It just didn’t strike me as much more than a distraction in the field of general, or strong, machine intelligence.

Today, Mike Loukides at O’Reilly Radar had a thoughtful piece that has me reconsidering. He, too, seems uninterested in Watson’s ability to come up with a single correct response. However, in digging into the processing behind arriving at that datum, he suggests some interesting possibilities.

The next level down in Watson’s analysis is even more interesting. The confidence level assigned to each answer comes from how well the answer matched various sources of information. Possible answers are scored against a number of data sources; these scores are weighted and combined to form the final confidence rating. If exposed to the human users, the scoring process completely changes the kind of relationship we can have with machines. An answer is one thing; a series of alternate answers is something more; but when you’re looking at the reasons behind the answers, you’re finally getting at the heart of intelligence. I’m not going to talk about the Turing Test. But I am suggesting that, when you have the reasons for the alternative answers in hand, you’re suddenly looking at the possibility of a meaningful conversation between human and machine.

Read the rest of the article; he gives some fairly compelling examples and practical applications. I can already see some possibilities beyond what he considers, such as helping to map out authority and trust, interactively, for various information sources, a task that is often just too imposing for the casual reader browsing around.
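
To make the scoring he describes a little more concrete, here is a minimal sketch of combining per-source evidence into a single confidence rating. The source names, weights, and scores are entirely hypothetical; Watson’s actual scoring model is far more elaborate.

    # Hypothetical sketch: combine per-source evidence scores into one
    # confidence rating via a weighted average. All names and numbers are
    # invented for illustration, not taken from Watson.
    def combine_confidence(scores, weights):
        total = sum(weights[source] for source in scores)
        if total == 0:
            return 0.0
        return sum(scores[source] * weights[source] for source in scores) / total

    weights = {"encyclopedia": 0.5, "news_corpus": 0.3, "structured_db": 0.2}
    candidates = {
        "Toronto": {"encyclopedia": 0.2, "news_corpus": 0.4, "structured_db": 0.1},
        "Chicago": {"encyclopedia": 0.9, "news_corpus": 0.7, "structured_db": 0.8},
    }

    for answer, scores in candidates.items():
        print(answer, round(combine_confidence(scores, weights), 2))

Exposing the per-source scores, rather than just the final number, is what would let a human probe the reasons behind each alternative answer.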

Watson and the future of machine learning, O’Reilly Radar

feeds | grep links > Autonomous Vans Follow Marco Polo, Pushing Limits of Chip Making Further, Facebook’s New Friend Stalker Tool, and More

  • Vans drive themselves across the world
    Slashdot links to a Techeye piece describing the track of four driverless vehicles that successfully retraced the route of Marco Polo. Autonomous vehicles seem to be improving rapidly and dramatically. The fact that these are not sedans but the smallest style of commercial vehicle reinforces my expectation that we’ll see this technology in regular use for long-haul freight before it becomes an up-class option on your next personal vehicle.
  • Research suggesting an end run around scale limits of chip photo-lithography
    Chris Lee at Ars Technica describes some new work that may give Moore’s Law, as seen with current techniques for making computer chips, a reprieve until more advanced replacements come into play. The effective threshold on current photolithographic techniques is how small a spot of light you can cast through a mask onto the chip; a rough version of that limit is sketched just after this list. What researchers are now realizing is that they may be able to manipulate secondary effects to go beyond this diffraction limit, continuing to shrink the scale at which they can manipulate materials with light.
  • Facebook adds friend stalker tool
    Slashdot is just one of many places pointing to this developer-driven feature recently announced by the social networking giant. It is difficult to know if this really exposes any more private information than any other page or feature on the site. What is clear is that by casting that information into a new context, the interactions between two friends the observer selects, the feature is likely to violate more expectations about where and how this information is seen.
  • Australian privacy commissioner slams data retention plan, Slashdot
  • Israel to join list of nations with ‘adequate’ data protection plans, The Register
  • Archive of Geocities being released as a near 1TB torrent, Techdirt
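
For context on the chip-making item above, the conventional rule of thumb for the resolution limit in photolithography is the Rayleigh criterion. The figures below are typical ballpark values, not numbers from the Ars Technica article.

    # Rough Rayleigh-criterion estimate of the photolithography resolution
    # limit referred to above. Values are common ballpark figures, not taken
    # from the article.
    k1 = 0.27                  # process-dependent factor
    wavelength_nm = 193        # ArF excimer laser light source
    numerical_aperture = 1.35  # immersion lithography optics

    min_feature_nm = k1 * wavelength_nm / numerical_aperture
    print(round(min_feature_nm, 1), "nm minimum printable feature, roughly")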

feeds | grep links > Remembering Mandelbrot, and More

As I predicted, I was not able to get enough work done on the stories I had bookmarked for tonight’s news show. As busy as I was volunteering yesterday and grinding on my interview notes for this week, I still had these links I wanted to share.

  • Remembering Benoit Mandelbrot
    I was incredibly saddened to read news of Mandelbrot’s passing over this weekend. His study of fractals is thoroughly bound up in my own readings on complexity. It’s a topic I find as endlessly fascinating as the ability to infinitely zoom in on the fuzzy forms he characterized without ever hitting a limit to the detail. In this blog post, Rudy Rucker, another icon in my readings on universal gnarl, presents his personal memories of first meeting Mandelbrot. It seems very fitting to me.
  • Google secretly tests autonomous vehicles in real traffic, ReadWriteWeb
  • Offering censorship as a product feature
    From Slashdot, this is concerning for its potential for abuse and the obvious privacy implications. A recent patent grant to Apple for a similar notion in the iPhone, covered by ReadWriteWeb, hints this may become a trend. This is like the problem of hard drives full of copies in junked photocopiers, but now with a network connection. An even greater fear for me is that competitors will feel compelled to offer this feature as well, perhaps even one-upping the original.

An AI That Is Reading the Web to Learn

io9 has an excerpt from a longer article at Universe that describes an artificial intelligence program that is reading the web in order to learn language. The very idea sounds like it was taken from a science fiction novel.

If you doubt me [that science fiction is reality], read the news. Read, for example, this recent article in the New York Times about Carnegie Mellon’s “Read the Web” program, in which a computer system called NELL (Never Ending Language Learner) is systematically reading the internet and analyzing sentences for semantic categories and facts, essentially teaching itself idiomatic English as well as educating itself in human affairs. Paging Vernor Vinge, right?

What the author, Claire Evans, goes on to describe sounds like a pretty straightforward web crawler whose frontier is hooked into a system that is anything but typical. Evans spoke with several of the researchers behind NELL and the interview portion of the article is well worth the read.

For instance, part of the eventual goal is for the program to become much more self-directed in its learning. It already supplements the half million pages curated and provided to it with targeted web searches. Evans didn’t ask whether the end result, the knowledge NELL acquires, would be useful for other AI projects. A generalized, portable body of knowledge, parsed and ready to go, is a key holy grail in this field of research.
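
As a rough illustration of that crawler-plus-learner loop, here is a toy sketch. The tiny in-memory corpus, the extraction pattern, and the confidence bookkeeping are all invented for illustration and bear no relation to NELL’s actual architecture.

    # Toy sketch of a never-ending learning loop: read pages from a frontier,
    # extract candidate facts, and grow confidence in facts seen repeatedly.
    # Everything here (corpus, pattern, threshold) is hypothetical.
    import re
    from collections import deque

    corpus = {  # stand-in for fetched web pages
        "page1": "Paris is a city. Rover is a dog. A dog is an animal.",
        "page2": "Berlin is a city. Paris is a city of culture. NELL is a program.",
    }

    frontier = deque(corpus)   # pages queued for reading
    knowledge = {}             # (instance, category) -> confidence

    pattern = re.compile(r"(\w+) is an? (\w+)")

    while frontier:
        page = frontier.popleft()
        for instance, category in pattern.findall(corpus[page]):
            key = (instance, category)
            # each supporting sentence nudges confidence upward
            knowledge[key] = min(1.0, knowledge.get(key, 0.0) + 0.5)

    # Confident facts could seed targeted searches that add new pages to the
    # frontier, making the loop self-directed over time.
    print({fact: conf for fact, conf in knowledge.items() if conf >= 1.0})

The interesting part, as the article stresses, is not the extraction itself but closing the loop so the system chooses what to read next.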

A computer learns the hard way: By reading the Internet, io9

Cyborgs Among Us

Slashdot points out that September is cyborg month. I, myself, have been accused of being more machine than man. Seriously, though, I strongly appreciate the work of the early cyberneticists, realizing that there is far more to the space of ideas than the popular conception of cyborgs. Slashdot’s post links to the writings of a group of artists and writers exploring the idea more deeply in commemoration of the 50th anniversary of the coinage of the word cyborg.

I can’t help but relate this anniversary to a couple of other stories I saw in my feeds today. First is a Technology Review article explaining new research combining thought control with artificial intelligence. This sort of combination almost seems obvious to me. It certainly would have to some of those first cyberneticists, many of whom were interested in the idea of augmented cognition.

As the article explains, the artificial intelligence interprets simpler commands from the operator, alleviating the burden of thinking through many of the complex tasks most of us take for granted. I expect this strongly mirrors the sort of subsumption hierarchy that takes place in our own minds. We consciously think about moving and operating at a higher, simpler level and unconsciously unfold lower-level, more complicated steps to accomplish those ends. It is astounding work, achieving such compelling early results.
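
A toy layered controller makes that decomposition a little more concrete. This is only an illustrative sketch; it is not the wheelchair system described in the article, and the intents, plans, and sensor threshold are all made up.

    # Illustrative sketch of layered control: a coarse, "thought-level" intent
    # is unfolded into motor steps, while a lower safety layer can override.
    # The intents, plans, and sensor threshold are hypothetical.
    def interpret_intent(intent):
        plans = {
            "go to door": ["turn_left", "drive_forward", "drive_forward"],
            "stop": ["stop"],
        }
        return plans.get(intent, ["stop"])

    def avoid_obstacles(distance_m, command):
        # low-level layer: halt forward motion when an obstacle is too close
        if command == "drive_forward" and distance_m < 0.5:
            return "stop"
        return command

    # The operator supplies only the coarse intent; the layers fill in the rest.
    for step in interpret_intent("go to door"):
        print(avoid_obstacles(distance_m=2.0, command=step))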

The other story is also from Technology Review and discusses two projects tackling one of the tougher challenges in the arena of replicating, or even improving on, human senses, namely our sense of touch.

The new electronic-skin devices “are a considerable advance in the state of the art in terms of power consumption and sensitivity,” says John Boland, professor of chemistry at Trinity College at the University of Dublin. “The real advance, though, is moving away from a flat geometry to a flexible device that could be used to make something in the shape of a human finger,” he says.

I could easily see both projects eventually leading to prosthetics that really are indistinguishable to the operator from the original.

Wheelchair Makes the Most of Brain Control, Technology Review
Electric Skin that Rivals the Real Thing, Technology Review

feeds | grep links > Faster JavaScript for Firefox 4, Details of Google’s New Search Index, Leaked EU Surveillance Plan, and More

Kurzweil Responds to Critics of His Prediction

More specifically, he wrote a response to PZ Myers, whose article I linked in my own criticism. I saw this via Hacker News and thought it would be fair to write it up as well, especially given the repeated comments on my own thoughts by a clear defender of Kurzweil’s work.

The part I will concede is that neither PZ nor I had the whole of Kurzweil’s argument to inform our reactions. I have already decided to borrow “The Singularity is Near” from the library, on the suggestion of my own commenter, specifically to read through the more fleshed-out hypothesis Kurzweil presents in that book’s fourth chapter.

Despite the clear need to inform my opinion further, on reading his defense it remains my view that the man is profoundly naive. He suggests that forty years contemplating the problem of reverse engineering the human brain pushes his ideas above the sort of reproach expressed by Myers and myself. I would offer that the whole of the artificial intelligence field has been studying variations of the very same problem for that duration and has very little to show for it, which rather suggests that the problem truly is that hard. The vast majority of thought leaders in the field are much more humble in what they predict about its future, as well as the time scales involved. Time spent on the problem is, once again, a poor metric to gauge any single researcher’s grasp of the overall complexity.

I also find his flogging of Moore’s Law suspect. I’ve been tracking the state of current and future computing architectures, physical and logical. Though I am not a computer scientist or researcher, my own reading leaves me skeptical that progress will remain on a doubling curve. That isn’t a certainty, just my view as an enthusiast.

Moore’s second, lesser-known observation, about stable or decreasing power consumption and thermal load on chips over time, hasn’t panned out. I suspect this will inevitably exert a braking force on his more famous first observation. We all get that exponential trends are hard to predict; more is different. I doubt anyone would gainsay that assertion. I am just not so certain that the doubling of computing power every eighteen months will abide for the next two decades.
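
Just to put the scale of that assumption in perspective, here is the rough arithmetic of what sustained eighteen-month doubling over two decades implies:

    # What "doubling every eighteen months for two decades" implies, roughly.
    months = 20 * 12
    doublings = months / 18.0
    factor = 2 ** doublings
    print(round(doublings, 1), "doublings, or roughly a", round(factor), "fold increase")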

The two most likely alternative ways forward, partially or completely bypassing limits on transistor density, present considerable hurdles that make prediction of Moore’s Law, or something like it, holding true over the next two decades unlikely in my view. Increasingly parallel direct successors to today’s chips are taxing computer science and programming practice just to saturate all of the horsepower these chips already have to offer. Maybe we’ll effect some kind of Kuhnian paradigm shift, like the past leaps to structured, procedural, and object-oriented programming. That is far from guaranteed, let alone probable. At this point, no one knows.

The future of quantum computing is even less certain. We haven’t been able to scale experimental computers of this type to a point where we can build informed guesses about their capabilities, let alone gauge how they might or might not make short work of simulating a system as complex as the human brain.

Even if I agreed with Kurzweil’s estimation of the complexity of virtualizing a human brain, it just isn’t a certainty that at the end of the next two decades we’ll have the horsepower to drive it. I’ll extend him the benefit of the doubt on his estimation for now, until I’ve had a chance to read his explanation more thoroughly, but I rather doubt it will change my opinion. I will endeavor to keep an open mind.

Just so we are clear, I want to see machine intelligence in my lifetime. That achievement will herald unpredictable changes not only in our society but in what it means to be human. I just think that the progress towards that goal is better served by humility and nose-to-grindstone pragmatism than Kurzweil’s unquestioning faith in an unqualified outcome that seems far from certain. Embracing the questions more fully seems like a better way forward than blind devotion to a presupposed answer.

Naive Prediction on Simulating a Human Brain

I am not surprised that the person making this prediction[1] is Ray Kurzweil. It seems like he may be the last person stumping for the singularity. As a step along the way, he is claiming that we’ll have successfully reverse engineered and run the human brain on a computer by 2030. Appropriately enough this latest prediction was delivered at the Singularity Summit.

His back-of-the-envelope figuring is a bit suspect, if you ask me.

Here’s how that math works, Kurzweil explains: The design of the brain is in the genome. The human genome has three billion base pairs or six billion bits, which is about 800 million bytes before compression, he says. Eliminating redundancies and applying loss-less compression, that information can be compressed into about 50 million bytes, according to Kurzweil.

This reasoning demands a computer with a capacity of 36.9 petaflops and a memory capacity of 3.2 petabytes. He arrives at his date by drawing a line from the beefiest computers of today out towards this notional system. If raw computing power were the sole determinant, I’d be inclined to agree.
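
Working the quoted numbers through makes the leap clearer. Only the genome arithmetic below is derived; the petaflop and petabyte targets are taken from the quote, not calculated here.

    # Back-of-the-envelope check of the genome figures in the quote above.
    base_pairs = 3 * 10**9            # human genome, roughly
    bits = base_pairs * 2             # 2 bits per base pair (4 possible bases)
    bytes_uncompressed = bits / 8.0   # ~750 million bytes, the quoted "about 800"
    bytes_compressed = 50 * 10**6     # Kurzweil's claimed lossless-compression figure

    print(round(bytes_uncompressed / 1e6), "million bytes before compression")
    print(round(bytes_uncompressed / bytes_compressed), "x claimed compression")

    # The jump from 50 MB of "design data" to a 36.9 petaflop, 3.2 petabyte
    # machine is exactly the step the surrounding text questions.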

DNA is a blueprint for developing a brain, not “running” one or even building a fully realized one. Brains are a product of biological development that bootstraps through stages that I believe are necessary to the end product. More than that, I suspect that for a fully functioning human brain to develop, it must be embedded in an environment throughout that maturation.

A more realistic use of Kurzweil’s DNA figures is to continue to extrapolate what it would take to simulate a full human at least through pre-natal development. If that were even possible, then you could tackle the training and knowledge acquisition challenges, which are his sole and thin concession to skeptics.

I get far more excited by modest advances with bottom-up approaches like the Avida work I spoke about on the last podcast and the swarm intelligence article I wrote up yesterday. I think that bottom-up systems stand a better chance of capturing the emergent complexity of a developing mind. Better yet, reducing such a distributed, complex system to its barest requirements is more likely to be fruitful than blindly compressing or stripping out parts of an ignorant simulation, even assuming that were possible. Many studies have explored the minimal amount of complexity needed for emergent behaviors to evolve. Typically such systems can be winnowed down quantitatively with little impact on the qualitative end result.

In case that doesn’t convince you, PZ Myers has an excellent critique[2] of the same talk at Pharyngula. He expands on those necessary intermediate complexities, at least the ones we have some inkling are necessary, in the form of proteomics.


1. Reverse-Engineering of Human Brain Likely by 2030, Wired
2. Ray Kurzweil doesn’t understand the brain, Pharyngula (HT Glyn Moody)

Using Swarm Intelligence with AI

In the same vein as the artificial life story I discussed on yesterday’s podcast, The Economist has an article discussing the application of swarming algorithms developed by observing ants. It is actually a pretty good primer on a field that has been around for almost two decades, focusing specifically on Dr. Marco Dorigo, who was instrumental in launching the development of swarm intelligence.

In 1992 Dr Dorigo and his group began developing Ant Colony Optimisation (ACO), an algorithm that looks for solutions to a problem by simulating a group of ants wandering over an area and laying down pheromones. ACO proved good at solving travelling-salesman-type problems. Since then it has grown into a whole family of algorithms, which have been applied to many practical questions.

The idea overlaps directly with the bottom-up approach to artificial intelligence. Trying to understand, model, and execute all the complexity of even a simple mind ab initio is cost prohibitive. Tracking masses of simple states and equally basic rules has become much more tractable, especially with increasingly parallel computers. I think solving NP-hard problems may be overstating things. The way swarms explore problem spaces, though, is clearly fruitful in terms of fast, high-quality optimization and approximation. Mind as an emergent phenomenon of swarm-like, bottom-up systems undoubtedly relies quite a bit on the resilience to noise and fuzziness that also characterizes these systems.
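
To make the pheromone mechanism concrete, here is a minimal sketch of ant colony optimisation on a tiny travelling-salesman instance. The distances, parameters, and update rules are simplified for illustration and are not Dorigo’s reference implementation.

    # Minimal ant-colony optimisation sketch on a 4-city travelling-salesman
    # instance. Distances and parameters are invented for illustration.
    import random

    dist = [
        [0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0],
    ]
    n = len(dist)
    pheromone = [[1.0] * n for _ in range(n)]

    def tour_length(tour):
        return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

    def build_tour():
        # one ant builds a tour, preferring edges with more pheromone and
        # shorter distances
        tour = [0]
        while len(tour) < n:
            current = tour[-1]
            choices = [c for c in range(n) if c not in tour]
            weights = [pheromone[current][c] / dist[current][c] for c in choices]
            tour.append(random.choices(choices, weights=weights)[0])
        return tour

    best = None
    for _ in range(50):                            # colony iterations
        tours = [build_tour() for _ in range(10)]  # ten ants per iteration
        for i in range(n):                         # pheromone evaporation
            for j in range(n):
                pheromone[i][j] *= 0.9
        for tour in tours:                         # shorter tours deposit more pheromone
            deposit = 1.0 / tour_length(tour)
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                pheromone[a][b] += deposit
                pheromone[b][a] += deposit
        shortest = min(tours, key=tour_length)
        if best is None or tour_length(shortest) < tour_length(best):
            best = shortest

    print(best, tour_length(best))

Each ant’s decision rule is trivially simple; the useful behavior only shows up at the level of the colony, which is the whole point of the bottom-up framing above.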

If you want an excellent fiction treatment of this idea, read Cory Doctorow’s “Human Readable”. If you think trying to hash through the problems of network neutrality is hard with our traditional computers and networks, think of the same questions applied to swarm-driven route optimization.

Artificial intelligence: Riders on a swarm, The Economist via Slashdot