Using Swarm Intelligence with AI

In the same vein as the artificial life story I discussed on yesterday’s podcast, The Economist has an article on the application of swarming algorithms developed by observing ants. It is actually a pretty good primer on a field that has been around for almost two decades, focusing specifically on Dr. Marco Dorigo, who was instrumental in launching the development of swarm intelligence.

In 1992 Dr Dorigo and his group began developing Ant Colony Optimisation (ACO), an algorithm that looks for solutions to a problem by simulating a group of ants wandering over an area and laying down pheromones. ACO proved good at solving travelling-salesman-type problems. Since then it has grown into a whole family of algorithms, which have been applied to many practical questions.
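To make the mechanism concrete, here is a minimal sketch of ACO applied to a tiny travelling-salesman instance. The city coordinates and all of the parameters are illustrative values of my own choosing, not anything from Dorigo’s work: each ant builds a tour by probabilistically favoring short, pheromone-rich edges, then pheromone evaporates and is reinforced along the better tours.

```javascript
// Minimal Ant Colony Optimisation sketch for a tiny travelling-salesman
// instance. Coordinates and parameters are illustrative, not from the paper.
const cities = [[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 1.5]];
const n = cities.length;
const dist = (a, b) => Math.hypot(cities[a][0] - cities[b][0],
                                  cities[a][1] - cities[b][1]);

// Pheromone matrix, initialised uniformly.
let tau = Array.from({length: n}, () => Array(n).fill(1));
const alpha = 1, beta = 2, rho = 0.5; // pheromone weight, heuristic weight, evaporation

function buildTour() {
  const tour = [0];
  const unvisited = new Set([...Array(n).keys()].slice(1));
  while (unvisited.size > 0) {
    const i = tour[tour.length - 1];
    // Pick the next city with probability proportional to tau^alpha * (1/d)^beta.
    const choices = [...unvisited];
    const weights = choices.map(j => tau[i][j] ** alpha * (1 / dist(i, j)) ** beta);
    const total = weights.reduce((s, w) => s + w, 0);
    let r = Math.random() * total, next = choices[choices.length - 1];
    for (let k = 0; k < choices.length; k++) {
      r -= weights[k];
      if (r <= 0) { next = choices[k]; break; }
    }
    tour.push(next);
    unvisited.delete(next);
  }
  return tour;
}

const tourLength = t =>
  t.reduce((s, c, i) => s + dist(c, t[(i + 1) % t.length]), 0);

let best = null;
for (let iter = 0; iter < 50; iter++) {
  const tours = Array.from({length: 10}, buildTour);
  // Evaporate, then deposit pheromone inversely proportional to tour length.
  tau = tau.map(row => row.map(v => v * (1 - rho)));
  for (const t of tours) {
    const len = tourLength(t);
    if (!best || len < tourLength(best)) best = t;
    for (let i = 0; i < t.length; i++) {
      const a = t[i], b = t[(i + 1) % t.length];
      tau[a][b] += 1 / len;
      tau[b][a] += 1 / len;
    }
  }
}
console.log(best, tourLength(best).toFixed(3));
```

Run it a few times and the colony converges on the same short loop, which is the stigmergy the article describes: no ant sees the whole problem, but the shared pheromone trail accumulates a solution.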

The idea overlaps directly with the bottom-up approach to artificial intelligence. Trying to understand, model, and execute all the complexity of even a simple mind ab initio is cost prohibitive. Tracking masses of simple states governed by equally basic rules has become much more tractable, especially with increasingly parallel-capable computers. I think claims of solving NP-hard problems may be overstating things. The way swarms explore problem spaces, though, is clearly fruitful in terms of fast, high-quality optimization and approximation. Mind as an emergent phenomenon of swarm-like, bottom-up systems undoubtedly relies quite a bit on the resilience to noise and fuzziness that also characterizes these systems.

If you want an excellent fiction treatment of this idea, read Cory Doctorow’s “Human Readable”. If you think trying to hash through the problems of network neutrality is hard with our traditional computers and networks, think of the same questions applied to swarm-driven route optimization.

Artificial intelligence: Riders on a swarm, The Economist via Slashdot

feeds | grep links > Open Source Hardware Definition, Autonomous Helicopter, and More

Sorry that this is it for today; I am rushing off a bit early to catch a public talk at Google’s DC office.

feeds | grep links > Internet Kill Switch, Fair Use before DRM in Brazil, and More

feeds | grep links > Self Replicating MakerBot, AI Predicting Manhole Explosions, Mousing without the Mouse, and More

  • Self replicating MakerBot
    Via Nat’s Four Short Links on O’Reilly Radar. As he notes, this is highly appropriate, since MakerBot started as a modified RepRap, a project that was all about being self-reproducible.
  • AI used to predict manhole explosions in NYC
    I had no idea the scale of this problem was worth harnessing machine learning to tackle, but according to Slashdot, apparently it is. It sounds to me like a big multivariate analysis depending on laboriously collected data and observations from the field. Quite apart from the risk of a heavy iron manhole cover being ejected in a gout of flame and gas, the idea of using an AI to help stay on top of the mammoth maintenance challenge for a city as old as New York greatly appeals to me.
  • NetApp threatens sellers of appliances running ZFS
    What the Slashdot summary glosses over but the linked articles make a bit more clear is that there is a history to these complaints that goes back a ways. The same company apparently repeatedly threatened Sun for much the same reason it is now threatening NAS maker Coraid. I find it hard to credit that there isn’t a less fraught file system offering similar capabilities originating more directly from the FLOSS world.
  • Mousing without a mouse
    Priya Ganapati describes an MIT project from the creator of Sixth Sense, Pranav Mistry. It definitely seems to be strongly related, using commodity hardware to track your mousing hand as you pantomime the gestures you’ve become used to in order to drive your computer without actually needing a mouse. Given the rate at which scroll wheels get gummed up, I would gladly invest many times more than the $20 figure quoted to never have to clean any part of a mouse ever again.
  • Incremental update to OLPC XO to include multitouch screen
    Via Hacker News.
  • Skype’s encryption is partially reverse engineered
  • Fan remake of Ultima VI released
  • Blizzard backs down on requiring real names in its forums

feeds | grep links > Bill to Pressure Those Who Would Break the Internet, Historic Cipher Revealed, New Developments in Weak AI, and More

Neural Network in JavaScript

I’ve seen just about everything else implemented in the lightweight scripting language of the web, so why not a neural network? I saw this via Hacker News, and it doesn’t strike me as too far different from libraries for doing heavy graphics processing in JavaScript. I could also see some distributed applications potentially using this. Think about it: modern browsers increasingly provide excellent client-side storage, useful for hanging onto locally produced results, and means for sending and receiving data much more smartly, just what you would need to distribute tasks and collect results. I think a folding@home-style project that works completely in the browser makes a great deal of sense.

The code is licensed under an MIT license and is available on GitHub. Both make it pretty simple to grab it and experiment quickly, whether pursuing my idea of distributed neural networks or any other application that could use such a tool in a lightweight execution environment.
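As a sense of how little machinery a browser-side network actually needs, here is a toy feed-forward network in plain JavaScript. This is my own sketch, not the library’s API, and the weights are hand-picked to compute XOR rather than learned by training.

```javascript
// A minimal feed-forward network in plain JavaScript. The weights below
// are hand-picked to compute XOR; a real library would learn them.
const step = x => (x >= 0 ? 1 : 0);

// One hidden layer of two units, then a single output unit.
function forward(x1, x2) {
  const h1 = step(x1 + x2 - 0.5);  // fires for OR
  const h2 = step(x1 + x2 - 1.5);  // fires for AND
  return step(h1 - h2 - 0.5);      // OR and not AND = XOR
}

console.log([[0, 0], [0, 1], [1, 0], [1, 1]].map(([a, b]) => forward(a, b)));
// → [0, 1, 1, 0]
```

Everything a trained network does at inference time is just this kind of cheap arithmetic, which is why running one in the browser, and fanning work out to many browsers, seems entirely plausible.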

feeds | grep links > State of WikiLeaks’ Site, What to Expect in Firefox 4, and More

Album Composed with Algorithmic Swarm

This story from Make is a little different from the couple of other recent AI music stories I’ve written up. In those instances, the music is generated or processed at a much lower level, in a more integrated fashion. As near as I can tell, the work here, by Evan Merz, is more like an audio assemblage that happens to be driven by a predator-prey cellular automaton. The inspiration and borrowing from Cage are hardly surprising, as so much of his work was governed by a meticulous following of how systems unfold from seemingly simple rules.

Swarm Controlled Sampler – Becoming Live from Evan Merz on Vimeo.

I enjoyed the three tracks in the embedded video. They are more coherent than I would have guessed but do have a thrilling edge to them, arising either from the structural changes the CA wrought or just from the knowledge that this form of primal, computational complexity was harnessed in creating these expressions.

Astronomers Use AI to Help Classify Galaxies

Slashdot links to a Singularity Hub article describing a project that is forehead-slappingly obvious in hindsight.

Scientists are teaching an artificial intelligence how to classify galaxies imaged by telescopes like the Hubble. Manda Banerji at the University of Cambridge along with researchers at University College London, Johns Hopkins and elsewhere, has succeeded in getting the program to agree with human analysis at an impressive rate of more than 90%.

The article goes on to explain how the team used data from Galaxy Zoo to train the AI. Galaxy Zoo is a crowd-sourced effort to aggregate small bits of highly distributed human effort to classify galaxies in astronomical imagery. It has produced some startlingly good results thanks to efforts at cross-verification. It makes perfect sense as a training set for a directed learning program.

The AI will be used to take over the more trivial tasks involved in many coming astronomical projects so that human input can be applied for best effect on the harder problems inherent in sifting through the reams of data.

Using Neural Networks to Classify Music

Technology Review describes some recent research from the University of Hong Kong. Students there set about using a neural network to classify music spread across ten genres. Given the number of variables in a musical piece, this is considered one of the harder problems in AI. The project achieved a considerable success rate, around 87%. As the article explains, this high rate can be attributed to the kind of network used and, in particular, its depth.

Neural networks, as I understand them, are typically constructed in layers. The first group of artificial neurons accepts input and feeds its outputs into the next group, which feeds into a successive group, and so on until the final layer. Training strengthens or weakens the weights on these connections, and the trained network has similar advantages when applied, refining the results as information flows through it. The students used a network with a particular wiring scheme, a convolutional network, of a kind usually used in visual recognition. While their network had only three layers, according to the article this is unusually deep, helping drive its strong classification performance.
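The layered flow described above can be sketched in a few lines. This is a generic fully connected forward pass with made-up weights and layer sizes, not the students’ convolutional network, but the wiring idea is the same: each layer’s outputs become the next layer’s inputs.

```javascript
// Sketch of the layered arrangement described above: each layer's outputs
// feed the next layer's inputs, weighted by a matrix learned in training.
// The weights here are arbitrary illustrative values, not a trained model.
const sigmoid = x => 1 / (1 + Math.exp(-x));

// One layer: output[j] = sigmoid(sum_i weights[j][i] * input[i] + bias[j])
function layer(weights, biases, input) {
  return weights.map((row, j) =>
    sigmoid(row.reduce((s, w, i) => s + w * input[i], biases[j])));
}

// A three-layer stack; the sizes (2 -> 3 -> 2 -> 1) are made up.
const net = [
  {w: [[0.2, -0.5], [0.8, 0.1], [-0.3, 0.4]], b: [0, 0, 0]},
  {w: [[0.5, -0.2, 0.7], [0.1, 0.9, -0.4]],   b: [0.1, -0.1]},
  {w: [[1.0, -1.0]],                          b: [0]},
];

// Information flows through the network layer by layer.
const output = net.reduce((x, {w, b}) => layer(w, b, x), [0.6, 0.3]);
console.log(output); // a single activation between 0 and 1
```

Training a real network amounts to nudging all of the `w` and `b` values until the final activation matches the labels, which is where the depth the article mentions starts to matter.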

Unfortunately, the high success rate was limited to the initial training library. When the students introduced a wider selection of music from outside the lab, the network didn’t fare so well. Their assumption that more training would help is plausible, as local optimization can be a problem with directed learning systems. The article doesn’t mention the training speed, but if the speed of matching mentioned is any indication, it may not be long before the students’ hypothesis about more general classification is tested.

I am most interested to see further application of this particular type of network for archival purposes. Volunteering on a digitization project gives me plenty of opportunity to consider the costs of identifying and adequately tagging works once they are converted. I’d be willing to bet a success rate in the high eighties is pretty close to what human volunteers achieve on average. A successfully deployed neural network could act as a force multiplier on top of volunteers’ efforts, speeding their ability to make the vast body of pre-digital works that much more available.