Slashdot links to a story at MIT News explaining how Noah Goodman, a researcher in brain and cognitive sciences and in computer science and AI, has managed to unify the advantages of the classical rules-and-inference approach to AI with the more modern probabilistic approach. The result is a system that gains considerable power in classification and inference but isn't bogged down by the old requirement of training on, or hand-encoding, all of the rules and knowledge ahead of time.
From the article, it sounds like the system Goodman built, Church, is able to suss out its own rules and inferences from a reasonably small starting set. Beyond the theoretical breakthrough this represents, the article doesn't really consider how it might affect, or even improve, applications built on the newer probabilistic approach, like speech recognition. One hurdle will be optimization: Nick Chater, a professor at University College London who follows the work, explains that at the moment Church programs are very computationally intensive.
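Church itself is a dialect of Scheme, and I won't pretend to reproduce it here. But to get a feel for why this style of program is so expensive, here's a loose sketch (in Python, not Church, and entirely my own toy example) of the underlying idea: write a generative model as ordinary forward-running code, then do inference by running it many times and keeping only the runs that match what you observed. Most samples get thrown away, which is exactly the kind of cost Chater is pointing at.

```python
import random

# Toy generative model: a coin has an unknown bias. We "observe" five
# flips and infer the bias by rejection sampling -- run the model
# forward, keep only runs whose simulated flips match the observation.

def model():
    bias = random.random()                      # prior: bias uniform on [0, 1]
    flips = [random.random() < bias for _ in range(5)]
    return bias, flips

def infer(observed, samples=100_000):
    accepted = []
    for _ in range(samples):
        bias, flips = model()
        if flips == observed:                   # reject unless data match exactly
            accepted.append(bias)
    return sum(accepted) / len(accepted)        # posterior mean of the bias

if __name__ == "__main__":
    random.seed(0)
    # After seeing five heads in a row, the inferred bias sits near the
    # analytic posterior mean of 6/7 (about 0.86).
    print(infer([True] * 5))
```

Even for this five-flip toy, roughly five out of six samples are rejected; with richer models and more data, the acceptance rate collapses, which is why real probabilistic-programming systems need much smarter inference than this.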
There are a couple of compelling, if abstract, examples in the article drawn from a presentation given by Goodman and Charles Kemp, a student at the time. The sort of abstraction they were able to pull out of patterns of email suggests to me that there may eventually be applications relying on this kind of sophisticated, non-obvious modeling and classification, like spam detection and recommendation systems.