This profile by Katie Drummond at Wired of a Darpa project in AI caught my eye. In the past year or so, I’ve seen eulogies for sub-fields of artificial intelligence and announcements of the re-invigoration of the overall field. I like reading about this pair of researchers quietly getting on with it.
The problem Yann LeCun and Rob Fergus at NYU are tackling is how to get a machine to learn without the labor-intensive guidance and training that is usually required. It is a big problem in both computer science and philosophy: identifying where the surrounding field ends and an object begins, and vice versa.
Existing software programs rely heavily on human assistance to identify objects. A user extracts key feature sets, such as edge statistics (how many edges an object has, and where they are), and then feeds the data into an algorithm, which uses those feature sets to recognize the visual input.
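To make the hand-engineered pipeline concrete, here is a toy sketch of an "edge statistics" feature extractor in Python with NumPy. This is my own illustration of the general idea, not code from the project: a human has decided in advance that edge counts and their rough locations are the features worth computing.

```python
import numpy as np

def edge_statistics(image, threshold=0.4):
    """Hand-crafted feature set: count edges and note roughly where they are.

    `image` is a 2-D grayscale array. A pixel counts as an "edge" when its
    gradient magnitude exceeds `threshold`; the edge map is then summarized
    as (total edge count, edge count in each image quadrant).
    """
    gy, gx = np.gradient(image.astype(float))   # vertical and horizontal gradients
    edges = np.hypot(gx, gy) > threshold        # boolean edge map
    h, w = edges.shape
    quadrants = [edges[:h // 2, :w // 2], edges[:h // 2, w // 2:],
                 edges[h // 2:, :w // 2], edges[h // 2:, w // 2:]]
    return np.array([edges.sum()] + [q.sum() for q in quadrants])

# A tiny test image: a bright square on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
features = edge_statistics(img)
```

The point of the example is what it leaves out: someone had to choose the threshold, the quadrant layout, and the very idea of counting edges. Refining those choices is the manual effort LeCun describes.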
“People spend huge amounts of time building these feature sets, figuring out which are better or more accurate, and then refining them,” LeCun told Danger Room. “The question we’re asking is whether we can create computers that automatically learn feature sets from data. The brain can do it, so why not machines?”
Drummond includes a decent high-level explanation of the pair's approach, a method of layering masks that seems similar to certain aspects of neural networks. If they make progress beyond these promising beginnings, it will have implications not just for hard problems in computing but perhaps also for how our own brains tackle these challenges of object identification and self-learning.
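To show what "learning feature sets from data" can mean in miniature, here is a toy sketch, again my own illustration rather than LeCun and Fergus's actual method: a one-layer linear autoencoder trained by gradient descent to reconstruct small image patches. No labels and no hand-designed features are involved; the weight rows that emerge are the learned features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled data: 200 random 4x4 patches mixing two underlying patterns.
horizontal = np.outer([1, 1, 0, 0], np.ones(4)).ravel()
vertical = np.outer(np.ones(4), [1, 1, 0, 0]).ravel()
patches = np.array([a * horizontal + b * vertical
                    for a, b in rng.random((200, 2))])

n_features = 4
W = rng.normal(scale=0.1, size=(n_features, 16))  # feature weights, learned
lr = 0.01

def loss(W, X):
    """Mean squared reconstruction error of a tied-weight linear autoencoder."""
    recon = (X @ W.T) @ W          # encode then decode with the same weights
    return np.mean((recon - X) ** 2)

initial = loss(W, patches)
for _ in range(1000):
    H = patches @ W.T              # hidden activations (the learned features)
    R = H @ W                      # reconstruction of the input patches
    E = R - patches                # reconstruction error
    # Gradient of the tied-weight reconstruction loss with respect to W.
    grad = (H.T @ E + (E @ W.T).T @ patches) / len(patches)
    W -= lr * grad
final = loss(W, patches)
```

After training, reconstruction error drops without any human ever naming a feature; the network has discovered structure in the patches on its own. The approach in the article stacks many such learned layers, which is where the resemblance to neural networks comes in.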