The location has been changed to: CALIT2, Atkinson Hall Auditorium
Time: Monday, January 30th, 2012, 11:00 am
Abstract
By building large-scale simulations of cortical (brain) computations, can
we enable revolutionary progress in AI and machine learning? Machine
learning often works very well, but it can take a lot of work to apply, because
each specific problem requires spending a long time engineering the input
representation (or "features"). This is true for machine learning
applications in vision, audio, text/NLP and other problems.
To address this, researchers have recently developed "unsupervised feature
learning" and "deep learning" algorithms that can automatically learn
feature representations from unlabeled data, thus bypassing much of this
time-consuming engineering. Many of these algorithms are developed using
simple simulations of cortical (brain) computations, and build on such
ideas as sparse coding and deep belief networks. By doing so, they exploit
large amounts of unlabeled data (which is cheap and easy to obtain) to
learn a good feature representation. These methods have also surpassed the
previous state-of-the-art on a number of problems in vision, audio, and
text. In this talk, I describe some of the key ideas behind unsupervised
feature learning and deep learning, and present a few algorithms. I also
speculate on how large-scale brain simulations may enable us to make
significant progress in machine learning and AI, especially perception.
This talk will be broadly accessible, and will not assume a machine
learning background.
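
To make the sparse-coding idea mentioned above a little more concrete, here is a
minimal sketch in Python/NumPy of learning a dictionary of features from
unlabeled data by alternating between sparse inference (iterative
soft-thresholding, ISTA) and a least-squares dictionary update. This is
illustrative only: the function names (infer_codes, learn_dictionary) and all
parameter values are my own choices for the sketch, not anything specified in
the talk.

import numpy as np

def soft_threshold(Z, t):
    # Elementwise soft-thresholding: the proximal operator of the L1 penalty.
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def infer_codes(X, D, lam=0.1, n_iter=100):
    # ISTA: minimize 0.5*||X - D@A||_F^2 + lam*||A||_1 over the codes A.
    step = 1.0 / (np.linalg.norm(D, 2) ** 2)  # 1 / Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        A = soft_threshold(A - step * (D.T @ (D @ A - X)), lam * step)
    return A

def learn_dictionary(X, n_atoms=64, n_outer=20, lam=0.1, seed=0):
    # Alternate sparse inference with a least-squares dictionary update.
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)  # unit-norm columns ("atoms", i.e. features)
    for _ in range(n_outer):
        A = infer_codes(X, D, lam=lam)
        D = X @ np.linalg.pinv(A)   # best-fit dictionary for the current codes
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D

# Tiny demo on synthetic "unlabeled" data: 100-dim inputs, 500 examples.
X = np.random.default_rng(1).standard_normal((100, 500))
D = learn_dictionary(X)
codes = infer_codes(X[:, :5], D)  # sparse feature representation of 5 inputs
print(codes.shape)                # (64, 5)

Each column of the learned D is a "feature"; the sparse code of a new input
(its activation pattern over those features) can then be fed to a standard
supervised learner, which is the sense in which these methods bypass
hand-engineered feature representations.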