Research not for publishing papers, but for fun, for satisfying curiosity, and for revealing the truth.

This blog reports the latest progress in
(1) Signal Processing and Machine Learning for Biomedicine, Neuroimaging, Wearable Healthcare, and Smart-Home
(2) Sparse Signal Recovery and Compressed Sensing of Signals by Exploiting Spatiotemporal Structures
(3) My Works


Thursday, January 26, 2012

Andrew Ng: Machine learning and AI via large scale brain simulations

The location has been changed to: CALIT2 ~ Atkinson Hall Auditorium
Time: Monday, January 30th, 2012, 11:00 am

Abstract

By building large-scale simulations of cortical (brain) computations, can
we enable revolutionary progress in AI and machine learning? Machine
learning often works very well, but can be a lot of work to apply because
it requires spending a long time engineering the input representation (or
"features") for each specific problem. This is true for machine learning
applications in vision, audio, text/NLP and other problems.
To address this, researchers have recently developed "unsupervised feature
learning" and "deep learning" algorithms that can automatically learn
feature representations from unlabeled data, thus bypassing much of this
time-consuming engineering. Many of these algorithms are developed using
simple simulations of cortical (brain) computations, and build on such
ideas as sparse coding and deep belief networks. By doing so, they exploit
large amounts of unlabeled data (which is cheap and easy to obtain) to
learn a good feature representation. These methods have also surpassed the
previous state-of-the-art on a number of problems in vision, audio, and
text. In this talk, I describe some of the key ideas behind unsupervised
feature learning and deep learning, and present a few algorithms. I also
speculate on how large-scale brain simulations may enable us to make
significant progress in machine learning and AI, especially perception.
This talk will be broadly accessible, and will not assume a machine
learning background.
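To make the sparse coding idea mentioned in the abstract a little more concrete: given unlabeled data X and a dictionary D, sparse coding seeks codes A such that X ≈ DA with most entries of A equal to zero. Below is a minimal, hedged sketch of code inference via ISTA (iterative soft-thresholding); the random dictionary, the `lam` penalty, and the function name are all illustrative stand-ins, not the specific algorithms covered in the talk.

```python
import numpy as np

def ista_sparse_code(X, D, lam=0.05, n_iter=500):
    """Infer sparse codes A with X ~= D @ A by minimizing
    0.5*||X - D A||^2 + lam*||A||_1 via iterative soft-thresholding."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        grad = D.T @ (D @ A - X)             # gradient of the quadratic term
        A = A - grad / L                      # gradient step
        A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)  # soft-threshold
    return A

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)               # unit-norm dictionary atoms

# Synthetic "unlabeled data": each column uses only 3 of the 50 atoms.
true_codes = np.zeros((50, 5))
true_codes[rng.choice(50, 3, replace=False), :] = rng.standard_normal((3, 5))
X = D @ true_codes

A = ista_sparse_code(X, D)
sparsity = np.mean(np.abs(A) < 1e-3)         # fraction of (near-)zero code entries
```

In a full feature-learning pipeline the dictionary D would itself be learned from unlabeled data (alternating between code inference and dictionary updates), and the resulting sparse codes A would serve as the learned feature representation.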
