ift6266: comparison writeup/mlj_submission.tex @ 586:f5a198b2854a
field | value |
---|---|
description | contributions.tex |
author | Yoshua Bengio <bengioy@iro.umontreal.ca> |
date | Thu, 30 Sep 2010 17:43:48 -0400 |
parents | 4933077b8676 |
children | 9a6abcf143e8 |
585:4933077b8676 | 586:f5a198b2854a |
---|---|
97 It is also only recently that successful algorithms were proposed to | 97 It is also only recently that successful algorithms were proposed to |
98 overcome some of these difficulties. All are based on unsupervised | 98 overcome some of these difficulties. All are based on unsupervised |
99 learning, often in a greedy layer-wise ``unsupervised pre-training'' | 99 learning, often in a greedy layer-wise ``unsupervised pre-training'' |
100 stage~\citep{Bengio-2009}. One of these layer initialization techniques, | 100 stage~\citep{Bengio-2009}. One of these layer initialization techniques, |
101 applied here, is the Denoising | 101 applied here, is the Denoising |
102 Auto-encoder~(DA)~\citep{VincentPLarochelleH2008-very-small} (see Figure~\ref{fig:da}), | 102 Auto-encoder~(DA)~\citep{VincentPLarochelleH2008} (see Figure~\ref{fig:da}), |
103 which | 103 which |
104 performed similarly to or better than previously proposed Restricted Boltzmann | 104 performed similarly to or better than previously proposed Restricted Boltzmann |
105 Machines in terms of unsupervised extraction of a hierarchy of features | 105 Machines in terms of unsupervised extraction of a hierarchy of features |
106 useful for classification. Each layer is trained to denoise its | 106 useful for classification. Each layer is trained to denoise its |
107 input, creating a layer of features that can be used as input for the next layer. | 107 input, creating a layer of features that can be used as input for the next layer. |
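
The excerpt above describes the Denoising Auto-encoder (DA) and its role in greedy layer-wise pre-training: each layer learns to reconstruct its clean input from a corrupted copy, and its hidden code then becomes the training input of the next layer. As a rough illustration only (the ift6266 project itself was built on Theano; the `DALayer` and `pretrain` names, the masking-noise corruption level, sigmoid units, cross-entropy loss and all hyper-parameters below are illustrative assumptions, not the authors' code), here is a minimal NumPy sketch of that procedure:

```python
# Minimal sketch (not the ift6266 code) of a denoising auto-encoder layer
# and greedy layer-wise pre-training, following the description in the
# excerpt above. All sizes and hyper-parameters are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class DALayer:
    def __init__(self, n_in, n_hidden, corruption=0.3, lr=0.1):
        # Tied weights: the decoder reuses W.T, a common choice for DAs.
        self.W = rng.normal(0.0, 0.01, size=(n_in, n_hidden))
        self.b_hid = np.zeros(n_hidden)
        self.b_vis = np.zeros(n_in)
        self.corruption = corruption
        self.lr = lr

    def encode(self, x):
        return sigmoid(x @ self.W + self.b_hid)

    def train_step(self, x):
        # Corrupt the input by zeroing a random fraction of its entries
        # (masking noise), then reconstruct the *clean* input from the code.
        mask = rng.random(x.shape) > self.corruption
        x_tilde = x * mask
        h = self.encode(x_tilde)
        x_rec = sigmoid(h @ self.W.T + self.b_vis)

        # Cross-entropy reconstruction loss; for sigmoid outputs the
        # gradient w.r.t. the decoder pre-activation is (x_rec - x).
        eps = 1e-7
        loss = -np.mean(np.sum(x * np.log(x_rec + eps)
                               + (1 - x) * np.log(1 - x_rec + eps), axis=1))
        d_vis = (x_rec - x) / x.shape[0]            # decoder pre-activation grad
        d_hid = (d_vis @ self.W) * h * (1 - h)      # back-prop through the code
        grad_W = x_tilde.T @ d_hid + d_vis.T @ h    # tied-weight gradient
        self.W -= self.lr * grad_W
        self.b_hid -= self.lr * d_hid.sum(axis=0)
        self.b_vis -= self.lr * d_vis.sum(axis=0)
        return loss

def pretrain(data, layer_sizes, epochs=10):
    """Greedy layer-wise pre-training: each layer is trained to denoise
    the representation produced by the layer below, then feeds it upward."""
    layers, inputs = [], data
    for n_hidden in layer_sizes:
        layer = DALayer(inputs.shape[1], n_hidden)
        for _ in range(epochs):
            layer.train_step(inputs)
        layers.append(layer)
        inputs = layer.encode(inputs)   # clean code is the next layer's input
    return layers

if __name__ == "__main__":
    x = (rng.random((256, 64)) > 0.5).astype(float)  # toy binary "images"
    stack = pretrain(x, layer_sizes=[32, 16])
    print("pre-trained", len(stack), "DA layers")
```

The loop in `pretrain` mirrors the last sentence of the excerpt: once a layer has been trained to denoise, its encoding of the uncorrupted data is passed upward as the input of the next layer, and a supervised classifier could then be trained on the top-level features.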