changeset 521:13816dbef6ed

some things disappeared
author Yoshua Bengio <bengioy@iro.umontreal.ca>
date Tue, 01 Jun 2010 15:48:46 -0400
parents 18a6379999fd
children d41926a68993
files writeup/images/denoising_autoencoder_small.pdf writeup/nips2010_submission.tex
diffstat 2 files changed, 15 insertions(+), 3 deletions(-)
Binary file writeup/images/denoising_autoencoder_small.pdf has changed
--- a/writeup/nips2010_submission.tex	Tue Jun 01 11:58:14 2010 -0700
+++ b/writeup/nips2010_submission.tex	Tue Jun 01 15:48:46 2010 -0400
@@ -206,7 +206,7 @@
 {\bf Pinch.}
 This is a GIMP filter called ``Whirl and
 pinch'', but whirl was set to 0. A pinch is ``similar to projecting the image onto an elastic
-surface and pressing or pulling on the center of the surface''~\citep{GIMP-manual}.
+surface and pressing or pulling on the center of the surface'' (GIMP documentation).
 For a square input image, this is akin to drawing a circle of
 radius $r$ around a center point $C$. Any point (pixel) $P$ belonging to
 that disk (region inside circle) will have its value recalculated by taking
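As a concrete illustration of the mapping just described, here is a minimal numpy sketch of a pinch transform. It is a reconstruction from the prose only: the {\tt strength} parameter, the power-law falloff, and nearest-neighbour resampling are assumptions, not GIMP's actual ``Whirl and pinch'' implementation.

\begin{verbatim}
import numpy as np

def pinch(image, strength=0.5, radius=None):
    # Sketch of a pinch distortion on a 2-D grayscale array.
    # Assumptions: power-law falloff and nearest-neighbour resampling.
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0        # center point C
    r = radius if radius is not None else min(cy, cx)
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = ys - cy, xs - cx
    d = np.sqrt(dy ** 2 + dx ** 2)               # distance |P - C|
    inside = d < r                               # disk of radius r around C
    # Each point P inside the disk is resampled from a location pulled
    # toward C; the rim (d = r) stays fixed and the effect grows inward.
    factor = np.ones_like(d)
    factor[inside] = (d[inside] / r) ** strength
    sy = np.clip(np.round(cy + dy * factor), 0, h - 1).astype(int)
    sx = np.clip(np.round(cx + dx * factor), 0, w - 1).astype(int)
    out = image.copy()
    out[inside] = image[sy[inside], sx[inside]]
    return out
\end{verbatim}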
@@ -454,6 +454,18 @@
 through preliminary experiments (measuring performance on a validation set),
 and $0.1$ was then selected.
 
+\begin{figure}[h]
+\resizebox{0.8\textwidth}{!}{\includegraphics{images/denoising_autoencoder_small.pdf}}
+\caption{Illustration of the computations and training criterion for the denoising
+auto-encoder used to pre-train each layer of the deep architecture. Input $x$
+is corrupted into $\tilde{x}$ and encoded into code $y$ by the encoder $f_\theta(\cdot)$.
+The decoder $g_{\theta'}(\cdot)$ maps $y$ to reconstruction $z$, which
+is compared to the uncorrupted input $x$ through the loss function
+$L_H(x,z)$, whose expected value is approximately minimized during training
+by tuning $\theta$ and $\theta'$.}
+\label{fig:da}
+\end{figure}
+
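To make the figure's computations concrete, here is a minimal numpy sketch of one training step of a denoising auto-encoder. The masking corruption, sigmoid units, tied weights ($W' = W^T$), and cross-entropy loss are assumptions in line with \citep{VincentPLarochelleH2008} and the deeplearning.net tutorial, not necessarily the exact configuration used in these experiments; the shapes in the usage example are hypothetical.

\begin{verbatim}
import numpy as np

rng = np.random.RandomState(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def da_step(x, W, b, b_prime, noise_level=0.1, lr=0.1):
    # One stochastic gradient step on L_H(x, z), following the figure:
    # x -> corrupted x_tilde -> code y = f_theta(x_tilde)
    #   -> reconstruction z = g_theta'(y).
    # Assumptions: masking corruption, sigmoid units, tied weights
    # (W' = W^T), cross-entropy loss.
    x_tilde = x * (rng.uniform(size=x.shape) > noise_level)
    y = sigmoid(W @ x_tilde + b)           # encoder f_theta
    z = sigmoid(W.T @ y + b_prime)         # decoder g_theta'
    loss = -np.sum(x * np.log(z) + (1 - x) * np.log(1 - z))
    dz = z - x                             # grad at decoder pre-activation
    dy = (W @ dz) * y * (1 - y)            # grad at encoder pre-activation
    W -= lr * (np.outer(dy, x_tilde) + np.outer(y, dz))
    b -= lr * dy
    b_prime -= lr * dz
    return loss

# Usage on a dummy binary input (hypothetical sizes):
n_visible, n_hidden = 32, 16
W = rng.uniform(-0.1, 0.1, size=(n_hidden, n_visible))
b, b_prime = np.zeros(n_hidden), np.zeros(n_visible)
x = rng.binomial(1, 0.5, n_visible).astype(float)
print(da_step(x, W, b, b_prime))
\end{verbatim}

Stacking, as described next, repeats this step layer by layer: once one layer's auto-encoder is trained, its encoder output $f_\theta(x)$ on uncorrupted inputs becomes the training input for the next layer's denoising auto-encoder, and the learned weights initialize the corresponding layers of the deep MLP.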
 {\bf Stacked Denoising Auto-Encoders (SDA).}
 Various auto-encoder variants and Restricted Boltzmann Machines (RBMs)
 can be used to initialize the weights of each layer of a deep MLP (with many hidden 
@@ -470,9 +482,9 @@
 compositions of simpler ones through a deep hierarchy).
 Here we chose to use the Denoising
 Auto-Encoder~\citep{VincentPLarochelleH2008} as the building block for
-% ADD AN IMAGE?
 these deep hierarchies of features, as it is very simple to train and
-teach (see tutorial and code there: {\tt http://deeplearning.net/tutorial}), 
+teach (see Figure~\ref{fig:da}, as well as the
+tutorial and code at {\tt http://deeplearning.net/tutorial}),
 provides immediate and efficient inference, and yielded results
 comparable to or better than RBMs in a series of experiments
 \citep{VincentPLarochelleH2008}. During training, a Denoising