diff writeup/nips2010_submission.tex @ 469:d02d288257bf

redone bib style
author Yoshua Bengio <bengioy@iro.umontreal.ca>
date Sat, 29 May 2010 18:03:37 -0400
parents e0e57270b2af
children 2dd6e8962df1 ead3085c1c66
--- a/writeup/nips2010_submission.tex	Sat May 29 16:56:49 2010 -0400
+++ b/writeup/nips2010_submission.tex	Sat May 29 18:03:37 2010 -0400
@@ -5,7 +5,7 @@
 \usepackage{algorithm,algorithmic}
 \usepackage[utf8]{inputenc}
 \usepackage{graphicx,subfigure}
-\usepackage{mlapa}
+\usepackage[numbers]{natbib}
 
 \title{Generating and Exploiting Perturbed and Multi-Task Handwritten Training Data for Deep Architectures}
 \author{The IFT6266 Gang}
@@ -45,7 +45,7 @@
 \section{Introduction}
 
 Deep Learning has emerged as a promising new area of research in
-statistical machine learning (see~\emcite{Bengio-2009} for a review).
+statistical machine learning (see~\citet{Bengio-2009} for a review).
 Learning algorithms for deep architectures are centered on the learning
 of useful representations of data, which are better suited to the task at hand.
 This is in great part inspired by observations of the mammalian visual cortex, 
@@ -53,16 +53,16 @@
 different representation. In fact,
 it was found recently that the features learnt in deep architectures resemble
 those observed in the first two of these stages (in areas V1 and V2
-of visual cortex)~\cite{HonglakL2008}.
+of visual cortex)~\citep{HonglakL2008}.
 Processing images typically involves transforming the raw pixel data into
 new {\bf representations} that can be used for analysis or classification.
 For example, a principal component analysis representation linearly projects 
 the input image into a lower-dimensional feature space.
 Why learn a representation?  Current practice in the computer vision
 literature converts the raw pixels into a hand-crafted representation
-(e.g.\ SIFT features~\cite{Lowe04}), but deep learning algorithms
+e.g.\ SIFT features~\citep{Lowe04}, but deep learning algorithms
 tend to discover similar features in their first few 
-levels~\cite{HonglakL2008,ranzato-08,Koray-08,VincentPLarochelleH2008-very-small}.
+levels~\citep{HonglakL2008,ranzato-08,Koray-08,VincentPLarochelleH2008-very-small}.
 Learning increases the
 ease and practicality of developing representations that are at once
 tailored to specific tasks, yet are able to borrow statistical strength
@@ -77,9 +77,9 @@
 It is also only recently that successful algorithms were proposed to
 overcome some of these difficulties.  All are based on unsupervised
 learning, often in a greedy layer-wise ``unsupervised pre-training''
-stage~\cite{Bengio-2009}.  One of these layer initialization techniques,
+stage~\citep{Bengio-2009}.  One of these layer initialization techniques,
 applied here, is the Denoising
-Auto-Encoder~(DEA)~\cite{VincentPLarochelleH2008-very-small}, which
+Auto-Encoder~(DEA)~\citep{VincentPLarochelleH2008-very-small}, which
 performed similarly to or better than previously proposed Restricted Boltzmann
 Machines in terms of unsupervised extraction of a hierarchy of features
 useful for classification.  The principle is that each layer starting from
@@ -99,7 +99,7 @@
 classifier better not only on similarly perturbed images but also on
 the {\em original clean examples}?
 \item Do deep architectures benefit more from such {\em out-of-distribution}
-examples, i.e. do they benefit more from the self-taught learning~\cite{RainaR2007} framework?
+examples, i.e. do they benefit more from the self-taught learning~\citep{RainaR2007} framework?
 \item Similarly, does the feature learning step in deep learning algorithms benefit more from
 training with similar but different classes (i.e. a multi-task learning scenario) than
 a corresponding shallow and purely supervised architecture?
@@ -110,7 +110,7 @@
 
 This section describes the different transformations we used to stochastically
 transform source images in order to generate new training examples. More details can
-be found in this technical report~\cite{ift6266-tr-anonymous}.
+be found in this technical report~\citep{ift6266-tr-anonymous}.
 The code for these transformations (mostly Python) is available at 
 {\tt http://anonymous.url.net}. All the modules in the pipeline share
 a global control parameter ($0 \le complexity \le 1$) that allows one to modulate the
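For illustration, here is a minimal sketch of how such a pipeline can be driven by the global $complexity$ parameter. The module interface assumed below (a callable taking the image, the complexity and a random generator) is an assumption for this sketch, not the actual API of the released code:

import numpy as np

def apply_pipeline(image, modules, complexity=0.5, rng=None):
    """Apply a sequence of stochastic transformation modules to a 32x32 image.

    Each module is assumed to be a callable (image, complexity, rng) -> image;
    complexity in [0, 1] modulates the strength of every perturbation.
    """
    rng = rng or np.random.RandomState()
    out = image.copy()
    for module in modules:
        out = module(out, complexity, rng)
    return np.clip(out, 0.0, 1.0)   # keep pixel intensities in [0, 1]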
@@ -130,7 +130,7 @@
 maximum displacement for the lowest or highest pixel line is
 $round(complexity \times 32)$.\\
 {\bf Thickness}\\
-Morpholigical operators of dilation and erosion~\cite{Haralick87,Serra82}
+Morphological operators of dilation and erosion~\citep{Haralick87,Serra82}
 are applied. The neighborhood of each pixel is multiplied
 element-wise with a {\em structuring element} matrix.
 The pixel value is replaced by the maximum or the minimum of the resulting
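As an illustration of the thickness module, here is a hedged sketch using grey-scale dilation and erosion from scipy.ndimage; the structuring-element size and the 50/50 choice between thickening and thinning are illustrative assumptions, not the pipeline's actual settings:

import numpy as np
from scipy import ndimage

def thickness(image, complexity, rng):
    # Structuring-element size grows with complexity (illustrative choice).
    size = 1 + int(round(2 * complexity))
    selem = np.ones((size, size))              # flat structuring element
    if rng.rand() < 0.5:
        return ndimage.grey_dilation(image, footprint=selem)   # thicken strokes
    return ndimage.grey_erosion(image, footprint=selem)        # thin strokes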
@@ -156,7 +156,7 @@
 \times complexity]$ and $c$ and $f$ $\sim U[-4 \times complexity, 4 \times
 complexity]$.\\
 {\bf Local Elastic Deformations}\\
-This filter induces a "wiggly" effect in the image, following~\cite{SimardSP03},
+This filter induces a ``wiggly'' effect in the image, following~\citet{SimardSP03},
 which provides more details. 
 Two ``displacement'' fields are generated and applied, for horizontal
 and vertical displacements of pixels. 
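A minimal sketch of the local elastic deformation in the spirit of Simard et al.: two random displacement fields are smoothed with a Gaussian filter and scaled by the complexity. The smoothing width sigma and the displacement magnitude below are illustrative values, not the paper's exact settings:

import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(image, complexity, rng, sigma=3.0):
    h, w = image.shape
    alpha = 8.0 * complexity                     # displacement magnitude (illustrative)
    # Horizontal and vertical displacement fields, smoothed to be locally coherent.
    dx = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    y, x = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    coords = np.array([y + dy, x + dx])
    return map_coordinates(image, coords, order=1, mode='reflect')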
@@ -171,7 +171,7 @@
 {\bf Pinch}\\
 This GIMP filter is named ``Whirl and
 pinch'', but whirl was set to 0. A pinch is ``similar to projecting the image onto an elastic
-surface and pressing or pulling on the center of the surface''~\cite{GIMP-manual}.
+surface and pressing or pulling on the center of the surface''~\citep{GIMP-manual}.
 For a square input image, think of drawing a circle of
 radius $r$ around a center point $C$. Any point (pixel) $P$ belonging to
 that disk (the region inside the circle) will have its value recalculated by taking
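A simplified radial remapping that conveys the idea of the pinch; this is an illustrative approximation, not the exact formula of the GIMP ``Whirl and pinch'' plug-in:

import numpy as np

def pinch(image, complexity, rng):
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radius = min(h, w) / 2.0
    amount = rng.uniform(-complexity, complexity)     # pinch strength
    y, x = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    dy, dx = y - cy, x - cx
    d = np.sqrt(dy ** 2 + dx ** 2) / radius           # normalised distance to centre
    inside = (d < 1.0) & (d > 0.0)
    # amount > 0 samples closer to the centre (bulge); amount < 0 samples farther out (pinch).
    scale = np.ones_like(d)
    scale[inside] = d[inside] ** amount
    src_y = np.clip(cy + dy * scale, 0, h - 1).round().astype(int)
    src_x = np.clip(cx + dx * scale, 0, w - 1).round().astype(int)
    return image[src_y, src_x]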
@@ -198,7 +198,7 @@
 closer to black. The corners of the occluding rectangle
 are sampled so that larger complexity gives larger rectangles.
 The destination position in the occluded image is also sampled
-according to a normal distribution (see more details in~\cite{ift6266-tr-anonymous}.
+according to a normal distribution (see more details in~\citet{ift6266-tr-anonymous}).
 It has a probability of 60\% of not being applied at all.\\
 {\bf Pixel Permutation}\\
 This filter permutes neighbouring pixels. It first selects
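An illustrative sketch of the occlusion step: a rectangle cut from another character image is pasted at a normally sampled position, and the whole module is skipped 60% of the time. The exact sampling distributions below are simplifying assumptions:

import numpy as np

def occlude(image, occluder, complexity, rng):
    if rng.rand() < 0.6:                              # not applied 60% of the time
        return image
    h, w = image.shape
    ph = max(1, int(rng.uniform(0, complexity) * h))  # rectangle grows with complexity
    pw = max(1, int(rng.uniform(0, complexity) * w))
    sy, sx = rng.randint(0, h - ph + 1), rng.randint(0, w - pw + 1)
    # Destination position sampled from a normal distribution around the centre.
    dy = int(np.clip(rng.normal(h / 2, h / 4), 0, h - ph))
    dx = int(np.clip(rng.normal(w / 2, w / 4), 0, w - pw))
    out = image.copy()
    patch = occluder[sy:sy + ph, sx:sx + pw]
    out[dy:dy + ph, dx:dx + pw] = np.maximum(out[dy:dy + ph, dx:dx + pw], patch)
    return out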
@@ -212,7 +212,7 @@
 noise $\sim Normal(0, (\frac{complexity}{10})^2)$.
 It has a probability of 70\% of not being applied at all.\\
 {\bf Background Images}\\
-Following~\cite{Larochelle-jmlr-2009}, this transformation adds a random
+Following~\citet{Larochelle-jmlr-2009}, this transformation adds a random
 background behind the letter. The background is chosen by first selecting,
 at random, an image from a set of images. Then a 32$\times$32 subregion
 of that image is chosen as the background image (by sampling position
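Two of the simpler modules, sketched under the same assumptions; the compositing rule for the background (pixel-wise max) is an illustrative choice rather than the pipeline's actual rule, and background images are assumed to be at least 32$\times$32:

import numpy as np

def gaussian_noise(image, complexity, rng):
    # Noise ~ Normal(0, (complexity/10)^2); skipped 70% of the time.
    if rng.rand() < 0.7:
        return image
    noisy = image + rng.normal(0.0, complexity / 10.0, image.shape)
    return np.clip(noisy, 0.0, 1.0)

def add_background(image, bg_pool, rng):
    # Pick a random image, then a random 32x32 crop of it, as the background.
    bg = bg_pool[rng.randint(len(bg_pool))]
    y = rng.randint(bg.shape[0] - 32 + 1)
    x = rng.randint(bg.shape[1] - 32 + 1)
    crop = bg[y:y + 32, x:x + 32]
    return np.maximum(image, crop)      # keep the (bright) character in front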
@@ -367,7 +367,7 @@
 The stacked version is an adaptation to deep MLPs in which each layer is initialized with a denoising auto-encoder, starting from the bottom layer.
 During the initialization, which is usually called pre-training, the bottom layer is treated as if it were an isolated auto-encoder.
 The second and following layers receive the same treatment, except that each takes as input the encoded version of the data produced by the layers below it.
-For additional details see \cite{vincent:icml08}.
+For additional details see \citet{vincent:icml08}.
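A compact, self-contained sketch of the greedy layer-wise procedure described above. It is illustrative only: it uses squared-error reconstruction, tied weights and toy hyper-parameters, which need not match the paper's actual training setup:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class DenoisingAutoencoder:
    """One hidden layer, tied weights, squared-error reconstruction (sketch)."""
    def __init__(self, n_in, n_hid, rng):
        self.W = rng.uniform(-0.1, 0.1, (n_in, n_hid))
        self.b_hid = np.zeros(n_hid)
        self.b_vis = np.zeros(n_in)
        self.rng = rng

    def encode(self, x):
        return sigmoid(x @ self.W + self.b_hid)

    def decode(self, h):
        return sigmoid(h @ self.W.T + self.b_vis)

    def train_step(self, x, corruption=0.25, lr=0.05):
        # Corrupt the input by zeroing a random fraction of the pixels.
        mask = self.rng.rand(*x.shape) > corruption
        h = self.encode(x * mask)
        r = self.decode(h)
        # Squared-error gradients, backpropagated through the tied weights.
        d_r = (r - x) * r * (1 - r)
        d_h = (d_r @ self.W) * h * (1 - h)
        grad_W = (x * mask).T @ d_h + d_r.T @ h
        self.W -= lr * grad_W / x.shape[0]
        self.b_hid -= lr * d_h.mean(axis=0)
        self.b_vis -= lr * d_r.mean(axis=0)

def pretrain_stack(data, layer_sizes, epochs=10, rng=None):
    """Greedy layer-wise pre-training: each auto-encoder is trained on the
    encoded output of the layers below it, then its encoding feeds the next."""
    rng = rng or np.random.RandomState(0)
    layers, x = [], data                  # data: (n_examples, n_features)
    for n_hid in layer_sizes:
        dae = DenoisingAutoencoder(x.shape[1], n_hid, rng)
        for _ in range(epochs):
            dae.train_step(x)
        layers.append(dae)
        x = dae.encode(x)                 # representation for the next layer
    return layers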
 
 \section{Experimental Results}
 
@@ -379,8 +379,8 @@
 service\footnote{http://mturk.com}. AMT users are paid small amounts
 of money to perform tasks for which human intelligence is required.
 Mechanical Turk has been used extensively in natural language
-processing \cite{SnowEtAl2008} and vision
-\cite{SorokinAndForsyth2008,whitehill09}. AMT users where presented
+processing \citep{SnowEtAl2008} and vision
+\citep{SorokinAndForsyth2008,whitehill09}. AMT users were presented
 with 10 character images and asked to type the 10 corresponding ASCII
 characters. Hence they were forced to make a hard choice among the
 62 character classes. Three users classified each image, allowing
@@ -408,10 +408,10 @@
 MLP0   &  24.2\% $\pm$.15\%  & 68.8\%$\pm$.33\%  & 78.70\%$\pm$.14\%  & 3.45\% $\pm$.15\% \\ \hline 
 MLP1   &  23.0\% $\pm$.15\%  &  41.8\%$\pm$.35\%  & 90.4\%$\pm$.1\%  & 3.85\% $\pm$.16\% \\ \hline 
 MLP2   &  24.3\% $\pm$.15\%  &  46.0\%$\pm$.35\%  & 54.7\%$\pm$.17\%  & 4.85\% $\pm$.18\% \\ \hline 
-\cite{Granger+al-2007} &     &                    &                   & 4.95\% $\pm$.18\% \\ \hline
-\cite{Cortes+al-2000} &      &                    &                   & 3.71\% $\pm$.16\% \\ \hline
-\cite{Oliveira+al-2002} &    &                    &                   & 2.4\% $\pm$.13\% \\ \hline
-\cite{Migram+al-2005} &      &                    &                   & 2.1\% $\pm$.12\% \\ \hline
+\citep{Granger+al-2007} &     &                    &                   & 4.95\% $\pm$.18\% \\ \hline
+\citep{Cortes+al-2000} &      &                    &                   & 3.71\% $\pm$.16\% \\ \hline
+\citep{Oliveira+al-2002} &    &                    &                   & 2.4\% $\pm$.13\% \\ \hline
+\citep{Migram+al-2005} &      &                    &                   & 2.1\% $\pm$.12\% \\ \hline
 \end{tabular}
 \end{center}
 \end{table}
@@ -427,7 +427,7 @@
 from perturbed training data, even when testing on clean data, whereas the MLP
 trained on perturbed data performed worse on the clean digits and about the same
 on the clean characters. }
-\label{tab:sda-vs-mlp-vs-humans}
+\label{tab:perturbation-effect}
 \begin{center}
 \begin{tabular}{|l|r|r|r|r|} \hline
       & NIST test          & NISTP test      & P07 test       & NIST test digits   \\ \hline
@@ -490,7 +490,8 @@
 \section{Conclusions}
 
 \bibliography{strings,ml,aigaion,specials}
-\bibliographystyle{mlapa}
+%\bibliographystyle{plainnat}
+\bibliographystyle{unsrtnat}
 %\bibliographystyle{apalike}
 
 \end{document}