\documentclass{article} % For LaTeX2e
\usepackage{nips10submit_e,times}

\usepackage{amsthm,amsmath,amssymb,bbold,bbm}
\usepackage{algorithm,algorithmic}
\usepackage[utf8]{inputenc}
\usepackage{graphicx,subfigure}
\usepackage{mlapa}

\title{Generating and Exploiting Perturbed and Multi-Task Handwritten Training Data for Deep Architectures}
\author{The IFT6266 Gang}

\begin{document}

%\makeanontitle
\maketitle

\begin{abstract}
Recent theoretical and empirical work in statistical machine learning has
demonstrated the importance of learning algorithms for deep
architectures, i.e., function classes obtained by composing multiple
non-linear transformations. In the area of handwriting recognition,
deep learning algorithms have so far been evaluated on rather small datasets
with a few tens of thousands of examples. Here we propose a powerful generator
of variations of examples for character images, based on a pipeline of stochastic
transformations that include not only the usual affine transformations
but also the addition of slant, local elastic deformations, changes
in thickness, background images, color, contrast, occlusion, and
various types of pixel and spatially correlated noise.
We evaluate a deep learning algorithm (Stacked Denoising Autoencoders)
on the task of learning to classify digits and letters transformed
with this pipeline, using hundreds of millions of generated examples
and testing on the full 62-class NIST test set.
We find that the SDA outperforms its
shallow counterpart, an ordinary Multi-Layer Perceptron,
and that it is better able to take advantage of the additional
generated data, as well as of the multi-task setting, i.e.,
training on more classes than those of interest in the end.
In fact, we find that the SDA reaches human performance as
estimated via Amazon Mechanical Turk on the 62-class NIST test characters.
\end{abstract}

\section{Introduction}

Deep Learning has emerged as a promising new area of research in
statistical machine learning (see~\emcite{Bengio-2009} for a review).
Learning algorithms for deep architectures are centered on the learning
of useful representations of data, which are better suited to the task at hand.
This is in great part inspired by observations of the mammalian visual cortex,
which consists of a chain of processing elements, each of which is associated with a
different representation. In fact,
it was found recently that the features learnt in deep architectures resemble
those observed in the first two of these stages (in areas V1 and V2
of the visual cortex)~\cite{HonglakL2008}.
Processing images typically involves transforming the raw pixel data into
new {\bf representations} that can be used for analysis or classification.
For example, a principal component analysis representation linearly projects
the input image into a lower-dimensional feature space.
Why learn a representation? Current practice in the computer vision
literature converts the raw pixels into a hand-crafted representation
(e.g.\ SIFT features~\cite{Lowe04}), but deep learning algorithms
tend to discover similar features in their first few
levels~\cite{HonglakL2008,ranzato-08,Koray-08,VincentPLarochelleH2008-very-small}.
Learning increases the
ease and practicality of developing representations that are at once
tailored to specific tasks, yet able to borrow statistical strength
from other related tasks (e.g., modeling different kinds of objects). Finally, learning the
feature representation can lead to higher-level (more abstract, more
general) features that are more robust to unanticipated sources of
variation present in real data.

Whereas a deep architecture can in principle be more powerful than a
shallow one in terms of representation, depth appears to render the
training problem more difficult in terms of optimization and local minima.
It is also only recently that successful algorithms were proposed to
overcome some of these difficulties. All are based on unsupervised
learning, often in a greedy layer-wise ``unsupervised pre-training''
stage~\cite{Bengio-2009}. One of these layer initialization techniques,
applied here, is the Denoising
Auto-Encoder~(DAE)~\cite{VincentPLarochelleH2008-very-small}, which
performs similarly to or better than previously proposed Restricted Boltzmann
Machines in terms of unsupervised extraction of a hierarchy of features
useful for classification. The principle is that each layer, starting from
the bottom, is trained to encode its input (the output of the previous
layer) and to reconstruct it from a corrupted version of that input. After this
unsupervised initialization, the stack of denoising auto-encoders can be
converted into a deep supervised feedforward neural network and trained by
stochastic gradient descent.

\section{Perturbation and Transformation of Character Images}

This section describes the different transformations we used to generate data,
in the order in which they are applied.
The code for these transformations (mostly python) is available at
{\tt http://anonymous.url.net}. All the modules in the pipeline share
a global control parameter ($0 \le complexity \le 1$) that allows one to modulate the
amount of deformation or noise introduced.

The pipeline has two main parts. The first one,
from slant to pinch, performs transformations of the character itself. The second
part, from blur to contrast, adds noise to the image.
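For concreteness, the following sketch illustrates how the modules are
chained; the function names and signatures are illustrative only and do
not correspond exactly to the released code.
\begin{verbatim}
# Illustrative structure of the data-generation pipeline (function
# names and signatures are ours, not the released code's interface).
import numpy

def apply_pipeline(image, transforms, complexity, rng=numpy.random):
    # image: 32x32 array with values in [0, 1]; 0 <= complexity <= 1
    out = image
    for transform in transforms:        # e.g. [slant, thickness, ...]
        out = transform(out, complexity, rng)
    return numpy.clip(out, 0.0, 1.0)
\end{verbatim}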
\subsection{Slant}

In order to mimic a slant effect, we simply shift each row of the image
proportionally to its height: $shift = round(slant \times height)$. We
round the shift in order to obtain a discrete displacement. We do not use a
filter to smooth the result, both to save computing time and because
later transformations have similar effects.

The $slant$ coefficient can be negative or positive with equal probability
and its value is randomly sampled according to the complexity level: we
sample a number uniformly in the range $[0,complexity]$, so the
maximum displacement for the lowest or highest pixel line is
$round(complexity \times 32)$.
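A minimal sketch of this module, assuming a $32\times32$ greyscale image
stored as a numpy array with background value 0:
\begin{verbatim}
import numpy

def slant(image, complexity, rng=numpy.random):
    amount = rng.uniform(0, complexity) * rng.choice([-1, 1])
    out = numpy.zeros_like(image)
    for y in range(image.shape[0]):
        shift = int(round(amount * y))   # discrete row displacement
        if shift >= 0:
            out[y, shift:] = image[y, :image.shape[1] - shift]
        else:
            out[y, :shift] = image[y, -shift:]
    return out
\end{verbatim}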
\subsection{Thickness}

To change the thickness of the characters we use morphological operators:
dilation and erosion~\cite{Haralick87,Serra82}.

The basic idea of these transforms is, for each pixel, to multiply its
neighbourhood element-wise with a matrix called the structuring
element. For dilation we then replace the pixel value by the maximum of
the result, and by the minimum for erosion. This dilates or erodes objects
in the image, and the strength of the transform depends only on the
structuring element.

We use ten different structuring elements of increasing size (the
biggest is $5\times5$). For each image, we randomly sample the operator
type (dilation or erosion) with equal probability and one structuring
element from a subset of the $n$ smallest structuring elements, where $n$ is
$round(10 \times complexity)$ for dilation and $round(6 \times complexity)$
for erosion. A neutral element is always present in the set; if it is
chosen, the transformation is not applied. Erosion is restricted to the six
smallest structuring elements because when the character is too thin, erosion may
erase it completely.
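The following sketch illustrates the idea using flat greyscale morphology
from {\tt scipy.ndimage}; for brevity it uses only square structuring
elements rather than the actual set of ten elements used in our code.
\begin{verbatim}
import numpy
from scipy import ndimage

def thickness(image, complexity, rng=numpy.random):
    # simplified structuring elements: square footprints up to 5x5,
    # with the 1x1 element acting as the neutral (identity) element
    elems = [numpy.ones((k, k), dtype=bool) for k in range(1, 6)]
    dilate = rng.uniform(0, 1) < 0.5
    n = int(round((10 if dilate else 6) * complexity))
    idx = min(rng.randint(0, n + 1), len(elems) - 1) if n > 0 else 0
    if idx == 0:
        return image                 # neutral element: no change
    op = ndimage.grey_dilation if dilate else ndimage.grey_erosion
    return op(image, footprint=elems[idx])
\end{verbatim}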
\subsection{Affine Transformations}

We generate an affine transform matrix according to the complexity level,
then we apply it directly to the image. The matrix is of size $2 \times
3$, so we can represent it by six parameters $(a,b,c,d,e,f)$. Formally,
each pixel $(x,y)$ of the output image receives the value of the pixel
nearest to $(ax+by+c,dx+ey+f)$ in the input image. This allows us to
produce scaling, translation, rotation and shearing variations.

The sampling of the parameters $(a,b,c,d,e,f)$ has been tuned by hand to
forbid large rotations (so as not to confuse classes) while still giving good
variability to the transformation. For each image we sample the
parameters uniformly in the following ranges: $a$ and $d$ in $[1-3 \times
complexity,1+3 \times complexity]$, $b$ and $e$ in $[-3 \times complexity,3
\times complexity]$, and $c$ and $f$ in $[-4 \times complexity, 4 \times
complexity]$.
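A nearest-neighbour sketch of this inverse mapping, with the parameter
ranges given above (reading $(x,y)$ as (column, row) is our convention
here, not necessarily that of the actual code):
\begin{verbatim}
import numpy

def affine(image, complexity, rng=numpy.random):
    cx = complexity
    a, d = rng.uniform(1 - 3 * cx, 1 + 3 * cx, size=2)
    b, e = rng.uniform(-3 * cx, 3 * cx, size=2)
    c, f = rng.uniform(-4 * cx, 4 * cx, size=2)
    h, w = image.shape
    out = numpy.zeros_like(image)
    for y in range(h):
        for x in range(w):
            sx = int(round(a * x + b * y + c))   # source column
            sy = int(round(d * x + e * y + f))   # source row
            if 0 <= sx < w and 0 <= sy < h:
                out[y, x] = image[sy, sx]
    return out
\end{verbatim}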
\subsection{Local Elastic Deformations}

This filter induces a ``wiggly'' effect in the image. The description here
will be brief, as the algorithm follows precisely what is described in
\cite{SimardSP03}.

The general idea is to generate two ``displacement'' fields, for horizontal
and vertical displacements of pixels. Each of these fields has the same
size as the original image.

When generating the transformed image, we loop over the x and y
positions in the fields and select, as a value, the value of the pixel in
the original image at the (relative) position given by the displacement
fields for this x and y. If the position we would retrieve falls outside the
borders of the image, we use a value of 0 instead.

To generate a pixel in either field, a value between -1 and 1 is first
drawn from a uniform distribution. Then all the pixels, in both fields, are
multiplied by a constant $\alpha$ which controls the intensity of the
displacements (a bigger $\alpha$ translates into larger wiggles).

As a final step, each field is convolved with a 2D Gaussian kernel of
standard deviation $\sigma$. Visually, this is a ``blur''
filter. It has the effect of making neighbouring values in the
displacement fields similar, so that the wiggles are more
coherent and less noisy.

As the displacement fields were slow to compute, 50 pairs of fields were
generated per complexity level in increments of 0.1 (50 pairs for 0.1, 50 pairs
for 0.2, etc.); afterwards, given a complexity, we select randomly
among the 50 corresponding pairs.

$\sigma$ and $\alpha$ are linked to complexity through the formulas
$\alpha = \sqrt[3]{complexity} \times 10.0$ and $\sigma = 10 - 7 \times
\sqrt[3]{complexity}$.
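A compact sketch of the per-image procedure (omitting the caching of
precomputed field pairs described above):
\begin{verbatim}
import numpy
from scipy import ndimage

def elastic(image, complexity, rng=numpy.random):
    alpha = complexity ** (1.0 / 3.0) * 10.0
    sigma = 10.0 - 7.0 * complexity ** (1.0 / 3.0)
    h, w = image.shape
    dx = ndimage.gaussian_filter(rng.uniform(-1, 1, (h, w)) * alpha,
                                 sigma)
    dy = ndimage.gaussian_filter(rng.uniform(-1, 1, (h, w)) * alpha,
                                 sigma)
    out = numpy.zeros_like(image)
    for y in range(h):
        for x in range(w):
            sy = int(round(y + dy[y, x]))
            sx = int(round(x + dx[y, x]))
            if 0 <= sy < h and 0 <= sx < w:
                out[y, x] = image[sy, sx]
    return out
\end{verbatim}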
\subsection{Pinch}

This is another GIMP filter we use. The filter is in fact named ``Whirl and
pinch'', but we do not use the ``whirl'' part (whirl is set to 0). As described
in GIMP, a pinch is ``similar to projecting the image onto an elastic
surface and pressing or pulling on the center of the surface''.

Mathematically, for a square input image, think of drawing a circle of
radius $r$ around a center point $C$. Any point (pixel) $P$ belonging to
that disk (the region inside the circle) will have its value recalculated by taking
the value of another ``source'' pixel in the original image. The position of
that source pixel is found on the line that goes through $C$ and $P$, but
at some other distance $d_2$. Define $d_1$ to be the distance between $P$
and $C$. Then $d_2$ is given by $d_2 = \sin(\frac{\pi{}d_1}{2r})^{-pinch} \times
d_1$, where $pinch$ is a parameter of the filter.

If the region considered is not square then, before computing $d_2$, the
smallest dimension (x or y) is stretched such that we may consider the
region as if it were square. Then, after $d_2$ has been computed and the
corresponding components $d_2\_x$ and $d_2\_y$ have been found, the
component corresponding to the stretched dimension is compressed back by the
inverse ratio.

The actual value is given by bilinear interpolation considering the pixels
around the (non-integer) source position thus found.

The value of $pinch$ in our case is sampled from a uniform
distribution over the range $[-complexity, 0.7 \times complexity]$.
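The following rough sketch implements the mapping above for a square
image, using nearest-neighbour lookup rather than the bilinear
interpolation of the GIMP filter; the choice of center and radius is an
assumption on our part.
\begin{verbatim}
import numpy

def pinch(image, complexity, rng=numpy.random):
    strength = rng.uniform(-complexity, 0.7 * complexity)
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = min(cy, cx)                  # radius of the affected disk
    out = image.copy()
    for y in range(h):
        for x in range(w):
            d1 = numpy.hypot(y - cy, x - cx)
            if 0 < d1 < r:
                s = numpy.sin(numpy.pi * d1 / (2 * r))
                d2 = s ** (-strength) * d1
                sy = int(round(cy + (y - cy) * d2 / d1))
                sx = int(round(cx + (x - cx) * d2 / d1))
                if 0 <= sy < h and 0 <= sx < w:
                    out[y, x] = image[sy, sx]
    return out
\end{verbatim}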
\subsection{Motion Blur}

This is a GIMP filter we apply, a ``linear motion blur'' in GIMP
terminology. The description will be brief as it is a well-known filter.

The algorithm has two input parameters, $length$ and $angle$. The value of
a pixel in the final image is the mean value of the first $length$ pixels
found by moving in the $angle$ direction. Since following that direction
does not fall exactly on pixel centers, an approximation of this idea is
used, based on the Bresenham line algorithm.

The angle, in our case, is chosen from a uniform distribution over
$[0,360]$ degrees. The length, however, depends on the complexity; it is
sampled from a Gaussian distribution of mean 0 and standard deviation
$\sigma = 3 \times complexity$.
\subsection{Occlusion}

This filter selects random parts of other (hereafter ``occlusive'') letter
images and places them over the original (hereafter ``occluded'')
image. To be more precise, having selected a subregion of the occlusive
image and a destination position in the occluded image, the
final value for a given overlapping pixel is that of whichever pixel is
the lightest. As a reminder, the background value is 0 (black), so the value
nearest to 1 is selected.

To select a subpart of the occlusive image, four numbers are generated. For
compatibility with the code, we call them ``haut'', ``bas'', ``gauche'' and
``droite'' (respectively top, bottom, left and right). Each of these
numbers is selected according to a Gaussian distribution of mean $8 \times
complexity$ and standard deviation $2$: the larger the
complexity, the bigger the occlusion. The absolute value is
taken, as the numbers must be positive, and the maximum value is capped at
15.

These four sizes collectively define a window centered on the middle pixel
of the occlusive image. This is the part that will be extracted as the
occlusion.

The next step is to select a destination position in the occluded
image. Vertical and horizontal displacements $y\_arrivee$ and $x\_arrivee$
are selected according to Gaussian distributions of mean 0 and of standard
deviations of, respectively, 3 and 2. Then a horizontal placement mode,
$place$, is selected among three values meaning
left, middle or right.

If $place$ is ``middle'', the occlusion is horizontally centered
around the horizontal middle of the occluded image, then shifted according
to $x\_arrivee$. If $place$ is ``left'', it is placed on the left of
the occluded image, then displaced right according to $x\_arrivee$. The
contrary happens if $place$ is ``right''.

In both the horizontal and vertical positioning, the maximum position in
either direction is such that the selected occlusion does not go beyond the
borders of the occluded image.

This filter has a 60\% probability of not being applied at all.

\subsection{Pixel Permutation}

This filter permutes neighbouring pixels. It first selects a fraction
$\frac{complexity}{3}$ of the pixels randomly in the image. Each of them is then
sequentially exchanged with another pixel in its $V4$ neighbourhood. The numbers
of exchanges to the left, right, top and bottom are equal, or differ
by at most 1 when the number of selected pixels is not a multiple of 4.

This filter has an 80\% probability of not being applied at all.

\subsection{Gaussian Noise}

This filter simply adds, to each pixel of the image independently,
Gaussian noise of mean $0$ and standard deviation $\frac{complexity}{10}$.

This filter has a 70\% probability of not being applied at all.

\subsection{Background Images}

Following~\cite{Larochelle-jmlr-2009}, this transformation adds a random
background behind the letter. The background is chosen by first selecting,
at random, an image from a set of images. Then we choose a $32\times32$ subregion
of that image as the background image (by sampling x and y positions
uniformly while making sure not to cross image borders).

To combine the original letter image and the background image, contrast
adjustments are made. We first get the maximal values (i.e.\ maximal
intensity) of both the original image and the background image, $maximage$
and $maxbg$. We also have a parameter, $contrast$, sampled from a
uniform distribution over $[complexity, 1]$.

Once we have all these numbers, we first adjust the values of the
background image: each pixel value is multiplied by $\frac{max(maximage -
contrast, 0)}{maxbg}$. Therefore, the higher the contrast, the darker the
background will be.

The final image is obtained by taking, at each pixel location, the brightest
(i.e.\ the value nearest to 1) of the background image pixel and the corresponding
original image pixel.
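A minimal sketch of this combination step, where {\tt bg\_pool} stands in
for the set of (larger) background images:
\begin{verbatim}
import numpy

def add_background(image, complexity, bg_pool, rng=numpy.random):
    big = bg_pool[rng.randint(0, len(bg_pool))]
    y = rng.randint(0, big.shape[0] - 32 + 1)
    x = rng.randint(0, big.shape[1] - 32 + 1)
    bg = big[y:y + 32, x:x + 32].astype(float)
    contrast = rng.uniform(complexity, 1.0)
    maximage, maxbg = image.max(), max(bg.max(), 1e-8)
    bg *= max(maximage - contrast, 0.0) / maxbg   # darken background
    return numpy.maximum(image, bg)               # keep brightest pixel
\end{verbatim}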
\subsection{Salt and Pepper Noise}

This filter adds noise to the image by randomly selecting a certain number
of pixels and, for those selected pixels, assigning a random value drawn
uniformly over the $[0,1]$ range. This distribution does
not change with complexity. Instead, the number of selected pixels
does: the proportion of changed pixels corresponds to $complexity / 5$,
which means that, at most, 20\% of the pixels will be randomized. At the
lowest extreme, no pixel is changed.

This filter has a 75\% probability of not being applied at all.
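This module is simple enough to sketch in a few lines:
\begin{verbatim}
import numpy

def salt_and_pepper(image, complexity, rng=numpy.random):
    if rng.uniform(0, 1) < 0.75:        # skipped 75% of the time
        return image
    out = image.copy()
    mask = rng.uniform(0, 1, image.shape) < complexity / 5.0
    out[mask] = rng.uniform(0, 1, int(mask.sum()))
    return out
\end{verbatim}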
\subsection{Spatially Gaussian Noise}

The aim of this transformation is to filter different regions of the image
with a Gaussian kernel. In order to save computing time we decided
to convolve the whole image only once with a symmetric Gaussian kernel whose
size and variance are chosen uniformly in the ranges $[12,12 + 20 \times
complexity]$ and $[2,2 + 6 \times complexity]$. The result is normalized
between $0$ and $1$. We also create a symmetric averaging window, of the
kernel size, with maximum value at the center. For each image we sample
uniformly from $3$ to $3 + 10 \times complexity$ pixels that will be
averaging centers between the original image and the filtered one. We
initialize a mask matrix of the image size to zero. For each selected pixel
we add to the mask the averaging window centered on it. The final image is
computed by the following element-wise operation: $\frac{image + filtered\_image
\times mask}{mask+1}$.

This filter has a 75\% probability of not being applied at all.
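A rough sketch of this module follows; since the exact shape of the
averaging window is not specified above, a Gaussian bump with maximum
value 1 at its center is assumed here.
\begin{verbatim}
import numpy
from scipy import ndimage

def spatial_gaussian(image, complexity, rng=numpy.random):
    if rng.uniform(0, 1) < 0.75:        # skipped 75% of the time
        return image
    size = int(rng.uniform(12, 12 + 20 * complexity))
    var = rng.uniform(2, 2 + 6 * complexity)
    filtered = ndimage.gaussian_filter(image, numpy.sqrt(var))
    span = max(filtered.max() - filtered.min(), 1e-8)
    filtered = (filtered - filtered.min()) / span
    # averaging window: a bump with maximum value 1 at its center
    ax = numpy.arange(size) - (size - 1) / 2.0
    d2 = ax[:, None] ** 2 + ax[None, :] ** 2
    window = numpy.exp(-d2 / (2.0 * (size / 4.0) ** 2))
    h, w = image.shape
    mask = numpy.zeros((h, w))
    for _ in range(rng.randint(3, 3 + int(10 * complexity) + 1)):
        top = rng.randint(0, h) - size // 2
        left = rng.randint(0, w) - size // 2
        y0, x0 = max(top, 0), max(left, 0)
        y1, x1 = min(top + size, h), min(left + size, w)
        mask[y0:y1, x0:x1] += window[y0 - top:y1 - top,
                                     x0 - left:x1 - left]
    return (image + filtered * mask) / (mask + 1.0)
\end{verbatim}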
\subsection{Scratches}

The scratches module places line-like white patches on the image. The
lines are in fact heavily transformed images of the digit ``1'' (one), chosen
at random among five thousand such source images of this digit.

Once the image is selected, the transformation begins by finding the first
$top$, $bottom$, $right$ and $left$ non-zero pixels in the image. It is
then cropped to the region thus delimited, and this cropped version is
expanded to $32\times32$ again. It is then rotated by a random angle drawn from a
Gaussian distribution of mean 90 and standard deviation $100 \times
complexity$ (in degrees). The rotation is done with bicubic interpolation.

The rotated image is then resized to $50\times50$, with anti-aliasing. In
that image, we crop again by selecting the region delimited
horizontally by $left$ to $left+32$ and vertically by $top$ to $top+32$.

Once this is done, two passes of a greyscale morphological erosion filter
are applied. Put briefly, this erosion filter reduces the width of the line
by a certain $smoothing$ amount. For small complexities ($< 0.25$),
$smoothing$ is 6, so the line is very thin. For complexities ranging from
0.25 to 0.5, $smoothing$ is 5. It is 4 for complexities from 0.5 to 0.75, and 3
for higher complexities.

To compensate for border effects, the image is then cropped to $28\times28$ by
removing two pixels everywhere on the borders, then expanded to $32\times32$
again. The pixel values are then linearly rescaled such that the minimum
value is 0 and the maximum one is 1. Then, 50\% of the time, the image is
vertically flipped.

This filter is applied only 15\% of the time. When it is applied, 50\%
of the time only one patch image is generated and applied; in 30\% of
cases, two patches are generated, and otherwise three patches are
generated. The patches are applied by taking the maximal value over the
patches and the original image at each of the $32\times32$ pixel locations.

\subsection{Color and Contrast Changes}

This filter changes the contrast and may invert the image polarity (white
on black to black on white). The contrast $C$ is defined here as the
difference between the maximum and the minimum pixel value of the image. A
contrast value is sampled uniformly between $1$ and $1-0.85 \times
complexity$ (this ensures a minimum contrast of $0.15$). We then simply
normalize the image to the range $[\frac{1-C}{2},1-\frac{1-C}{2}]$. The
polarity is inverted with probability $0.5$.
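A minimal sketch of this module:
\begin{verbatim}
import numpy

def contrast_change(image, complexity, rng=numpy.random):
    c = rng.uniform(1 - 0.85 * complexity, 1.0)   # contrast >= 0.15
    lo, hi = (1 - c) / 2.0, 1 - (1 - c) / 2.0
    span = max(image.max() - image.min(), 1e-8)
    out = lo + (image - image.min()) * (hi - lo) / span
    if rng.uniform(0, 1) < 0.5:                   # invert polarity
        out = 1.0 - out
    return out
\end{verbatim}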

\begin{figure}[h]
\resizebox{.99\textwidth}{!}{\includegraphics{images/example_t.png}}\\
\caption{Illustration of the pipeline of stochastic
transformations applied to the image of a lower-case t
(the upper left image). Each image in the pipeline (going from
left to right, first along the top row, then along the bottom row) shows the result
of applying one of the modules of the pipeline. The last image
(bottom right) is the one used as a training example.}
\label{fig:pipeline}
\end{figure}

\section{Experimental Setup}

\subsection{Training Datasets}

\subsubsection{Data Sources}

\begin{itemize}
\item {\bf NIST}
The NIST Special Database 19 (NIST19) is a very widely used dataset for training and testing OCR systems.
The dataset is composed of 8????? digits and characters (upper and lower case), with hand-checked classifications,
extracted from handwritten sample forms of 3600 writers. The characters are labelled by one of the 62 classes
corresponding to ``0''-``9'', ``A''-``Z'' and ``a''-``z''. The dataset contains 8 series of different complexity.
The fourth series, $hsf_4$, experimentally recognized as the most difficult one for the classification task, is recommended
by NIST as a testing set and is used in our work for that purpose. It contains 82600 examples,
while the training and validation sets (which have the same distribution) contain XXXXX and
XXXXX examples respectively.
The performances reported by previous work on that dataset mostly use only the digits.
Here we use all the classes, both in the training and testing phases. This is especially
useful to estimate the effect of a multi-task setting.
Note that the distribution of the classes in the NIST training and test sets differs
substantially, with relatively many more digits in the test set, and a uniform distribution
of letters in the test set but not in the training set (which is more like the natural distribution
of letters in text).

\item {\bf Fonts} TODO!!!

\item {\bf Captchas}
The Captcha data source is an adaptation of the \emph{pycaptcha} library (a Python-based captcha generator) for
generating characters of the same format as the NIST dataset. The core of this data source is composed of a random character
generator and various kinds of transformations similar to those described in the previous sections.
In order to increase the variability of the data generated, different fonts are used to generate the characters.
Transformations (slant, distortions, rotation, translation) are applied to each randomly generated character with a complexity
depending on the value of the complexity parameter provided by the user of the data source. Two levels of complexity are
allowed and can be controlled via an easy-to-use facade class.
\item {\bf OCR data}
\end{itemize}

\subsubsection{Data Sets}
\begin{itemize}
\item {\bf NIST} This is the raw NIST Special Database 19.
\item {\bf P07}
The dataset P07 is sampled with our transformation pipeline with a complexity parameter of $0.7$.
For each new example to generate, we choose one source with the following probabilities: $0.1$ for the fonts,
$0.25$ for the captchas, $0.25$ for the OCR data and $0.4$ for NIST. We apply all the transformations in order,
and for each of them we sample a complexity uniformly in the range $[0,0.7]$.
\item {\bf NISTP} NISTP is equivalent to P07 (complexity parameter of $0.7$ with the same source proportions)
except that we only apply
transformations from slant to pinch. Therefore, the character is
transformed but no additional noise is added to the image, giving images
closer to the NIST dataset.
\end{itemize}

\subsection{Models and their Hyperparameters}

\subsubsection{Multi-Layer Perceptrons (MLP)}

An MLP is a family of functions obtained by stacking layers, each computing a function of the form
$$g(x) = \tanh(b+Wx)$$
The input, $x$, is a $d$-dimensional vector.
The output, $g(x)$, is an $m$-dimensional vector.
The parameter $W$ is an $m\times d$ matrix called the weight matrix.
The parameter $b$ is an $m$-vector called the bias vector.
The non-linearity (here $\tanh$) is applied element-wise to the output vector.
The input is usually referred to as the input layer, and similarly for the output.
Several such functions can of course be chained to obtain a more complex one.
Here is a common example:
$$f(x) = c + V\tanh(b+Wx)$$
In this case the intermediate layer corresponding to $\tanh(b+Wx)$ is called a hidden layer.
Here the output layer does not have the same non-linearity as the hidden layer.
This is a common situation, where some specialized non-linearity is applied to the output layer only, depending on the task at hand.
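A minimal numpy sketch of the two-layer function above, here with a
softmax output layer (an assumption made for illustration; the actual
output non-linearity depends on the task):
\begin{verbatim}
import numpy

def mlp_forward(x, W, b, V, c):
    # f(x) = softmax(c + V tanh(b + W x))
    h = numpy.tanh(b + W.dot(x))     # hidden layer, m units
    a = c + V.dot(h)                 # output pre-activation
    e = numpy.exp(a - a.max())       # numerically stable softmax
    return e / e.sum()
\end{verbatim}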

If you put 3 or more hidden layers in such a network, you obtain what is called a deep MLP.
The parameters to adapt are the weight matrix and the bias vector of each layer.

\subsubsection{Stacked Denoising Auto-Encoders (SDAE)}
\label{SdA}

Auto-encoders are essentially a way to initialize the weights of the network so as to enable better generalization.
This is unsupervised training in which a layer is made to reconstruct its input through an encoding and a decoding phase.
Denoising auto-encoders are a variant where the input is corrupted with random noise but the target is the uncorrupted input.
The principle behind these initialization methods is that the network will learn the inherent relations between portions of the data and be able to represent them, thus helping with whatever task we want to perform.

An auto-encoder unit is formed of two MLP layers, the bottom one called the encoding layer and the top one the decoding layer.
Usually the top and bottom weight matrices are the transpose of each other and are fixed this way.
The network is trained as such and, when sufficiently trained, the MLP layer is initialized with the parameters of the encoding layer.
The other parameters are discarded.

The stacked version is an adaptation to deep MLPs where each layer is initialized with a denoising auto-encoder, starting from the bottom.
During this initialization, which is usually called pre-training, the bottom layer is treated as if it were an isolated auto-encoder.
The second and following layers receive the same treatment, except that they take as input the encoded version of the data that has gone through the layers before them.
For additional details see \cite{vincent:icml08}.
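A minimal sketch of one denoising auto-encoder layer with tied weights,
zero-out corruption and sigmoid units (these last choices are assumptions
made for illustration); the gradient updates are omitted:
\begin{verbatim}
import numpy

def dae_reconstruct(x, W, b_enc, b_dec, corruption, rng=numpy.random):
    sigmoid = lambda z: 1.0 / (1.0 + numpy.exp(-z))
    keep = rng.uniform(0, 1, x.shape) > corruption
    noisy = x * keep                       # corrupted input
    h = sigmoid(b_enc + W.dot(noisy))      # encoding layer
    r = sigmoid(b_dec + W.T.dot(h))        # decoding layer (tied W)
    # reconstruction error against the *uncorrupted* input
    ce = x * numpy.log(r + 1e-8) + (1 - x) * numpy.log(1 - r + 1e-8)
    return h, -ce.sum()
\end{verbatim}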

\section{Experimental Results}

\subsection{SDA vs MLP vs Humans}

We compare here the best MLP (according to validation set error) that we found against
the best SDA (again according to validation set error), along with a precise estimate
of human performance obtained via Amazon's Mechanical Turk (AMT)
service\footnote{http://mturk.com}. AMT users are paid small amounts
of money to perform tasks for which human intelligence is required.
Mechanical Turk has been used extensively in natural language
processing \cite{SnowEtAl2008} and vision
\cite{SorokinAndForsyth2008,whitehill09}. AMT users were presented
with 10 character images and asked to type the 10 corresponding ASCII
characters. Hence they were forced to make a hard choice among the
62 character classes. Three users classified each image, allowing us
to estimate inter-human variability (shown as $\pm$ in parentheses below).

\begin{table}
\caption{Overall comparison of error rates ($\pm$ std.\ err.) on 62 character classes (10 digits +
26 lower + 26 upper), except for the last column (digits only), between the deep architecture with pre-training
(SDA=Stacked Denoising Autoencoder) and the ordinary shallow architecture
(MLP=Multi-Layer Perceptron). The models shown are all trained using perturbed data (NISTP or P07)
and using a validation set to select hyper-parameters and other training choices.
\{SDA,MLP\}0 are trained on NIST,
\{SDA,MLP\}1 are trained on NISTP, and \{SDA,MLP\}2 are trained on P07.
The human error rate on digits is a lower bound because it does not count digits that were
recognized as letters. For comparison, the results found in the literature
on NIST digits classification using the same test set are included.}
\label{tab:sda-vs-mlp-vs-humans}
\begin{center}
\begin{tabular}{|l|r|r|r|r|} \hline
       & NIST test          & NISTP test        & P07 test           & NIST test digits   \\ \hline
Humans & 18.2\% $\pm$.1\%   & 39.4\%$\pm$.1\%   & 46.9\%$\pm$.1\%    & $>1.1\%$ \\ \hline
SDA0   & 23.7\% $\pm$.14\%  & 65.2\%$\pm$.34\%  & 97.45\%$\pm$.06\%  & 2.7\% $\pm$.14\%\\ \hline
SDA1   & 17.1\% $\pm$.13\%  & 29.7\%$\pm$.3\%   & 29.7\%$\pm$.3\%    & 1.4\% $\pm$.1\%\\ \hline
SDA2   & 18.7\% $\pm$.13\%  & 33.6\%$\pm$.3\%   & 39.9\%$\pm$.17\%   & 1.7\% $\pm$.1\%\\ \hline
MLP0   & 24.2\% $\pm$.15\%  & 68.8\%$\pm$.33\%  & 78.70\%$\pm$.14\%  & 3.45\% $\pm$.15\% \\ \hline
MLP1   & 23.0\% $\pm$.15\%  & 41.8\%$\pm$.35\%  & 90.4\%$\pm$.1\%    & 3.85\% $\pm$.16\% \\ \hline
MLP2   & 24.3\% $\pm$.15\%  & 46.0\%$\pm$.35\%  & 54.7\%$\pm$.17\%   & 4.85\% $\pm$.18\% \\ \hline
[5]    &                    &                   &                    & 4.95\% $\pm$.18\% \\ \hline
[2]    &                    &                   &                    & 3.71\% $\pm$.16\% \\ \hline
[3]    &                    &                   &                    & 2.4\% $\pm$.13\% \\ \hline
[4]    &                    &                   &                    & 2.1\% $\pm$.12\% \\ \hline
\end{tabular}
\end{center}
\end{table}

\subsection{Perturbed Training Data More Helpful for SDAE}

\begin{table}
\caption{Relative change in error rates due to the use of perturbed training data,
either using NISTP, for the MLP1/SDA1 models, or using P07, for the MLP2/SDA2 models.
A positive value indicates that training on the perturbed data helped for the
given test set (the first 3 columns are on the 62-class tasks and the last one is
on the clean 10-class digits). Clearly, the deep learning models did benefit more
from perturbed training data, even when testing on clean data, whereas the MLP
trained on perturbed data performed worse on the clean digits and about the same
on the clean characters.}
\label{tab:perturbation-effect}
\begin{center}
\begin{tabular}{|l|r|r|r|r|} \hline
            & NIST test & NISTP test & P07 test & NIST test digits   \\ \hline
SDA0/SDA1-1 & 38\%      & 84\%       & 228\%    & 93\% \\ \hline
SDA0/SDA2-1 & 27\%      & 94\%       & 144\%    & 59\% \\ \hline
MLP0/MLP1-1 & 5.2\%     & 65\%       & -13\%    & -10\%  \\ \hline
MLP0/MLP2-1 & -0.4\%    & 49\%       & 44\%     & -29\% \\ \hline
\end{tabular}
\end{center}
\end{table}

\subsection{Multi-Task Learning Effects}

As previously seen, the SDA is better able to benefit from the
transformations applied to the data than the MLP. In this experiment we
define three tasks: recognizing digits (knowing that the input is a digit),
recognizing upper case characters (knowing that the input is one), and
recognizing lower case characters (knowing that the input is one). We
consider the digit classification task as the target task and we want to
evaluate whether training with the other tasks can help or hurt, and
whether the effect is different for MLPs versus SDAs. The goal is to find
out if deep learning can benefit more (or less) from multiple related tasks
(i.e.\ the multi-task setting) compared to a corresponding purely supervised
shallow learner.

We use a single hidden layer MLP with 1000 hidden units, and an SDA
with 3 hidden layers (1000 hidden units per layer), pre-trained and
fine-tuned on NIST.

Our results show that the MLP benefits marginally from the multi-task setting
in the case of digits (5\% relative improvement) but is actually hurt in the case
of characters (respectively 3\% and 4\% worse for lower and upper case characters).
On the other hand the SDA benefited from the multi-task setting, with relative
error rate improvements of 27\%, 15\% and 13\% respectively for digits,
lower and upper case characters, as shown in Table~\ref{tab:multi-task}.

\begin{table}
\caption{Test error rates and relative change in error rates due to the use of
a multi-task setting, i.e., training on each task in isolation vs training
for all three tasks together, for MLPs vs SDAs. The SDA benefits much
more from the multi-task setting. All experiments are on the
unperturbed NIST data only, using the validation error for model selection.
Relative improvement is 1 - single-task error / multi-task error.}
\label{tab:multi-task}
\begin{center}
\begin{tabular}{|l|r|r|r|} \hline
           & single-task & multi-task & relative \\
           & setting     & setting    & improvement \\ \hline
MLP-digits & 3.77\%      & 3.99\%     & 5.6\%   \\ \hline
MLP-lower  & 17.4\%      & 16.8\%     & -4.1\%  \\ \hline
MLP-upper  & 7.84\%      & 7.54\%     & -3.6\%  \\ \hline
SDA-digits & 2.6\%       & 3.56\%     & 27\%    \\ \hline
SDA-lower  & 12.3\%      & 14.4\%     & 15\%    \\ \hline
SDA-upper  & 5.93\%      & 6.78\%     & 13\%    \\ \hline
\end{tabular}
\end{center}
\end{table}

\section{Conclusions}

\bibliography{strings,ml,aigaion,specials}
\bibliographystyle{mlapa}
%\bibliographystyle{apalike}

\end{document}