diff writeup/aistats2011_cameraready.tex @ 634:54e8958e963b

bib
author Yoshua Bengio <bengioy@iro.umontreal.ca>
date Sat, 19 Mar 2011 22:57:48 -0400
parents 510220effb14
children 83d53ffe3f25
--- a/writeup/aistats2011_cameraready.tex	Sat Mar 19 22:44:53 2011 -0400
+++ b/writeup/aistats2011_cameraready.tex	Sat Mar 19 22:57:48 2011 -0400
@@ -208,7 +208,8 @@
 which is based on training with or without these transformed images and testing on 
 clean ones. 
 Code for generating these transformations as well as for the deep learning 
-algorithms are made available at {\tt http://anonymous.url.net}.%{\tt http://hg.assembla.com/ift6266}.
+algorithms are made available at 
+{\tt http://hg.assembla.com/ift6266}.
 
 %\vspace*{-3mm}
 %\newpage
@@ -226,13 +227,13 @@
 in number of classes and in the complexity of the transformations, hence
 in the complexity of the learning task.
 The code for these transformations (mostly Python) is available at 
-{\tt http://anonymous.url.net}. All the modules in the pipeline (Figure~\ref{fig:transform}) share
+{\tt http://hg.assembla.com/ift6266}. All the modules in the pipeline (Figure~\ref{fig:transform}) share
 a global control parameter ($0 \le complexity \le 1$) that allows one to modulate the
 amount of deformation or noise introduced. 
 There are two main parts in the pipeline. The first one,
 from thickness to pinch, performs transformations. The second
 part, from blur to contrast, adds different kinds of noise.
-More details can be found in~\citep{ift6266-tr-anonymous}.
+More details can be found in~\citep{ARXIV-2010}.
 
 \begin{figure*}[ht]
 \centering
@@ -801,7 +802,7 @@
 with deep learning and out-of-distribution examples.
  
 A Flash demo of the recognizer (where both the MLP and the SDA can be compared) 
-can be executed on-line at the anonymous site {\tt http://deep.host22.com}.
+can be executed on-line at {\tt http://deep.host22.com}.
 
 \iffalse
 \section*{Appendix I: Detailed Numerical Results}
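Note on the second hunk above: the paper text it touches describes a transformation pipeline whose modules all share a single control parameter, 0 <= complexity <= 1, with a first part (thickness through pinch) that deforms the image and a second part (blur through contrast) that adds noise. The following is a minimal sketch, not the actual ift6266 code, of how such a pipeline might be wired; it assumes numpy, and the module names thicken / add_noise are hypothetical stand-ins for the real modules in the repository.

    # Sketch of a transformation pipeline sharing one complexity parameter.
    # Assumptions: numpy available; thicken/add_noise are illustrative toys,
    # not the modules from http://hg.assembla.com/ift6266.
    import numpy as np

    def thicken(img, complexity, rng):
        """Toy 'transformation' stage: dilate strokes more as complexity grows."""
        steps = int(round(2 * complexity))
        out = img.copy()
        for _ in range(steps):
            # crude dilation: each pixel takes the max of its 4-neighbourhood
            p = np.pad(out, 1, mode="edge")
            out = np.maximum.reduce([p[1:-1, 1:-1], p[:-2, 1:-1],
                                     p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]])
        return out

    def add_noise(img, complexity, rng):
        """Toy 'noise' stage: Gaussian pixel noise scaled by complexity."""
        return np.clip(img + rng.normal(0.0, 0.3 * complexity, img.shape), 0.0, 1.0)

    # Ordered list of modules: deformations first, then noise,
    # mirroring the two parts of the pipeline described in the paper.
    PIPELINE = [("thicken", thicken), ("noise", add_noise)]

    def transform(img, complexity=0.5, seed=0):
        rng = np.random.default_rng(seed)
        for _, module in PIPELINE:
            img = module(img, complexity, rng)
        return img

    if __name__ == "__main__":
        digit = np.zeros((32, 32)); digit[8:24, 15:17] = 1.0  # a fake "1"
        print(transform(digit, complexity=0.8).shape)          # (32, 32)

The design point illustrated is only the shared global knob: every module takes the same complexity argument, so a single value modulates how much deformation and noise the whole pipeline introduces.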