# HG changeset patch
# User Yoshua Bengio
# Date 1275585556 14400
# Node ID ae6ba0309bf92d775e32100ca432b02ca5ef29d9
# Parent  08709b62e5748afe09fa1441da13c5e884ddb697
new graphs

diff -r 08709b62e574 -r ae6ba0309bf9 writeup/images/improvements_charts.pdf
Binary file writeup/images/improvements_charts.pdf has changed
diff -r 08709b62e574 -r ae6ba0309bf9 writeup/nips2010_submission.tex
--- a/writeup/nips2010_submission.tex	Thu Jun 03 13:16:53 2010 -0400
+++ b/writeup/nips2010_submission.tex	Thu Jun 03 13:19:16 2010 -0400
@@ -742,7 +742,9 @@
 The models are either trained on NIST (MLP0 and SDA0),
 NISTP (MLP1 and SDA1), or P07 (MLP2 and SDA2), and tested on either NIST,
 NISTP or P07, either on the 62-class task
-or on the 10-digits task.
+or on the 10-digits task. Training on the larger datasets
+takes about one day on a GPU-285; roughly half of that time
+goes to unsupervised pre-training (for the SDAs).
 Figure~\ref{fig:error-rates-charts} summarizes the results obtained,
 comparing humans, the three MLPs (MLP0, MLP1, MLP2) and the three SDAs (SDA0, SDA1,
 SDA2), along with the previous results on the digits NIST special database
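
Below is a minimal Python sketch of the evaluation grid that the patched paragraph describes: six models (MLP0-2, SDA0-2), each trained on one of NIST, NISTP, or P07 and then tested on all three sets, on both the 62-class and 10-digit tasks. Only the model, dataset, and task names come from the text; the train() and error_rate() helpers are hypothetical placeholders, not the paper's actual training or evaluation code.

# Hypothetical sketch of the train/test grid from the added paragraph.
# The train()/error_rate() stubs below are placeholders, not the authors' code.

TRAIN_SETS = {"MLP0": "NIST", "SDA0": "NIST",
              "MLP1": "NISTP", "SDA1": "NISTP",
              "MLP2": "P07", "SDA2": "P07"}
TEST_SETS = ["NIST", "NISTP", "P07"]
TASKS = ["62-class", "10-digit"]


def train(model_name, train_set):
    """Placeholder: train the named model on the given dataset."""
    return (model_name, train_set)


def error_rate(model, test_set, task):
    """Placeholder: return the test error (%) of `model` on `test_set` for `task`."""
    return float("nan")


if __name__ == "__main__":
    for model_name, train_set in TRAIN_SETS.items():
        model = train(model_name, train_set)
        for test_set in TEST_SETS:
            for task in TASKS:
                err = error_rate(model, test_set, task)
                print(f"{model_name} (trained on {train_set}), "
                      f"tested on {test_set}, {task} task: {err:.2f}%")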