diff writeup/nips2010_submission.tex @ 568:ae6ba0309bf9
new graphs
author:   Yoshua Bengio <bengioy@iro.umontreal.ca>
date:     Thu, 03 Jun 2010 13:19:16 -0400
parents:  b9b811e886ae
children: 9d01280ff1c1
```diff
--- a/writeup/nips2010_submission.tex	Thu Jun 03 13:16:53 2010 -0400
+++ b/writeup/nips2010_submission.tex	Thu Jun 03 13:19:16 2010 -0400
@@ -742,7 +742,9 @@
 The models are either trained on NIST (MLP0 and SDA0), NISTP (MLP1 and
 SDA1), or P07 (MLP2 and SDA2), and tested on either NIST, NISTP or
 P07, either on the 62-class task
-or on the 10-digits task.
+or on the 10-digits task. Training (including about half
+for unsupervised pre-training, for DAs) on the larger
+datasets takes around one day on a GPU-285.
 Figure~\ref{fig:error-rates-charts} summarizes the results obtained,
 comparing humans, the three MLPs (MLP0, MLP1, MLP2) and the three SDAs
 (SDA0, SDA1, SDA2), along with the previous results on the digits NIST special database
```
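The added sentence attributes about half of the training time to the unsupervised pre-training of the denoising autoencoders (DAs) that are stacked to form the SDAs. For readers unfamiliar with that step, the sketch below shows the core of a single DA layer in plain NumPy. It is a minimal illustration only, not the project's actual code: the class name, layer sizes, corruption rate, and learning rate are all assumptions chosen for clarity.

```python
# Minimal sketch (assumption, not the project's code) of one denoising-
# autoencoder layer of an SDA: corrupt the input, reconstruct it through
# a hidden layer with tied weights, and descend the cross-entropy loss.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DenoisingAutoencoder:
    def __init__(self, n_visible, n_hidden, corruption=0.25, lr=0.1):
        # Hyperparameter values here are illustrative assumptions.
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b_h = np.zeros(n_hidden)    # hidden-layer bias
        self.b_v = np.zeros(n_visible)   # reconstruction bias
        self.corruption = corruption
        self.lr = lr

    def train_step(self, x):
        # Corrupt: zero out a random fraction of the input pixels.
        x_tilde = x * (rng.random(x.shape) >= self.corruption)
        # Encode the corrupted input; decode with tied weights (W and W.T).
        h = sigmoid(x_tilde @ self.W + self.b_h)
        z = sigmoid(h @ self.W.T + self.b_v)
        # Cross-entropy reconstruction loss against the *clean* input x.
        loss = -np.mean(np.sum(x * np.log(z + 1e-8)
                               + (1.0 - x) * np.log(1.0 - z + 1e-8), axis=1))
        # Backpropagation; tied weights collect encoder and decoder gradients.
        dz = (z - x) / x.shape[0]            # grad at pre-sigmoid of z
        dh = (dz @ self.W) * h * (1.0 - h)   # grad at pre-sigmoid of h
        self.W -= self.lr * (x_tilde.T @ dh + dz.T @ h)
        self.b_h -= self.lr * dh.sum(axis=0)
        self.b_v -= self.lr * dz.sum(axis=0)
        return loss

# Usage on stand-in random data (1024 = 32x32 pixels, an assumption):
da = DenoisingAutoencoder(n_visible=1024, n_hidden=500)
for _ in range(100):
    batch = (rng.random((32, 1024)) > 0.5).astype(float)
    loss = da.train_step(batch)
```

In an SDA, each layer is pre-trained this way in turn on the previous layer's hidden representation, after which the whole stack is fine-tuned with supervised backpropagation; that second, supervised phase accounts for the remaining half of the quoted one-day training time.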