# HG changeset patch
# User Yoshua Bengio
# Date 1281269781 14400
# Node ID 61aae4fd2da527d2e931ef87f45726569a540ef6
# Parent 685756a11fd23465de2a4a2b8c2c8900f78026d3
typo fixed, uploaded to CMT

diff -r 685756a11fd2 -r 61aae4fd2da5 writeup/nips_rebuttal_clean.txt
--- a/writeup/nips_rebuttal_clean.txt	Sat Aug 07 22:56:46 2010 -0400
+++ b/writeup/nips_rebuttal_clean.txt	Sun Aug 08 08:16:21 2010 -0400
@@ -14,7 +14,7 @@
 
 Reviewer_5 about semi-supervised learning: In the unsupervised phase, no labels are used. In the supervised fine-tuning phase, all labels are used. So this is *not* the semi-supervised setting, which was already previously studied [5], showing the advantage of depth. Instead, we focus here on the out-of-distribution aspect of self-taught learning.
 
-"...human errors may be present...": Indeed, there are variations across human labelings, which have have estimated (since each character was viewed by 3 different humans), and reported in the paper (the standard deviations across humans are large, but the standard error across a large test set is very small, so we believe the average error numbers to be fairly accurate).
+"...human errors may be present...": Indeed, there are variations across human labelings, which have been estimated (since each character was viewed by 3 different humans), and reported in the paper (the standard deviations across humans are large, but the standard error across a large test set is very small, so we believe the average error numbers to be fairly accurate).
 
 "...supplement, but I did not have access to it...": strange! We could (and still can) access it. We will include a complete pseudo-code of SDAs in it.
 
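The hunk above describes the two-phase SDA procedure (unsupervised pre-training with no labels, then supervised fine-tuning with all labels) and promises pseudo-code of SDAs in the supplement. Below is a minimal NumPy sketch of that two-phase scheme, not the paper's actual implementation: the layer sizes, masking-corruption level, learning rate, and toy data are all illustrative assumptions, and the supervised fine-tuning phase is only indicated in comments.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def pretrain_layer(X, n_hidden, corruption=0.3, lr=0.1, epochs=10):
    # Train one denoising autoencoder (tied weights, squared error);
    # return the learned encoder parameters (W, b).
    n_in = X.shape[1]
    W = rng.normal(0.0, 0.01, (n_in, n_hidden))
    b = np.zeros(n_hidden)                           # encoder bias
    c = np.zeros(n_in)                               # decoder bias
    for _ in range(epochs):
        Xc = X * (rng.random(X.shape) > corruption)  # masking corruption
        H = sigmoid(Xc @ W + b)                      # encode corrupted input
        R = sigmoid(H @ W.T + c)                     # reconstruct clean input
        dR = (R - X) * R * (1.0 - R)                 # grad wrt decoder pre-activation
        dH = (dR @ W) * H * (1.0 - H)                # grad wrt encoder pre-activation
        W -= lr * (Xc.T @ dH + dR.T @ H) / len(X)    # tied-weight gradient (both paths)
        b -= lr * dH.mean(axis=0)
        c -= lr * dR.mean(axis=0)
    return W, b

# Toy stand-in for unlabeled character images, pixel values in [0, 1].
X = rng.random((256, 64))

# Phase 1: greedy layer-wise pre-training; no labels are used.
stack, inp = [], X
for n_hidden in (32, 16):
    W, b = pretrain_layer(inp, n_hidden)
    stack.append((W, b))
    inp = sigmoid(inp @ W + b)  # feed this layer's code to the next DA

# Phase 2 (sketched only): put a softmax classifier on top of `inp` and
# fine-tune every (W, b) in `stack` plus the classifier by supervised
# backpropagation, using all available labels.

Each denoising autoencoder is trained greedily on the codes produced by the layer below it, which is what lets the entire pre-training phase proceed without any labels.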