comparison deep/stacked_dae/v2/config.py.example @ 239:42005ec87747
Merged (manually) Sylvain's changes to use Arnaud's dataset code, except that I don't use the givens. I probably also take a different approach to limiting the dataset size during debugging.
author | fsavard |
---|---|
date | Mon, 15 Mar 2010 18:30:21 -0400 |
parents | |
children | |
Comparison of 238:9fc641d7adda (before) with 239:42005ec87747 (after):
'''
These are parameters used by nist_sda.py. They'll end up as globals in there.

Rename this file to config.py and configure as needed.
DON'T add the renamed file to the repository, as others might use it
without realizing it, with dire consequences.
'''

from jobman import DD  # needed for DEFAULT_HP_NIST below; assumes jobman is installed

# Set this to True when you want to run cluster tests, i.e. you want
# to run many jobs on the cluster, but with a reduced training set
# size and number of epochs, so you know everything runs
# fine on the cluster.
# Set this PRIOR to inserting your test jobs in the DB.
TEST_CONFIG = False

NIST_ALL_LOCATION = '/data/lisa/data/nist/by_class/all'
NIST_ALL_TRAIN_SIZE = 649081
# valid and test sets: 82587 examples each

# change "sandbox" when you're ready
JOBDB = 'postgres://ift6266h10@gershwin/ift6266h10_sandbox_db/yourtablenamehere'
EXPERIMENT_PATH = "ift6266.deep.stacked_dae.v2.nist_sda.jobman_entrypoint"

# If set, reduce the training set to this many examples.
REDUCE_TRAIN_TO = None
# This is an upper bound; training usually stops before reaching it.
MAX_FINETUNING_EPOCHS = 1000
# Number of minibatches between computations of the mean validation
# error, etc. (see the sketch below)
REDUCE_EVERY = 100
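
# --- Illustrative sketch, not part of the original file ---
# Assumption about how REDUCE_EVERY is meant to be used in nist_sda.py:
# per-minibatch errors are accumulated and their mean reported once every
# REDUCE_EVERY minibatches. A self-contained equivalent:
def _mean_every(values, every=REDUCE_EVERY):
    """Yield the mean of each consecutive chunk of `every` values."""
    for i in range(0, len(values) - every + 1, every):
        chunk = values[i:i + every]
        yield sum(chunk) / float(len(chunk))
# e.g. list(_mean_every([1.0, 3.0] * 100, 100)) == [2.0, 2.0]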

if TEST_CONFIG:
    REDUCE_TRAIN_TO = 1000
    MAX_FINETUNING_EPOCHS = 2
    REDUCE_EVERY = 10

# This configures the insertion of jobs on the cluster.
# Possible values the hyperparameters can take. These are then
# combined with produit_cartesien_jobs to get a list of all
# possible combinations, each one resulting in a job inserted
# in the jobman DB (see the illustrative sketch right after JOB_VALS).
JOB_VALS = {'pretraining_lr': [0.1, 0.01],  #, 0.001],#, 0.0001],
            'pretraining_epochs_per_layer': [10, 20],
            'hidden_layers_sizes': [300, 800],
            'corruption_levels': [0.1, 0.2, 0.3],
            'minibatch_size': [20],
            'max_finetuning_epochs': [MAX_FINETUNING_EPOCHS],
            'finetuning_lr': [0.1, 0.01],  # 0.001 was very bad, so we leave it out
            'num_hidden_layers': [2, 3]}
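
# --- Illustrative sketch, not part of the original file ---
# produit_cartesien_jobs (defined elsewhere in this repo) presumably expands
# JOB_VALS into one hyperparameter dict per combination, roughly like this
# itertools-based equivalent (underscored names so "from config import *"
# doesn't pick them up):
from itertools import product
_keys = sorted(JOB_VALS)
_all_jobs = [dict(zip(_keys, _combo))
             for _combo in product(*(JOB_VALS[_k] for _k in _keys))]
# Here that yields 2*2*2*3*1*1*2*2 = 96 combinations, one DB job each.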

# Only useful for tests: minimal numbers of epochs.
# (This is used when running a single job locally, i.e. when
# calling ./nist_sda.py test_jobman_entrypoint.)
DEFAULT_HP_NIST = DD({'finetuning_lr': 0.1,
                      'pretraining_lr': 0.1,
                      'pretraining_epochs_per_layer': 2,
                      'max_finetuning_epochs': 2,
                      'hidden_layers_sizes': 800,
                      'corruption_levels': 0.2,
                      'minibatch_size': 20,
                      'reduce_train_to': 10000,
                      'num_hidden_layers': 1})
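
# --- Illustrative sketch, not part of the original file ---
# Assumption: jobman's DD is a dict subclass that also supports
# attribute-style access, so hyperparameters can be read either way:
assert DEFAULT_HP_NIST['finetuning_lr'] == DEFAULT_HP_NIST.finetuning_lr == 0.1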