pylearn / dataset.py @ changeset 46:c5b07e87b0cb
changeset comment: comments modif made by Yoshua

author:   Frederic Bastien <bastienf@iro.umontreal.ca>
date:     Tue, 29 Apr 2008 12:37:11 -0400
parents:  a5c70dc42972
children: b6730f9a336d ea7d8bc38b34
from lookup_list import LookupList
Example = LookupList
from misc import unique_elements_list_intersection
from string import join
from sys import maxint
import numpy

class AbstractFunction (Exception):
    """Derived class must override this function"""
class NotImplementedYet (NotImplementedError):
    """Work in progress, this should eventually be implemented"""
class UnboundedDataSet (Exception):
    """Trying to obtain length of unbounded dataset (a stream)"""

class DataSet(object):
    """A virtual base class for datasets.

    A DataSet can be seen as a generalization of a matrix, meant to be used in conjunction
    with learning algorithms (for training and testing them): rows/records are called examples, and
    columns/attributes are called fields. The field value for a particular example can be an arbitrary
    python object, which depends on the particular dataset.

    We call a DataSet a 'stream' when its length is unbounded (in which case its __len__ method
    should raise an UnboundedDataSet exception).

    A DataSet is a generator of iterators; these iterators can run through the
    examples or the fields in a variety of ways. A DataSet need not necessarily have a finite
    or known length, so this class can be used to interface to a 'stream' which
    feeds on-line learning (however, as noted below, some operations are not
    feasible or not recommended on streams).

    To iterate over examples, there are several possibilities:
     * for example in dataset([field1, field2, field3, ...]):
     * for val1,val2,val3 in dataset([field1, field2, field3]):
     * for minibatch in dataset.minibatches([field1, field2, ...],minibatch_size=N):
     * for mini1,mini2,mini3 in dataset.minibatches([field1, field2, ...],minibatch_size=N):
     * for example in dataset:
           print example['x']
     * for x,y,z in dataset:
    Each of these is documented below. All of these iterators are expected
    to provide, in addition to the usual 'next()' method, a 'next_index()' method
    which returns a non-negative integer pointing to the position of the next
    example that will be returned by 'next()' (or of the first example in the
    next minibatch returned). This is important because these iterators
    can wrap around the dataset in order to do multiple passes through it,
    in possibly irregular ways if the minibatch size is not a divisor of the
    dataset length.

    To iterate over fields, one can do
     * for field in dataset.fields():
         for field_value in field: # iterate over the values associated to that field for all the dataset examples
     * for fields in dataset(field1,field2,...).fields() to select a subset of fields
     * for fields in dataset.fields(field1,field2,...) to select a subset of fields
    and each of these fields is iterable over the examples:
     * for field_examples in dataset.fields():
         for example_value in field_examples:
             ...
    but when the dataset is a stream (unbounded length), it is not recommended to do
    such things because the underlying dataset may refuse to access the different fields in
    an unsynchronized way. Hence the fields() method is illegal for streams, by default.
    The result of fields() is a DataSetFields object, which iterates over fields,
    and whose elements are iterable over examples. A DataSetFields object can
    be turned back into a DataSet with its examples() method:
      dataset2 = dataset1.fields().examples()
    and dataset2 should behave exactly like dataset1 (in fact by default dataset2==dataset1).

    Note: Fields are not mutually exclusive, i.e. two fields can overlap in their actual content.

    Note: The content of a field can be of any type. Field values can also be 'missing'
    (e.g. to handle semi-supervised learning), and in the case of numeric (numpy array)
    fields (i.e. an ArrayFieldsDataSet), NaN plays the role of a missing value;
    for non-numeric field values, None can play that role.

    Dataset elements can be indexed and sub-datasets (with a subset
    of examples) can be extracted. These operations are not supported
    by default in the case of streams.

     * dataset[:n] returns a dataset with the n first examples.

     * dataset[i1:i2:s] returns a dataset with the examples i1,i1+s,...i2-s.

     * dataset[i] returns an Example.

     * dataset[[i1,i2,...in]] returns a dataset with examples i1,i2,...in.

     * dataset['key'] returns a property associated with the given 'key' string.
       If 'key' is a fieldname, then the VStacked field values (iterable over
       field values) for that field is returned. Other keys may be supported
       by different dataset subclasses. The following key names should be supported:
          - 'description': a textual description or name for the dataset
          - '<fieldname>.type': a type name or value for a given <fieldname>

    Datasets can be concatenated either vertically (increasing the length) or
    horizontally (augmenting the set of fields), if they are compatible, using
    the following operations (with the same basic semantics as numpy.hstack
    and numpy.vstack):

     * dataset1 | dataset2 | dataset3 == dataset.hstack([dataset1,dataset2,dataset3])

    creates a new dataset whose list of fields is the concatenation of the list of
    fields of the argument datasets. This only works if they all have the same length.

     * dataset1 & dataset2 & dataset3 == dataset.vstack([dataset1,dataset2,dataset3])

    creates a new dataset that concatenates the examples from the argument datasets
    (and whose length is the sum of the length of the argument datasets). This only
    works if they all have the same fields.

    According to the same logic, and viewing a DataSetFields object associated to
    a DataSet as a kind of transpose of it, fields1 & fields2 concatenates fields of
    a DataSetFields fields1 and fields2, and fields1 | fields2 concatenates their
    examples.

    A dataset can hold arbitrary key-value pairs that may be used to access meta-data
    or other properties of the dataset or associated with the dataset or the result
    of a computation stored in a dataset. These can be accessed through the [key] syntax
    when key is a string (or more specifically, neither an integer, a slice, nor a list).

    A DataSet sub-class should always redefine the following methods:
       * __len__ if it is not a stream
       * fieldNames
       * minibatches_nowrap (called by DataSet.minibatches())
       * valuesHStack
       * valuesVStack
    For efficiency of implementation, a sub-class might also want to redefine
       * hasFields
       * __getitem__ may not be feasible with some streams
       * __iter__
    """

    def __init__(self,description=None,field_types=None):
        if description is None:
            # by default return "<DataSetType>(<SuperClass1>,<SuperClass2>,...)"
            description = type(self).__name__ + " ( " + join([x.__name__ for x in type(self).__bases__]) + " )"
        self.description=description
        self.field_types=field_types

    class MinibatchToSingleExampleIterator(object):
        """
        Converts the result of minibatch iterator with minibatch_size==1 into
        single-example values in the result. Therefore the result of
        iterating on the dataset itself gives a sequence of single examples
        (whereas the result of iterating over minibatches gives in each
        Example field an iterable object over the individual examples in
        the minibatch).
        """
        def __init__(self, minibatch_iterator):
            self.minibatch_iterator = minibatch_iterator
            self.minibatch = None
        def __iter__(self): #makes for loop work
            return self
        def next(self):
            size1_minibatch = self.minibatch_iterator.next()
            if not self.minibatch:
                self.minibatch = Example(size1_minibatch.keys(),[value[0] for value in size1_minibatch.values()])
            else:
                self.minibatch._values = [value[0] for value in size1_minibatch.values()]
            return self.minibatch

        def next_index(self):
            return self.minibatch_iterator.next_index()

    def __iter__(self):
        """Supports the syntax "for i in dataset: ..."

        Using this syntax, "i" will be an Example instance (or equivalent) with
        all the fields of DataSet self. Every field of "i" will give access to
        a field of a single example. Fields should be accessible via
        i["fieldname"] or i[3] (in the order defined by the elements of the
        Example returned by this iterator), but the derived class is free
        to accept any type of identifier, and add extra functionality to the iterator.

        The default implementation calls the minibatches iterator and extracts the first example of each field.
        """
        return DataSet.MinibatchToSingleExampleIterator(self.minibatches(None, minibatch_size = 1))

    class MinibatchWrapAroundIterator(object):
        """
        An iterator for minibatches that handles the case where we need to wrap around the
        dataset because n_batches*minibatch_size > len(dataset). It is constructed from
        a dataset that provides a minibatch iterator that does not need to handle that problem.
        This class is a utility for dataset subclass writers, so that they do not have to handle
        this issue multiple times, nor check that fieldnames are valid, nor handle the
        empty fieldnames (meaning 'use all the fields').
        """
        def __init__(self,dataset,fieldnames,minibatch_size,n_batches,offset):
            self.dataset=dataset
            self.fieldnames=fieldnames
            self.minibatch_size=minibatch_size
            self.n_batches=n_batches
            self.n_batches_done=0
            self.next_row=offset
            self.L=len(dataset)
            assert offset+minibatch_size<=self.L
            ds_nbatches = (self.L-offset)/minibatch_size
            if n_batches is not None:
                ds_nbatches = max(n_batches,ds_nbatches)
            if fieldnames:
                assert dataset.hasFields(*fieldnames)
            else:
                self.fieldnames=fieldnames=dataset.fieldNames()
            self.iterator = dataset.minibatches_nowrap(fieldnames,minibatch_size,ds_nbatches,offset)

        def __iter__(self):
            return self

        def next_index(self):
            return self.next_row

        def next(self):
            if self.n_batches and self.n_batches_done==self.n_batches:
                raise StopIteration
            upper = self.next_row+self.minibatch_size
            if upper <=self.L:
                minibatch = self.iterator.next()
            else:
                if not self.n_batches:
                    raise StopIteration
                # we must concatenate (vstack) the bottom and top parts of our minibatch
                # first get the beginning of our minibatch (top of dataset)
                first_part = self.dataset.minibatches_nowrap(self.fieldnames,self.L-self.next_row,1,self.next_row).next()
                second_part = self.dataset.minibatches_nowrap(self.fieldnames,upper-self.L,1,0).next()
                minibatch = Example(self.fieldnames,
                                    [self.dataset.valuesVStack(name,[first_part[name],second_part[name]])
                                     for name in self.fieldnames])
            self.next_row=upper
            self.n_batches_done+=1
            if upper >= self.L and self.n_batches:
                self.next_row -= self.L
            return minibatch

    minibatches_fieldnames = None
    minibatches_minibatch_size = 1
    minibatches_n_batches = None
    def minibatches(self,
                    fieldnames = minibatches_fieldnames,
                    minibatch_size = minibatches_minibatch_size,
                    n_batches = minibatches_n_batches,
                    offset = 0):
        """
        Return an iterator that supports three forms of syntax:

            for i in dataset.minibatches(None,**kwargs): ...

            for i in dataset.minibatches([f1, f2, f3],**kwargs): ...

            for i1, i2, i3 in dataset.minibatches([f1, f2, f3],**kwargs): ...

        Using the first two syntaxes, "i" will be an indexable object, such as a list,
        tuple, or Example instance. In both cases, i[k] is a list-like container
        of a batch of current examples. In the second case, i[0] is a
        list-like container of the f1 field of a batch of current examples, i[1] is
        a list-like container of the f2 field, etc.

        Using the first syntax, all the fields will be returned in "i".
        Using the third syntax, i1, i2, i3 will be list-like containers of the
        f1, f2, and f3 fields of a batch of examples on each loop iteration.

        The minibatches iterator is expected to return upon each call to next()
        a DataSetFields object, which is a LookupList (indexed by the field names) whose
        elements are iterable over the minibatch examples, and which keeps a pointer to
        a sub-dataset that can be used to iterate over the individual examples
        in the minibatch. Hence a minibatch can be converted back to a regular
        dataset or its fields can be looked at individually (and possibly iterated over).

        PARAMETERS
        - fieldnames (list of any type, default None):
        The loop variables i1, i2, i3 (in the example above) should contain the
        f1, f2, and f3 fields of the current batch of examples. If None, the
        derived class can choose a default, e.g. all fields.

        - minibatch_size (integer, default 1)
        On every iteration, the variables i1, i2, i3 will have
        exactly minibatch_size elements. e.g. len(i1) == minibatch_size

        - n_batches (integer, default None)
        The iterator will loop exactly this many times, and then stop. If None,
        the derived class can choose a default. If (-1), then the returned
        iterator should support looping indefinitely.

        - offset (integer, default 0)
        The iterator will start at example 'offset' in the dataset, rather than the default.

        Note: A list-like container is something like a tuple, list, numpy.ndarray or
        any other object that supports integer indexing and slicing.
        """
        return DataSet.MinibatchWrapAroundIterator(self,fieldnames,minibatch_size,n_batches,offset)

    def minibatches_nowrap(self,fieldnames,minibatch_size,n_batches,offset):
        """
        This is the minibatches iterator generator that sub-classes must define.
        It does not need to worry about wrapping around multiple times across the dataset,
        as this is handled by MinibatchWrapAroundIterator when DataSet.minibatches() is called.
        The next() method of the returned iterator does not even need to worry about
        the termination condition (as StopIteration will be raised by DataSet.minibatches
        before an improper call to minibatches_nowrap's next() is made).
        That next() method can assert that its next row will always be within [0,len(dataset)).
        The iterator returned by minibatches_nowrap does not need to implement
        a next_index() method either, as this will be provided by MinibatchWrapAroundIterator.
        """
        raise AbstractFunction()

    def __len__(self):
        """
        len(dataset) returns the number of examples in the dataset.
        By default, a DataSet is a 'stream', i.e. it has an unbounded length (raises UnboundedDataSet).
        Sub-classes which implement finite-length datasets should redefine this method.
        Some methods only make sense for finite-length datasets.
        """
        raise UnboundedDataSet()

    def hasFields(self,*fieldnames):
        """
        Return true if the given field name (or field names, if multiple arguments are
        given) is recognized by the DataSet (i.e. can be used as a field name in one
        of the iterators).

        The default implementation may be inefficient (O(# fields in dataset)), as it calls the fieldNames()
        method. Many datasets may store their field names in a dictionary, which would allow more efficiency.
        """
        return len(unique_elements_list_intersection(fieldnames,self.fieldNames()))>0

    def fieldNames(self):
        """
        Return the list of field names that are supported by the iterators,
        and for which hasFields(fieldname) would return True.
        """
        raise AbstractFunction()

    def __call__(self,*fieldnames):
        """
        Return a dataset that sees only the fields whose name are specified.
        """
        assert self.hasFields(*fieldnames)
        return self.fields(*fieldnames).examples()

    def fields(self,*fieldnames):
        """
        Return a DataSetFields object associated with this dataset.
        """
        return DataSetFields(self,*fieldnames)

    def __getitem__(self,i):
        """
        dataset[i] returns the (i+1)-th example of the dataset.
        dataset[i:j] returns the subdataset with examples i,i+1,...,j-1.
        dataset[i:j:s] returns the subdataset with examples i,i+s,i+2*s,...
        dataset[[i1,i2,..,in]] returns the subdataset with examples i1,i2,...,in.
        dataset['key'] returns a property associated with the given 'key' string.
        If 'key' is a fieldname, then the VStacked field values (iterable over
        field values) for that field is returned. Other keys may be supported
        by different dataset subclasses. The following key names are encouraged:
          - 'description': a textual description or name for the dataset
          - '<fieldname>.type': a type name or value for a given <fieldname>

        Note that some stream datasets may be unable to implement random access, i.e.
        arbitrary slicing/indexing, because they can only iterate through examples one
        or a minibatch at a time and do not actually store or keep past (or future) examples.

        The default implementation of getitem uses the minibatches iterator
        to obtain one example, one slice, or a list of examples. It may not
        always be the most efficient way to obtain the result, especially if
        the data are actually stored in a memory array.
        """
        # check for an index
        if type(i) is int:
            return DataSet.MinibatchToSingleExampleIterator(
                self.minibatches(minibatch_size=1,n_batches=1,offset=i)).next()
        rows=None
        # or a slice
        if type(i) is slice:
            start = i.start or 0
            step = i.step or 1
            if step == 1:
                return self.minibatches(minibatch_size=i.stop-start,n_batches=1,offset=start).next().examples()
            rows = range(start,i.stop,step)
        # or a list of indices
        elif type(i) is list:
            rows = i
        if rows is not None:
            fields_values = zip(*[self[row] for row in rows])
            return MinibatchDataSet(
                Example(self.fieldNames(),[self.valuesVStack(fieldname,field_values)
                                           for fieldname,field_values
                                           in zip(self.fieldNames(),fields_values)]))
        # else check for a fieldname
        if self.hasFields(i):
            return self.minibatches(fieldnames=[i],minibatch_size=len(self),n_batches=1,offset=0).next()[0]
        # else we are trying to access a property of the dataset
        assert i in self.__dict__ # else it means we are trying to access a non-existing property
        return self.__dict__[i]

    def valuesHStack(self,fieldnames,fieldvalues):
        """
        Return a value that corresponds to concatenating (horizontally) several field values.
        This can be useful to merge some fields. The implementation of this operation is likely
        to involve a copy of the original values. When the values are numpy arrays, the
        result should be numpy.hstack(values). If it makes sense, this operation should
        work as well when each value corresponds to multiple examples in a minibatch,
        e.g. if each value is a Ni-vector and a minibatch of length L is a LxNi matrix,
        then the result should be a Lx(N1+N2+..) matrix equal to numpy.hstack(values).
        The default is to use numpy.hstack for numpy.ndarray values, and a list
        pointing to the original values for other data types.
        """
        all_numpy=True
        for value in fieldvalues:
            if not type(value) is numpy.ndarray:
                all_numpy=False
        if all_numpy:
            return numpy.hstack(fieldvalues)
        # the default implementation of horizontal stacking is to put values in a list
        return fieldvalues

    def valuesVStack(self,fieldname,values):
        """
        Return a value that corresponds to concatenating (vertically) several values of the
        same field. This can be important to build a minibatch out of individual examples. This
        is likely to involve a copy of the original values. When the values are numpy arrays, the
        result should be numpy.vstack(values).
        The default is to use numpy.vstack for numpy.ndarray values, and a list
        pointing to the original values for other data types.
        """
        all_numpy=True
        for value in values:
            if not type(value) is numpy.ndarray:
                all_numpy=False
        if all_numpy:
            return numpy.vstack(values)
        # the default implementation of vertical stacking is to put values in a list
        return values

    def __or__(self,other):
        """
        dataset1 | dataset2 returns a dataset whose list of fields is the concatenation of the list of
        fields of the argument datasets. This only works if they all have the same length.
        """
        return HStackedDataSet([self,other])

    def __and__(self,other):
        """
        dataset1 & dataset2 is a dataset that concatenates the examples from the argument datasets
        (and whose length is the sum of the length of the argument datasets). This only
        works if they all have the same fields.
        """
        return VStackedDataSet([self,other])

def hstack(datasets):
    """
    hstack(datasets) returns dataset1 | dataset2 | ... which is a dataset
    whose fields list is the concatenation of the fields of the individual datasets.
    """
    assert len(datasets)>0
    if len(datasets)==1:
        return datasets[0]
    return HStackedDataSet(datasets)

def vstack(datasets):
    """
    vstack(datasets) returns dataset1 & dataset2 & ... which is a dataset
    which iterates first over the examples of dataset1, then over those of dataset2, etc.
    """
    assert len(datasets)>0
    if len(datasets)==1:
        return datasets[0]
    return VStackedDataSet(datasets)
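
# Illustrative usage sketch (hypothetical helpers, not part of the original module): the
# iteration patterns described in the DataSet docstring, for a finite dataset (e.g. an
# ArrayDataSet, defined further down) assumed to have at least 5 examples and two fields
# named 'x' and 'y'; the field names are assumptions of this example only.
def _example_dataset_usage(dataset):
    # one example at a time; each example is an Example (LookupList) indexed by field name
    for example in dataset:
        print example['x'], example['y']
    # minibatches of 5 examples; x_batch and y_batch are list-like containers of 5 values each
    for x_batch,y_batch in dataset.minibatches(['x','y'],minibatch_size=5,n_batches=len(dataset)//5):
        print len(x_batch), len(y_batch)

# Illustrative sketch of the stacking operators (hypothetical helper): dataset1 and dataset2
# are assumed finite, of equal length, and with non-overlapping field names.
def _example_dataset_stacking(dataset1,dataset2):
    wider  = dataset1 | dataset2   # same length, concatenated field lists (cf. hstack)
    longer = dataset1 & dataset1   # same fields, concatenated examples (cf. vstack)
    return wider, longer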
""" def __init__(self,src,fieldnames): self.src=src self.fieldnames=fieldnames assert src.hasFields(*fieldnames) self.valuesHStack = src.valuesHStack self.valuesVStack = src.valuesVStack def __len__(self): return len(self.src) def fieldNames(self): return self.fieldnames def __iter__(self): class FieldsSubsetIterator(object): def __init__(self,ds): self.ds=ds self.src_iter=ds.src.__iter__() self.example=None def __iter__(self): return self def next(self): complete_example = self.src_iter.next() if self.example: self.example._values=[complete_example[field] for field in self.ds.fieldnames] else: self.example=Example(self.ds.fieldnames, [complete_example[field] for field in self.ds.fieldnames]) return self.example return FieldsSubsetIterator(self) def minibatches_nowrap(self,fieldnames,minibatch_size,n_batches,offset): assert self.hasFields(*fieldnames) return self.src.minibatches_nowrap(fieldnames,minibatch_size,n_batches,offset) def __getitem__(self,i): return FieldsSubsetDataSet(self.src[i],self.fieldnames) class DataSetFields(LookupList): """ Although a DataSet iterates over examples (like rows of a matrix), an associated DataSetFields iterates over fields (like columns of a matrix), and can be understood as a transpose of the associated dataset. To iterate over fields, one can do * for fields in dataset.fields() * for fields in dataset(field1,field2,...).fields() to select a subset of fields * for fields in dataset.fields(field1,field2,...) to select a subset of fields and each of these fields is iterable over the examples: * for field_examples in dataset.fields(): for example_value in field_examples: ... but when the dataset is a stream (unbounded length), it is not recommanded to do such things because the underlying dataset may refuse to access the different fields in an unsynchronized ways. Hence the fields() method is illegal for streams, by default. The result of fields() is a DataSetFields object, which iterates over fields, and whose elements are iterable over examples. A DataSetFields object can be turned back into a DataSet with its examples() method: dataset2 = dataset1.fields().examples() and dataset2 should behave exactly like dataset1 (in fact by default dataset2==dataset1). DataSetFields can be concatenated vertically or horizontally. To be consistent with the syntax used for DataSets, the | concatenates the fields and the & concatenates the examples. """ def __init__(self,dataset,*fieldnames): if not fieldnames: fieldnames=dataset.fieldNames() elif fieldnames is not dataset.fieldNames(): dataset = FieldsSubsetDataSet(dataset,fieldnames) assert dataset.hasFields(*fieldnames) self.dataset=dataset minibatch_iterator = dataset.minibatches(fieldnames, minibatch_size=len(dataset), n_batches=1) minibatch=minibatch_iterator.next() LookupList.__init__(self,fieldnames,minibatch) def examples(self): return self.dataset def __or__(self,other): """ fields1 | fields2 is a DataSetFields that whose list of examples is the concatenation of the list of examples of DataSetFields fields1 and fields2. """ return (self.examples() + other.examples()).fields() def __and__(self,other): """ fields1 + fields2 is a DataSetFields that whose list of fields is the concatenation of the fields of DataSetFields fields1 and fields2. """ return (self.examples() | other.examples()).fields() class MinibatchDataSet(DataSet): """ Turn a LookupList of same-length fields into an example-iterable dataset. Each element of the lookup-list should be an iterable and sliceable, all of the same length. 
""" def __init__(self,fields_lookuplist,values_vstack=DataSet().valuesVStack, values_hstack=DataSet().valuesHStack): """ The user can (and generally should) also provide values_vstack(fieldname,fieldvalues) and a values_hstack(fieldnames,fieldvalues) functions behaving with the same semantics as the DataSet methods of the same name (but without the self argument). """ self.fields=fields_lookuplist assert len(fields_lookuplist)>0 self.length=len(fields_lookuplist[0]) for field in fields_lookuplist[1:]: assert self.length==len(field) self.values_vstack=values_vstack self.values_hstack=values_hstack def __len__(self): return self.length def __getitem__(self,i): return DataSetFields(MinibatchDataSet( Example(self.fields.keys(),[field[i] for field in self.fields])),self.fields) def fieldNames(self): return self.fields.keys() def hasFields(self,*fieldnames): for fieldname in fieldnames: if fieldname not in self.fields: return False return True def minibatches_nowrap(self,fieldnames,minibatch_size,n_batches,offset): class Iterator(object): def __init__(self,ds): self.ds=ds self.next_example=offset assert minibatch_size > 0 if offset+minibatch_size > ds.length: raise NotImplementedError() def __iter__(self): return self def next(self): upper = next_example+minibatch_size assert upper<=self.ds.length minibatch = Example(self.ds.fields.keys(), [field[next_example:upper] for field in self.ds.fields]) self.next_example+=minibatch_size return DataSetFields(MinibatchDataSet(minibatch),fieldnames) return Iterator(self) def valuesVStack(self,fieldname,fieldvalues): return self.values_vstack(fieldname,fieldvalues) def valuesHStack(self,fieldnames,fieldvalues): return self.values_hstack(fieldnames,fieldvalues) class HStackedDataSet(DataSet): """ A DataSet that wraps several datasets and shows a view that includes all their fields, i.e. whose list of fields is the concatenation of their lists of fields. If a field name is found in more than one of the datasets, then either an error is raised or the fields are renamed (either by prefixing the __name__ attribute of the dataset + ".", if it exists, or by suffixing the dataset index in the argument list). TODO: automatically detect a chain of stacked datasets due to A | B | C | D ... """ def __init__(self,datasets,accept_nonunique_names=False,description=None,field_types=None): DataSet.__init__(self,description,field_types) self.datasets=datasets self.accept_nonunique_names=accept_nonunique_names self.fieldname2dataset={} def rename_field(fieldname,dataset,i): if hasattr(dataset,"__name__"): return dataset.__name__ + "." + fieldname return fieldname+"."+str(i) # make sure all datasets have the same length and unique field names self.length=None names_to_change=[] for i in xrange(len(datasets)): dataset = datasets[i] length=len(dataset) if self.length: assert self.length==length else: self.length=length for fieldname in dataset.fieldNames(): if fieldname in self.fieldname2dataset: # name conflict! 
class HStackedDataSet(DataSet):
    """
    A DataSet that wraps several datasets and shows a view that includes all their fields,
    i.e. whose list of fields is the concatenation of their lists of fields.

    If a field name is found in more than one of the datasets, then either an error is
    raised or the fields are renamed (either by prefixing the __name__ attribute of the
    dataset + ".", if it exists, or by suffixing the dataset index in the argument list).

    TODO: automatically detect a chain of stacked datasets due to A | B | C | D ...
    """
    def __init__(self,datasets,accept_nonunique_names=False,description=None,field_types=None):
        DataSet.__init__(self,description,field_types)
        self.datasets=datasets
        self.accept_nonunique_names=accept_nonunique_names
        self.fieldname2dataset={}

        def rename_field(fieldname,dataset,i):
            if hasattr(dataset,"__name__"):
                return dataset.__name__ + "." + fieldname
            return fieldname+"."+str(i)

        # make sure all datasets have the same length and unique field names
        self.length=None
        names_to_change=[]
        for i in xrange(len(datasets)):
            dataset = datasets[i]
            length=len(dataset)
            if self.length:
                assert self.length==length
            else:
                self.length=length
            for fieldname in dataset.fieldNames():
                if fieldname in self.fieldname2dataset: # name conflict!
                    if accept_nonunique_names:
                        fieldname=rename_field(fieldname,dataset,i)
                        names_to_change.append((fieldname,i))
                    else:
                        raise ValueError("Incompatible datasets: non-unique field name = "+fieldname)
                self.fieldname2dataset[fieldname]=i
        for fieldname,i in names_to_change:
            del self.fieldname2dataset[fieldname]
            self.fieldname2dataset[rename_field(fieldname,self.datasets[i],i)]=i

    def __len__(self):
        # the common length of the stacked datasets was computed in the constructor
        return self.length

    def hasFields(self,*fieldnames):
        for fieldname in fieldnames:
            if not fieldname in self.fieldname2dataset:
                return False
        return True

    def fieldNames(self):
        return self.fieldname2dataset.keys()

    def minibatches_nowrap(self,fieldnames,minibatch_size,n_batches,offset):

        class HStackedIterator(object):
            def __init__(self,hsds,iterators):
                self.hsds=hsds
                self.iterators=iterators
            def __iter__(self):
                return self
            def next(self):
                # concatenate all the fields of the minibatches
                minibatch = reduce(LookupList.__add__,[iterator.next() for iterator in self.iterators])
                # and return a DataSetFields whose dataset is the transpose (=examples()) of this minibatch
                return DataSetFields(MinibatchDataSet(minibatch,self.hsds.valuesVStack,
                                                      self.hsds.valuesHStack),
                                     *(fieldnames if fieldnames else self.hsds.fieldNames()))

        assert self.hasFields(*fieldnames)
        # find out which underlying datasets are necessary to service the required fields
        # and construct corresponding minibatch iterators
        if fieldnames:
            datasets=set([])
            fields_in_dataset={}
            for fieldname in fieldnames:
                dataset=self.datasets[self.fieldname2dataset[fieldname]]
                datasets.add(dataset)
                fields_in_dataset.setdefault(dataset,[]).append(fieldname)
            datasets=list(datasets)
            iterators=[dataset.minibatches(fields_in_dataset[dataset],minibatch_size,n_batches,offset)
                       for dataset in datasets]
        else:
            datasets=self.datasets
            iterators=[dataset.minibatches(None,minibatch_size,n_batches,offset) for dataset in datasets]
        return HStackedIterator(self,iterators)

    def valuesVStack(self,fieldname,fieldvalues):
        return self.datasets[self.fieldname2dataset[fieldname]].valuesVStack(fieldname,fieldvalues)

    def valuesHStack(self,fieldnames,fieldvalues):
        """
        We will use the sub-dataset associated with the first fieldname in the fieldnames list
        to do the work, hoping that it can cope with the other values (i.e. won't care
        about the incompatible fieldnames). Hence this heuristic will always work if
        all the fieldnames are of the same sub-dataset.
        """
        return self.datasets[self.fieldname2dataset[fieldnames[0]]].valuesHStack(fieldnames,fieldvalues)

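# Illustrative sketch (hypothetical helper, not part of the original module): horizontally
# stacking two datasets, as the | operator does; input_dataset and target_dataset are
# assumed finite, of equal length, and with non-overlapping field names (e.g. ['x'] and ['y']).
def _example_hstack(input_dataset,target_dataset):
    both = hstack([input_dataset,target_dataset])
    print both.fieldNames()   # the union of the two field name lists
    print len(both)           # the common length of the stacked datasets
    return both
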
class VStackedDataSet(DataSet):
    """
    A DataSet that wraps several datasets and shows a view that includes all their examples,
    in the order provided. This clearly assumes that they all have the same field names
    and all (except possibly the last one) are of finite length.

    TODO: automatically detect a chain of stacked datasets due to A & B & C & D ...
    """
    def __init__(self,datasets):
        self.datasets=datasets
        self.length=0
        self.index2dataset={}
        assert len(datasets)>0
        fieldnames = datasets[-1].fieldNames()
        self.datasets_start_row=[]
        # We use this map from row index to dataset index for constant-time random access of examples,
        # to avoid having to search for the appropriate dataset each time an example or slice is asked for.
        for k,dataset in enumerate(datasets[0:-1]):
            try:
                L=len(dataset)
            except UnboundedDataSet:
                print "All VStacked datasets (except possibly the last) must be bounded (have a length)."
                assert False
            for i in xrange(L):
                self.index2dataset[self.length+i]=k
            self.datasets_start_row.append(self.length)
            self.length+=L
            assert dataset.fieldNames()==fieldnames
        self.datasets_start_row.append(self.length)
        self.length+=len(datasets[-1])
        # If length is very large, we should use a more memory-efficient mechanism
        # that does not store all indices
        if self.length>1000000:
            # 1 million entries would require about 60 meg for the index2dataset map
            # TODO
            print "A more efficient mechanism for index2dataset should be implemented"

    def __len__(self):
        return self.length

    def fieldNames(self):
        return self.datasets[0].fieldNames()

    def hasFields(self,*fieldnames):
        return self.datasets[0].hasFields(*fieldnames)

    def locate_row(self,row):
        """Return (dataset_index, row_within_dataset) for global row number"""
        dataset_index = self.index2dataset[row]
        row_within_dataset = row - self.datasets_start_row[dataset_index]
        return dataset_index, row_within_dataset

    def minibatches_nowrap(self,fieldnames,minibatch_size,n_batches,offset):

        class VStackedIterator(object):
            def __init__(self,vsds):
                self.vsds=vsds
                self.next_row=offset
                self.next_dataset_index,self.next_dataset_row=self.vsds.locate_row(offset)
                self.current_iterator,self.n_left_at_the_end_of_ds,self.n_left_in_mb= \
                  self.next_iterator(vsds.datasets[0],offset,n_batches)

            def next_iterator(self,dataset,starting_offset,batches_left):
                L=len(dataset)
                ds_nbatches = (L-starting_offset)/minibatch_size
                if batches_left is not None:
                    ds_nbatches = max(batches_left,ds_nbatches)
                if minibatch_size>L:
                    ds_minibatch_size=L
                    n_left_in_mb=minibatch_size-L
                    ds_nbatches=1
                else:
                    n_left_in_mb=0
                return dataset.minibatches(fieldnames,minibatch_size,ds_nbatches,starting_offset), \
                       L-(starting_offset+ds_nbatches*minibatch_size), n_left_in_mb

            def move_to_next_dataset(self):
                if self.n_left_at_the_end_of_ds>0:
                    self.current_iterator,self.n_left_at_the_end_of_ds,self.n_left_in_mb= \
                      self.next_iterator(self.vsds.datasets[self.next_dataset_index],
                                         self.n_left_at_the_end_of_ds,1)
                else:
                    self.next_dataset_index +=1
                    if self.next_dataset_index==len(self.vsds.datasets):
                        self.next_dataset_index = 0
                    self.current_iterator,self.n_left_at_the_end_of_ds,self.n_left_in_mb= \
                      self.next_iterator(self.vsds.datasets[self.next_dataset_index],0,n_batches)

            def __iter__(self):
                return self

            def next(self):
                dataset=self.vsds.datasets[self.next_dataset_index]
                mb = self.current_iterator.next()
                if self.n_left_in_mb:
                    extra_mb = []
                    while self.n_left_in_mb>0:
                        self.move_to_next_dataset()
                        extra_mb.append(self.current_iterator.next())
                    examples = Example(fieldnames,
                                       [dataset.valuesVStack(name,
                                                             [mb[name]]+[b[name] for b in extra_mb])
                                        for name in fieldnames])
                    mb = DataSetFields(MinibatchDataSet(examples),*fieldnames)

                self.next_row+=minibatch_size
                self.next_dataset_row+=minibatch_size
                if self.next_dataset_row+minibatch_size>len(dataset):
                    self.move_to_next_dataset()
                return mb

        return VStackedIterator(self)


class ArrayFieldsDataSet(DataSet):
    """
    Virtual super-class of datasets whose field values are numpy arrays,
    thus defining valuesHStack and valuesVStack for sub-classes.
    """
    def __init__(self,description=None,field_types=None):
        DataSet.__init__(self,description,field_types)
    def valuesHStack(self,fieldnames,fieldvalues):
        """Concatenate field values horizontally, e.g. two vectors
        become a longer vector, two matrices become a wider matrix, etc."""
        return numpy.hstack(fieldvalues)
    def valuesVStack(self,fieldname,values):
        """Concatenate field values vertically, e.g. two vectors
        become a two-row matrix, two matrices become a longer matrix, etc."""
        return numpy.vstack(values)

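# Illustrative sketch (hypothetical helper, not part of the original module): the numpy
# stacking semantics that ArrayFieldsDataSet.valuesHStack/valuesVStack provide, shown
# directly on plain numpy arrays.
def _example_numpy_stacking():
    v1 = numpy.asarray([1.,2.,3.])
    v2 = numpy.asarray([4.,5.,6.])
    print numpy.hstack([v1,v2])   # horizontal: one longer vector with 6 values
    print numpy.vstack([v1,v2])   # vertical: a 2x3 matrix, one row per input vector
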
class ArrayDataSet(ArrayFieldsDataSet):
    """
    An ArrayDataSet stores the fields as groups of columns in a numpy tensor,
    whose first axis iterates over examples, second axis determines fields.
    If the underlying array is N-dimensional (has N axes), then the field values
    are (N-2)-dimensional objects (i.e. ordinary numbers if N=2).
    """

    def __init__(self, data_array, fields_columns):
        """
        Construct an ArrayDataSet from the underlying numpy array (data) and
        a map (fields_columns) from fieldnames to field columns. The columns of a field are specified
        using the standard arguments for indexing/slicing: integer for a column index,
        slice for an interval of columns (with possible stride), or iterable of column indices.
        """
        self.data=data_array
        self.fields_columns=fields_columns

        # check consistency and complete slices definitions
        for fieldname, fieldcolumns in self.fields_columns.items():
            if type(fieldcolumns) is int:
                assert fieldcolumns>=0 and fieldcolumns<data_array.shape[1]
            elif type(fieldcolumns) is slice:
                start,step=fieldcolumns.start,fieldcolumns.step
                if start is None:
                    start=0
                if step is None:
                    step=1
                if start!=fieldcolumns.start or step!=fieldcolumns.step:
                    self.fields_columns[fieldname]=slice(start,fieldcolumns.stop,step)
            elif hasattr(fieldcolumns,"__iter__"): # something like a list
                for i in fieldcolumns:
                    assert i>=0 and i<data_array.shape[1]

    def fieldNames(self):
        return self.fields_columns.keys()

    def __len__(self):
        return len(self.data)

    #def __getitem__(self,i):
    #    """More efficient implementation than the default"""

    def minibatches_nowrap(self,fieldnames,minibatch_size,n_batches,offset):
        class ArrayDataSetIterator(object):
            def __init__(self,dataset,fieldnames,minibatch_size,n_batches,offset):
                if fieldnames is None:
                    fieldnames = dataset.fieldNames()
                # store the resulting minibatch in a lookup-list of values
                self.minibatch = LookupList(fieldnames,[0]*len(fieldnames))
                self.dataset=dataset
                self.minibatch_size=minibatch_size
                assert offset>=0 and offset<len(dataset.data)
                assert offset+minibatch_size<=len(dataset.data)
                self.current=offset
            def __iter__(self):
                return self
            def next(self):
                sub_data = self.dataset.data[self.current:self.current+self.minibatch_size]
                self.minibatch._values = [sub_data[:,self.dataset.fields_columns[f]] for f in self.minibatch._names]
                self.current+=self.minibatch_size
                return self.minibatch

        return ArrayDataSetIterator(self,fieldnames,minibatch_size,n_batches,offset)


def supervised_learning_dataset(src_dataset,input_fields,target_fields,weight_field=None):
    """
    Wraps an arbitrary DataSet into one for supervised learning tasks
    by forcing the user to define a set of fields as the 'input' field
    and a set of fields as the 'target' field. Optionally, a single
    weight_field can also be defined.
    """
    args = ((input_fields,'input'),(target_fields,'target'))
    if weight_field:
        args+=(([weight_field],'weight'),)
    return src_dataset.merge_fields(*args)
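
# Illustrative sketch (hypothetical helper, not part of the original module): building an
# ArrayDataSet over a small random matrix, with 'input' spanning the first two columns and
# 'target' the last one, then iterating over it; the field names are assumptions of this example.
def _example_array_dataset():
    data = numpy.random.rand(10,3)
    ds = ArrayDataSet(data,{'input':slice(0,2),'target':2})
    print len(ds), ds.fieldNames()
    for example in ds:                     # one example at a time
        print example['input'], example['target']
    for inputs,targets in ds.minibatches(['input','target'],minibatch_size=5,n_batches=2):
        print inputs.shape, targets.shape  # (5, 2) and (5,)
    return ds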