pyemma.coordinates.data._base.transformer.StreamingTransformer
class pyemma.coordinates.data._base.transformer.StreamingTransformer(chunksize=None)

Base class for pipelined Transformers. This class derives from DataSource, so follow-up pipeline elements can stream the output of this class.

Parameters
    chunksize (int, optional) – the chunk size used to batch-process the underlying data.

__init__(chunksize=None)
    Initialize self. See help(type(self)) for accurate signature.
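Subclassing sketch (not taken from the PyEMMA documentation): assuming that a concrete transformer only needs to provide describe(), dimension() and _transform_array(), a minimal mean-removing transformer could look as follows. The class name MeanFreeTransformer and its behaviour are illustrative placeholders.

    from pyemma.coordinates.data._base.transformer import StreamingTransformer

    class MeanFreeTransformer(StreamingTransformer):
        # Illustrative subclass (not part of PyEMMA): subtracts the mean of
        # every frame, so each output row has zero mean.

        def describe(self):
            # short human-readable description of this pipeline element
            return 'MeanFreeTransformer (subtracts the per-frame mean)'

        def dimension(self):
            # output dimensionality; unchanged with respect to the input here
            return self.data_producer.dimension()

        def _transform_array(self, X):
            # called chunk-wise with a (n_frames, n_dims) array; must return
            # an array of shape (n_frames, self.dimension())
            return X - X.mean(axis=1, keepdims=True)

One way to wire such a transformer into a pipeline is to assign its data_producer (higher-level alternatives such as pyemma.coordinates.pipeline exist); trajectory_files and topology_file are placeholders:

    import pyemma.coordinates as coor

    reader = coor.source(trajectory_files, top=topology_file)  # placeholder inputs
    meanfree = MeanFreeTransformer(chunksize=1000)
    meanfree.data_producer = reader     # upstream element to stream from
    Y = meanfree.get_output()           # list of arrays, one per trajectory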
Methods
_Loggable__create_logger()
__delattr__(name, /) – Implement delattr(self, name).
__dir__() – Default dir() implementation.
__eq__(value, /) – Return self==value.
__format__(format_spec, /) – Default object formatter.
__ge__(value, /) – Return self>=value.
__getattribute__(name, /) – Return getattr(self, name).
__gt__(value, /) – Return self>value.
__hash__() – Return hash(self).
__init__([chunksize]) – Initialize self.
__init_subclass__ – This method is called when a class is subclassed.
__iter__()
__le__(value, /) – Return self<=value.
__lt__(value, /) – Return self<value.
__ne__(value, /) – Return self!=value.
__new__(**kwargs) – Create and return a new object.
__reduce__() – Helper for pickle.
__reduce_ex__(protocol, /) – Helper for pickle.
__repr__() – Return repr(self).
__setattr__(name, value, /) – Implement setattr(self, name, value).
__sizeof__() – Size of object in memory, in bytes.
__str__() – Return str(self).
__subclasshook__ – Abstract classes can override this to customize issubclass().
_chunk_finite(data)
_cleanup_logger(logger_id, logger_name)
_clear_in_memory()
_compute_default_cs(dim, itemsize[, logger])
_create_iterator([skip, chunk, stride, …]) – Should be implemented by non-abstract subclasses.
_data_flow_chain() – Get a list of all elements in the data flow graph.
_get_traj_info(filename)
_logger_is_active(level) – level: int log level (debug=10, info=20, warn=30, error=40, critical=50).
_map_to_memory([stride]) – Maps results to memory.
_set_random_access_strategies()
_source_from_memory([data_producer])
_transform_array(X) – Initializes the parametrization.
describe() – Get a descriptive string representation of this class.
dimension()
fit_transform(X[, y]) – Fit to data, then transform it.
get_output([dimensions, stride, skip, chunk]) – Maps all input data of this transformer and returns it as an array or list of arrays.
iterator([stride, lag, chunk, …]) – Creates an iterator to stream over the (transformed) data (see the example after this table).
n_chunks(chunksize[, stride, skip]) – Returns how many chunks an iterator of this source will output.
n_frames_total([stride, skip]) – Returns the total number of frames.
number_of_trajectories([stride]) – Returns the number of trajectories.
output_type() – By default transformers return single precision floats.
trajectory_length(itraj[, stride, skip]) – Returns the length of the trajectory with the requested index.
trajectory_lengths([stride, skip]) – Returns the length of each trajectory.
transform(X) – Maps the input data through the transformer to correspondingly shaped output data array/list.
write_to_csv([filename, extension, …]) – Writes all data to CSV with numpy.savetxt.
write_to_hdf5(filename[, group, …]) – Writes all data of this Iterable to a given HDF5 file.
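Usage sketch for the streaming methods above, using TICA (a concrete StreamingTransformer shipped with PyEMMA) as an example; trajectory_files, topology_file and process are placeholders, and the chunk size and stride are chosen arbitrarily:

    import pyemma.coordinates as coor

    reader = coor.source(trajectory_files, top=topology_file)  # placeholder inputs
    tica = coor.tica(reader, lag=10)    # concrete StreamingTransformer

    # stream chunk-wise over the transformed data
    for itraj, X in tica.iterator(chunk=1000):
        # X holds up to 1000 transformed frames of trajectory itraj
        process(itraj, X)               # placeholder for user code

    # or materialize everything at once: one array per input trajectory
    Y = tica.get_output(stride=10)
    print(tica.dimension(), tica.n_frames_total(stride=10))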
Attributes
_DataSource__serialize_fields
_FALLBACK_CHUNKSIZE
_InMemoryMixin__serialize_fields
_InMemoryMixin__serialize_version
_Loggable__ids
_Loggable__refs
__abstractmethods__
__dict__
__doc__
__module__
__weakref__ – list of weak references to the object (if defined)
_abc_impl
_loglevel_CRITICAL
_loglevel_DEBUG
_loglevel_ERROR
_loglevel_INFO
_loglevel_WARN
_serialize_version
chunksize – chunksize defines how much data is being processed at once.
data_producer – The data producer for this data source object (can be another data source object).
default_chunksize – How much data will be processed at once, in case no chunksize has been provided.
filenames – List of file names the data is originally being read from.
in_memory – Are results stored in memory?
is_random_accessible – Check if self._is_random_accessible is set to True and if all the random access strategies are implemented.
is_reader – Property telling if this data source is a reader or not.
logger – The logger for this class instance.
name – The name of this instance.
ndim
ntraj
ra_itraj_cuboid – Implementation of random access with slicing that can be up to 3-dimensional, where the first dimension corresponds to the trajectory index, the second dimension corresponds to the frames and the third dimension corresponds to the dimensions of the frames (see the example after this table).
ra_itraj_jagged – Behaves like ra_itraj_cuboid, except that the trajectories are not truncated and are returned as a list.
ra_itraj_linear – Implementation of random access that takes arguments as the default random access (i.e., up to three dimensions with trajs, frames and dims, respectively), but which considers the frame indexing to be contiguous.
ra_linear – Implementation of random access that takes a (maximal) two-dimensional slice where the first component corresponds to the frames and the second component corresponds to the dimensions.
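Random-access sketch for the ra_* attributes above (assuming the underlying reader supports random access, which is_random_accessible reports); again TICA serves as a stand-in for any concrete StreamingTransformer, and trajectory_files and topology_file are placeholders:

    import pyemma.coordinates as coor

    reader = coor.source(trajectory_files, top=topology_file)  # placeholder inputs
    tica = coor.tica(reader, lag=10)

    if tica.is_random_accessible:
        # trajectory 0, its first 100 frames, all output dimensions
        sub = tica.ra_itraj_cuboid[0, :100, :]
        # first 100 frames of the virtually concatenated trajectories
        flat = tica.ra_linear[:100, :]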