pyemma.coordinates.cluster_regspace¶
-
pyemma.coordinates.cluster_regspace(data=None, dmin=-1, max_centers=1000, stride=1, metric='euclidean', n_jobs=None, chunk_size=5000, skip=0)¶ Regular space clustering
If given data, it performs a regular space clustering [1] and returns a RegularSpaceClustering object that can be used to extract the discretized data sequences, or to assign other data points to the same partition. If data is not given, an empty RegularSpaceClustering will be created that still needs to be parametrized, e.g. in a pipeline(). Regular space clustering is very similar to Hartigan's leader algorithm [2]. It consists of two passes through the data. Initially, the first data point is added to the list of centers. Every subsequent data point that has a greater distance than dmin from every existing center also becomes a center. In the second pass, a Voronoi discretization with the computed centers is used to partition the data.
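The two passes described above can be sketched in plain numpy. This is a minimal illustration with the Euclidean metric, not PyEMMA's implementation (which works chunk-wise and supports multiple metrics); the function name is hypothetical:

```python
import numpy as np

def regspace_sketch(data, dmin, max_centers=1000):
    """Minimal two-pass regular space clustering sketch (Euclidean metric)."""
    # pass 1: collect centers that are mutually more than dmin apart
    centers = [data[0]]
    for x in data[1:]:
        if all(np.linalg.norm(x - c) > dmin for c in centers):
            if len(centers) >= max_centers:
                break  # PyEMMA emits a warning in this situation
            centers.append(x)
    centers = np.asarray(centers)
    # pass 2: Voronoi discretization -> index of the nearest center per frame
    dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    return centers, dists.argmin(axis=1)

data = np.array([[0.0], [0.1], [1.0], [1.1], [2.5]])
centers, dtraj = regspace_sketch(data, dmin=0.5)
# centers: [0.0], [1.0], [2.5]; dtraj: [0, 0, 1, 1, 2]
```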
Parameters: - data (ndarray (T, d) or list of ndarray (T_i, d) or a reader created by source()) – input data, if available in memory
- dmin (float) – the minimal distance between cluster centers
- max_centers (int (optional), default=1000) – If max_centers is reached, the algorithm will stop looking for more centers, but it is possible that parts of the state space are not properly discretized. This will generate a warning. If that happens, it is suggested to increase dmin such that the number of centers stays below max_centers.
- stride (int, optional, default = 1) – If set to 1, all input data will be used for estimation. Note that this could cause this calculation to be very slow for large data sets. Since molecular dynamics data is usually correlated at short timescales, it is often sufficient to estimate transformations at a longer stride. Note that the stride option in the get_output() function of the returned object is independent, so you can parametrize at a long stride, and still map all frames through the transformer.
- metric (str) – metric to use during clustering (‘euclidean’, ‘minRMSD’)
- n_jobs (int or None, default None) – Number of threads to use during assignment of the data. If None, all available CPUs will be used.
- chunk_size (int, default=5000) – Number of data frames to process at once. Choose a higher value to optimize thread usage and gain processing speed.
Returns: regSpace – Object for regular space clustering. It holds discrete trajectories and cluster center information.
Return type: a RegularSpaceClustering clustering object
-
class pyemma.coordinates.clustering.regspace.RegularSpaceClustering(dmin, max_centers=1000, metric='euclidean', stride=1, n_jobs=None, skip=0)¶ Regular space clustering
Methods
- assign([X, stride]) – Assigns the given trajectory or list of trajectories to cluster centers using the discretization defined by this clustering method (usually a Voronoi tessellation).
- describe() – Get a descriptive string representation of this class.
- dimension() – Output dimension of the clustering algorithm (always 1).
- estimate(X, **kwargs)
- fit(X) – Estimates parameters; for compatibility with sklearn.
- fit_predict(X[, y]) – Performs clustering on X and returns cluster labels.
- fit_transform(X[, y]) – Fit to data, then transform it.
- get_model_params([deep]) – Get parameters for this model.
- get_output([dimensions, stride, skip, chunk])
- get_params([deep]) – Get parameters for this estimator.
- iterator([stride, lag, chunk, ...]) – Creates an iterator to stream over the (transformed) data.
- n_chunks(chunksize[, stride, skip]) – How many chunks an iterator of this source will output when starting fresh (e.g. after calling reset()).
- n_frames_total([stride, skip])
- number_of_trajectories()
- output_type() – By default transformers return single precision floats.
- parametrize([stride])
- register_progress_callback(call_back[, stage]) – Registers the progress reporter.
- sample_indexes_by_cluster(clusters, nsample) – Samples trajectory/time indexes according to the given sequence of states.
- save_dtrajs([trajfiles, prefix, output_dir, ...]) – Saves calculated discrete trajectories; filenames are taken from the given reader.
- set_params(**params) – Set the parameters of this estimator.
- trajectory_length(itraj[, stride, skip])
- trajectory_lengths([stride, skip])
- transform(X) – Maps the input data through the transformer to correspondingly shaped output data array/list.
- update_model_params(**params) – Update given model parameters if they are set to specific values.
- write_to_csv([filename, extension, ...]) – Write all data to csv with numpy.savetxt.
Attributes
- chunksize – Defines how much data is being processed at once.
- clustercenters – Array containing the coordinates of the calculated cluster centers.
- data_producer
- default_chunksize – How much data will be processed at once, in case no chunksize has been provided.
- dmin – Minimum distance between cluster centers.
- dtrajs – Discrete trajectories (data assigned to cluster centers).
- filenames – List of filenames the data is originally from.
- in_memory – Are results stored in memory?
- index_clusters – Trajectory/time indexes for all the clusters.
- is_random_accessible – Check if self._is_random_accessible is set to true and if all the random access strategies are implemented.
- is_reader – Whether this data source is a reader or not.
- labels_ – Array containing the coordinates of the calculated cluster centers.
- logger – The logger for this class instance.
- max_centers – Cutoff during clustering.
- model – The model estimated by this Estimator.
- n_clusters
- n_jobs – Number of jobs/threads to use during assignment of data.
- name – The name of this instance.
- ndim
- ntraj
- overwrite_dtrajs – Should existing dtraj files be overwritten.
- ra_itraj_cuboid – Random access with slicing that can be up to 3-dimensional: the first dimension corresponds to the trajectory index, the second to the frames, and the third to the dimensions of the frames.
- ra_itraj_jagged – Behaves like ra_itraj_cuboid, except that the trajectories are not truncated and are returned as a list.
- ra_itraj_linear – Random access that takes the same arguments as the default random access (up to three dimensions with trajs, frames and dims), but considers the frame indexing to be contiguous.
- ra_linear – Random access that takes a (maximal) two-dimensional slice, where the first component corresponds to the frames and the second to the dimensions.
- show_progress – Whether to show the progress of heavy calculations on this object.
-
assign(X=None, stride=1)¶ Assigns the given trajectory or list of trajectories to cluster centers by using the discretization defined by this clustering method (usually a Voronoi tessellation).
You can assign multiple times with different strides. The last result of assign will be saved and is available as the attribute dtrajs.
Parameters: - X (ndarray(T, n) or list of ndarray(T_i, n), optional, default = None) – Optional input data to map, where T is the number of time steps and n is the number of dimensions. When a list is provided, the arrays can have different numbers of time steps, but the number of dimensions needs to be consistent. When X is not provided, the result of assign is identical to get_output(), i.e. the data used for clustering will be assigned. If X is given, the stride argument is not accepted.
- stride (int, optional, default = 1) – If set to 1, all frames of the input data will be assigned. Note that this could cause this calculation to be very slow for large data sets. Since molecular dynamics data is usually correlated at short timescales, it is often sufficient to obtain the discretization at a longer stride. Note that the stride option used to conduct the clustering is independent of the assign stride. This argument is only accepted if X is not given.
Returns: Y – The discretized trajectory: int-array with the indexes of the assigned clusters, or list of such int-arrays. If called with a list of trajectories, Y will also be a corresponding list of discrete trajectories.
Return type: ndarray(T, dtype=int) or list of ndarray(T_i, dtype=int)
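The assignment step itself is just a nearest-center lookup. A minimal numpy sketch (Euclidean metric, hypothetical function name) of assigning new data to previously computed centers:

```python
import numpy as np

def assign_sketch(X, centers):
    # Voronoi assignment: index of the closest center for every frame
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return dists.argmin(axis=1)

centers = np.array([[0.0], [1.0]])
new_data = np.array([[0.2], [0.9], [0.45]])
labels = assign_sketch(new_data, centers)
# labels: [0, 1, 0] -- each frame mapped to its nearest center
```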
-
chunksize¶ chunksize defines how much data is being processed at once.
-
clustercenters¶ Array containing the coordinates of the calculated cluster centers.
-
data_producer¶
-
default_chunksize¶ How much data will be processed at once, in case no chunksize has been provided.
-
describe()¶ Get a descriptive string representation of this class.
-
dimension()¶ output dimension of clustering algorithm (always 1).
-
dmin¶ Minimum distance between cluster centers.
-
dtrajs¶ Discrete trajectories (assigned data to cluster centers).
-
estimate(X, **kwargs)¶
-
filenames¶ Property which returns a list of filenames the data is originally from.
Returns: list of str – filenames, if the data originates from a file-based reader
-
fit(X)¶ Estimates parameters - for compatibility with sklearn.
Parameters: X (object) – A reference to the data from which the model will be estimated Returns: estimator – The estimator (self) with estimated model. Return type: object
-
fit_predict(X, y=None)¶ Performs clustering on X and returns cluster labels.
Parameters: X (ndarray, shape (n_samples, n_features)) – Input data.
Returns: y – cluster labels Return type: ndarray, shape (n_samples,)
-
fit_transform(X, y=None, **fit_params)¶ Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.
Parameters: - X (numpy array of shape [n_samples, n_features]) – Training set.
- y (numpy array of shape [n_samples]) – Target values.
Returns: X_new – Transformed array. Return type: numpy array of shape [n_samples, n_features_new]
-
get_model_params(deep=True)¶ Get parameters for this model.
Parameters: deep (boolean, optional) – If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns: params – Parameter names mapped to their values. Return type: mapping of string to any
-
get_output(dimensions=slice(0, None, None), stride=1, skip=0, chunk=None)¶
-
get_params(deep=True)¶ Get parameters for this estimator.
Parameters: deep (boolean, optional) – If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns: params – Parameter names mapped to their values. Return type: mapping of string to any
-
in_memory¶ are results stored in memory?
-
index_clusters¶ Returns trajectory/time indexes for all the clusters
Returns: indexes – For each state, all trajectory and time indexes where this cluster occurs. Each matrix has a number of rows equal to the number of occurrences of the corresponding state, with rows consisting of a tuple (i, t), where i is the index of the trajectory and t is the time index within the trajectory. Return type: list of ndarray( (N_i, 2) )
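The structure of these indexes can be reproduced from discrete trajectories with a short numpy sketch (a hypothetical helper, not the PyEMMA internals):

```python
import numpy as np

def index_clusters_sketch(dtrajs, n_clusters):
    # for every cluster, collect (trajectory index, time index) pairs
    indexes = [[] for _ in range(n_clusters)]
    for i, dtraj in enumerate(dtrajs):
        for t, state in enumerate(dtraj):
            indexes[state].append((i, t))
    return [np.array(ix) for ix in indexes]

dtrajs = [np.array([0, 1, 0]), np.array([1, 1])]
ix = index_clusters_sketch(dtrajs, 2)
# ix[0]: [[0, 0], [0, 2]]; ix[1]: [[0, 1], [1, 0], [1, 1]]
```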
-
is_random_accessible¶ Check if self._is_random_accessible is set to true and if all the random access strategies are implemented.
Returns: bool – True if random accessible via strategies and False otherwise.
-
is_reader¶ Property telling if this data source is a reader or not.
Returns: bool – True if this data source is a reader and False otherwise.
-
iterator(stride=1, lag=0, chunk=None, return_trajindex=True, cols=None, skip=0)¶ creates an iterator to stream over the (transformed) data.
If your data is too large to fit into memory and you want to incrementally compute some quantities on it, you can create an iterator on a reader or transformer (eg. TICA) to avoid memory overflows.
Parameters: - stride (int, default=1) – Take only every stride’th frame.
- lag (int, default=0) – how many frames to omit for each file.
- chunk (int, default=None) – How many frames to process at once. If not given obtain the chunk size from the source.
- return_trajindex (boolean, default=True) – if False, yield only a chunk of data; otherwise yield a tuple of (trajindex, data).
- cols (array like, default=None) – return only the given columns.
- skip (int, default=0) – skip ‘n’ first frames of each trajectory.
Returns: iter – an implementation of a DataSourceIterator to stream over the data
Return type: instance of DataSourceIterator
Examples
>>> from pyemma.coordinates import source
>>> import numpy as np
>>> data = [np.arange(3), np.arange(4, 7)]
>>> reader = source(data)
>>> iterator = reader.iterator(chunk=1)
>>> for array_index, chunk in iterator:
...     print(array_index, chunk)
0 [[0]]
0 [[1]]
0 [[2]]
1 [[4]]
1 [[5]]
1 [[6]]
-
labels_¶ Array containing the coordinates of the calculated cluster centers.
-
logger¶ The logger for this class instance
-
max_centers¶ Cutoff during clustering. If reached, no more data is taken into account. You might then consider a larger max_centers value or a larger dmin value.
-
model¶ The model estimated by this Estimator
-
n_chunks(chunksize, stride=1, skip=0)¶ how many chunks an iterator of this source will output when starting fresh (e.g. after calling reset())
Parameters: - chunksize –
- stride –
- skip –
-
n_clusters¶
-
n_frames_total(stride=1, skip=0)¶
-
n_jobs¶ Returns number of jobs/threads to use during assignment of data.
Returns: int – if n_jobs is None, the number of available processors/cores, or the setting of the 'OMP_NUM_THREADS' environment variable.
Notes
By setting the environment variable ‘OMP_NUM_THREADS’ to an integer, one will override the default argument of n_jobs (currently None).
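The resolution order described in the notes can be sketched as follows (a hypothetical helper that mirrors the documented behaviour, not PyEMMA's actual code):

```python
import os

def effective_n_jobs(n_jobs=None):
    # an explicit n_jobs wins; then OMP_NUM_THREADS; then the CPU count
    if n_jobs is not None:
        return n_jobs
    env = os.environ.get('OMP_NUM_THREADS')
    if env is not None:
        return int(env)
    return os.cpu_count()

os.environ['OMP_NUM_THREADS'] = '2'
effective_n_jobs()   # -> 2 (environment variable overrides the default)
effective_n_jobs(4)  # -> 4 (explicit argument wins)
```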
-
name¶ The name of this instance
-
ndim¶
-
ntraj¶
-
number_of_trajectories()¶
-
output_type()¶ By default transformers return single precision floats.
-
overwrite_dtrajs¶ Should existing dtraj files be overwritten. Set this property to True to overwrite.
-
parametrize(stride=1)¶
-
ra_itraj_cuboid¶ Implementation of random access with slicing that can be up to 3-dimensional, where the first dimension corresponds to the trajectory index, the second dimension corresponds to the frames and the third dimension corresponds to the dimensions of the frames.
The frames selected by the frame slice will be loaded from each of the trajectories selected by the trajectory slice and then sliced with the dimension slice. For example: the data consists of three trajectories with lengths 10, 20, 10, respectively. The slice data[:, :15, :3] returns a 3D array of shape (3, 10, 3), where the first component corresponds to the three trajectories, the second to 10 frames (the last 5 requested frames are truncated because two of the trajectories only have 10 frames), and the third to the selected first three dimensions.
Returns: Returns an object that allows access by slices in the described manner.
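The truncation behaviour in the example above can be illustrated with a small numpy sketch (a mimic of the described semantics with a hypothetical function name, not the actual implementation):

```python
import numpy as np

def cuboid_slice(trajs, itraj_sl, frame_sl, dim_sl):
    selected = [t[frame_sl, dim_sl] for t in trajs[itraj_sl]]
    # align all selected trajectories by truncating to the shortest one
    n = min(s.shape[0] for s in selected)
    return np.stack([s[:n] for s in selected])

trajs = [np.zeros((10, 5)), np.zeros((20, 5)), np.zeros((10, 5))]
out = cuboid_slice(trajs, slice(None), slice(None, 15), slice(None, 3))
# out.shape == (3, 10, 3): frames truncated to the shortest selected trajectory
```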
-
ra_itraj_jagged¶ Behaves like ra_itraj_cuboid just that the trajectories are not truncated and returned as a list.
Returns: Returns an object that allows access by slices in the described manner.
-
ra_itraj_linear¶ Implementation of random access that takes arguments as the default random access (i.e., up to three dimensions with trajs, frames and dims, respectively), but which considers the frame indexing to be contiguous. Therefore, it returns a simple 2D array.
Returns: A 2D array of the sliced data containing [frames, dims].
-
ra_linear¶ Implementation of random access that takes a (maximal) two-dimensional slice where the first component corresponds to the frames and the second component corresponds to the dimensions. Here it is assumed that the frame indexing is contiguous, i.e., the first frame of the second trajectory has the index of the last frame of the first trajectory plus one.
Returns: Returns an object that allows access by slices in the described manner.
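A minimal numpy illustration of the contiguous frame indexing assumption (frames of all trajectories treated as one concatenated sequence):

```python
import numpy as np

trajs = [np.arange(10).reshape(5, 2), np.arange(10, 16).reshape(3, 2)]
flat = np.concatenate(trajs)  # shape (8, 2)
# contiguous indexing: frame 5 is the first frame of the second trajectory
first_of_second = flat[5]  # array([10, 11])
```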
-
register_progress_callback(call_back, stage=0)¶ Registers the progress reporter.
Parameters: - call_back (function) –
This function will be called with the following arguments:
- stage (int)
- instance of pyemma.utils.progressbar.ProgressBar
- optional *args and named keywords (**kw), for future changes
- stage (int, optional, default=0) – The stage at which the given callback function should be fired.
-
sample_indexes_by_cluster(clusters, nsample, replace=True)¶ Samples trajectory/time indexes according to the given sequence of states.
Parameters: - clusters (iterable of integers) – It contains the cluster indexes to be sampled
- nsample (int) – Number of samples per cluster. If replace = False, the number of returned samples per cluster could be smaller if less than nsample indexes are available for a cluster.
- replace (boolean, optional) – Whether the sample is with or without replacement
Returns: indexes – List of the sampled indices by cluster. Each element is an index array with a number of rows equal to nsample (or fewer, if replace = False), with rows consisting of a tuple (i, t), where i is the index of the trajectory and t is the time index within the trajectory.
Return type: list of ndarray( (N, 2) )
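Combined with per-cluster indexes, the sampling amounts to drawing rows from each cluster's (i, t) index array. A minimal numpy sketch (hypothetical helper; the seed is only for reproducibility of this illustration):

```python
import numpy as np

def sample_indexes_sketch(index_clusters, clusters, nsample, replace=True, seed=0):
    rng = np.random.default_rng(seed)
    out = []
    for c in clusters:
        ix = index_clusters[c]
        # without replacement we cannot draw more rows than are available
        n = nsample if replace else min(nsample, len(ix))
        rows = rng.choice(len(ix), size=n, replace=replace)
        out.append(ix[rows])
    return out

index_clusters = [np.array([[0, 0], [0, 2]]), np.array([[0, 1], [1, 0]])]
samples = sample_indexes_sketch(index_clusters, clusters=[0, 1], nsample=3)
# each element is an (nsample, 2) array of (trajectory, time) pairs
```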
-
save_dtrajs(trajfiles=None, prefix='', output_dir='.', output_format='ascii', extension='.dtraj')¶ saves calculated discrete trajectories. Filenames are taken from the given reader. If data comes from memory, dtrajs are written to a default filename.
Parameters: - trajfiles (list of str (optional)) – names of input trajectory files, will be used generate output files.
- prefix (str) – prepend prefix to filenames.
- output_dir (str) – save files to this directory.
- output_format (str) – if format is ‘ascii’ dtrajs will be written as csv files, otherwise they will be written as NumPy .npy files.
- extension (str) – file extension to append (e.g. '.itraj')
-
set_params(**params)¶ Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The former have parameters of the form
<component>__<parameter> so that it's possible to update each component of a nested object.
Returns: self
-
show_progress¶ whether to show the progress of heavy calculations on this object.
-
trajectory_length(itraj, stride=1, skip=None)¶
-
trajectory_lengths(stride=1, skip=0)¶
-
transform(X)¶ Maps the input data through the transformer to correspondingly shaped output data array/list.
Parameters: X (ndarray(T, n) or list of ndarray(T_i, n)) – The input data, where T is the number of time steps and n is the number of dimensions. If a list is provided, the number of time steps is allowed to vary, but the number of dimensions is required to be consistent. Returns: Y – The mapped data, where T is the number of time steps of the input data and d is the output dimension of this transformer. If called with a list of trajectories, Y will also be a corresponding list of trajectories. Return type: ndarray(T, d) or list of ndarray(T_i, d)
-
update_model_params(**params)¶ Update given model parameters if they are set to specific values
-
write_to_csv(filename=None, extension='.dat', overwrite=False, stride=1, chunksize=100, **kw)¶ write all data to csv with numpy.savetxt
Parameters: - filename (str, optional) –
filename string, which may contain placeholders {itraj} and {stride}:
- itraj will be replaced by the trajectory index
- stride is stride argument of this method
If filename is not given, the filenames are obtained from the data source of this iterator, if possible.
- extension (str, optional, default='.dat') – filename extension of created files
- overwrite (bool, optional, default=False) – shall existing files be overwritten? If False and a file already exists, this method will raise.
- stride (int) – omit every n’th frame
- chunksize (int) – how many frames to process at once
- kw (dict) – named arguments passed into numpy.savetxt (header, delimiter, etc.)
Example
Assume you want to save features calculated by some FeatureReader to ASCII:
>>> import numpy as np, pyemma
>>> from pyemma.util.files import TemporaryDirectory
>>> import os
>>> data = [np.random.random((10,3))] * 3
>>> reader = pyemma.coordinates.source(data)
>>> filename = "distances_{itraj}.dat"
>>> with TemporaryDirectory() as td:
...     out = os.path.join(td, filename)
...     reader.write_to_csv(out, header='', delimiter=';')
...     print(sorted(os.listdir(td)))
['distances_0.dat', 'distances_1.dat', 'distances_2.dat']
-
References
[1] Prinz, J.-H., Wu, H., Sarich, M., Keller, B., Senne, M., Held, M., Chodera, J. D., Schütte, Ch. and Noé, F. 2011. Markov models of molecular kinetics: generation and validation. J. Chem. Phys. 134, 174105.
[2] Hartigan, J. Clustering Algorithms. New York: Wiley; 1975.