decode#

class analysis.decode.Decoder(features: ~pandas.core.frame.DataFrame | None = None, label: ~numpy.ndarray | None = None, label_name: str | None = None, used_chs: list[str] = [], model=LinearRegression(), eval_method: ~typing.Callable = <function r2_score>, cv_method=KFold(n_splits=3, random_state=None, shuffle=False), use_nested_cv: bool = False, threshold_score=True, mov_detection_threshold: float = 0.5, TRAIN_VAL_SPLIT: bool = False, RUN_BAY_OPT: bool = False, STACK_FEATURES_N_SAMPLES: bool = False, time_stack_n_samples: int = 5, save_coef: bool = False, get_movement_detection_rate: bool = False, min_consequent_count: int = 3, bay_opt_param_space: list = [], VERBOSE: bool = False, sfreq: int | None = None, undersampling: bool = False, oversampling: bool = False, mrmr_select: bool = False, pca: bool = False, cca: bool = False, model_save: bool = False)[source]#
exception ClassMissingException(message='Only one class present.')[source]#
append_previous_n_samples(y: ndarray, n: int = 5)[source]#

Stack the feature vector over the previous n samples
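
Time-stacking of this kind can be sketched in plain numpy. This is an illustrative sketch only; the function name and the choice to drop the first n-1 samples (which lack a full history) are assumptions, not the library's exact behavior:

```python
import numpy as np

def stack_previous_n_samples(X: np.ndarray, n: int = 5) -> np.ndarray:
    """Concatenate each sample with its n-1 predecessors along the feature axis.

    Rows are time samples; the first n-1 rows are dropped since they
    lack a full history. Illustrative sketch, not the library code.
    """
    return np.concatenate(
        [X[i : X.shape[0] - (n - 1) + i] for i in range(n)], axis=1
    )

X = np.arange(12, dtype=float).reshape(6, 2)  # 6 samples, 2 features
S = stack_previous_n_samples(X, n=3)
print(S.shape)  # (4, 6): 6 - (3 - 1) samples, 2 * 3 features
```

Each output row holds the oldest of the n samples first and the current sample last.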

bay_opt_wrapper(model_train, X_train, y_train)[source]#

Run Bayesian optimization and pass the best parameters to model_train. The best parameters are saved into self.best_bay_opt_params

calc_movement_detection_rate(y_label, prediction, threshold=0.5, min_consequent_count=3)[source]#

Given a label and a prediction, return the movement detection rate, based on movements classified in blocks of at least ‘min_consequent_count’ consecutive samples.

Parameters:
  • y_label (np.ndarray) – ground-truth movement label

  • prediction (np.ndarray) – model prediction to be thresholded

  • threshold (float, optional) – threshold to be applied to ‘prediction’, by default 0.5

  • min_consequent_count (int, optional) – minimum required consecutive samples higher than ‘threshold’, by default 3

Returns:

  • mov_detection_rate (float) – movement detection rate, where at least ‘min_consequent_count’ consecutive samples were high in the prediction

  • fpr (np.ndarray) – false positive rate, as returned by sklearn.metrics

  • tpr (np.ndarray) – true positive rate, as returned by sklearn.metrics
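
The block-wise detection idea can be sketched in plain numpy. This is an assumption-laden sketch of the concept, not the library implementation (which additionally returns fpr/tpr from sklearn.metrics):

```python
import numpy as np

def movement_detection_rate(y_label, prediction, threshold=0.5,
                            min_consequent_count=3):
    """Fraction of labeled movement blocks containing at least
    'min_consequent_count' consecutive suprathreshold predictions.
    Illustrative sketch only.
    """
    y = (np.asarray(y_label) > threshold).astype(int)
    p = (np.asarray(prediction) > threshold).astype(int)
    # label consecutive movement blocks in the ground truth
    starts = np.diff(np.concatenate(([0], y))) == 1
    blocks = np.cumsum(starts) * y
    n_blocks = int(blocks.max())
    if n_blocks == 0:
        return 0.0
    detected = 0
    for b in range(1, n_blocks + 1):
        run = best = 0
        for v in p[blocks == b]:
            run = run + 1 if v else 0   # length of current positive run
            best = max(best, run)
        if best >= min_consequent_count:
            detected += 1
    return detected / n_blocks

rate = movement_detection_rate(
    y_label=[0, 1, 1, 1, 1, 0, 1, 1, 0],
    prediction=[0, 0.9, 0.9, 0.9, 0, 0, 0, 0, 0],
)
print(rate)  # 0.5: one of the two labeled blocks is detected
```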

get_movement_grouped_array(prediction, threshold=0.5, min_consequent_count=5)[source]#

Given a 1D numpy array, return an array of the same size in which consecutive suprathreshold samples are grouped into labeled blocks

Parameters:
  • prediction (np.ndarray) – numpy array of predictions or labels to be grouped

  • threshold (float, optional) – threshold to be applied to ‘prediction’, by default 0.5

  • min_consequent_count (int, optional) – minimum required consecutive samples higher than ‘threshold’, by default 5

Returns:

  • labeled_array (np.ndarray) – grouped vector with an incrementing number for each movement block

  • labels_count (int) – count of individual movement blocks
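
A numpy-only sketch of this grouping (the library may instead rely on scipy.ndimage labeling; the function name and the re-labeling step are assumptions):

```python
import numpy as np

def movement_grouped_array(prediction, threshold=0.5, min_consequent_count=5):
    """Label consecutive runs of suprathreshold samples with incrementing
    integers; runs shorter than min_consequent_count are reset to 0.
    Illustrative sketch only.
    """
    above = (np.asarray(prediction) > threshold).astype(int)
    # mark run starts: a 1 preceded by a 0 (or at position 0)
    starts = np.diff(np.concatenate(([0], above))) == 1
    labeled = np.cumsum(starts) * above  # run index where above, else 0
    # discard runs that are too short
    labels, counts = np.unique(labeled[labeled > 0], return_counts=True)
    for lab, cnt in zip(labels, counts):
        if cnt < min_consequent_count:
            labeled[labeled == lab] = 0
    # re-label so block numbers stay consecutive
    kept = np.unique(labeled[labeled > 0])
    relabel = {int(old): new for new, old in enumerate(kept, start=1)}
    labeled = np.array([relabel.get(int(v), 0) for v in labeled])
    return labeled, len(kept)

arr, n = movement_grouped_array(
    [0, 0.9, 0.9, 0, 0.9, 0.9, 0.9, 0], min_consequent_count=3
)
print(arr.tolist(), n)  # [0, 0, 0, 0, 1, 1, 1, 0] 1
```

The two-sample run is discarded for being shorter than min_consequent_count, and the surviving block is renumbered to 1.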

run_Bay_Opt(X_train, y_train, X_test, y_test, rounds=30, base_estimator='GP', acq_func='EI', acq_optimizer='sampling', initial_point_generator='lhs')[source]#

Run skopt Bayesian optimization. See skopt.Optimizer: https://scikit-optimize.github.io/stable/modules/generated/skopt.Optimizer.html#skopt.Optimizer

example: https://scikit-optimize.github.io/stable/auto_examples/ask-and-tell.html#sphx-glr-auto-examples-ask-and-tell-py

Special attention must be paid to the run_CV output: some metrics are minimized (MAE), while others are maximized (r^2)

Parameters:
  • X_train (np.ndarray)

  • y_train (np.ndarray)

  • X_test (np.ndarray)

  • y_test (np.ndarray)

  • rounds (int, optional) – optimizing rounds, by default 30

  • base_estimator (str, optional) – surrogate model used to approximate the objective function, by default “GP”

  • acq_func (str, optional) – function to minimize over the posterior distribution, by default “EI”

  • acq_optimizer (str, optional) – method to minimize the acquisition function, by default “sampling”

  • initial_point_generator (str, optional) – sets an initial point generator, by default “lhs”

Return type:

skopt result parameters

run_CV(data, label)[source]#

Evaluate model performance with the specified cross-validation. If no data and label are specified, the whole feature class attributes are used.

Parameters:
  • data (np.ndarray) – data to train and test with, of shape (samples, features)

  • label (np.ndarray) – label to train and test with, of shape (samples, features)
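
With the class defaults (LinearRegression, KFold(n_splits=3), r2_score), this evaluation corresponds to a conventional scikit-learn loop. A minimal sketch on toy data (the data itself is fabricated for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 4))                          # (samples, features)
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=90)

scores = []
for train_idx, test_idx in KFold(n_splits=3).split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    scores.append(r2_score(y[test_idx], model.predict(X[test_idx])))

print(np.mean(scores))  # close to 1.0 for this nearly linear toy data
```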

run_CV_caller(feature_contacts: str = 'ind_channels')[source]#

Wrapper that calls the CV function for all channels, grid points, or combined channels

Parameters:

feature_contacts (str, optional) – “grid_points”, “ind_channels” or “all_channels_combined”, by default “ind_channels”

save(feature_path: str, feature_file: str, str_save_add=None) None[source]#

Save the decoder object to a pickle file

set_CV_results(attr_name, contact_point=None)[source]#

Set CV results in the respective nm_decode attributes. The reference is first stored in obj_set and then used later on

Parameters:
  • attr_name (string) – one of all_ch_results, ch_ind_results or gridpoint_ind_results

  • contact_point (object, optional) – usually an int specifying the grid point, or a string specifying the used channel, by default None

set_data_grid_points(cortex_only=False, subcortex_only=False)[source]#

Read the run_analysis projected data. The projected data has shape (samples, grid points, features)

set_data_ind_channels()[source]#

Set the individual data for the specified channels