.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/plot_1_example_BIDS.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_plot_1_example_BIDS.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_plot_1_example_BIDS.py:

ECoG Movement decoding example
==============================

.. GENERATED FROM PYTHON SOURCE LINES 8-17

This example notebook reads openly accessible data from the publication
*Electrocorticography is superior to subthalamic local field potentials
for movement decoding in Parkinson's disease* (`Merk et al. 2022 `_).
The dataset is available `here `_.

For simplicity, one example subject is automatically shipped with this repo
in the *py_neuromodulation/data* folder, stored in
`iEEG BIDS `_ format.

.. GENERATED FROM PYTHON SOURCE LINES 19-24

.. code-block:: Python

    from sklearn import metrics, model_selection, linear_model
    import matplotlib.pyplot as plt

    import py_neuromodulation as nm

.. GENERATED FROM PYTHON SOURCE LINES 25-28

Let's read the example data using `mne_bids `_.
The resulting raw object is of type `mne.RawArray `_.
We can use its properties, such as the sampling frequency, channel names, and
channel types, to create the *channels* DataFrame:

.. GENERATED FROM PYTHON SOURCE LINES 28-57

.. code-block:: Python

    (
        RUN_NAME,
        PATH_RUN,
        PATH_BIDS,
        PATH_OUT,
        datatype,
    ) = nm.io.get_paths_example_data()

    (
        raw,
        data,
        sfreq,
        line_noise,
        coord_list,
        coord_names,
    ) = nm.io.read_BIDS_data(PATH_RUN=PATH_RUN)

    channels = nm.utils.set_channels(
        ch_names=raw.ch_names,
        ch_types=raw.get_channel_types(),
        reference="default",
        bads=raw.info["bads"],
        new_names="default",
        used_types=("ecog", "dbs", "seeg"),
        target_keywords=["MOV_RIGHT"],
    )

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Extracting parameters from /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/site-packages/py_neuromodulation/data/sub-testsub/ses-EphysMedOff/ieeg/sub-testsub_ses-EphysMedOff_task-gripforce_run-0_ieeg.vhdr...
    Setting channel info structure...
    /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/site-packages/py_neuromodulation/utils/io.py:61: RuntimeWarning: Did not find any events.tsv associated with sub-testsub_ses-EphysMedOff_task-gripforce_run-0.
    The search_str was "/opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/site-packages/py_neuromodulation/data/sub-testsub/**/ieeg/sub-testsub_ses-EphysMedOff*events.tsv"
      raw_arr = read_raw_bids(bids_path)
    Reading channel info from /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/site-packages/py_neuromodulation/data/sub-testsub/ses-EphysMedOff/ieeg/sub-testsub_ses-EphysMedOff_task-gripforce_run-0_channels.tsv.
    /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/site-packages/py_neuromodulation/utils/io.py:61: RuntimeWarning: The unit for channel(s) MOV_RIGHT has changed from V to NA.
      raw_arr = read_raw_bids(bids_path)
    /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/site-packages/py_neuromodulation/utils/io.py:61: RuntimeWarning: Other is not an MNE-Python coordinate frame for IEEG data and so will be set to 'unknown'
      raw_arr = read_raw_bids(bids_path)
    Reading electrode coords from /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/site-packages/py_neuromodulation/data/sub-testsub/ses-EphysMedOff/ieeg/sub-testsub_ses-EphysMedOff_space-mni_electrodes.tsv.
    /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/site-packages/py_neuromodulation/utils/io.py:61: RuntimeWarning: There are channels without locations (n/a) that are not marked as bad: ['MOV_RIGHT']
      raw_arr = read_raw_bids(bids_path)
    /opt/hostedtoolcache/Python/3.11.10/x64/lib/python3.11/site-packages/py_neuromodulation/utils/io.py:61: RuntimeWarning: Not setting position of 1 misc channel found in montage: ['MOV_RIGHT']
    Consider setting the channel types to be of EEG/sEEG/ECoG/DBS/fNIRS using inst.set_channel_types before calling inst.set_montage, or omit these channels when creating your montage.
      raw_arr = read_raw_bids(bids_path)

.. GENERATED FROM PYTHON SOURCE LINES 58-60

This example contains grip-force movement traces; we will use the *MOV_RIGHT*
channel as the decoding target. Let's check some of the raw feature and time
series traces:

.. GENERATED FROM PYTHON SOURCE LINES 60-78

.. code-block:: Python

    plt.figure(figsize=(12, 4), dpi=300)
    plt.subplot(121)
    plt.plot(raw.times, data[-1, :])
    plt.xlabel("Time [s]")
    plt.ylabel("a.u.")
    plt.title("Movement label")
    plt.xlim(0, 20)

    plt.subplot(122)
    for idx, ch_name in enumerate(channels.query("used == 1").name):
        plt.plot(raw.times, data[idx, :] + idx * 300, label=ch_name)
    plt.legend(bbox_to_anchor=(1, 0.5), loc="center left")
    plt.title("ECoG + STN-LFP time series")
    plt.xlabel("Time [s]")
    plt.ylabel("Voltage a.u.")
    plt.xlim(0, 20)

.. image-sg:: /auto_examples/images/sphx_glr_plot_1_example_BIDS_001.png
   :alt: Movement label, ECoG + STN-LFP time series
   :srcset: /auto_examples/images/sphx_glr_plot_1_example_BIDS_001.png
   :class: sphx-glr-single-img

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    (0.0, 20.0)

.. GENERATED FROM PYTHON SOURCE LINES 79-95

.. code-block:: Python

    settings = nm.NMSettings.get_fast_compute()

    settings.features.welch = True
    settings.features.fft = True
    settings.features.bursts = True
    settings.features.sharpwave_analysis = True
    settings.features.coherence = True

    settings.coherence_settings.channels = [["LFP_RIGHT_0", "ECOG_RIGHT_0"]]
    settings.coherence_settings.frequency_bands = ["high beta", "low gamma"]

    settings.sharpwave_analysis_settings.estimator["mean"] = []
    settings.sharpwave_analysis_settings.sharpwave_features.enable_all()
    for sw_feature in settings.sharpwave_analysis_settings.sharpwave_features.list_all():
        settings.sharpwave_analysis_settings.estimator["mean"].append(sw_feature)

.. GENERATED FROM PYTHON SOURCE LINES 96-106

.. code-block:: Python

    stream = nm.Stream(
        sfreq=sfreq,
        channels=channels,
        settings=settings,
        line_noise=line_noise,
        coord_list=coord_list,
        coord_names=coord_names,
        verbose=True,
    )

.. GENERATED FROM PYTHON SOURCE LINES 107-114

.. code-block:: Python

    features = stream.run(
        data=data,
        out_dir=PATH_OUT,
        experiment_name=RUN_NAME,
        save_csv=True,
    )

.. GENERATED FROM PYTHON SOURCE LINES 115-118

Feature Analysis Movement
-------------------------
The computed features can now be read and visualized using the
:class:`analysis.FeatureReader` class.

.. GENERATED FROM PYTHON SOURCE LINES 118-127

.. code-block:: Python

    # initialize analyzer
    feature_reader = nm.analysis.FeatureReader(
        feature_dir=PATH_OUT,
        feature_file=RUN_NAME,
    )
    feature_reader.label_name = "MOV_RIGHT"
    feature_reader.label = feature_reader.feature_arr["MOV_RIGHT"]

.. GENERATED FROM PYTHON SOURCE LINES 128-130

.. code-block:: Python

    feature_reader.feature_arr.iloc[100:108, -6:]

.. rst-class:: sphx-glr-script-out

.. code-block:: none
         ECOG_RIGHT_5_avgref_bursts_low_gamma_amplitude_mean  ECOG_RIGHT_5_avgref_bursts_low_gamma_amplitude_max  ECOG_RIGHT_5_avgref_bursts_low_gamma_burst_rate_per_s  ECOG_RIGHT_5_avgref_bursts_low_gamma_in_burst   time  MOV_RIGHT
    100                                            -0.588752                                           -1.065146                                              -1.597785                                      -0.793116  11000  -0.150107
    101                                            -0.684612                                           -1.058021                                              -1.440749                                      -0.786796  11100  -0.280238
    102                                            -0.369665                                           -0.399318                                              -0.784263                                       1.254990  11200  -0.305062
    103                                            -0.864353                                           -1.078232                                              -2.376770                                      -0.790569  11300  -0.312851
    104                                            -0.785546                                           -1.094515                                              -1.881985                                      -0.784465  11400  -0.305154
    105                                            -0.849785                                           -0.987497                                              -2.058529                                       1.259113  11500  -0.305200
    106                                            -1.095902                                           -1.288603                                              -2.455767                                      -0.788170  11600  -0.311440
    107                                            -1.048095                                           -1.244034                                              -2.067352                                      -0.782266  11700  -0.306840

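The burst columns shown above (mean amplitude, maximum amplitude, burst rate, and an
in-burst flag) follow a common definition: a burst is a stretch of time where the
band-limited amplitude envelope exceeds a threshold. The sketch below illustrates that
idea only; it is *not* py_neuromodulation's actual implementation, and the filter
order, band edges, and percentile threshold are assumptions. Note also that the table
values are normalized by the pipeline, while this sketch returns raw values.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def burst_features(sig, sfreq, band=(60.0, 80.0), thresh_percentile=75):
    """Toy burst metrics: threshold the band-limited amplitude envelope."""
    nyq = sfreq / 2
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    envelope = np.abs(hilbert(filtfilt(b, a, sig)))  # analytic amplitude
    threshold = np.percentile(envelope, thresh_percentile)
    in_burst = envelope > threshold
    # a "burst" starts at each rising edge of the thresholded envelope
    n_bursts = int(np.sum(np.diff(in_burst.astype(int)) == 1))
    return {
        "amplitude_mean": float(envelope[in_burst].mean()),
        "amplitude_max": float(envelope.max()),
        "burst_rate_per_s": n_bursts / (len(sig) / sfreq),
        "in_burst": bool(in_burst[-1]),
    }

rng = np.random.default_rng(0)
sig = rng.standard_normal(2000)  # 2 s of surrogate data at 1 kHz
feats = burst_features(sig, sfreq=1000.0)
```

Applied per channel and per frequency band over a sliding window, metrics of this
kind yield one feature column per statistic, matching the naming pattern in the
table.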
.. GENERATED FROM PYTHON SOURCE LINES 131-133

.. code-block:: Python

    print(feature_reader.feature_arr.shape)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    (181, 570)

.. GENERATED FROM PYTHON SOURCE LINES 134-136

.. code-block:: Python

    feature_reader._get_target_ch()

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    'MOV_RIGHT'

.. GENERATED FROM PYTHON SOURCE LINES 137-147

.. code-block:: Python

    feature_reader.plot_target_averaged_channel(
        ch="ECOG_RIGHT_0",
        list_feature_keywords=None,
        epoch_len=4,
        threshold=0.5,
        ytick_labelsize=7,
        figsize_x=12,
        figsize_y=12,
    )

.. image-sg:: /auto_examples/images/sphx_glr_plot_1_example_BIDS_002.png
   :alt: Movement aligned features channel: ECOG_RIGHT_0, MOV_RIGHT
   :srcset: /auto_examples/images/sphx_glr_plot_1_example_BIDS_002.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 148-159

.. code-block:: Python

    feature_reader.plot_all_features(
        ytick_labelsize=6,
        clim_low=-2,
        clim_high=2,
        ch_used="ECOG_RIGHT_0",
        time_limit_low_s=0,
        time_limit_high_s=20,
        normalize=True,
        save=True,
    )

.. image-sg:: /auto_examples/images/sphx_glr_plot_1_example_BIDS_003.png
   :alt: Feature Plot sub-testsub_ses-EphysMedOff_task-gripforce_run-0
   :srcset: /auto_examples/images/sphx_glr_plot_1_example_BIDS_003.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 160-171

.. code-block:: Python

    nm.analysis.plot_corr_matrix(
        feature=feature_reader.feature_arr.filter(regex="ECOG_RIGHT_0"),
        ch_name="ECOG_RIGHT_0_avgref",
        feature_names=list(
            feature_reader.feature_arr.filter(regex="ECOG_RIGHT_0_avgref").columns
        ),
        feature_file=feature_reader.feature_file,
        show_plot=True,
        figsize=(15, 15),
    )

.. image-sg:: /auto_examples/images/sphx_glr_plot_1_example_BIDS_004.png
   :alt: Correlation matrix features channel: ECOG_RIGHT_0_avgref
   :srcset: /auto_examples/images/sphx_glr_plot_1_example_BIDS_004.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 172-185

Decoding
--------

The main focus of the *py_neuromodulation* pipeline is feature estimation.
Nevertheless, the pipeline can also be used for machine learning decoding.
It supports regression and classification problems, as well as dimensionality
reduction methods such as PCA and CCA.

Here, we show an example using a scikit-learn linear regression model. The
labels come from the continuous grip-force movement target named "MOV_RIGHT".

First we initialize the :class:`~decode.Decoder` class, which takes the
specified *validation method* (here a simple 3-fold cross-validation), the
evaluation metric, the machine learning model, and the channels we want to
evaluate performances for.

There are many more implemented methods, but here we limit the example to the
ones presented.

.. GENERATED FROM PYTHON SOURCE LINES 185-198

.. code-block:: Python

    model = linear_model.LinearRegression()

    feature_reader.decoder = nm.analysis.Decoder(
        features=feature_reader.feature_arr,
        label=feature_reader.label,
        label_name=feature_reader.label_name,
        used_chs=feature_reader.used_chs,
        model=model,
        eval_method=metrics.r2_score,
        cv_method=model_selection.KFold(n_splits=3, shuffle=True),
    )

.. GENERATED FROM PYTHON SOURCE LINES 199-206

.. code-block:: Python

    performances = feature_reader.run_ML_model(
        estimate_channels=True,
        estimate_gridpoints=False,
        estimate_all_channels_combined=True,
        save_results=True,
    )

.. GENERATED FROM PYTHON SOURCE LINES 207-208

The performances are a dictionary that can be transformed into a DataFrame:

.. GENERATED FROM PYTHON SOURCE LINES 208-213

.. code-block:: Python

    df_per = feature_reader.get_dataframe_performances(performances)

    df_per

.. rst-class:: sphx-glr-script-out

.. code-block:: none
       performance_test  performance_train      sub                       ch           ch_type
    0          0.143489           0.835419  testsub  LFP_RIGHT_0_LFP_RIGHT_2      electrode ch
    1          0.022372           0.677681  testsub  LFP_RIGHT_1_LFP_RIGHT_0      electrode ch
    2          0.000000           0.658683  testsub  LFP_RIGHT_2_LFP_RIGHT_1      electrode ch
    3          0.000000           0.752291  testsub      ECOG_RIGHT_0_avgref      electrode ch
    4          0.000000           0.761981  testsub      ECOG_RIGHT_1_avgref      electrode ch
    5          0.000000           0.788443  testsub      ECOG_RIGHT_2_avgref      electrode ch
    6          0.080584           0.772870  testsub      ECOG_RIGHT_3_avgref      electrode ch
    7          0.067392           0.748681  testsub      ECOG_RIGHT_4_avgref      electrode ch
    8          0.010491           0.755723  testsub      ECOG_RIGHT_5_avgref      electrode ch
    9          0.544443           1.000000  testsub          all_ch_combined  all ch combinded

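The large gap between train and test scores above is expected for 3-fold
cross-validation on only 181 samples with 570 features. The evaluation loop
itself can be sketched with plain scikit-learn; the feature matrix and target
below are random stand-in data, not the actual recordings, and the shapes and
seed are illustrative assumptions:

```python
import numpy as np
from sklearn import linear_model, metrics, model_selection

rng = np.random.default_rng(42)
X = rng.standard_normal((181, 24))                  # stand-in feature matrix
y = 0.5 * X[:, 0] + 0.1 * rng.standard_normal(181)  # stand-in continuous target

model = linear_model.LinearRegression()
cv = model_selection.KFold(n_splits=3, shuffle=True, random_state=0)

# fit on each training fold, score R^2 on the held-out fold
scores = []
for train_idx, test_idx in cv.split(X):
    model.fit(X[train_idx], y[train_idx])
    scores.append(metrics.r2_score(y[test_idx], model.predict(X[test_idx])))

mean_r2 = float(np.mean(scores))
```

This mirrors the `cv_method` and `eval_method` arguments passed to the Decoder
above; swapping in another sklearn model or metric changes only those two lines.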
.. GENERATED FROM PYTHON SOURCE LINES 214-226

.. code-block:: Python

    ax = nm.analysis.plot_df_subjects(
        df_per,
        x_col="sub",
        y_col="performance_test",
        hue="ch_type",
        PATH_SAVE=PATH_OUT / RUN_NAME / (RUN_NAME + "_decoding_performance.png"),
        figsize_tuple=(8, 5),
    )
    ax.set_ylabel(r"$R^2$ Correlation")
    ax.set_xlabel("Subject 000")
    ax.set_title("Performance comparison Movement decoding")
    plt.tight_layout()

.. image-sg:: /auto_examples/images/sphx_glr_plot_1_example_BIDS_005.png
   :alt: Performance comparison Movement decoding
   :srcset: /auto_examples/images/sphx_glr_plot_1_example_BIDS_005.png
   :class: sphx-glr-single-img

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (0 minutes 9.470 seconds)

.. _sphx_glr_download_auto_examples_plot_1_example_BIDS.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_1_example_BIDS.ipynb `

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_1_example_BIDS.py `

    .. container:: sphx-glr-download sphx-glr-download-zip

      :download:`Download zipped: plot_1_example_BIDS.zip `

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery `_