.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/plot_6_real_time_demo.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_plot_6_real_time_demo.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_plot_6_real_time_demo.py:

Real-time feature estimation
============================

.. GENERATED FROM PYTHON SOURCE LINES 8-46

Implementation of individual nm_streams
---------------------------------------

*py_neuromodulation* is optimized for the computation of real-time data
streams. Hardware acquisition systems, however, are center- and lab-specific.
Each experiment therefore requires a module that interacts with the hardware
platform and periodically acquires data. Given the raw data,
*py_neuromodulation* can then apply preprocessing methods such as
re-referencing and normalization, compute features, and run decoding in real
time.

For online as well as offline analysis, the :class:`~nm_stream_abc` class
needs to be instantiated, with `nm_settings` and `channels` defined.
Previously, for the offline analysis, an offline :class:`~nm_generator`
object was defined that periodically yielded data. For online data, the
:meth:`~stream_abc.run` method therefore needs to be overwritten so that it
first acquires data and then calls the :meth:`~run_analysis.process` method.
The following illustrates in pseudo-code how such a stream could be
initialized:

.. code-block:: python

    from py_neuromodulation import nm_stream_abc

    class MyStream(nm_stream_abc):
        def __init__(self, settings, channels):
            super().__init__(settings, channels)

        def run(self):
            features_ = []
            while True:
                data = self.acquire_data()
                features_.append(self.run_analysis.process(data))
                # potentially use a machine learning model for decoding

Computation time examples
-------------------------

The following example computes FFT features for a single channel and for six
channels, with CAR re-referencing and z-score normalization, and measures the
resulting computation time:

.. GENERATED FROM PYTHON SOURCE LINES 48-103

.. code-block:: Python

    import py_neuromodulation as nm
    from py_neuromodulation import NMSettings
    import numpy as np
    import timeit


    def get_fast_compute_settings():
        settings = NMSettings.get_fast_compute()

        settings.preprocessing = ["re_referencing", "notch_filter"]
        settings.features.fft = True
        settings.postprocessing.feature_normalization = True
        return settings


    data = np.random.random([1, 1000])

    print("FFT Features, CAR re-referencing, z-score normalization")
    print()
    print("Computation time for single ECoG channel: ")
    stream = nm.Stream(
        sfreq=1000,
        data=data,
        sampling_rate_features_hz=10,
        verbose=False,
        settings=get_fast_compute_settings(),
    )
    print(
        f"{np.round(timeit.timeit(lambda: stream.data_processor.process(data), number=100)/100, 3)} s"
    )

    print("Computation time for 6 ECoG channels: ")
    data = np.random.random([6, 1000])
    stream = nm.Stream(
        sfreq=1000,
        data=data,
        sampling_rate_features_hz=10,
        verbose=False,
        settings=get_fast_compute_settings(),
    )
    print(
        f"{np.round(timeit.timeit(lambda: stream.data_processor.process(data), number=100)/100, 3)} s"
    )

    print(
        "\nFFT Features & Temporal Waveform Shape & Hjorth & Bursts, CAR re-referencing, z-score normalization"
    )
    print("Computation time for single ECoG channel: ")
    data = np.random.random([1, 1000])
    stream = nm.Stream(sfreq=1000, data=data, sampling_rate_features_hz=10, verbose=False)
    print(
        f"{np.round(timeit.timeit(lambda: stream.data_processor.process(data), number=10)/10, 3)} s"
    )
.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    FFT Features, CAR re-referencing, z-score normalization

    Computation time for single ECoG channel:
    0.001 s
    Computation time for 6 ECoG channels:
    0.001 s

    FFT Features & Temporal Waveform Shape & Hjorth & Bursts, CAR re-referencing, z-score normalization
    Computation time for single ECoG channel:
    0.003 s

.. GENERATED FROM PYTHON SOURCE LINES 104-120

These results show that the computation time for a typical pipeline (FFT,
re-referencing, notch filtering, feature normalization) is well below 10 ms,
which is fast enough for real-time analysis at feature sampling rates below
100 Hz. Even computation of more complex features still allows feature
sampling rates of more than 30 Hz.

Real-time movement decoding using the TMSi-SAGA amplifier
---------------------------------------------------------

In the following example, we show how we set up a real-time movement decoding
experiment using the TMSi-SAGA amplifier. We relied on several software
modules for data streaming and visualization. LabStreamingLayer allows for
real-time data streaming and synchronization across multiple devices. We used
timeflux for real-time visualization of features and of the decoded output,
and Brain Streaming Layer for raw data visualization. The code for real-time
movement decoding is added in the ``realtime_decoding`` GitHub branch. Here
we relied on the TMSI SAGA Python interface.
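The stream pattern sketched earlier can be mocked end-to-end without any
hardware. The following is a minimal, self-contained sketch: a simulated
acquisition function stands in for the amplifier or LSL inlet, and a plain
FFT-based feature function stands in for the *py_neuromodulation* pipeline.
All names here (``acquire_chunk``, ``compute_features``, ``run``) are
illustrative stand-ins, not the actual package API:

.. code-block:: python

    import numpy as np


    def acquire_chunk(n_channels: int = 6, n_samples: int = 100) -> np.ndarray:
        """Stand-in for hardware/LSL acquisition: return one chunk of raw data."""
        return np.random.random([n_channels, n_samples])


    def compute_features(window: np.ndarray) -> np.ndarray:
        """Stand-in for the feature pipeline: mean FFT magnitude per channel."""
        return np.abs(np.fft.rfft(window, axis=1)).mean(axis=1)


    def run(n_iterations: int = 10) -> list:
        """Mock of an overwritten run(): acquire, update buffer, process."""
        sfreq, window_s = 1000, 1.0
        buffer = np.zeros([6, int(sfreq * window_s)])
        features_ = []
        for _ in range(n_iterations):
            chunk = acquire_chunk()
            # Ring-buffer update: drop the oldest samples, append the new chunk.
            buffer = np.concatenate([buffer[:, chunk.shape[1]:], chunk], axis=1)
            features_.append(compute_features(buffer))
            # A decoding model could consume features_[-1] here.
        return features_


    features = run()

In a real experiment, the bounded loop would become a ``while True`` loop and
``acquire_chunk`` would block until the acquisition system delivers new
samples, so the feature sampling rate is set by the chunk rate rather than by
``n_iterations``.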
.. rst-class:: sphx-glr-timing

**Total running time of the script:** (0 minutes 0.329 seconds)

.. _sphx_glr_download_auto_examples_plot_6_real_time_demo.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_6_real_time_demo.ipynb <plot_6_real_time_demo.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_6_real_time_demo.py <plot_6_real_time_demo.py>`

    .. container:: sphx-glr-download sphx-glr-download-zip

      :download:`Download zipped: plot_6_real_time_demo.zip <plot_6_real_time_demo.zip>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_