{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Spike Analysis Tutorial\n", "\n", "This tutorial covers working with spike-sorted neural data and integrating it with EEG features in NeuRodent.\n", "\n", "## Overview\n", "\n", "A Spike Analysis Result (SAR) integrates spike-sorted single-unit data with continuous EEG analysis:\n", "\n", "1. Load spike-sorted data from MountainSort or other spike sorters\n", "2. Compute peri-event time histograms (PETHs)\n", "3. Analyze spike-LFP relationships\n", "4. Integrate with Windowed Analysis Results (WAR) for combined analysis" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from pathlib import Path\n", "import logging\n", "\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "\n", "from neurodent import core, visualization\n", "\n", "logging.basicConfig(level=logging.INFO)\n", "logger = logging.getLogger(__name__)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. Loading Spike-Sorted Data\n", "\n", "NeuRodent works with spike-sorted data from various spike sorters via SpikeInterface:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Point at the spike-sorted recording for one animal\n", "data_path = Path(\"/path/to/spike/data\")\n", "animal_id = \"animal_001\"\n", "\n", "lro = core.LongRecordingOrganizer(\n", " base_folder=data_path,\n", " animal_id=animal_id,\n", " mode=\"bin\"\n", ")\n", "\n", "ao = visualization.AnimalOrganizer(lro)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. 
MountainSort Analysis\n", "\n", "If using MountainSort for spike sorting:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create MountainSort analyzer\n", "msa = core.MountainSortAnalyzer(\n", " lro=lro,\n", " output_folder=Path(\"./mountainsort_output\")\n", ")\n", "\n", "# Run spike sorting (if not already done)\n", "# msa.run_mountainsort()\n", "\n", "# Load sorting results\n", "sorting = msa.load_sorting()\n", "print(f\"Found {len(sorting.get_unit_ids())} units\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. Computing Spike Analysis\n", "\n", "Compute spike-related features for each recording session:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Compute spike analysis\n", "sar = ao.compute_spike_analysis(\n", " multiprocess_mode='serial'\n", ")\n", "\n", "print(f\"Spike analysis computed for {len(sar)} sessions\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. Converting to MNE Format\n", "\n", "Each SAR can be converted to MNE's Raw format, opening up the wider MNE ecosystem for analysis:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Convert each SAR to MNE format\n", "for sar_session in sar:\n", " # Convert to MNE Raw object\n", " sar_session.convert_to_mne(\n", " chunk_len=1440 # Chunk length in minutes (1440 = 24 hours)\n", " )\n", " \n", " # Save as FIF format\n", " output_path = Path(\"./mne_output\") / sar_session.animal_day\n", " sar_session.save_fif_and_json(output_path)\n", " \n", "print(\"Converted to MNE format\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5. 
Loading Saved SAR\n", "\n", "Load previously saved spike analysis:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Load SAR from MNE format\n", "sar_path = Path(\"./mne_output/animal_001_2023-12-15\")\n", "sar_loaded = visualization.SpikeAnalysisResult.load_fif_and_json(sar_path)\n", "\n", "# Access MNE object\n", "mne_obj = sar_loaded.result_mne\n", "print(f\"Loaded MNE object: {mne_obj}\")\n", "print(f\"Channels: {mne_obj.ch_names}\")\n", "print(f\"Sampling rate: {mne_obj.info['sfreq']} Hz\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 6. Peri-Event Analysis with MNE\n", "\n", "Analyze EEG around spike times:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import mne\n", "\n", "# Extract events from annotations (spike times)\n", "events, event_dict = mne.events_from_annotations(mne_obj)\n", "\n", "print(f\"Found {len(events)} events\")\n", "print(f\"Event types: {list(event_dict.keys())}\")\n", "\n", "# Create epochs around spike times\n", "epochs = mne.Epochs(\n", " mne_obj,\n", " events,\n", " event_id=event_dict,\n", " tmin=-0.5, # 500 ms before spike\n", " tmax=1.0, # 1 s after spike\n", " baseline=(-0.5, -0.1),\n", " preload=True\n", ")\n", "\n", "print(f\"Created {len(epochs)} epochs\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 7. Spike-Triggered Averages\n", "\n", "Compute the spike-triggered average of the LFP:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Compute evoked response (spike-triggered average)\n", "evoked = epochs.average()\n", "\n", "# Plot spike-triggered average\n", "fig = evoked.plot(spatial_colors=True, gfp=True)\n", "# Title the whole figure rather than whichever axes happens to be current\n", "fig.suptitle(\"Spike-Triggered Average\")\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 8. 
Time-Frequency Analysis\n", "\n", "Analyze oscillations around spike times:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Compute time-frequency representation\n", "freqs = np.arange(1, 50, 1) # 1-50 Hz\n", "n_cycles = freqs / 2 # Number of cycles increases with frequency\n", "\n", "# Compute TFR using Morlet wavelets\n", "power = epochs.compute_tfr(\n", " method='morlet',\n", " freqs=freqs,\n", " n_cycles=n_cycles,\n", " use_fft=True,\n", " average=True\n", ")\n", "\n", "# Plot TFR\n", "power.plot(\n", " picks='eeg',\n", " baseline=(-0.5, -0.1),\n", " mode='percent',\n", " title=\"Peri-Spike Time-Frequency\"\n", ")\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 9. Spike-LFP Phase Locking\n", "\n", "Analyze phase relationships between spikes and LFP oscillations:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Filter for specific frequency band (e.g., theta: 4-8 Hz)\n", "epochs_filtered = epochs.copy().filter(\n", " l_freq=4,\n", " h_freq=8,\n", " method='iir'\n", ")\n", "\n", "# Apply Hilbert transform to get instantaneous phase\n", "epochs_hilbert = epochs_filtered.apply_hilbert()\n", "\n", "# Extract phase at spike time (t=0)\n", "spike_phases = np.angle(epochs_hilbert.get_data()[:, :, epochs.time_as_index(0)[0]])\n", "\n", "# Plot phase distribution\n", "plt.figure(figsize=(10, 5))\n", "plt.subplot(1, 2, 1)\n", "plt.hist(spike_phases.flatten(), bins=50, density=True)\n", "plt.xlabel(\"Phase (radians)\")\n", "plt.ylabel(\"Density\")\n", "plt.title(\"Spike Phase Distribution\")\n", "\n", "# Polar plot\n", "plt.subplot(1, 2, 2, projection='polar')\n", "plt.hist(spike_phases.flatten(), bins=50, density=True)\n", "plt.title(\"Spike Phase (Polar)\")\n", "plt.tight_layout()\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 10. 
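Quantifying Phase Locking\n", "\n", "The histograms above are only qualitative. As a rough summary statistic (a generic circular-statistics sketch, not part of the NeuRodent API), the phase-locking value (PLV) is the length of the mean resultant vector of the spike phases: values near 0 indicate uniformly distributed phases, values near 1 indicate tight phase locking. This reuses the spike_phases array computed in the previous section:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Mean resultant vector of the spike phases (pooled across epochs and channels)\n", "resultant = np.mean(np.exp(1j * spike_phases.flatten()))\n", "\n", "# PLV: 0 = uniform phase distribution, 1 = perfect phase locking\n", "plv = np.abs(resultant)\n", "preferred_phase = np.angle(resultant)\n", "\n", "print(f\"PLV: {plv:.3f}\")\n", "print(f\"Preferred phase: {preferred_phase:.2f} rad\")\n", "\n", "# Note: with small spike counts the PLV is biased upward; consider comparing\n", "# against a shuffled-phase baseline before interpreting modest values." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 11. 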
Integrating with WAR\n", "\n", "Combine spike analysis with windowed EEG analysis:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Compute both WAR and SAR\n", "war = ao.compute_windowed_analysis(\n", " features=['rms', 'psdband', 'nspike'], # Include spike count\n", " multiprocess_mode='serial'\n", ")\n", "\n", "# Access spike counts from WAR\n", "spike_counts = war.nspike\n", "print(f\"Spike count data shape: {spike_counts.shape}\")\n", "\n", "# Plot spike counts over time\n", "ap = visualization.AnimalPlotter(war)\n", "fig = ap.plot_feature_over_time('nspike')\n", "plt.title(\"Spike Counts Over Time\")\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Summary\n", "\n", "This tutorial covered:\n", "\n", "1. Loading spike-sorted data\n", "2. Computing spike analysis (SAR)\n", "3. Converting to MNE format\n", "4. Peri-event analysis\n", "5. Spike-triggered averages\n", "6. Time-frequency analysis around spikes\n", "7. Spike-LFP phase locking\n", "8. Integrating spikes with windowed analysis\n", "\n", "## Next Steps\n", "\n", "- **[Visualization Tutorial](visualization.ipynb)**: Advanced plotting for spike data\n", "- **[Windowed Analysis Tutorial](windowed_analysis.ipynb)**: Combine spike counts with other features" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.0" } }, "nbformat": 4, "nbformat_minor": 4 }