Windowed Analysis Tutorial#
This tutorial provides a deep dive into Neurodent’s windowed analysis capabilities for extracting features from continuous EEG data.
Overview#
The WindowAnalysisResult (WAR) is the core feature extraction system in Neurodent. It:
Divides continuous EEG data into time windows
Computes features for each window
Aggregates results across time and channels
Provides filtering and quality control methods
This approach is efficient for long recordings and enables parallel processing.
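The windowing idea itself can be sketched with plain NumPy (a hypothetical illustration, not Neurodent's internal code): split the continuous signal into fixed-length windows, then compute one feature value per window.

```python
import numpy as np

# Hypothetical illustration (not Neurodent internals): split a continuous
# signal into fixed-length windows and compute one feature (here RMS)
# per window.
fs = 1000                                   # sampling rate, Hz
window_s = 4                                # window length, s
rng = np.random.default_rng(0)
signal = rng.normal(size=fs * 20)           # 20 s of synthetic data

samples_per_window = fs * window_s
n_windows = len(signal) // samples_per_window
windows = signal[: n_windows * samples_per_window].reshape(n_windows, -1)
rms_per_window = np.sqrt((windows ** 2).mean(axis=1))
print(rms_per_window.shape)                 # one RMS value per window
```

Because each window is independent, the per-window computation parallelizes naturally, which is what the multiprocess modes below exploit.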
import sys
from pathlib import Path
import logging
from datetime import datetime
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from neurodent import core, visualization, constants
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger()
1. Feature Categories#
Neurodent extracts four main categories of features:
Linear Features (per channel)#
Single-value metrics for each channel in each time window:
# Available linear features
print("Linear features:")
for feature in constants.LINEAR_FEATURES:
    print(f" - {feature}")
# Examples:
# - rms: Root mean square amplitude
# - logrms: Log of RMS amplitude
# - ampvar: Amplitude variance
# - psdtotal: Total power spectral density
# - nspike: Number of detected spikes
Linear features:
- rms
- ampvar
- psdtotal
- nspike
- logrms
- logampvar
- logpsdtotal
- lognspike
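The log* variants are log-transformed versions of their base features. A minimal NumPy sketch (hypothetical values, natural log assumed purely for illustration) of why this helps: the log compresses right-skewed amplitude values so they behave better in downstream statistics.

```python
import numpy as np

# Hypothetical per-window RMS values in µV (natural log assumed here
# for illustration only).
rms = np.array([10.0, 50.0, 500.0])
logrms = np.log(rms)
print(logrms.round(2))   # a 50x spread collapses to a ~2.7x spread
```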
Band Features (per frequency band)#
Features computed for each frequency band (delta, theta, alpha, beta, gamma):
# Available band features
print("\nBand features:")
for feature in constants.BAND_FEATURES:
print(f" - {feature}")
# Frequency bands
print("\nFrequency bands:")
for band, (lo, hi) in constants.FREQ_BANDS.items():
print(f" {band.capitalize()}: {lo}-{hi} Hz")
Band features:
- psdband
- psdfrac
- logpsdband
- logpsdfrac
Frequency bands:
Delta: 1-4 Hz
Theta: 4-8 Hz
Alpha: 8-13 Hz
Beta: 13-25 Hz
Gamma: 25-40 Hz
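Band power extraction can be sketched with SciPy (an illustration under assumed parameters, not the Neurodent implementation): estimate the PSD with Welch's method, then integrate it within each band, which is the idea behind psdband; psdfrac would correspond to each band's share of the total.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.signal import welch

# Illustration only (not the Neurodent implementation): Welch PSD,
# then integration over each frequency band.
fs = 1000
rng = np.random.default_rng(0)
x = rng.normal(size=fs * 10)                # 10 s of synthetic signal

freqs, psd = welch(x, fs=fs, nperseg=fs)    # 1 Hz frequency resolution
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 25), "gamma": (25, 40)}

band_power = {}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs <= hi)
    band_power[name] = trapezoid(psd[mask], freqs[mask])  # absolute power

total = sum(band_power.values())
band_frac = {name: p / total for name, p in band_power.items()}  # cf. psdfrac
print({name: round(f, 3) for name, f in band_frac.items()})
```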
Matrix Features (connectivity)#
Features measuring relationships between channels:
# Available matrix features
print("\nMatrix features:")
for feature in constants.MATRIX_FEATURES:
print(f" - {feature}")
# Examples:
# - cohere: Spectral coherence between channel pairs
# - pcorr: Pearson correlation between channels
Matrix features:
- cohere
- zcohere
- imcoh
- zimcoh
- pcorr
- zpcorr
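As a rough illustration of the connectivity idea (plain NumPy, not the Neurodent implementation): pcorr corresponds to a channel-by-channel Pearson correlation matrix, and the z* variants apply a Fisher z-transform (arctanh) so values are closer to normally distributed. Clipping before the transform keeps the self-correlation of 1.0 finite, consistent with the ~7.25 diagonal values that appear in the results table later in this tutorial.

```python
import numpy as np

# Illustration of the idea behind pcorr/zpcorr (not Neurodent internals):
# Pearson correlation between channels, then a Fisher z-transform with
# clipping so the diagonal stays finite.
rng = np.random.default_rng(0)
data = rng.normal(size=(4, 5000))           # 4 channels x 5000 samples
pcorr = np.corrcoef(data)                   # (4, 4), ones on the diagonal
zpcorr = np.arctanh(np.clip(pcorr, -0.999999, 0.999999))
print(pcorr.shape, round(float(zpcorr[0, 0]), 3))
```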
2. Computing Windowed Analysis#
Basic Usage#
# Using included test data
data_path = Path("../../.tests/integration/data/A10/A10_recording.edf")
animal_id = "A10"
# Create LongRecordingOrganizer with SpikeInterface mode
# mode options: 'si' (SpikeInterface), 'mne', or None
lro = core.LongRecordingOrganizer(
item=data_path,
mode="si",
extract_func="read_edf",
manual_datetimes=datetime(2023, 12, 13),
)
# Create AnimalOrganizer using pattern-based discovery
ao = visualization.AnimalOrganizer(
pattern="../../.tests/integration/data/{animal}/*.edf",
animal_id=animal_id,
assume_from_number=True,
lro_kwargs={
"mode": "si",
"extract_func": "read_edf",
"manual_datetimes": datetime(2023, 12, 13),
},
)
# Compute all features
war_all = ao.compute_windowed_analysis(
features=['all'],
exclude=['nspike', 'lognspike'], # Exclude spike features if no spikes
multiprocess_mode='serial'
)
INFO:root:Applying scale_to_uV to convert raw ADC data to microvolts
INFO:root:Recording already at target sampling rate (1000 Hz) or unable to determine, no resampling needed
INFO:root:Finalizing file timestamps
INFO:root:Using manual timestamps: 1 file end times specified
INFO:root:Processing manual_datetimes configuration
INFO:root:Processing global manual datetimes starting at 2023-12-13 00:00:00
INFO:root:Computing continuous timeline for 1 animaldays (1 total items) starting at 2023-12-13 00:00:00
INFO:root:Ordered items for timeline: ['A10_recording.edf']
INFO:root:Applying scale_to_uV to convert raw ADC data to microvolts
INFO:root:Recording already at target sampling rate (1000 Hz) or unable to determine, no resampling needed
INFO:root:Finalizing file timestamps
INFO:root:Using manual timestamps: 1 file end times specified
INFO:root:Item A10_recording.edf: duration = 5.0s (loaded with manual timestamp)
INFO:root:Timeline computed: 1 items, total duration 5.0s
INFO:root:Applying scale_to_uV to convert raw ADC data to microvolts
INFO:root:Recording already at target sampling rate (1000 Hz) or unable to determine, no resampling needed
INFO:root:Finalizing file timestamps
INFO:root:Using manual timestamps: 1 file end times specified
INFO:root:AnimalOrganizer Timeline Summary:
LRO 0: 2023-12-13 00:00:00 -> 2023-12-13 00:00:05 (duration: 5.0s, items: 1, item: A10_recording.edf)
INFO:root:Computing windowed analysis for A10_recording.edf
Processing rows: 100%|██████████| 1/1 [00:00<00:00, 5.44it/s]
WARNING:root:Missing LOF scores for A10_unknown! LOF computation may have failed or compute_bad_channels() was not called for this LRO.
INFO:root:Total LOF scores collected: 0 animal days
WARNING:root:WARNING: 1 animalday(s) are missing LOF scores: ['A10_unknown']. Expected 1 but got 0. These sessions will be auto-populated with empty placeholders and excluded from LOF-based analysis.
/home/runner/work/neurodent/neurodent/src/neurodent/visualization/results.py:1384: UserWarning: WARNING: 1 animalday(s) are missing LOF scores: ['A10_unknown']. Expected 1 but got 0. These sessions will be auto-populated with empty placeholders and excluded from LOF-based analysis.
warnings.warn(warning_msg)
INFO:root:Added missing animalday to bad_channels_dict: A10 Unknown Dec-13-2023
WARNING:root:Added missing animalday to lof_scores_dict: A10 Unknown Dec-13-2023. This indicates LOF scores were not computed for this session. It will be excluded from LOF-based analysis.
/home/runner/work/neurodent/neurodent/src/neurodent/visualization/results.py:2130: UserWarning: One or more channels do not match name aliases. Assuming alias from number in channel name.
core.parse_chname_to_abbrev(x, assume_from_number=self.assume_from_number)
INFO:root:Channel names: ['C-009', 'C-010', 'C-012', 'C-014', 'C-015', 'C-016', 'C-017', 'C-019', 'C-021', 'C-022']
INFO:root:Channel abbreviations: ['LAud', 'LVis', 'LHip', 'LBar', 'LMot', 'RMot', 'RBar', 'RHip', 'RVis', 'RAud']
Access information in the WindowAnalysisResult object#
You can access a summary of the WindowAnalysisResult object with the get_info method, and retrieve its computed features with the get_result method.
war_info = war_all.get_info()
print(war_info)
result = war_all.get_result(
features=["all"],
exclude=['nspike', 'lognspike']
)
display(result)
feature names: rms, ampvar, psdtotal, logrms, logampvar, logpsdtotal, psdslope, psdband, psdfrac, logpsdband, logpsdfrac, cohere, zcohere, imcoh, zimcoh, pcorr, zpcorr, psd
animaldays: A10 Unknown Dec-13-2023
animal_id: A10
genotype: Unknown
sex: Unknown
channel_names: C-009, C-010, C-012, C-014, C-015, C-016, C-017, C-019, C-021, C-022
| | animalday | animal | day | genotype | sex | duration | endfile | timestamp | isday | rms | ... | psdfrac | logpsdband | logpsdfrac | cohere | zcohere | imcoh | zimcoh | pcorr | zpcorr | psd |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | A10 Unknown Dec-13-2023 | A10 | Dec-13-2023 | Unknown | Unknown | 5.0 | None | 2023-12-13 | False | [38.504265, 38.390694, 38.427185, 38.452934, 3... | ... | {'delta': [0.8488484144216835, 0.0720915818188... | {'delta': [7.050211939202073, 4.66124746203268... | {'delta': [0.6145629665789544, 0.0696114898007... | {'delta': [[1.0, 0.08394989122152391, 0.137669... | {'delta': [[7.254328619247669, 0.0841479440180... | {'delta': [[0.0, -0.0030291077770633348, 0.125... | {'delta': [[0.0, -0.0030291170416343414, 0.126... | [[1.0, 0.013654095743739356, 0.020008524721897... | [[7.254328619247669, 0.013654944369402369, 0.0... | ([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0,... |
1 rows × 27 columns
Selective Feature Computation#
For faster processing, compute only needed features:
# Compute specific features
war_selective = ao.compute_windowed_analysis(
features=['rms', 'logrms', 'psdband', 'cohere'],
multiprocess_mode='serial'
)
result = war_selective.get_result(
features=['rms', 'logrms', 'psdband', 'cohere']
)
print(f"Computed features: {list(result.keys())}")
Computed features: ['animalday', 'animal', 'day', 'genotype', 'sex', 'duration', 'endfile', 'timestamp', 'isday', 'rms', 'logrms', 'psdband', 'cohere']
INFO:root:Computing windowed analysis for A10_recording.edf
Processing rows: 100%|██████████| 1/1 [00:00<00:00, 29.81it/s]
WARNING:root:Missing LOF scores for A10_unknown! LOF computation may have failed or compute_bad_channels() was not called for this LRO.
INFO:root:Total LOF scores collected: 0 animal days
WARNING:root:WARNING: 1 animalday(s) are missing LOF scores: ['A10_unknown']. Expected 1 but got 0. These sessions will be auto-populated with empty placeholders and excluded from LOF-based analysis.
/home/runner/work/neurodent/neurodent/src/neurodent/visualization/results.py:1384: UserWarning: WARNING: 1 animalday(s) are missing LOF scores: ['A10_unknown']. Expected 1 but got 0. These sessions will be auto-populated with empty placeholders and excluded from LOF-based analysis.
warnings.warn(warning_msg)
INFO:root:Added missing animalday to bad_channels_dict: A10 Unknown Dec-13-2023
WARNING:root:Added missing animalday to lof_scores_dict: A10 Unknown Dec-13-2023. This indicates LOF scores were not computed for this session. It will be excluded from LOF-based analysis.
/home/runner/work/neurodent/neurodent/src/neurodent/visualization/results.py:2130: UserWarning: One or more channels do not match name aliases. Assuming alias from number in channel name.
core.parse_chname_to_abbrev(x, assume_from_number=self.assume_from_number)
INFO:root:Channel names: ['C-009', 'C-010', 'C-012', 'C-014', 'C-015', 'C-016', 'C-017', 'C-019', 'C-021', 'C-022']
INFO:root:Channel abbreviations: ['LAud', 'LVis', 'LHip', 'LBar', 'LMot', 'RMot', 'RBar', 'RHip', 'RVis', 'RAud']
Parallel Processing#
For large datasets, use parallel processing:
# Option 1: Multiprocessing (uses all CPU cores)
war_mp = ao.compute_windowed_analysis(
features=['rms', 'psdband'],
multiprocess_mode='multiprocess'
)
# Option 2: Dask (for distributed computing)
# Requires Dask cluster setup
war_dask = ao.compute_windowed_analysis(
features=['rms', 'psdband'],
multiprocess_mode='dask'
)
INFO:root:Computing windowed analysis for A10_recording.edf
Processing rows: 100%|██████████| 1/1 [00:00<00:00, 116.75it/s]
WARNING:root:Missing LOF scores for A10_unknown! LOF computation may have failed or compute_bad_channels() was not called for this LRO.
INFO:root:Total LOF scores collected: 0 animal days
WARNING:root:WARNING: 1 animalday(s) are missing LOF scores: ['A10_unknown']. Expected 1 but got 0. These sessions will be auto-populated with empty placeholders and excluded from LOF-based analysis.
/home/runner/work/neurodent/neurodent/src/neurodent/visualization/results.py:1384: UserWarning: WARNING: 1 animalday(s) are missing LOF scores: ['A10_unknown']. Expected 1 but got 0. These sessions will be auto-populated with empty placeholders and excluded from LOF-based analysis.
warnings.warn(warning_msg)
INFO:root:Added missing animalday to bad_channels_dict: A10 Unknown Dec-13-2023
WARNING:root:Added missing animalday to lof_scores_dict: A10 Unknown Dec-13-2023. This indicates LOF scores were not computed for this session. It will be excluded from LOF-based analysis.
/home/runner/work/neurodent/neurodent/src/neurodent/visualization/results.py:2130: UserWarning: One or more channels do not match name aliases. Assuming alias from number in channel name.
core.parse_chname_to_abbrev(x, assume_from_number=self.assume_from_number)
INFO:root:Channel names: ['C-009', 'C-010', 'C-012', 'C-014', 'C-015', 'C-016', 'C-017', 'C-019', 'C-021', 'C-022']
INFO:root:Channel abbreviations: ['LAud', 'LVis', 'LHip', 'LBar', 'LMot', 'RMot', 'RBar', 'RHip', 'RVis', 'RAud']
INFO:root:Computing windowed analysis for A10_recording.edf
WARNING:root:Missing LOF scores for A10_unknown! LOF computation may have failed or compute_bad_channels() was not called for this LRO.
INFO:root:Total LOF scores collected: 0 animal days
WARNING:root:WARNING: 1 animalday(s) are missing LOF scores: ['A10_unknown']. Expected 1 but got 0. These sessions will be auto-populated with empty placeholders and excluded from LOF-based analysis.
/home/runner/work/neurodent/neurodent/src/neurodent/visualization/results.py:1384: UserWarning: WARNING: 1 animalday(s) are missing LOF scores: ['A10_unknown']. Expected 1 but got 0. These sessions will be auto-populated with empty placeholders and excluded from LOF-based analysis.
warnings.warn(warning_msg)
INFO:root:Added missing animalday to bad_channels_dict: A10 Unknown Dec-13-2023
WARNING:root:Added missing animalday to lof_scores_dict: A10 Unknown Dec-13-2023. This indicates LOF scores were not computed for this session. It will be excluded from LOF-based analysis.
/home/runner/work/neurodent/neurodent/src/neurodent/visualization/results.py:2130: UserWarning: One or more channels do not match name aliases. Assuming alias from number in channel name.
core.parse_chname_to_abbrev(x, assume_from_number=self.assume_from_number)
INFO:root:Channel names: ['C-009', 'C-010', 'C-012', 'C-014', 'C-015', 'C-016', 'C-017', 'C-019', 'C-021', 'C-022']
INFO:root:Channel abbreviations: ['LAud', 'LVis', 'LHip', 'LBar', 'LMot', 'RMot', 'RBar', 'RHip', 'RVis', 'RAud']
3. Data Quality and Filtering#
Method Chaining (Recommended)#
Apply multiple filters in sequence:
war_filtered = (
war_all
.filter_logrms_range(z_range=3) # Remove outliers (±3 SD)
.filter_high_rms(max_rms=500) # Remove high amplitude artifacts
.filter_low_rms(min_rms=10) # Remove low amplitude periods
# .filter_high_beta(max_beta_prop=0.4) # Remove high beta activity
.filter_reject_channels_by_session() # Reject bad channels
.filter_morphological_smoothing(smoothing_seconds=4.0) # Smooth filter mask (fill short gaps, remove brief artifacts)
)
print("Filtering completed!")
Filtering completed!
INFO:root:logrms_range: filtered 0/10
/home/runner/work/neurodent/neurodent/src/neurodent/visualization/results.py:2130: UserWarning: One or more channels do not match name aliases. Assuming alias from number in channel name.
core.parse_chname_to_abbrev(x, assume_from_number=self.assume_from_number)
INFO:root:high_rms: filtered 0/10
/home/runner/work/neurodent/neurodent/src/neurodent/visualization/results.py:2130: UserWarning: One or more channels do not match name aliases. Assuming alias from number in channel name.
core.parse_chname_to_abbrev(x, assume_from_number=self.assume_from_number)
INFO:root:low_rms: filtered 0/10
/home/runner/work/neurodent/neurodent/src/neurodent/visualization/results.py:2130: UserWarning: One or more channels do not match name aliases. Assuming alias from number in channel name.
core.parse_chname_to_abbrev(x, assume_from_number=self.assume_from_number)
INFO:root:reject_channels_by_session: filtered 0/10
/home/runner/work/neurodent/neurodent/src/neurodent/visualization/results.py:2130: UserWarning: One or more channels do not match name aliases. Assuming alias from number in channel name.
core.parse_chname_to_abbrev(x, assume_from_number=self.assume_from_number)
/home/runner/work/neurodent/neurodent/src/neurodent/visualization/results.py:2130: UserWarning: One or more channels do not match name aliases. Assuming alias from number in channel name.
core.parse_chname_to_abbrev(x, assume_from_number=self.assume_from_number)
Configuration-Driven Filtering#
Alternative approach using configuration dictionary:
filter_config = {
'logrms_range': {'z_range': 3},
'high_rms': {'max_rms': 500},
'low_rms': {'min_rms': 10},
# 'high_beta': {'max_beta_prop': 0.4},
'reject_channels_by_session': {},
'morphological_smoothing': {'smoothing_seconds': 4.0}
}
war_filtered_config = war_all.apply_filters(
filter_config,
min_valid_channels=3
)
INFO:root:logrms_range: filtered 0/10
INFO:root:high_rms: filtered 0/10
INFO:root:low_rms: filtered 0/10
INFO:root:reject_channels_by_session: filtered 0/10
INFO:root:Applied morphological smoothing: 4.0s
/home/runner/work/neurodent/neurodent/src/neurodent/visualization/results.py:2130: UserWarning: One or more channels do not match name aliases. Assuming alias from number in channel name.
core.parse_chname_to_abbrev(x, assume_from_number=self.assume_from_number)
Available Filters#
filter_logrms_range(z_range): Remove outliers based on log RMS
filter_high_rms(max_rms): Remove high amplitude artifacts
filter_low_rms(min_rms): Remove low amplitude periods
filter_high_beta(max_beta_prop): Remove high beta activity (muscle artifacts)
filter_reject_channels_by_session(): Identify and reject bad channels
filter_morphological_smoothing(smoothing_seconds): Smooth the filter mask morphologically (fill short gaps, remove brief artifacts)
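Morphological smoothing operates on the boolean keep/reject mask produced by the other filters. A sketch using scipy.ndimage directly (the actual Neurodent filter may differ in details): closing fills short rejected gaps, and opening removes isolated kept windows.

```python
import numpy as np
from scipy.ndimage import binary_closing, binary_opening

# Sketch of morphological smoothing on a keep/reject mask (scipy.ndimage
# directly; Neurodent's filter may differ in details). Closing fills
# short rejected gaps; opening drops isolated kept windows.
valid = np.array([0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0,
                  1, 0, 0, 0, 1, 1, 1, 0], dtype=bool)
kernel = np.ones(3, dtype=bool)             # ~3-window smoothing length

smoothed = binary_opening(binary_closing(valid, kernel), kernel)
print(smoothed.astype(int))
# The single-window gap (index 4) is filled; the isolated kept
# window (index 12) is removed.
```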
4. Data Aggregation#
Average across time windows, producing a single row per group (e.g., per recording session and light/dark phase). Channel information is preserved.
# Aggregate time windows
war_filtered.aggregate_time_windows()
/home/runner/work/neurodent/neurodent/src/neurodent/visualization/results.py:4198: FutureWarning: DataFrameGroupBy.apply operated on the grouping columns. This behavior is deprecated, and in a future version of pandas the grouping columns will be excluded from the operation. Either pass `include_groups=False` to exclude the groupings or explicitly select the grouping columns after groupby to silence this warning.
aggregated_df = result_grouped.apply(
INFO:root:Setting suppress_short_interval_error to True
/home/runner/work/neurodent/neurodent/src/neurodent/visualization/results.py:2130: UserWarning: One or more channels do not match name aliases. Assuming alias from number in channel name.
core.parse_chname_to_abbrev(x, assume_from_number=self.assume_from_number)
5. Channel Management#
Reorder and Pad Channels#
Ensure consistent channel ordering across animals:
# Define standard channel order
standard_channels = [
"LMot", "RMot", # Motor cortex
"LBar", "RBar", # Barrel cortex
"LAud", "RAud", # Auditory cortex
"LVis", "RVis", # Visual cortex
"LHip", "RHip" # Hippocampus
]
war_filtered.reorder_and_pad_channels(
standard_channels,
use_abbrevs=True # Use abbreviated channel names
)
print(f"Channels: {war_filtered.channel_names}")
Channels: ['LMot', 'RMot', 'LBar', 'RBar', 'LAud', 'RAud', 'LVis', 'RVis', 'LHip', 'RHip']
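Conceptually, reorder-and-pad is an alignment of per-channel values to the standard order, with NaN inserted for channels an animal lacks. A small pandas sketch with hypothetical values (not Neurodent internals):

```python
import numpy as np
import pandas as pd

# Hypothetical per-channel values (not from Neurodent): align to a
# standard channel order, padding missing channels with NaN.
standard_channels = ["LMot", "RMot", "LBar", "RBar"]
recorded = {"RMot": 38.3, "LMot": 38.5}     # this animal lacks LBar/RBar
aligned = pd.Series(recorded).reindex(standard_channels)
print(aligned)
```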
6. Accessing Computed Features#
WAR objects store features in a pandas DataFrame. Use get_result() to retrieve features with full channel information, or get_channel_averaged_result() to average across channels (or channel pairs for connectivity features), producing scalar values per time window.
# Get the full result DataFrame
result_df = war_filtered.get_result(
features=['rms', 'psdband', 'cohere']
)
print(f"Result columns: {list(result_df.columns)}")
print(f"Result shape: {result_df.shape}")
print(f"\nRMS values (per-channel arrays):\n{result_df['rms'].iloc[0]}")
# Get channel-averaged result (scalars per time window)
df_avg = war_filtered.get_channel_averaged_result(
features=['logrms', 'logpsdband', 'zcohere']
)
print(f"\nChannel-averaged columns: {list(df_avg.columns)}")
print(f"\nSample logrms value: {df_avg['logrms'].iloc[0]}") # single float
Result columns: ['animalday', 'isday', 'animal', 'day', 'genotype', 'sex', 'duration', 'endfile', 'timestamp', 'rms', 'psdband', 'cohere']
Result shape: (1, 12)
RMS values (per-channel arrays):
[np.float64(38.367801666259766), np.float64(38.29277420043945), np.float64(38.45293426513672), np.float64(38.28055191040039), np.float64(38.50426483154297), np.float64(38.29888153076172), np.float64(38.39069366455078), np.float64(38.312103271484375), np.float64(38.42718505859375), np.float64(38.36119079589844)]
Channel-averaged columns: ['animalday', 'isday', 'animal', 'day', 'genotype', 'sex', 'duration', 'endfile', 'timestamp', 'logrms', 'logpsdband_delta', 'logpsdband_theta', 'logpsdband_alpha', 'logpsdband_beta', 'logpsdband_gamma', 'zcohere_delta', 'zcohere_theta', 'zcohere_alpha', 'zcohere_beta', 'zcohere_gamma']
Sample logrms value: 3.6729729890823366
7. Metadata and Grouping Variables#
WAR objects contain metadata for grouping and analysis:
# Access metadata
print(f"Animal ID: {war_filtered.animal_id}")
print(f"Genotype: {war_filtered.genotype}")
print(f"Animal days: {war_filtered.animaldays}")
print(f"Channel names: {war_filtered.channel_names}")
Animal ID: A10
Genotype: Unknown
Animal days: ['A10 Unknown Dec-13-2023']
Channel names: ['LMot', 'RMot', 'LBar', 'RBar', 'LAud', 'RAud', 'LVis', 'RVis', 'LHip', 'RHip']
8. Circadian Analysis (ZeitgeberAnalysisResult)#
Once your data is filtered and metadata (like genotype and timestamps) is verified, you can analyze circadian rhythms.
The ZeitgeberAnalysisResult wrapper uses this metadata to:
Shift Timestamps: Converts absolute timestamps to Zeitgeber Time (ZT), where ZT0 is “Lights On”.
Define Baseline: Subtracts a baseline period (e.g., the first 12 hours of the light phase) to normalize the data.
Prepare for Visualization: Duplicates data for 48-hour “double-plotted” actograms.
For plotting these results, see the Visualization Tutorial.
from neurodent import core
# Wrap the result for circadian analysis
zar = core.ZeitgeberAnalysisResult(
war_filtered,
baseline_hours=12,
zeitgeber_shift_hours=6,
shift_for_48h=True
)
# Access the processed data
df_zar = zar.get_result(features=['rms'])
timestamps = war_filtered.result['timestamp']
print(f"Original Time Range: {timestamps.min()} to {timestamps.max()}")
print(f"ZT Coordinate Range: {df_zar.total_minutes.min()} to {df_zar.total_minutes.max()} min")
print(f"Baseline-Corrected Columns: {[c for c in df_zar.columns if '_nobase' in c][:3]}")
Original Time Range: 2023-12-13 00:00:00 to 2023-12-13 00:00:00
ZT Coordinate Range: 1080.0 to 2520.0 min
Baseline-Corrected Columns: ['duration_nobase']
INFO:neurodent.core.zeitgeber:Adding binned zeitgeber time columns with 60min bins
INFO:neurodent.core.zeitgeber:Running zeitgeber analysis pipeline...
INFO:neurodent.core.zeitgeber:Using default baseline: first 12 hours (0-720 min)
INFO:neurodent.core.zeitgeber:Processed data for 1 unique animals
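The Zeitgeber shift itself is just a modular clock conversion. A minimal sketch, assuming for illustration that zeitgeber_shift_hours=6 corresponds to lights on at 06:00 (i.e., ZT0 = 06:00):

```python
from datetime import datetime

# Illustration of the ZT conversion, assuming (for this sketch only)
# that lights come on at 06:00, so ZT0 = 06:00 clock time.
lights_on_hour = 6
t = datetime(2023, 12, 13, 14, 30)          # 14:30 clock time
zt_hours = (t.hour + t.minute / 60 - lights_on_hour) % 24
print(zt_hours)                             # ZT 8.5
```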
9. Saving and Loading#
Save WAR objects for later analysis:
import tempfile
# Use a temporary directory for demonstration purposes
with tempfile.TemporaryDirectory() as tmpdir:
output_path = Path(tmpdir) / animal_id
output_path.mkdir(parents=True, exist_ok=True)
# Save WAR
war_filtered.save_pickle_and_json(output_path)
print(f"Saved to {output_path}")
# Load WAR
war_loaded = visualization.WindowAnalysisResult.load_pickle_and_json(output_path)
print(f"Loaded from {output_path}")
Saved to /tmp/tmpgycgbxye/A10
Loaded from /tmp/tmpgycgbxye/A10
INFO:root:Saved WAR to /tmp/tmpgycgbxye/A10/war.pkl
INFO:root:Saved WAR to /tmp/tmpgycgbxye/A10/war.parquet
INFO:root:Saved WAR to /tmp/tmpgycgbxye/A10/war.json
INFO:root:Channel names: ['LMot', 'RMot', 'LBar', 'RBar', 'LAud', 'RAud', 'LVis', 'RVis', 'LHip', 'RHip']
INFO:root:Channel abbreviations: ['LMot', 'RMot', 'LBar', 'RBar', 'LAud', 'RAud', 'LVis', 'RVis', 'LHip', 'RHip']
10. Best Practices#
Feature Selection#
Start with basic features (rms, psdband) before computing expensive ones (cohere, psd)
Exclude spike features if you don’t have spike data
Use selective feature computation for faster iteration
Filtering#
Always inspect data before and after filtering
Use conservative thresholds initially, then adjust
Consider biological significance (e.g., high beta may indicate muscle artifacts)
Processing#
Use serial mode for debugging
Use multiprocess for local analysis of large datasets
Use Dask for cluster computing
Quality Control#
Check channel consistency across animals
Verify metadata (genotype, day, etc.)
Save intermediate results frequently
Summary#
This tutorial covered:
Feature categories and types
Computing windowed analysis with different options
Data quality control and filtering
Channel management and standardization
Accessing computed features
Metadata and grouping variables
Saving and loading results
Best practices
Next Steps#
Visualization Tutorial: Plot and analyze WAR results
Spike Analysis Tutorial: Integrate spike-sorted data