AnimalOrganizer

class neurodent.visualization.AnimalOrganizer(base_folder_path, anim_id: str, day_sep: str | None = None, mode: Literal['nest', 'concat', 'base', 'noday'] = 'concat', assume_from_number=False, skip_days: list[str] = [], truncate: bool | int = False, lro_kwargs: dict = {}) → None

Bases: AnimalFeatureParser

__init__(base_folder_path, anim_id: str, day_sep: str | None = None, mode: Literal['nest', 'concat', 'base', 'noday'] = 'concat', assume_from_number=False, skip_days: list[str] = [], truncate: bool | int = False, lro_kwargs: dict = {}) → None

AnimalOrganizer organizes data from a single animal into a format that can be used for analysis.

Parameters:
  • base_folder_path (str) – The path to the base folder of the animal data.

  • anim_id (str) – The ID of the animal. This should correspond to only one animal.

  • day_sep (str, optional) – Separator for the day portion of folder names. Set to None or an empty string to match all folders. Defaults to None.

  • mode (Literal["nest", "concat", "base", "noday"], optional) – The search mode of the AnimalOrganizer, which determines the expected file structure (where * indicates the search location). Defaults to "concat".
      ◦ "nest": base_folder_path / animal_id / date_format (looks for folders/files within animal_id subdirectories)
      ◦ "concat": base_folder_path / animal_id*date_format (looks for folders/files with animal_id and date combined in the name at the base level)
      ◦ "base": base_folder_path / * (looks for folders/files directly in base_folder_path)
      ◦ "noday": base_folder_path / animal_id (same as "concat" but expects a single unique match, with no date filtering)

  • assume_from_number (bool, optional) – Whether to assume the animal ID is a number. Defaults to False.

  • skip_days (list[str], optional) – The days to skip. Defaults to [].

  • truncate (bool|int, optional) – Whether to truncate the data. Defaults to False.

  • lro_kwargs (dict, optional) – Keyword arguments for LongRecordingOrganizer. Defaults to {}.

Return type:

None
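
For example, a minimal construction sketch (the base folder, animal ID, and skipped day below are hypothetical; only the parameter names come from the signature above):

>>> from neurodent.visualization import AnimalOrganizer
>>> ao = AnimalOrganizer(
...     "/data/eeg",             # hypothetical base folder
...     anim_id="A10",           # hypothetical animal ID
...     mode="concat",           # search /data/eeg for names combining "A10" and a date
...     skip_days=["20240106"],  # hypothetical day label to exclude
... )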

get_timeline_summary() → DataFrame

Get timeline summary as a DataFrame for user inspection and debugging.

Returns:

Timeline information with columns:
  • lro_index: Index of the LRO (LongRecordingOrganizer)

  • start_time: Start datetime of the LRO

  • end_time: End datetime of the LRO

  • duration_s: Duration in seconds

  • n_files: Number of files in the LRO

  • folder_path: Base folder path

  • animalday: Parsed animalday identifier

Return type:

pd.DataFrame
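
A quick inspection sketch, assuming the ao object constructed above (the selected columns are those documented here):

>>> timeline = ao.get_timeline_summary()
>>> timeline[["lro_index", "start_time", "end_time", "duration_s"]]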

convert_colbins_to_rowbins(overwrite=False, multiprocess_mode: Literal['dask', 'serial'] = 'serial')
Parameters:

multiprocess_mode (Literal['dask', 'serial']) – Processing mode. Defaults to 'serial'.

convert_rowbins_to_rec(multiprocess_mode: Literal['dask', 'serial'] = 'serial')
Parameters:

multiprocess_mode (Literal['dask', 'serial']) – Processing mode. Defaults to 'serial'.

cleanup_rec()
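
A hedged sketch of the conversion sequence, assuming the three methods above are meant to be chained in the order their names suggest (column bins to row bins to recording objects, with cleanup after analysis):

>>> ao.convert_colbins_to_rowbins(overwrite=False)
>>> ao.convert_rowbins_to_rec(multiprocess_mode="serial")
>>> # ... downstream analysis ...
>>> ao.cleanup_rec()  # assumption: frees the recording objects created above
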
compute_bad_channels(lof_threshold: float | None = None, force_recompute: bool = False)

Compute bad channels using LOF (Local Outlier Factor) analysis for all recordings.

Parameters:
  • lof_threshold (float, optional) – Threshold for determining bad channels from LOF scores. If None, only computes/loads scores without setting bad_channel_names.

  • force_recompute (bool) – Whether to recompute LOF scores even if they exist.

apply_lof_threshold(lof_threshold: float)

Apply threshold to existing LOF scores to determine bad channels for all recordings.

Parameters:

lof_threshold (float) – Threshold for determining bad channels.

get_all_lof_scores() → dict

Get LOF scores for all recordings.

Returns:

Dictionary mapping animal days to LOF score dictionaries.

Return type:

dict
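
A sketch of the bad-channel workflow using the three methods above; the threshold value is an arbitrary example, not a recommended setting:

>>> ao.compute_bad_channels()          # computes/loads LOF scores, sets no bad channels yet
>>> scores = ao.get_all_lof_scores()   # maps animal days to LOF score dictionaries
>>> ao.apply_lof_threshold(2.0)        # arbitrary example threshold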

compute_windowed_analysis(features: list[str], exclude: list[str] = [], window_s=4, multiprocess_mode: Literal['dask', 'serial'] = 'serial', suppress_short_interval_error=False, apply_notch_filter=True, **kwargs) → WindowAnalysisResult

Computes windowed analysis of animal recordings. The data is divided into windows (time bins), features are extracted from each window, and the result is formatted into a DataFrame and wrapped in a WindowAnalysisResult object.

Parameters:
  • features (list[str]) – List of features to compute. See the individual compute_...() functions for output formats.

  • exclude (list[str], optional) – List of features to ignore. Will override the features parameter. Defaults to [].

  • window_s (int, optional) – Length of each window in seconds. Note that some features break with very short window times. Defaults to 4.

  • suppress_short_interval_error (bool, optional) – If True, suppress ValueError for short intervals between timestamps in resulting WindowAnalysisResult. Useful for aggregated WARs. Defaults to False.

  • apply_notch_filter (bool, optional) – Whether to apply notch filtering to remove line noise. Uses constants.LINE_FREQ. Defaults to True.

  • multiprocess_mode (Literal['dask', 'serial'], optional) – Processing mode. Defaults to 'serial'.

Raises:

AttributeError – Raised if a requested feature's compute_...() function is not implemented.

Returns:

A WindowAnalysisResult object containing extracted features for all recordings.

Return type:

WindowAnalysisResult
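
A usage sketch; the feature names passed here are hypothetical placeholders (see the individual compute_...() functions for the features actually available):

>>> war = ao.compute_windowed_analysis(
...     features=["rms", "psd"],  # hypothetical feature names
...     window_s=4,
...     multiprocess_mode="serial",
... )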

compute_spike_analysis(multiprocess_mode: Literal['dask', 'serial'] = 'serial') → list[SpikeAnalysisResult]

Compute spike sorting on all long recordings and return a list of SpikeAnalysisResult objects.

Parameters:

multiprocess_mode (Literal['dask', 'serial']) – Processing mode; set to 'dask' to parallelize with Dask. Defaults to 'serial'.

Returns:

List of SpikeAnalysisResult objects. Each SpikeAnalysisResult corresponds to a LongRecording object, typically a different day or recording session.

Return type:

list[SpikeAnalysisResult]

Raises:

ImportError – If mountainsort5 is not available.
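
For example (a sketch; requires mountainsort5, per the note above):

>>> sars = ao.compute_spike_analysis(multiprocess_mode="serial")
>>> len(sars)  # one SpikeAnalysisResult per LongRecording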

compute_frequency_domain_spike_analysis(detection_params: dict | None = None, max_length: int | None = None, multiprocess_mode: Literal['dask', 'serial'] = 'serial')

Compute frequency-domain spike detection on all long recordings.

Parameters:
  • detection_params (dict, optional) – Detection parameters. Uses defaults if None.

  • max_length (int, optional) – Maximum length, in samples, to analyze per recording.

  • multiprocess_mode (Literal["dask", "serial"]) – Processing mode. Defaults to "serial".

Returns:

Results for each recording session.

Return type:

list[FrequencyDomainSpikeAnalysisResult]

Raises:

ImportError – If SpikeInterface is not available
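
A sketch using the default detection parameters (requires SpikeInterface, per the note above):

>>> fd_results = ao.compute_frequency_domain_spike_analysis(
...     detection_params=None,  # use the library's defaults
...     max_length=None,        # analyze each full recording
... )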