Configuration and Tips#
This guide covers optional configuration settings and best practices to enhance your experience with NeuRodent.
Logging Configuration#
By default, NeuRodent operations run quietly. If you want to monitor progress and see detailed information about what NeuRodent is doing, you can configure Python’s logging system.
This is especially useful for:
Tracking progress of long-running analyses
Debugging issues with data loading or processing
Understanding what happens during each step of the pipeline
import sys
import logging
# Configure logging to display progress and debug information
logging.basicConfig(
    format="%(asctime)s - %(levelname)s - %(message)s",
    level=logging.INFO,  # Change to logging.DEBUG for more detailed output
    stream=sys.stdout,
)
logger = logging.getLogger()
print("Logging configured! You'll now see progress messages from NeuRodent.")
Logging Levels#
You can adjust the logging level to control how much information is displayed:
logging.DEBUG - Most detailed, shows all internal operations
logging.INFO - Shows progress and important steps (recommended)
logging.WARNING - Only shows warnings and errors
logging.ERROR - Only shows errors
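You can also change the level after logging is configured, without calling basicConfig again. This uses only the standard library logging API:
import logging
# Switch the root logger to DEBUG for a detailed trace of one analysis,
# then drop back to INFO afterwards.
logging.getLogger().setLevel(logging.DEBUG)
# ... run the analysis you want to inspect ...
logging.getLogger().setLevel(logging.INFO)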
Temporary Directory Configuration#
NeuRodent uses a temporary directory for intermediate files during processing. You can configure this location if you need to:
Use a location with more disk space
Use a faster storage device (e.g., SSD)
Keep temporary files in a specific location
from neurodent import core
# Set a custom temporary directory
# core.set_temp_directory("/path/to/your/temp/dir")
# Check the current temp directory
print(f"Using temp directory: {core.get_temp_directory()}")
Performance Tips#
Memory Management#
For large datasets, consider:
Processing data in smaller time windows (see the sketch after this list)
Using the caching features to avoid recomputing results
Restarting the Python session between analyses to free memory
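As a minimal sketch of the windowed approach, only one slice of the recording is held in memory at a time. The load_window function and the per-window mean below are illustrative placeholders, not NeuRodent API:
import numpy as np

fs = 1000                 # sampling rate in Hz (illustrative)
total_duration_s = 3600   # one hour of data (illustrative)
window_s = 600            # process in 10-minute windows

def load_window(start_s, end_s):
    # Placeholder: stands in for loading one slice of a recording from disk.
    return np.random.randn(int((end_s - start_s) * fs))

results = []
for start in range(0, total_duration_s, window_s):
    data = load_window(start, min(start + window_s, total_duration_s))
    results.append(data.mean())  # placeholder per-window computation
    del data                     # release the window before loading the next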
Parallel Processing#
Many NeuRodent functions support parallel processing. Check the function documentation for n_jobs parameters that allow you to use multiple CPU cores.
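As a sketch of the n_jobs convention, here is the same idea expressed with the joblib library directly (illustrative only; whether NeuRodent uses joblib internally is not assumed). By the usual convention, n_jobs=-1 means "use all available CPU cores":
import numpy as np
from joblib import Parallel, delayed

# Toy per-channel data; each channel's computation runs in parallel.
channels = [np.random.randn(10_000) for _ in range(8)]
means = Parallel(n_jobs=-1)(delayed(np.mean)(ch) for ch in channels)
print(means)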
HPC and Workflow Management#
For large-scale analyses across many animals and conditions, consider using NeuRodent’s Snakemake pipeline on a high-performance computing (HPC) cluster. This allows you to:
Process multiple animals in parallel
Automatically manage dependencies between analysis steps
Resume interrupted workflows without recomputing completed steps
Scale to hundreds of recordings
See the Snakemake Pipeline Tutorial for a complete guide to setting up and running NeuRodent on an HPC cluster.
Next Steps#
Now that you’ve configured NeuRodent, explore the tutorials:
Basic Usage - Complete workflow from data loading to visualization
Data Loading - Learn about loading data from different formats
Tutorials - More advanced analysis techniques