In this work, we present a fully automated method for delineating the “Open Field” experimental arena to support subsequent tracking of laboratory animal behavior. The core of our approach is the SOLD2 neural network model from the Kornia library, which provides high-quality detection of the arena’s linear boundaries and interior sectors without any manual intervention.
The processing pipeline consists of four main stages:
1. Line Detection with SOLD2
The model extracts all prominent linear features in each video frame, capturing both the outer contour of the arena and its internal divider lines.
2. Geometric Reconstruction
By computing intersections and extensions of the detected line segments, we reconstruct the arena’s layout and generate a sector map aligned to a common real-world coordinate system.
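The core operation here is extending each detected segment to an infinite line and intersecting pairs of lines to recover arena corners and sector boundaries. A minimal, dependency-free sketch of that intersection step (illustrative, not the project's actual code):

```python
def line_intersection(seg1, seg2):
    """Intersection of the infinite lines through two segments.

    Each segment is ((x1, y1), (x2, y2)); returns (x, y), or None if
    the two lines are (near-)parallel.
    """
    (x1, y1), (x2, y2) = seg1
    (x3, y3), (x4, y4) = seg2
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-9:
        return None  # parallel lines never meet
    # Cramer's rule on the two implicit line equations
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    x = (a * (x3 - x4) - (x1 - x2) * b) / denom
    y = (a * (y3 - y4) - (y1 - y2) * b) / denom
    return (x, y)

# Two perpendicular arena edges meeting at a corner
print(line_intersection(((0, 0), (10, 0)), ((5, -5), (5, 5))))  # (5.0, 0.0)
```

Applying this to every pair of near-perpendicular detected segments, and keeping intersections that fall near segment endpoints, yields the corner candidates from which the sector map is assembled.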
3. Segmentation-Based Tracking with YOLO11
A YOLO11 segmentation model, fine-tuned on a small manually annotated dataset, locates the animal within the perspective-corrected frame and outputs its trajectory as a time series of coordinates mapped to the delineated sectors.
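Once frames are perspective-corrected, assigning each tracked coordinate to a sector reduces to a grid lookup. The following sketch assumes an axis-aligned rectangular arena divided into a regular grid; the function name and the 4x4 grid are illustrative, not taken from the original work.

```python
def sector_of(point, origin, size, grid=(4, 4)):
    """Map an arena-plane point to a (row, col) sector of a regular grid,
    or None if the point falls outside the arena.

    Assumes the arena is an axis-aligned rectangle after perspective
    correction; the grid dimensions here are illustrative.
    """
    x, y = point
    ox, oy = origin
    w, h = size
    if not (ox <= x < ox + w and oy <= y < oy + h):
        return None
    cols, rows = grid
    col = int((x - ox) / w * cols)
    row = int((y - oy) / h * rows)
    return (row, col)

# A toy trajectory in a 100 x 100 arena
trajectory = [(10, 10), (55, 80), (95, 95)]
print([sector_of(p, (0, 0), (100, 100)) for p in trajectory])
# [(0, 0), (3, 2), (3, 3)]
```

Running this lookup over every detection turns the raw coordinate stream into the sector-tagged trajectory described above.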
4. End-to-End Automation
All stages are integrated into a single, automated pipeline—from initial video ingestion and homography calibration to the production of final movement trajectories, each tagged with its respective arena zone.
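The homography-calibration step mentioned above boils down to applying a 3x3 projective transform to each detected pixel coordinate. A minimal sketch of that mapping, with a toy translation-only homography standing in for a real calibration matrix:

```python
def apply_homography(H, point):
    """Map a pixel (u, v) to arena-plane coordinates via a 3x3
    homography H (row-major nested lists), using the standard
    projective division by the third homogeneous coordinate."""
    u, v = point
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)

# Toy calibration: identity rotation/scale with a translation
H = [[1, 0, -50],
     [0, 1, -20],
     [0, 0,   1]]
print(apply_homography(H, (60, 25)))  # (10.0, 5.0)
```

In practice the matrix would be estimated once per recording from the reconstructed arena corners (e.g. with OpenCV's `cv2.findHomography`) and then applied to every frame's detections.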
This neural-network-driven framework eliminates the need for hand-drawn annotations and delivers robust, reproducible performance across variable lighting conditions, camera angles, and apparatus modifications. By substantially reducing data preparation time, it improves the throughput and reliability of quantitative behavioral analyses in neuroscience and pharmacology.