is applied to the images from both the reference and the prediction units. The process starts by finding the outline and spine of the cow in the raw, uncorrected 3D images, as indicated in Figure 1A. A calibration procedure converts the region within the cow outline to a point cloud, so that each pixel in this region of the 3D image is transformed into the corresponding 3D spatial coordinates. See Figure 1B for an example of the corrected point cloud. The calibration procedure is primarily done to remove perspective distortions. Furthermore, the calibration allows the point clouds from two neighbouring cameras to be combined if a cow is located on the border between the cameras' fields of view. A corrected depth image of the cow region is then created by interpolating the point cloud back into a 2D depth image, shown in Figure 1C.

The feature generation process starts by finding the points on the corrected depth image lying 3, 5, 10, and 15 cm below the spine level of the cow. The height is measured perpendicular to the spine to make the features invariant to the position and orientation of the cow. Cubic smoothing splines are fitted to the points at each distance to reduce noise; examples of these contours are illustrated in Figure 1C. The raw spline features are generated by measuring the distance between the intersections of the spine normal and the splines on each side of the cow. The raw spline features are normalized to correct for anatomical differences, as seen in Figure 1D.

Classification algorithm

The classification algorithm is based on linear discriminant analysis (Friedman, 1989) trained on the features from the reference unit. In this model, each cow constitutes a class in the linear discriminant model, with the contour measurements as the feature inputs.
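The contour-width feature extraction described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the row-wise scan (a simplification of measuring along the spine normal), and the use of SciPy's `UnivariateSpline` for the cubic smoothing splines are all assumptions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def contour_widths(depth, spine_height, levels=(3, 5, 10, 15)):
    """Hypothetical sketch of the spline-based width features.

    depth: 2D corrected depth image (height above floor, in cm),
           with rows running along the cow's spine.
    spine_height: 1D array, spine height per row.
    Returns one width per (row, level): the distance between the left
    and right intersections of the smoothed contour at that level.
    """
    n_rows, _ = depth.shape
    widths = np.full((n_rows, len(levels)), np.nan)
    rows = np.arange(n_rows)
    for k, level in enumerate(levels):
        left = np.full(n_rows, np.nan)
        right = np.full(n_rows, np.nan)
        for r in range(n_rows):
            # Columns where the body surface lies above the contour level.
            above = np.where(depth[r] >= spine_height[r] - level)[0]
            if above.size:
                left[r], right[r] = above[0], above[-1]
        ok = ~np.isnan(left)
        if ok.sum() > 3:
            # Cubic smoothing splines reduce noise in each side's contour.
            sl = UnivariateSpline(rows[ok], left[ok], k=3, s=float(ok.sum()))
            sr = UnivariateSpline(rows[ok], right[ok], k=3, s=float(ok.sum()))
            widths[:, k] = sr(rows) - sl(rows)
    return widths
```

In this sketch the widths are still in pixel units; a real pipeline would work in the calibrated spatial coordinates and apply the normalization step from Figure 1D afterwards.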
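The one-class-per-cow discriminant classifier can be illustrated with a small NumPy sketch. This is not the authors' code: the class, the synthetic feature sizes, and the small ridge term (a crude stand-in for the regularization of Friedman, 1989) are all assumptions for illustration.

```python
import numpy as np

class SimpleLDA:
    """Minimal linear discriminant classifier (equal priors, pooled covariance)."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        # Pooled within-class covariance, with a small ridge for stability.
        diffs = X - self.means_[np.searchsorted(self.classes_, y)]
        cov = diffs.T @ diffs / (len(X) - len(self.classes_))
        cov += 1e-6 * np.eye(X.shape[1])
        self.prec_ = np.linalg.inv(cov)
        return self

    def predict(self, X):
        # Linear discriminant score per class: x' S^-1 m_c - 0.5 m_c' S^-1 m_c.
        scores = X @ self.prec_ @ self.means_.T \
            - 0.5 * np.einsum('ij,jk,ik->i', self.means_, self.prec_, self.means_)
        return self.classes_[np.argmax(scores, axis=1)]

# Synthetic stand-in for the reference-unit data: 5 cows, 20 images each,
# 8 contour-width features (all sizes here are hypothetical).
rng = np.random.default_rng(0)
centers = rng.normal(size=(5, 8))
X_ref = np.repeat(centers, 20, axis=0) + 0.1 * rng.normal(size=(100, 8))
y_ref = np.repeat(np.arange(5), 20)
clf = SimpleLDA().fit(X_ref, y_ref)
```

Each prediction-unit image would then be assigned the cow-id whose discriminant score is highest.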
To increase the accuracy and robustness of the classifier, a post-processing step is included in which the most probable cow-id is estimated from the time period during which the cow stays in the same region of the feeding area. Both the feature estimation and the classification algorithm were built using NumPy (van der Walt, 2011).

Validation Experiment

The accuracy of the prediction algorithm was validated by labelling 97 Jersey cows with a semi-permanent marker in the feeding area of a commercial loose housing production system. The cows were marked on their backs so that the marks were visible in the images from the overhead cameras. The algorithm identifying the cows at the feeding table did, however, not use these marks, as they were not visible in the 3D images. Between 18 and 50 images of each cow were obtained with the reference unit, and the classification algorithm was trained on these. Images were acquired with the prediction units every 15 minutes over a period of approximately 5 days. The cow-id of each cow present in the images was manually annotated for comparison with the identification algorithm. This resulted in 6357 manually labelled cow images distributed over 97 different cow-ids.

Results

The results from the evaluation of the cow-id prediction algorithm can be seen in Table 1. The table is the result of a comparison process that pairs the manual labels with the predicted labels based on the position of the cow. A cow is placed in the "correctly predicted cow-id" category if the cow is detected correctly and the predicted cow-id matches the manual