Lassen et al.: INDIVIDUAL FEED INTAKE MEASURES WITH 3-DIMENSIONAL CAMERAS

up for much more advanced modeling of the complex phenotype. It has also made it possible to carry out more proper genetic evaluations for implementation in practical breeding (Li et al., 2020; Khanal et al., 2022). However, more data are still needed to increase the accuracy of the breeding values. New and emerging technologies have always been implemented in dairy cattle production either to measure new phenotypes or to measure known phenotypes in a new way. Three-dimensional (3D) cameras can likewise be used to generate data that improve management in dairy cattle production, such as BCS (DeLaval Body Condition Scoring, DeLaval International AB, Tumba, Sweden). The development of 3D cameras has been remarkable over the last decades; today they are installed, for example, in gaming consoles and can be purchased at relatively low cost while providing accurate data continuously. This makes them suitable for surveilling traits that need to be recorded and stored throughout the day, such as feed intake and behavior. A 3D camera system to identify cows, predict BW, and make individual feed intake records has been developed (WO 2014/166498, Borchersen, 2014; WO 2017/001538, Borchersen et al., 2017; WO/2020/260631, Lassen and Borchersen, 2020). The system (Cattle Feed Intake System) works without disturbing the daily behavior of the cows or the management of the daily routine in the barn. The cameras record data around the clock, and based on image analysis, cows are identified at the feeding table (Thomasen et al., 2018) and the amount of feed eaten is quantified (Lassen et al., 2018). In these studies, the data were collected over a limited time period, and the identification percentage and the repeatabilities of feed intake between days and weeks were reported, showing promising results. Other camera-based systems have been initiated to make individual feed intake records. Bezen et al.
(2020) used a convolutional neural network (CNN) approach to quantify feed intake and reported a mean squared error of 0.119 kg² of feed per meal based on 63 meals recorded from 6 cows over 36 h. Identification relied on reading digits related to the cow identification on collars around the necks of the cows.

The aim of this study was to analyze individual measures of feed intake and predicted body weight recorded on commercial farms using 3D cameras. This was studied by estimating the repeatability of the phenotypes recorded in 3 different dairy breeds.

MATERIALS AND METHODS

Because animals were not handled in any way or removed from their normal environment, no ethical approval was required for this study.

Cow Identification Unit and Weight Measurement

The reference unit consists of a single 3D camera using time-of-flight technology (Microsoft Xbox One Kinect v2) to create a 3D image and an RFID reader (Agrident Sensor ASR550). A Dell T630 server with 128 gigabytes of random access memory and an RTX 3090 graphics card is used for the data analysis. These were installed in a narrow corridor with a time-based trigger system that allocated all images within 3 s after the RFID read to the specific ear tag, which ensured that one reference image was obtained from each cow as it passed through (Figure 1). The corridor was narrowed further than a normal exit corridor to ensure that no cows pass in obscure positions, pass as 2 cows together, or turn around in the corridor. The 3D camera was placed directly above the passing cows, 3.4 m above floor level. At the same position, a homemade walking scale (Eziweight S2) was installed to measure the individual BW of each passing cow (Gebreyesus et al., 2023). Before any cows enter the system, the fixed interior in the image of an empty corridor is annotated. In that way, anything that enters an image is noticed as a change from the annotated picture and considered a cow.
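The empty-corridor annotation step described above can be sketched as a simple depth-image change detection. This is only an illustrative reconstruction of the idea, not the system's actual implementation; the threshold values below are assumptions chosen for the example.

```python
import numpy as np

def detect_cow(depth_frame, empty_reference, min_diff_mm=150, min_pixels=5000):
    """Compare a live depth frame against the annotated empty-corridor
    reference; a sufficiently large region of changed pixels is treated
    as an object (a cow) having entered the corridor.

    Both inputs are depth images in millimetres with identical shapes.
    Returns (cow_detected, changed_pixel_mask).
    """
    # Cast to a signed type before subtracting to avoid unsigned underflow.
    diff = np.abs(depth_frame.astype(np.int32) - empty_reference.astype(np.int32))
    changed = diff > min_diff_mm          # pixels deviating from the empty scene
    return int(changed.sum()) >= min_pixels, changed
```

In practice, a step such as connected-component filtering would likely be added so that scattered noisy pixels cannot trigger a false detection.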
Pictures were corrected in all 3 dimensions and stitched live as they were generated, so that the image from one camera could be combined with the corresponding image from the camera next to it. All images were afterward sent to a central server where the remainder of the within-herd analysis was conducted. Three types of images are recorded from the 3D camera: RGB pictures, infrared (IR) pictures, and depth pictures indicating the distance from the camera to any object within the range of the camera, both in the lock after milking (Figure 2) and while the cows are eating at the feeding table (Figure 3). Direct sunlight in the image is a known challenge for an RGB camera such as the Kinect. Therefore, herds were selected where the feed was indoors under a roof and no or only limited direct sunlight on the feed was observed during the day.

Cow Identification

The first step in the image processing is to estimate features from the geometric information in the 3D images that are significant for separating the individuals. A calibration procedure converts the region within the cow circumference to a point cloud, so that each pixel in this region of the 3D image is transformed into the corresponding spatial 3D coordinates. The calibration procedure is primarily done to remove distortions due to perspective. Furthermore, the calibration allows a combination of the point cloud information from 2

Journal of Dairy Science Vol. 106 No. 12, 2023
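The pixel-to-point-cloud conversion described for the calibration procedure can be sketched with the standard pinhole camera model. The intrinsic parameters below are approximate Kinect v2 depth-camera values used purely for illustration, not calibration constants from this study.

```python
import numpy as np

# Approximate Kinect v2 depth-camera intrinsics (illustrative assumption).
FX, FY = 365.0, 365.0   # focal lengths in pixels
CX, CY = 256.0, 212.0   # principal point for a 512 x 424 depth image

def depth_to_point_cloud(depth_m):
    """Transform each depth pixel (metres) into camera-frame 3D coordinates
    via the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.

    Returns an N x 3 array, dropping pixels with no depth return (Z <= 0).
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel column and row indices
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    pts = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]
```

Once each camera's cloud is expressed in a common world frame (via that camera's extrinsic pose), the clouds from adjacent cameras can simply be concatenated, which is one way such per-camera information could be combined.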