Insights on back marking for the automated identification of animals∗

D. Brunner 1,2,†, M. Bordes 3, E. Mayrhuber 1,2, S. M. Winkler 1, V. Dorfer 1 and M. Oczak 3,4

1 Bioinformatics Research Group, PLFDoc, University of Applied Sciences Upper Austria, Hagenberg, Austria
2 Computer Vision Lab, TU Wien, Vienna, Austria
3 Centre for Animal Nutrition and Welfare, The University of Veterinary Medicine Vienna, Vienna, Austria
4 Precision Livestock Farming Hub, The University of Veterinary Medicine Vienna, Vienna, Austria
† david.brunner@fh-hagenberg.at

Abstract

To date, there is little research on how to design back marks to best support individual-level monitoring of uniform-looking species like pigs. With the recent surge of machine learning-based monitoring solutions, there is a particular need for guidelines on the design of marks that can be effectively recognised by such algorithms. This study provides valuable insights on effective back mark design, based on the analysis of a machine learning model trained to distinguish pigs via their back marks. Specifically, a neural network of type ResNet-50 was trained to classify ten pigs with unique back marks. The analysis of the model's predictions highlights the significance of certain design choices, even in controlled settings. Most importantly, the set of back marks must be designed such that each mark remains unambiguous under conditions of motion blur, diverse view angles and occlusions caused by animal behaviour. Further, the back mark design must consider data augmentation strategies commonly employed during model training, like colour, flip and crop augmentations. The generated insights can support individual-level monitoring in future studies and real-world applications by optimizing back mark design.

Keywords.
precision livestock farming, computer vision, identification, back marks, pigs

1 Introduction

Observing animals is a well-established way of gathering information about their social behaviour [1]. While, traditionally, this is done by a human expert, in recent years, motivated by advancements in machine learning (ML), much research has been devoted to the development of automatic monitoring solutions. A plethora of studies show ML to be capable of accurately detecting, tracking, and ultimately recognising the behaviour of animals [3]. One obstacle towards practical application is the fact that many of the presented methods work on group level and do not allow the identification of individual animals. This is in large part because differentiating individual animals is a very hard problem in general, and especially so for uniform-looking species like pigs. One common solution is to use additional physical markers like back marks. To date, there are only a few studies that provide insights on their design [4]. The aim of this study is to provide insights on back mark design for the automated identification of animals. To this end, a machine learning model that is trained to recognise pigs via their unique back marks is analysed to unveil interesting error cases and to allow deriving design recommendations.

∗ This is a preprint. The final version will be published in the conference proceedings of the 12th European Conference on Precision Livestock Farming (ECPLF) 2026.

2 Material and Methods

2.1 Experimental setup and back marks

The data was collected during an observational study on social behaviour in pigs, with a focus on helping behaviour ("Let me out", doi:10.55776/I6488). The study setup comprised two identical pens, observed by identical cameras (HIKVISION DS-2CD5046G0-AP, 1200x780@25, fisheye, Hikvision Co. Ltd., Hangzhou, Zhejiang) mounted in an elevated side-view angle.
To support individual identification, the pigs received regular back marks. Symbols were used instead of numbers for practicality (easy to apply to moving pigs) and reproducibility (similar across reapplications). The back marks were inspired by patterns used in mice [4]. Figure 1 shows the back marks.

Figure 1: The back marks used in this study. From left to right: dot dot, dot line horizontal, i, line line horizontal, o, reverse t, s, v, vertical line, x.

2.2 Data preparation and model training

A human labeller placed bounding boxes with class IDs on all frames of 11 video clips, each 10-30 seconds in duration. Nine of the clips were selected for the training set, and two of the clips for the validation and test set, respectively. The raw training set then amounted to a total of 3750 frames, the validation and test set to 250 and 750 frames, respectively, of which the bounding box areas were then algorithmically extracted and manually filtered. The final training set comprised 26,260 crops; the validation and the test set were each reduced to 500 crops. A PyTorch (https://pytorch.org/) implementation of a neural network of type ResNet-50 [2] was selected as the image classification model. Extensive data augmentation was used in training, including horizontal and vertical flipping, random rotation, brightness, contrast, saturation and hue modifications, as well as a small probability of switching to grayscale and applying blurring.

3 Results and Discussion

The trained model reached a classification accuracy of 91% and 69% on the validation and test set, respectively. There were conspicuous differences across classes (e.g. vertical line: 88% vs. reverse t: 38% on the test set), which are mostly consistent between validation and test set.
Especially reverse t poses a challenge for the model, which might be explained by the fact that it is prone to look similar to other back marks in specific situations, which are often tied to specific types of animal behaviour. In this study, three kinds of behaviour were identified as especially relevant: fast movement, diverse poses and proximity to other animals, which can lead to motion blur, diverse view angles and occlusions, respectively. The top line in Fig. 2 illustrates how motion blur can cause reverse t to look like vertical line. The pixel importance [5] shows that the blurred vertical bar of the T does not factor into the model's decision. The middle line in Fig. 2 illustrates how a certain view angle can cause reverse t to look more like v or x, which is reflected in both the model confidence and the pixel importance. The bottom line of Fig. 2 shows how even minor occlusions can make it impossible for the model to distinguish between line line horizontal and o. Back marks must be chosen such that these situations are avoided.

Figure 2: Behaviour-based error cases. From top to bottom: motion blur, angle, occlusion.

Data augmentation has established itself as an indispensable tool for improving ML models and is ubiquitous in practice. The back mark design needs to consider common photometric augmentations like colour, brightness and contrast adjustments, as well as geometric augmentations like cropping and flipping. Fig. 3a shows colour jitter (left) and grayscale (right), both of which might interfere with coloured back marks. Fig. 3b shows how crop augmentation makes it impossible to distinguish reverse t from vertical line (left) and line line horizontal from o (right). Fig. 3c illustrates why back marks that are mirror images of each other must be avoided in the presence of flip augmentation.
In this example, an instance of class s (left) looks like it belongs to class 2 (middle; only for demonstration, not part of the set of back marks in this study) after a horizontal flip (right).

Figure 3: Illustration of common augmentation strategies and their impact on back mark choice. a) colour augmentation, b) cropping, c) flipping.

4 Conclusions

This study aimed at investigating back mark design for identification tasks in automated animal monitoring. It could be shown that there are important design choices that impact an ML model's ability to successfully differentiate a set of back marks. Specifically, the insights of the study are twofold. First, the back mark design should control for predictable animal behaviour such as fast movement, diverse poses and proximity to other animals. Second, to ensure the data's compatibility with a wide range of ML methods, the back mark design should consider common data augmentation strategies applied during model training, such as colour, crop and flip augmentations.

Acknowledgements

This research was funded in whole or in part by the Austrian Science Fund (FWF) [https://doi.org/10.55776/DFH34]. For open access purposes, the author has applied a CC BY public copyright license to any author-accepted manuscript version arising from this submission. The data used in this study originates from the "Let me out" project, funded by the Austrian Science Fund (FWF) [https://doi.org/10.55776/I6488].

References

[1] Caroline Clouard, Auriane Foreau, Sébastien Goumon, Céline Tallet, Elodie Merlot, and Rémi Resmond. Evidence of stable preferential affiliative relationships in the domestic pig. Animal Behaviour, 213:95–105, 2024.

[2] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[3] Dong Liu, Maciej Oczak, Kristina Maschat, Johannes Baumgartner, Bernadette Pletzer, Dongjian He, and Tomas Norton. A computer vision-based method for spatial-temporal action recognition of tail-biting behaviour in group-housed pigs. Biosystems Engineering, 195:27–41, 2020.

[4] Shay Ohayon, Ofer Avni, Adam L Taylor, Pietro Perona, and SE Roian Egnor. Automated multi-day tracking of marked mice for the analysis of social behaviour. Journal of Neuroscience Methods, 219(1):10–19, 2013.

[5] Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pages 818–833. Springer, 2014.