Quadcopters equipped with machine-learning vision systems are poised to become an essential tool for precision agriculture in pastures. This paper presents a low-cost approach to livestock counting combined with classification and semantic segmentation, offering the potential for real-time biometric and welfare monitoring of animals. The method adopts Mask R-CNN, a state-of-the-art deep-learning technique, for feature extraction and training on images captured by quadcopters. Key parameters, including the IoU (Intersection over Union) threshold, the amount of training data, and the system's performance at various livestock densities, were evaluated to optimize the model. The proposed method is evaluated on a real pasture-surveillance dataset; experimental results show that the system classifies livestock with 96% accuracy and estimates the number of cattle and sheep to within 92% of the visual ground truth, demonstrating that the approach is a competitive and feasible option for livestock monitoring.
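To make the counting and evaluation steps concrete, the sketch below mimics the per-image output format of a Mask R-CNN detector (a list of labelled, scored detections, as produced by implementations such as torchvision's `maskrcnn_resnet50_fpn`) and shows (a) counting animals per class above a confidence threshold and (b) the IoU computation used to match predictions against ground truth. The class names, scores, and the 0.5 threshold are illustrative assumptions, not the paper's exact settings.

```python
# Illustrative sketch only: the detection dicts below stand in for real
# Mask R-CNN output; labels/scores/thresholds are hypothetical values.
from collections import Counter

def count_livestock(detections, score_threshold=0.5):
    """Count detected animals per class, keeping only confident detections."""
    counts = Counter()
    for det in detections:
        if det["score"] >= score_threshold:
            counts[det["label"]] += 1
    return dict(counts)

def box_iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# Hypothetical detections for one aerial frame.
detections = [
    {"label": "cattle", "score": 0.97},
    {"label": "cattle", "score": 0.88},
    {"label": "sheep",  "score": 0.93},
    {"label": "sheep",  "score": 0.41},  # below threshold, discarded
]

print(count_livestock(detections))          # → {'cattle': 2, 'sheep': 1}
print(box_iou((0, 0, 2, 2), (1, 1, 3, 3)))  # → 0.142857... (1/7)
```

In practice the IoU threshold determines when a predicted mask or box counts as a true positive, which is why the paper treats it as a key parameter to tune.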
Bibliographical note: © 2020 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
- Livestock classification
- Aerial images
- Mask R-CNN
- Livestock counting