Automated Detection of Cat Facial Landmarks | International Journal of Computer Vision
George Martvel, Ilan Shimshoni & Anna Zamansky
Information Systems Department, University of Haifa, Haifa, Israel

International Journal of Computer Vision, Volume 132, Issue 8, pp. 3103–3118 (2024)
Published online: 5 March 2024
DOI: 10.1007/s11263-024-02006-w
© 2024 The Author(s)

Abstract
The field of animal affective computing is rapidly emerging, and the analysis of facial expressions is a crucial aspect of it. One of the most significant challenges researchers in the field currently face is the scarcity of high-quality, comprehensive datasets for developing facial expression analysis models. One possible approach is the use of facial landmarks, which has been demonstrated for both humans and animals. In this paper we present a novel dataset of cat facial images annotated with bounding boxes and 48 facial landmarks grounded in cat facial anatomy. We also introduce a convolutional neural network-based landmark detection model that uses a magnifying ensemble method. Our model shows excellent performance on cat faces and generalizes to facial landmark detection in humans and other animals.

Subjects: Computer Imaging, Vision, Pattern Recognition and Graphics; Artificial Intelligence; Image Processing and Computer Vision; Pattern Recognition
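The abstract refers to a 48-point landmark annotation scheme with face bounding boxes and a CNN-based detector. As a rough illustration only (the paper's actual evaluation protocol is not reproduced here), the sketch below shows one common way predicted landmarks are scored against ground-truth annotations with a normalized mean error; the array shapes, the bounding-box normalization, and the commented file names and predict_landmarks call are assumptions, not the authors' code.

import numpy as np

def normalized_mean_error(pred, gt, bbox):
    """Mean Euclidean distance between predicted and ground-truth landmarks,
    normalized by the face bounding-box diagonal (one common convention;
    the paper may normalize differently, e.g. by inter-ocular distance).

    pred, gt : (48, 2) arrays of (x, y) landmark coordinates
    bbox     : (x_min, y_min, x_max, y_max) face bounding box
    """
    x_min, y_min, x_max, y_max = bbox
    diag = np.hypot(x_max - x_min, y_max - y_min)   # normalization factor
    per_point = np.linalg.norm(pred - gt, axis=1)   # 48 per-landmark errors
    return per_point.mean() / diag

# Hypothetical usage with a single annotated cat-face image:
# gt = np.loadtxt("cat_0001_landmarks.txt")         # 48 x 2 ground truth
# pred = model.predict_landmarks(image)             # 48 x 2 prediction
# print(normalized_mean_error(pred, gt, bbox=(10, 20, 210, 230)))

A lower score indicates predictions closer to the annotated landmarks; normalizing by a face-scale quantity makes errors comparable across images of different resolutions.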
3672–3680)."/> <meta name="citation_reference" content="citation_journal_title=Frontiers in Artificial Intelligence; citation_title=Convolutional neural network-based technique for gaze estimation on mobile devices; citation_author=AA Akinyelu, P Blignaut; citation_volume=4; citation_publication_date=2022; citation_id=CR2"/> <meta name="citation_reference" content="citation_journal_title=Applied Sciences; citation_title=Deep-learning-based models for pain recognition: A systematic review; citation_author=RM Al-Eidan, HS Al-Khalifa, AS Al-Salman; citation_volume=10; citation_publication_date=2020; citation_pages=5984; citation_id=CR3"/> <meta name="citation_reference" content="citation_journal_title=IEEE Transactions on Pattern Analysis and Machine Intelligence; citation_title=Localizing parts of faces using a consensus of exemplars; citation_author=PN Belhumeur, DW Jacobs, DJ Kriegman, N Kumar; citation_volume=35; citation_issue=12; citation_publication_date=2013; citation_pages=2930-2940; citation_id=CR4"/> <meta name="citation_reference" content="citation_journal_title=Behavioural Processes; citation_title=Facial correlates of emotional behaviour in the domestic cat (felis catus); citation_author=V Bennett, N Gourkow, DS Mills; citation_volume=141; citation_publication_date=2017; citation_pages=342-350; citation_id=CR5"/> <meta name="citation_reference" content="citation_journal_title=Nature Communications; citation_title=Behavioural individuality in clonal fish arises despite near-identical rearing conditions; citation_author=D Bierbach, KL Laskowski, M Wolf; citation_volume=8; citation_issue=1; citation_publication_date=2017; citation_pages=15361; citation_id=CR6"/> <meta name="citation_reference" content="citation_journal_title=Computers and Electronics in Agriculture; citation_title=Real-time goat face recognition using convolutional neural network; citation_author=M Billah, X Wang, J Yu, Y Jiang; citation_volume=194; citation_publication_date=2022; citation_id=CR7"/> <meta name="citation_reference" content="citation_journal_title=BMC Veterinary Research; citation_title=Validation of the English version of the UNESP-Botucatu multidimensional composite pain scale for assessing postoperative pain in cats; citation_author=JT Brondani, KR Mama, SP Luna, BD Wright, S Niyom, J Ambrosio, PR Vogel, CR Padovani; citation_volume=9; citation_issue=1; citation_publication_date=2013; citation_pages=1-15; citation_id=CR8"/> <meta name="citation_reference" content="citation_journal_title=International Journal of Computer Vision; citation_title=Going deeper than tracking: A survey of computer-vision based recognition of animal pain and emotions; citation_author=S Broome, M Feighelstein, A Zamansky, CG Lencioni, HP Andersen, F Pessanha, M Mahmoud, H Kjellström, AA Salah; citation_volume=131; citation_issue=2; citation_publication_date=2023; citation_pages=572-590; citation_id=CR9"/> <meta name="citation_reference" content="citation_journal_title=Proceedings of the National Academy of Sciences; citation_title=A dictionary of behavioral motifs reveals clusters of genes affecting caenorhabditis elegans locomotion; citation_author=AE Brown, EI Yemini, LJ Grundy, T Jucikas, WR Schafer; citation_volume=110; citation_issue=2; citation_publication_date=2013; citation_pages=791-796; citation_id=CR10"/> <meta name="citation_reference" content="citation_journal_title=Applied Animal Behaviour Science; citation_title=Development and application of catfacs: Are human cat adopters influenced by cat facial 
expressions?; citation_author=CC Caeiro, AM Burrows, BM Waller; citation_volume=189; citation_publication_date=2017; citation_pages=66-78; citation_id=CR11"/> <meta name="citation_reference" content="Cao, J., Tang, H., Fang, H. -S., Shen, X., Lu, C., & Tai, Y. -W. (2019). Cross-domain adaptation for animal pose estimation. In: Proceedings of the IEEE/CVF international conference on computer vision (pp. 9498–9507)."/> <meta name="citation_reference" content="citation_journal_title=Ecology and Evolution; citation_title=A study on giant panda recognition based on images of a large proportion of captive pandas; citation_author=P Chen, P Swarup, WM Matkowski, AWK Kong, S Han, Z Zhang, H Rong; citation_volume=10; citation_issue=7; citation_publication_date=2020; citation_pages=3561-3573; citation_id=CR13"/> <meta name="citation_reference" content="citation_journal_title=Mammalian Biology; citation_title=Multispecies facial detection for individual identification of wildlife: A case study across ursids; citation_author=M Clapham, E Miller, M Nguyen, RC Horn; citation_volume=102; citation_issue=3; citation_publication_date=2022; citation_pages=943-955; citation_id=CR14"/> <meta name="citation_reference" content="Collins, B., Deng, J., Li, K., & Fei-Fei, L. (2008). Towards scalable dataset construction: An active learning approach. In: Proceedings of computer vision–ECCV 2008: 10th European conference on computer vision, Marseille, France, October 12-18, 2008, Part I 10 (pp. 86–98). Springer."/> <meta name="citation_reference" content="Dapogny, A., Bailly, K., & Cord, M. (2019). Decafa: Deep convolutional cascade for face alignment in the wild. In: Proceedings of the IEEE/CVF international conference on computer vision (pp. 6893–6901)."/> <meta name="citation_reference" content="citation_journal_title=Animal Welfare; citation_title=Humans can identify cats’ affective states from subtle facial expressions; citation_author=LC Dawson, J Cheal, L Niel, G Mason; citation_volume=28; citation_issue=4; citation_publication_date=2019; citation_pages=519-531; citation_id=CR17"/> <meta name="citation_reference" content="Deb, D., Wiper, S., Gong, S., Shi, Y., Tymoszek, C., Fletcher, A., & Jain, A. K. (2018). Face recognition: Primates in the wild. In: 2018 IEEE 9th international conference on biometrics theory, applications and systems (BTAS) (pp. 1–10). IEEE."/> <meta name="citation_reference" content="citation_journal_title=Animals; citation_title=Heads and tails: An analysis of visual signals in cats, felis catus; citation_author=BL Deputte, E Jumelet, C Gilbert, E Titeux; citation_volume=11; citation_issue=9; citation_publication_date=2021; citation_pages=2752; citation_id=CR19"/> <meta name="citation_reference" content="Elhamifar, E., Sapiro, G., Yang, A., & Sasrty, S. S. (2013). A convex optimization framework for active learning. In: Proceedings of the IEEE international conference on computer vision (pp. 
209–216)."/> <meta name="citation_reference" content="citation_journal_title=PeerJ; citation_title=Clinical applicability of the feline grimace scale: Real-time versus image scoring and the influence of sedation and surgery; citation_author=MC Evangelista, J Benito, BP Monteiro, R Watanabe, GM Doodnaught, DS Pang, PV Steagall; citation_volume=8; citation_publication_date=2020; citation_pages=8967; citation_id=CR21"/> <meta name="citation_reference" content="citation_journal_title=Scientific Reports; citation_title=Facial expressions of pain in cats: The development and validation of a feline grimace scale; citation_author=MC Evangelista, R Watanabe, VS Leung, BP Monteiro, E O’Toole, DS Pang, PV Steagall; citation_volume=9; citation_issue=1; citation_publication_date=2019; citation_pages=1-11; citation_id=CR22"/> <meta name="citation_reference" content="citation_journal_title=Scientific Reports; citation_title=Explainable automated pain recognition in cats; citation_author=M Feighelstein, L Henze, S Meller, I Shimshoni, B Hermoni, M Berko, F Twele, A Schütter, N Dorn, S Kästner; citation_volume=13; citation_issue=1; citation_publication_date=2023; citation_pages=8973; citation_id=CR23"/> <meta name="citation_reference" content="citation_journal_title=Scientific Reports; citation_title=Automated recognition of pain in cats; citation_author=M Feighelstein, I Shimshoni, LR Finka, SP Luna, DS Mills, A Zamansky; citation_volume=12; citation_issue=1; citation_publication_date=2022; citation_pages=9575; citation_id=CR24"/> <meta name="citation_reference" content="citation_journal_title=Future Internet; citation_title=Predicting dog emotions based on posture analysis using deeplabcut; citation_author=K Ferres, T Schloesser, PA Gloor; citation_volume=14; citation_issue=4; citation_publication_date=2022; citation_pages=97; citation_id=CR25"/> <meta name="citation_reference" content="citation_journal_title=Scientific Reports; citation_title=Geometric morphometrics for the study of facial expressions in non-human animals, using the domestic cat as an exemplar; citation_author=LR Finka, SP Luna, JT Brondani, Y Tzimiropoulos, J McDonagh, MJ Farnworth, M Ruta, DS Mills; citation_volume=9; citation_issue=1; citation_publication_date=2019; citation_pages=1-12; citation_id=CR26"/> <meta name="citation_reference" content="citation_journal_title=PLoS ONE; citation_title=Facial indicators of positive emotions in rats; citation_author=K Finlayson, JF Lampe, S Hintze, H Würbel, L Melotti; citation_volume=11; citation_issue=11; citation_publication_date=2016; citation_pages=0166446; citation_id=CR27"/> <meta name="citation_reference" content="citation_journal_title=Palo Alto; citation_title=Facial action coding system: A technique for the measurement of facial movement; citation_author=E Friesen, P Ekman; citation_volume=3; citation_issue=2; citation_publication_date=1978; citation_pages=5; citation_id=CR28"/> <meta name="citation_reference" content="citation_journal_title=PLoS ONE; citation_title=Multicow pose estimation based on keypoint extraction; citation_author=C Gong, Y Zhang, Y Wei, X Du, L Su, Z Weng; citation_volume=17; citation_issue=6; citation_publication_date=2022; citation_pages=0269259; citation_id=CR29"/> <meta name="citation_reference" content="citation_journal_title=Elife; citation_title=DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning; citation_author=JM Graving, D Chae, H Naik, L Li, B Koger, BR Costelloe, ID Couzin; citation_volume=8; 
citation_publication_date=2019; citation_pages=47994; citation_id=CR30"/> <meta name="citation_reference" content="Grishchenko, I., Ablavatski, A., Kartynnik, Y., Raveendran, K., & Grundmann, M. (2020). Attention mesh: High-fidelity face mesh prediction in real-time. arXiv:2006.10962 ."/> <meta name="citation_reference" content="citation_journal_title=IET Computer Vision; citation_title=Active learning combining uncertainty and diversity for multi-class image classification; citation_author=Y Gu, Z Jin, SC Chiu; citation_volume=9; citation_issue=3; citation_publication_date=2015; citation_pages=400-407; citation_id=CR32"/> <meta name="citation_reference" content="Guo, S., Xu, P., Miao, Q., Shao, G., Chapman, C.A., Chen, X., He, G., Fang, D., Zhang, H., & Sun, Y., et al. (2020). Automatic identification of individual primates with deep learning techniques. Iscience, 23(8)."/> <meta name="citation_reference" content="He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. CVPR."/> <meta name="citation_reference" content="Hewitt, C., & Mahmoud, M. (2019). Pose-informed face alignment for extreme head pose variations in animals. In: 2019 8th international conference on affective computing and intelligent interaction (ACII) (pp. 1–6). IEEE."/> <meta name="citation_reference" content="citation_journal_title=Journal of Small Animal Practice; citation_title=Evaluation of facial expression in acute pain in cats; citation_author=E Holden, G Calvo, M Collins, A Bell, J Reid, E Scott, AM Nolan; citation_volume=55; citation_issue=12; citation_publication_date=2014; citation_pages=615-621; citation_id=CR36"/> <meta name="citation_reference" content="Huang, G., Liu, Z., Van Der Maaten, L., & Weinberger, K. Q. (2017). Densely connected convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4700–4708)."/> <meta name="citation_reference" content="Huang, Y., Yang, H., Li, C., Kim, J., & Wei, F. (2021). Adnet: Leveraging error-bias towards normal direction in face alignment. In: Proceedings of the IEEE/CVF international conference on computer vision (pp. 3080–3090)."/> <meta name="citation_reference" content="Hummel, H. I., Pessanha, F., Salah, A. A., van Loon, T.J ., & Veltkamp, R. C. (2020). Automatic pain detection on horse and donkey faces. In: 2020 15th IEEE international conference on automatic face and gesture recognition (FG 2020) (pp. 793–800). IEEE."/> <meta name="citation_reference" content="citation_journal_title=Scientific Reports; citation_title=The role of cat eye narrowing movements in cat-human communication; citation_author=T Humphrey, L Proops, J Forman, R Spooner, K McComb; citation_volume=10; citation_issue=1; citation_publication_date=2020; citation_pages=16503; citation_id=CR40"/> <meta name="citation_reference" content="citation_journal_title=International Journal of Computer Vision; citation_title=Pixel-in-pixel net: Towards efficient facial landmark detection in the wild; citation_author=H Jin, S Liao, L Shao; citation_volume=129; citation_publication_date=2021; citation_pages=3174-3194; citation_id=CR41"/> <meta name="citation_reference" content="Jocher, G., Chaurasia, A., & Qiu, J. (2023). YOLO by Ultralytics. 
https://github.com/ultralytics/ultralytics "/> <meta name="citation_reference" content="citation_journal_title=Nature Communications; citation_title=Leg-tracking and automated behavioural classification in drosophila; citation_author=J Kain, C Stokes, Q Gaudry, X Song, J Foley, R Wilson, B Bivort; citation_volume=4; citation_issue=1; citation_publication_date=2013; citation_pages=1910; citation_id=CR43"/> <meta name="citation_reference" content="citation_journal_title=IEEE Transactions on Geoscience and Remote Sensing; citation_title=Half a percent of labels is enough: Efficient animal detection in UAV imagery using deep CNNS and active learning; citation_author=B Kellenberger, D Marcos, S Lobry, D Tuia; citation_volume=57; citation_issue=12; citation_publication_date=2019; citation_pages=9524-9533; citation_id=CR44"/> <meta name="citation_reference" content="Khan, M. H., McDonagh, J., Khan, S., Shahabuddin, M., Arora, A., Khan, F. S., Shao, L., & Tzimiropoulos, G. (2020). Animalweb: A large-scale hierarchical dataset of annotated animal faces. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 6939–6948)."/> <meta name="citation_reference" content="Körschens, M., Barz, B., & Denzler, J. (2018). Towards automatic identification of elephants in the wild. arXiv:1812.04418 ."/> <meta name="citation_reference" content="Kumar, A., Marks, T. K., Mou, W., Wang, Y., Jones, M., Cherian, A., Koike-Akino, T., Liu, X., & Feng, C. (2020). Luvli face alignment: Estimating landmarks’ location, uncertainty, and visibility likelihood. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 8236–8246)."/> <meta name="citation_reference" content="Labelbox (2023). "Labelbox. https://labelbox.com ."/> <meta name="citation_reference" content="Labuguen, R., Bardeloza, D. K., Negrete, S. B., Matsumoto, J., Inoue, K., & Shibata, T. (2019). Primate markerless pose estimation and movement analysis using deeplabcut. In: 2019 Joint 8th International Conference on Informatics, Electronics & Vision (ICIEV) and 2019 3rd international conference on imaging, vision & pattern recognition (icIVPR) (pp. 297–300). IEEE."/> <meta name="citation_reference" content="Lan, X., Hu, Q., Chen, Q., Xue, J., & Cheng, J. (2021). Hih: Towards more accurate face alignment via heatmap in heatmap. arXiv:2104.03100 ."/> <meta name="citation_reference" content="citation_journal_title=Journal of Feline Medicine and Surgery; citation_title=Djd-associated pain in cats: What can we do to promote patient comfort?; citation_author=BDX Lascelles, SA Robertson; citation_volume=12; citation_issue=3; citation_publication_date=2010; citation_pages=200-212; citation_id=CR51"/> <meta name="citation_reference" content="Le, V., Brandt, J., Lin, Z., Bourdev, L., & Huang, T. S. (2012). Interactive facial feature localization. In: Computer Vision–ECCV 2012: Proceedings of 12th European conference on computer vision, Florence, Italy, October 7-13, 2012, Part III 12 (pp. 679–692). Springer."/> <meta name="citation_reference" content="Li, X., & Guo, Y. (2013). Adaptive active learning for image classification. In: Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 859–866)."/> <meta name="citation_reference" content="Li, H., Guo, Z., Rhee, S. -M., Han, S., & Han, J. -J. (2022). Towards accurate facial landmark detection via cascaded transformers. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 
4176–4185)."/> <meta name="citation_reference" content="Li, J., Jin, H., Liao, S., Shao, L., & Heng, P.-A. (2022). Repformer: Refinement pyramid transformer for robust facial landmark detection. arXiv:2207.03917 ."/> <meta name="citation_reference" content="Li, W., Lu, Y., Zheng, K., Liao, H., Lin, C., Luo, J., Cheng, C. -T., Xiao, J., Lu, L., & Kuo, C. -F., et al. (2020). Structured landmark detection via topology-adapting deep graph learning. In: Computer vision–ECCV 2020: Proceedings of the 16th European conference, Glasgow, UK, August 23–28, 2020, Part IX 16 (pp. 266–283). Springer."/> <meta name="citation_reference" content="citation_journal_title=IEEE Transactions on Affective Computing; citation_title=Deep facial expression recognition: A survey; citation_author=S Li, W Deng; citation_volume=13; citation_issue=3; citation_publication_date=2020; citation_pages=1195-1215; citation_id=CR57"/> <meta name="citation_reference" content="Liu, Z., Ding, H., Zhong, H., Li, W., Dai, J., & He, C. (2021). Influence selection for active learning. In: Proceedings of the IEEE/CVF international conference on computer vision (pp. 9274–9283)."/> <meta name="citation_reference" content="Liu, J., Kanazawa, A., Jacobs, D., & Belhumeur, P. (2012). Dog breed classification using part localization. In: Computer Vision–ECCV 2012: Proceedings of 12th European conference on computer vision, Florence, Italy, October 7-13, 2012, Part I 12 (pp. 172–185). Springer."/> <meta name="citation_reference" content="Liu, Z., Mao, H., Wu, C. -Y., Feichtenhofer, C., Darrell, T., & Xie, S. (2022). A convnet for the 2020s. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 11976–11986)."/> <meta name="citation_reference" content="Llewelyn, H., & Kiddie, J. (2022). Can a facial action coding system (catfacs) be used to determine the welfare state of cats with cerebellar hypoplasia? Veterinary Record, 190(8)."/> <meta name="citation_reference" content="citation_journal_title=Pattern Recognition Letters; citation_title=Head pose estimation using facial-landmarks classification for children rehabilitation games; citation_author=S Malek, S Rossi; citation_volume=152; citation_publication_date=2021; citation_pages=406-412; citation_id=CR62"/> <meta name="citation_reference" content="citation_journal_title=Mathematical Problems in Engineering; citation_title=Landmark-based facial feature construction and action unit intensity prediction; citation_author=J Ma, X Li, Y Ren, R Yang, Q Zhao; citation_volume=2021; citation_publication_date=2021; citation_pages=1-12; citation_id=CR63"/> <meta name="citation_reference" content="Mathis, A., Biasi, T., Schneider, S., Yuksekgonul, M., Rogers, B., Bethge, M., & Mathis, M. W. (2021). Pretraining boosts out-of-domain robustness for pose estimation. In: Proceedings of the IEEE/CVF winter conference on applications of computer vision (pp. 
1859–1868)."/> <meta name="citation_reference" content="citation_journal_title=Nature Neuroscience; citation_title=Deeplabcut: Markerless pose estimation of user-defined body parts with deep learning; citation_author=A Mathis, P Mamidanna, KM Cury, T Abe, VN Murthy, MW Mathis, M Bethge; citation_volume=21; citation_issue=9; citation_publication_date=2018; citation_pages=1281; citation_id=CR65"/> <meta name="citation_reference" content="citation_journal_title=Current Opinion in Neurobiology; citation_title=Deep learning tools for the measurement of animal behavior in neuroscience; citation_author=MW Mathis, A Mathis; citation_volume=60; citation_publication_date=2020; citation_pages=1-11; citation_id=CR66"/> <meta name="citation_reference" content="citation_journal_title=Animals; citation_title=Development of an automated pain facial expression detection system for sheep (ovis aries); citation_author=K McLennan, M Mahmoud; citation_volume=9; citation_issue=4; citation_publication_date=2019; citation_pages=196; citation_id=CR67"/> <meta name="citation_reference" content="citation_journal_title=Applied Animal Behaviour Science; citation_title=Development of a facial expression scale using footrot and mastitis as models of pain in sheep; citation_author=KM McLennan, CJ Rebelo, MJ Corke, MA Holmes, MC Leach, F Constantino-Casas; citation_volume=176; citation_publication_date=2016; citation_pages=19-26; citation_id=CR68"/> <meta name="citation_reference" content="citation_journal_title=PLoS ONE; citation_title=Behavioural signs of pain in cats: An expert consensus; citation_author=I Merola, DS Mills; citation_volume=11; citation_issue=2; citation_publication_date=2016; citation_pages=0150040; citation_id=CR69"/> <meta name="citation_reference" content="Micaelli, P., Vahdat, A., Yin, H., Kautz, J., & Molchanov, P. (2023). Recurrence without recurrence: Stable video landmark detection with deep equilibrium models. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 22814–22825)."/> <meta name="citation_reference" content="Mougeot, G., Li, D., & Jia, S. (2019). A deep learning approach for dog face verification and recognition. In: PRICAI 2019: Trends in artificial intelligence: proceedings of 16th Pacific rim international conference on artificial intelligence, Cuvu, Yanuca Island, Fiji, August 26-30, 2019, Part III 16 (pp. 418–430). Springer."/> <meta name="citation_reference" content="citation_journal_title=Nature Protocols; citation_title=Using deeplabcut for 3D markerless pose estimation across species and behaviors; citation_author=T Nath, A Mathis, AC Chen, A Patel, M Bethge, MW Mathis; citation_volume=14; citation_issue=7; citation_publication_date=2019; citation_pages=2152-2176; citation_id=CR72"/> <meta name="citation_reference" content="Newell, A., Yang, K., & Deng, J. (2016). Stacked hourglass networks for human pose estimation, pp. 483–499. 
Springer."/> <meta name="citation_reference" content="citation_journal_title=Applied Animal Behaviour Science; citation_title=Animal emotion: Descriptive and prescriptive definitions and their implications for a comparative perspective; citation_author=ES Paul, MT Mendl; citation_volume=205; citation_publication_date=2018; citation_pages=202-209; citation_id=CR74"/> <meta name="citation_reference" content="citation_journal_title=Nature Methods; citation_title=Fast animal pose estimation using deep neural networks; citation_author=TD Pereira, DE Aldarondo, L Willmore, M Kislin, SS-H Wang, M Murthy, JW Shaevitz; citation_volume=16; citation_issue=1; citation_publication_date=2019; citation_pages=117-125; citation_id=CR75"/> <meta name="citation_reference" content="Prados-Torreblanca, A., Buenaposada, J. M., & Baumela, L. (2022). Shape preserving facial landmarks with graph attention networks. arXiv:2210.07233 ."/> <meta name="citation_reference" content="Quan, Q., Yao, Q., Li, J., & Zhou, S. K. (2022). Which images to label for few-shot medical landmark detection? In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 20606–20616)."/> <meta name="citation_reference" content="Reid, J., Scott, E., Calvo, G., & Nolan, A. (2017). Definitive glasgow acute pain scale for cats: Validation and intervention level. Veterinary Record, 108(18)."/> <meta name="citation_reference" content="Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen, L. -C. (2018). Mobilenetv2: Inverted residuals and linear bottlenecks. In: Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4510–4520)."/> <meta name="citation_reference" content="Scott, L., & Florkiewicz, B. N. (2023). Feline faces: Unraveling the social function of domestic cat facial signals. Behavioural Processes, 104959."/> <meta name="citation_reference" content="citation_journal_title=Psychological Bulletin; citation_title=Intraclass correlations: Uses in assessing rater reliability; citation_author=PE Shrout, JL Fleiss; citation_volume=86; citation_issue=2; citation_publication_date=1979; citation_pages=420; citation_id=CR81"/> <meta name="citation_reference" content="Sinha, S., Ebrahimi, S., & Darrell, T. (2019). Variational adversarial active learning. In: Proceedings of the IEEE/CVF international conference on computer vision (pp. 5972–5981)."/> <meta name="citation_reference" content="citation_journal_title=Molecular Pain; citation_title=The rat grimace scale: A partially automated method for quantifying pain in the laboratory rat via facial expressions; citation_author=SG Sotocina, RE Sorge, A Zaloum, AH Tuttle, LJ Martin, JS Wieskopf, JC Mapplebeck, P Wei, S Zhan, S Zhang; citation_volume=7; citation_publication_date=2011; citation_pages=1744-8069; citation_id=CR83"/> <meta name="citation_reference" content="Sun, Y., & Murata, N. (2020). Cafm: A 3d morphable model for animals. In: Proceedings of the IEEE/CVF winter conference on applications of computer vision workshops (pp. 20–24)."/> <meta name="citation_reference" content="Sun, K., Zhao, Y., Jiang, B., Cheng, T., Xiao, B., Liu, D., Mu, Y., Wang, X., Liu, W., & Wang, J. (2019). High-resolution representations for labeling pixels and regions. arXiv:1904.04514 ."/> <meta name="citation_reference" content="Tan, M., & Le, Q. (2019). Efficientnet: Rethinking model scaling for convolutional neural networks. In: International conference on machine learning (pp. 6105–6114). 
PMLR."/> <meta name="citation_reference" content="Tan, M., & Le, Q. (2021). Efficientnetv2: Smaller models and faster training. In: International conference on machine learning (pp. 10096–10106). PMLR"/> <meta name="citation_reference" content="citation_journal_title=Procedia Computer Science; citation_title=Emotion recognition using facial expressions; citation_author=P Tarnowski, M Kołodziej, A Majkowski, RJ Rak; citation_volume=108; citation_publication_date=2017; citation_pages=1175-1184; citation_id=CR88"/> <meta name="citation_reference" content="Unsplash. https://unsplash.com . Accessed 6 Oct 2023."/> <meta name="citation_reference" content="citation_journal_title=Animals; citation_title=Methods of assessment of the welfare of shelter cats: A review; citation_author=V Vojtkovská, E Voslářová, V Večerek; citation_volume=10; citation_issue=9; citation_publication_date=2020; citation_pages=1527; citation_id=CR90"/> <meta name="citation_reference" content="Wang, X., Bo, L., & Fuxin, L. (2019). Adaptive wing loss for robust face alignment via heatmap regression. In: Proceedings of the IEEE/CVF international conference on computer vision (pp. 6971–6981)."/> <meta name="citation_reference" content="citation_journal_title=Neuron; citation_title=Mapping sub-second structure in mouse behavior; citation_author=AB Wiltschko, MJ Johnson, G Iurilli, RE Peterson, JM Katon, SL Pashkovski, VE Abraira, RP Adams, SR Datta; citation_volume=88; citation_issue=6; citation_publication_date=2015; citation_pages=1121-1135; citation_id=CR92"/> <meta name="citation_reference" content="Wu, W., Qian, C., Yang, S., Wang, Q., Cai, Y., & Zhou, Q. (2018). Look at boundary: A boundary-aware face alignment algorithm. In: Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2129–2138)."/> <meta name="citation_reference" content="citation_journal_title=International Journal of Computer Vision; citation_title=Facial landmark detection: A literature survey; citation_author=Y Wu, Q Ji; citation_volume=127; citation_issue=2; citation_publication_date=2019; citation_pages=115-142; citation_id=CR94"/> <meta name="citation_reference" content="Wu, M., Li, C., & Yao, Z. (2022). Deep active learning for computer vision tasks: Methodologies, applications, and challenges. Applied Sciences, 12(16), 8103."/> <meta name="citation_reference" content="Xie, S., Girshick, R., Dollár, P., Tu, Z., & He, K. (2017). Aggregated residual transformations for deep neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1492–1500)."/> <meta name="citation_reference" content="Yang, Y., & Sinnott, R. O. (2023). Automated recognition and classification of cat pain through deep learning. Lecture Notes in Computer Science, 13864."/> <meta name="citation_reference" content="Yang, J., et al. (2003). Automatically labeling video data using multi-class active learning. In: Proceedings of ninth IEEE international conference on computer vision (pp. 516–523). IEEE."/> <meta name="citation_reference" content="Yang, H., Zhang, R., & Robinson, P. (2016). Human and sheep facial landmarks localisation by triplet interpolated features. In: 2016 IEEE winter conference on applications of computer vision (WACV) (pp. 1–8). IEEE."/> <meta name="citation_reference" content="Yang, J., Zhang, F., Chen, B., & Khan, S. U. (2019). Facial expression recognition based on facial action unit. In: 2019 tenth international green and sustainable computing conference (IGSC) (pp. 1–6). 
IEEE."/> <meta name="citation_reference" content="Ye, S., Filippova, A., Lauer, J., Vidal, M., Schneider, S., Qiu, T., Mathis, A., & Mathis, M. W. (2022). Superanimal models pretrained for plug-and-play analysis of animal behavior. arXiv:2203.07436 ."/> <meta name="citation_reference" content="Yoo, D., & Kweon, I. S. (2019) Learning loss for active learning. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 93–102)."/> <meta name="citation_reference" content="Zhang, W., Sun, J., & Tang, X. (2008). Cat head detection-how to effectively exploit shape and texture features. In: Computer vision–ECCV 2008: 10th european conference on computer vision, Marseille, France, October 12–18, 2008, Proceedings, Part IV 10 (pp. 802–816). Springer."/> <meta name="citation_reference" content="citation_journal_title=Mathematical Problems in Engineering; citation_title=Key points tracking and grooming behavior recognition of bactrocera minax (diptera: Trypetidae) via deeplabcut; citation_author=W Zhan, Y Zou, Z He, Z Zhang; citation_volume=2021; citation_publication_date=2021; citation_pages=1-15; citation_id=CR104"/> <meta name="citation_reference" content="Zhou, Z., Li, H., Liu, H., Wang, N., Yu, G., & Ji, R. (2023). Star loss: Reducing semantic ambiguity in facial landmark detection. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 15475–15484)."/> <meta name="citation_author" content="Martvel, George"/> <meta name="citation_author_email" content="martvelge@gmail.com"/> <meta name="citation_author_institution" content="Information Systems Department, University of Haifa, Haifa, Israel"/> <meta name="citation_author" content="Shimshoni, Ilan"/> <meta name="citation_author_email" content="ishimshoni@is.haifa.ac.il"/> <meta name="citation_author_institution" content="Information Systems Department, University of Haifa, Haifa, Israel"/> <meta name="citation_author" content="Zamansky, Anna"/> <meta name="citation_author_email" content="annazam@is.haifa.ac.il"/> <meta name="citation_author_institution" content="Information Systems Department, University of Haifa, Haifa, Israel"/> <meta name="format-detection" content="telephone=no"/> <meta name="citation_cover_date" content="2024/08/01"/> <meta property="og:url" content="https://link.springer.com/article/10.1007/s11263-024-02006-w"/> <meta property="og:type" content="article"/> <meta property="og:site_name" content="SpringerLink"/> <meta property="og:title" content="Automated Detection of Cat Facial Landmarks - International Journal of Computer Vision"/> <meta property="og:description" content="The field of animal affective computing is rapidly emerging, and analysis of facial expressions is a crucial aspect. One of the most significant challenges that researchers in the field currently face is the scarcity of high-quality, comprehensive datasets that allow the development of models for facial expressions analysis. One of the possible approaches is the utilisation of facial landmarks, which has been shown for humans and animals. In this paper we present a novel dataset of cat facial images annotated with bounding boxes and 48 facial landmarks grounded in cat facial anatomy. We also introduce a landmark detection convolution neural network-based model which uses a magnifying ensemble method. 
.u-button--primary:hover{background:0 0;border:2px solid #025e8d;box-shadow:none;color:#025e8d}.app-masthead--pastel .c-pdf-download .u-button--secondary,.c-context-bar--sticky .c-context-bar__container .c-pdf-download .u-button--secondary{background:0 0;border:2px solid #025e8d;color:#025e8d;font-weight:700}.app-masthead--pastel .c-pdf-download .u-button--secondary:visited,.c-context-bar--sticky .c-context-bar__container .c-pdf-download .u-button--secondary:visited{color:#01324b}.app-masthead--pastel .c-pdf-download .u-button--secondary:hover,.c-context-bar--sticky .c-context-bar__container .c-pdf-download .u-button--secondary:hover{background-color:#01324b;background-color:#025e8d;border:2px solid transparent;box-shadow:none;color:#fff}.app-masthead--pastel .c-pdf-download .u-button--secondary:focus,.c-context-bar--sticky .c-context-bar__container .c-pdf-download .u-button--secondary:focus{background-color:#fff;background-image:none;border:4px solid #fc0;color:#01324b}@media only screen and (min-width:768px){.app-article-masthead{flex-direction:row;gap:64px 64px;padding:24px 0}.app-article-masthead__brand{border:0;padding:0}.app-article-masthead__brand img{height:auto;position:static;width:auto}.app-article-masthead__buttons{align-items:center;flex-direction:row;margin-top:auto}.app-article-masthead__journal-link{display:flex;flex-direction:column;gap:24px 24px;margin:0 0 8px;padding:0}.app-article-masthead__submission-link{margin:0}}@media only screen and (min-width:1024px){.app-article-masthead__brand{flex-basis:400px}}.app-article-masthead .c-article-identifiers{font-size:.875rem;font-weight:300;line-height:1;margin:0 0 8px;overflow:hidden;padding:0}.app-article-masthead .c-article-identifiers--cite-list{margin:0 0 16px}.app-article-masthead .c-article-identifiers *{color:#fff}.app-article-masthead .c-cod{display:none}.app-article-masthead .c-article-identifiers__item{border-left:1px solid #fff;border-right:0;margin:0 17px 8px -9px;padding:0 0 0 8px}.app-article-masthead .c-article-identifiers__item--cite{border-left:0}.app-article-metrics-bar{display:flex;flex-wrap:wrap;font-size:1rem;padding:16px 0 0;row-gap:24px}.app-article-metrics-bar__item{padding:0 16px 0 0}.app-article-metrics-bar__count{font-weight:700}.app-article-metrics-bar__label{font-weight:400;padding-left:4px}.app-article-metrics-bar__icon{height:auto;margin-right:4px;margin-top:-4px;width:auto}.app-article-metrics-bar__arrow-icon{margin:4px 0 0 4px}.app-article-metrics-bar a{color:#000}.app-article-metrics-bar .app-article-metrics-bar__item--metrics{padding-right:0}.app-overview-section .c-article-author-list,.app-overview-section__authors{line-height:2}.app-article-metrics-bar{margin-top:8px}.c-book-toc-pagination+.c-book-section__back-to-top{margin-top:0}.c-article-body .c-article-access-provider__text--chapter{color:#222;font-family:Merriweather Sans,Helvetica Neue,Helvetica,Arial,sans-serif;padding:20px 0}.c-article-body .c-article-access-provider__text--chapter svg.c-status-message__icon{fill:#003f8d;vertical-align:middle}.c-article-body-section__content--separator{padding-top:40px}.c-pdf-download__link{max-height:44px}.app-article-access .u-button--primary,.app-article-access .u-button--primary:visited{color:#fff}.c-article-sidebar{display:none}@media only screen and (min-width:1024px){.c-article-sidebar{display:block}}.c-cod__form{border-radius:12px}.c-cod__label{font-size:.875rem}.c-cod .c-status-message{align-items:center;justify-content:center;margin-bottom:16px;padding-bottom:16px}@media only screen and 
(min-width:1024px){.c-cod .c-status-message{align-items:inherit}}.c-cod .c-status-message__icon{margin-top:4px}.c-cod .c-cod__prompt{font-size:1rem;margin-bottom:16px}.c-article-body .app-article-access,.c-book-body .app-article-access{display:block}@media only screen and (min-width:1024px){.c-article-body .app-article-access,.c-book-body .app-article-access{display:none}}.c-article-body .app-card-service{margin-bottom:32px}@media only screen and (min-width:1024px){.c-article-body .app-card-service{display:none}}.app-article-access .buybox__buy .u-button--secondary,.app-article-access .u-button--primary,.c-cod__row .u-button--primary{background-color:#025e8d;border:2px solid #025e8d;box-shadow:none;font-size:1rem;font-weight:700;gap:8px 8px;justify-content:center;line-height:1.5;padding:8px 24px}.app-article-access .buybox__buy .u-button--secondary,.app-article-access .u-button--primary:hover,.c-cod__row .u-button--primary:hover{background-color:#fff;color:#025e8d}.app-article-access .buybox__buy .u-button--secondary:hover{background-color:#025e8d;color:#fff}.buybox__buy .c-notes__text{color:#666;font-size:.875rem;padding:0 16px 8px}.c-cod__input{flex-basis:auto;width:100%}.c-article-title{font-family:Merriweather Sans,Helvetica Neue,Helvetica,Arial,sans-serif;font-size:2.25rem;font-weight:700;line-height:1.2;margin:12px 0}.c-reading-companion__figure-item figure{margin:0}@media only screen and (min-width:768px){.c-article-title{margin:16px 0}}.app-article-access{border:1px solid #c5e0f4;border-radius:12px}.app-article-access__heading{border-bottom:1px solid #c5e0f4;font-family:Merriweather Sans,Helvetica Neue,Helvetica,Arial,sans-serif;font-size:1.125rem;font-weight:700;margin:0;padding:16px;text-align:center}.app-article-access .buybox__info svg{vertical-align:middle}.c-article-body .app-article-access p{margin-bottom:0}.app-article-access .buybox__info{font-family:Merriweather Sans,Helvetica Neue,Helvetica,Arial,sans-serif;font-size:1rem;margin:0}.app-article-access{margin:0 0 32px}@media only screen and (min-width:1024px){.app-article-access{margin:0 0 24px}}.c-status-message{font-size:1rem}.c-article-body{font-size:1.125rem}.c-article-body dl,.c-article-body ol,.c-article-body p,.c-article-body ul{margin-bottom:32px;margin-top:0}.c-article-access-provider__text:last-of-type,.c-article-body .c-notes__text:last-of-type{margin-bottom:0}.c-article-body ol p,.c-article-body ul p{margin-bottom:16px}.c-article-section__figure-caption{font-family:Merriweather Sans,Helvetica Neue,Helvetica,Arial,sans-serif}.c-reading-companion__figure-item{border-top-color:#c5e0f4}.c-reading-companion__sticky{max-width:400px}.c-article-section .c-article-section__figure-description>*{font-size:1rem;margin-bottom:16px}.c-reading-companion__reference-item{border-top:1px solid #d5d5d5;padding:16px 0}.c-reading-companion__reference-item:first-child{padding-top:0}.c-article-share-box__button,.js .c-article-authors-search__item .c-article-button{background:0 0;border:2px solid #025e8d;border-radius:32px;box-shadow:none;color:#025e8d;font-size:1rem;font-weight:700;line-height:1.5;margin:0;padding:8px 24px;transition:all .2s ease 0s}.c-article-authors-search__item .c-article-button{width:100%}.c-pdf-download .u-button{background-color:#fff;border:2px solid #fff;color:#01324b;justify-content:center}.c-context-bar__container .c-pdf-download .u-button svg,.c-pdf-download .u-button svg{fill:currentcolor}.c-pdf-download .u-button:visited{color:#01324b}.c-pdf-download .u-button:hover{border:4px solid 
#01324b;box-shadow:none}.c-pdf-download .u-button:focus,.c-pdf-download .u-button:hover{background-color:#01324b}.c-pdf-download .u-button:focus svg path,.c-pdf-download .u-button:hover svg path{fill:#fff}.c-context-bar__container .c-pdf-download .u-button{background-image:none;border:2px solid;color:#fff}.c-context-bar__container .c-pdf-download .u-button:visited{color:#fff}.c-context-bar__container .c-pdf-download .u-button:hover{text-decoration:none}.c-context-bar__container .c-pdf-download .u-button:focus{box-shadow:none;outline:0;text-decoration:none}.c-context-bar__container .c-pdf-download .u-button:focus,.c-context-bar__container .c-pdf-download .u-button:hover{background-color:#fff;background-image:none;color:#01324b}.c-context-bar__container .c-pdf-download .u-button:focus svg path,.c-context-bar__container .c-pdf-download .u-button:hover svg path{fill:#01324b}.c-context-bar__container .c-pdf-download .u-button,.c-pdf-download .u-button{box-shadow:none;font-size:1rem;font-weight:700;line-height:1.5;padding:8px 24px}.c-context-bar__container .c-pdf-download .u-button{background-color:#025e8d}.c-pdf-download .u-button:hover{border:2px solid #fff}.c-pdf-download .u-button:focus,.c-pdf-download .u-button:hover{background:0 0;box-shadow:none;color:#fff}.c-context-bar__container .c-pdf-download .u-button:hover{border:2px solid #025e8d;box-shadow:none;color:#025e8d}.c-context-bar__container .c-pdf-download .u-button:focus,.c-pdf-download .u-button:focus{border:2px solid #025e8d}.c-article-share-box__button:focus:focus,.c-article__pill-button:focus:focus,.c-context-bar__container .c-pdf-download .u-button:focus:focus,.c-pdf-download .u-button:focus:focus{outline:3px solid #08c;will-change:transform}.c-pdf-download__link .u-icon{padding-top:0}.c-bibliographic-information__column button{margin-bottom:16px}.c-article-body .c-article-author-affiliation__list p,.c-article-body .c-article-author-information__list p,figure{margin:0}.c-article-share-box__button{margin-right:16px}.c-status-message--boxed{border-radius:12px}.c-article-associated-content__collection-title{font-size:1rem}.app-card-service__description,.c-article-body .app-card-service__description{color:#222;margin-bottom:0;margin-top:8px}.app-article-access__subscriptions a,.app-article-access__subscriptions a:visited,.app-book-series-listing__item a,.app-book-series-listing__item a:hover,.app-book-series-listing__item a:visited,.c-article-author-list a,.c-article-author-list a:visited,.c-article-buy-box a,.c-article-buy-box a:visited,.c-article-peer-review a,.c-article-peer-review a:visited,.c-article-satellite-subtitle a,.c-article-satellite-subtitle a:visited,.c-breadcrumbs__link,.c-breadcrumbs__link:hover,.c-breadcrumbs__link:visited{color:#000}.c-article-author-list svg{height:24px;margin:0 0 0 6px;width:24px}.c-article-header{margin-bottom:32px}@media only screen and (min-width:876px){.js .c-ad--conditional{display:block}}.u-lazy-ad-wrapper{background-color:#fff;display:none;min-height:149px}@media only screen and (min-width:876px){.u-lazy-ad-wrapper{display:block}}p.c-ad__label{margin-bottom:4px}.c-ad--728x90{background-color:#fff;border-bottom:2px solid #cedbe0} } </style> <style>@media only print, only all and (prefers-color-scheme: no-preference), only all and (prefers-color-scheme: light), only all and (prefers-color-scheme: dark) { .eds-c-header__brand img{height:24px;width:203px}.app-article-masthead__journal-link img{height:93px;width:72px}@media only screen and (min-width:769px){.app-article-masthead__journal-link 
img{height:161px;width:122px}} } </style> <link rel="stylesheet" data-test="critical-css-handler" data-inline-css-source="critical-css" href=/oscar-static/app-springerlink/css/core-darwin-9fe647df8f.css media="print" onload="this.media='all';this.onload=null"> <link rel="stylesheet" data-test="critical-css-handler" data-inline-css-source="critical-css" href="/oscar-static/app-springerlink/css/enhanced-darwin-article-8aaaca8a1c.css" media="print" onload="this.media='only print, only all and (prefers-color-scheme: no-preference), only all and (prefers-color-scheme: light), only all and (prefers-color-scheme: dark)';this.onload=null"> <script type="text/javascript"> config = { env: 'live', site: '11263.springer.com', siteWithPath: '11263.springer.com' + window.location.pathname, twitterHashtag: '11263', cmsPrefix: 'https://studio-cms.springernature.com/studio/', publisherBrand: 'Springer', mustardcut: false }; </script> <script> window.dataLayer = [{"GA Key":"UA-26408784-1","DOI":"10.1007/s11263-024-02006-w","Page":"article","springerJournal":true,"Publishing Model":"Hybrid Access","Country":"SG","japan":false,"doi":"10.1007-s11263-024-02006-w","Journal Id":11263,"Journal Title":"International Journal of Computer Vision","imprint":"Springer","Keywords":"Landmarks, Detection, Ensemble models","kwrd":["Landmarks","Detection","Ensemble_models"],"Labs":"Y","ksg":"Krux.segments","kuid":"Krux.uid","Has Body":"Y","Features":[],"Open Access":"Y","hasAccess":"Y","bypassPaywall":"N","user":{"license":{"businessPartnerID":[],"businessPartnerIDString":""}},"Access Type":"open","Bpids":"","Bpnames":"","BPID":["1"],"VG Wort Identifier":"vgzm.415900-10.1007-s11263-024-02006-w","Full HTML":"Y","Subject Codes":["SCI","SCI22005","SCI21000","SCI22021","SCI2203X"],"pmc":["I","I22005","I21000","I22021","I2203X"],"session":{"authentication":{"loginStatus":"N"},"attributes":{"edition":"academic"}},"content":{"serial":{"eissn":"1573-1405","pissn":"0920-5691"},"type":"Article","category":{"pmc":{"primarySubject":"Computer Science","primarySubjectCode":"I","secondarySubjects":{"1":"Computer Imaging, Vision, Pattern Recognition and Graphics","2":"Artificial Intelligence","3":"Image Processing and Computer Vision","4":"Pattern Recognition"},"secondarySubjectCodes":{"1":"I22005","2":"I21000","3":"I22021","4":"I2203X"}},"sucode":"SC6","articleType":"Article"},"attributes":{"deliveryPlatform":"oscar"}},"page":{"attributes":{"environment":"live"},"category":{"pageType":"article"}},"Event Category":"Article"}]; </script> <script data-test="springer-link-article-datalayer"> window.dataLayer = window.dataLayer || []; window.dataLayer.push({ ga4MeasurementId: 'G-B3E4QL2TPR', ga360TrackingId: 'UA-26408784-1', twitterId: 'o47a7', baiduId: 'aef3043f025ccf2305af8a194652d70b', ga4ServerUrl: 'https://collect.springer.com', imprint: 'springerlink', page: { attributes:{ featureFlags: [{ name: 'darwin-orion', active: true }, { name: 'chapter-books-recs', active: true } ], darwinAvailable: true } } }); </script> <script> (function(w, d) { w.config = w.config || {}; w.config.mustardcut = false; if (w.matchMedia && w.matchMedia('only print, only all and (prefers-color-scheme: no-preference), only all and (prefers-color-scheme: light), only all and (prefers-color-scheme: dark)').matches) { w.config.mustardcut = true; d.classList.add('js'); d.classList.remove('grade-c'); d.classList.remove('no-js'); } })(window, document.documentElement); </script> <script class="js-entry"> if (window.config.mustardcut) { (function(w, d) { window.Component = 
{}; window.suppressShareButton = false; window.onArticlePage = true; var currentScript = d.currentScript || d.head.querySelector('script.js-entry'); function catchNoModuleSupport() { var scriptEl = d.createElement('script'); return (!('noModule' in scriptEl) && 'onbeforeload' in scriptEl) } var headScripts = [ {'src': '/oscar-static/js/polyfill-es5-bundle-572d4fec60.js', 'async': false} ]; var bodyScripts = [ {'src': '/oscar-static/js/global-article-es5-bundle-dad1690b0d.js', 'async': false, 'module': false}, {'src': '/oscar-static/js/global-article-es6-bundle-e7d03c4cb3.js', 'async': false, 'module': true} ]; function createScript(script) { var scriptEl = d.createElement('script'); scriptEl.src = script.src; scriptEl.async = script.async; if (script.module === true) { scriptEl.type = "module"; if (catchNoModuleSupport()) { scriptEl.src = ''; } } else if (script.module === false) { scriptEl.setAttribute('nomodule', true) } if (script.charset) { scriptEl.setAttribute('charset', script.charset); } return scriptEl; } for (var i = 0; i < headScripts.length; ++i) { var scriptEl = createScript(headScripts[i]); currentScript.parentNode.insertBefore(scriptEl, currentScript.nextSibling); } d.addEventListener('DOMContentLoaded', function() { for (var i = 0; i < bodyScripts.length; ++i) { var scriptEl = createScript(bodyScripts[i]); d.body.appendChild(scriptEl); } }); // Webfont repeat view var config = w.config; if (config && config.publisherBrand && sessionStorage.fontsLoaded === 'true') { d.documentElement.className += ' webfonts-loaded'; } })(window, document); } </script> <script data-src="https://cdn.optimizely.com/js/27195530232.js" data-cc-script="C03"></script> <script data-test="gtm-head"> window.initGTM = function() { if (window.config.mustardcut) { (function (w, d, s, l, i) { w[l] = w[l] || []; w[l].push({'gtm.start': new Date().getTime(), event: 'gtm.js'}); var f = d.getElementsByTagName(s)[0], j = d.createElement(s), dl = l != 'dataLayer' ? 
'&l=' + l : ''; j.async = true; j.src = 'https://www.googletagmanager.com/gtm.js?id=' + i + dl; f.parentNode.insertBefore(j, f); })(window, document, 'script', 'dataLayer', 'GTM-MRVXSHQ'); } } </script> <script> (function (w, d, t) { function cc() { var h = w.location.hostname; var e = d.createElement(t), s = d.getElementsByTagName(t)[0]; if (h.indexOf('springer.com') > -1 && h.indexOf('biomedcentral.com') === -1 && h.indexOf('springeropen.com') === -1) { if (h.indexOf('link-qa.springer.com') > -1 || h.indexOf('test-www.springer.com') > -1) { e.src = 'https://cmp.springer.com/production_live/en/consent-bundle-17-52.js'; e.setAttribute('onload', "initGTM(window,document,'script','dataLayer','GTM-MRVXSHQ')"); } else { e.src = 'https://cmp.springer.com/production_live/en/consent-bundle-17-52.js'; e.setAttribute('onload', "initGTM(window,document,'script','dataLayer','GTM-MRVXSHQ')"); } } else if (h.indexOf('biomedcentral.com') > -1) { if (h.indexOf('biomedcentral.com.qa') > -1) { e.src = 'https://cmp.biomedcentral.com/production_live/en/consent-bundle-15-38.js'; e.setAttribute('onload', "initGTM(window,document,'script','dataLayer','GTM-MRVXSHQ')"); } else { e.src = 'https://cmp.biomedcentral.com/production_live/en/consent-bundle-15-38.js'; e.setAttribute('onload', "initGTM(window,document,'script','dataLayer','GTM-MRVXSHQ')"); } } else if (h.indexOf('springeropen.com') > -1) { if (h.indexOf('springeropen.com.qa') > -1) { e.src = 'https://cmp.springernature.com/production_live/en/consent-bundle-16-35.js'; e.setAttribute('onload', "initGTM(window,document,'script','dataLayer','GTM-MRVXSHQ')"); } else { e.src = 'https://cmp.springernature.com/production_live/en/consent-bundle-16-35.js'; e.setAttribute('onload', "initGTM(window,document,'script','dataLayer','GTM-MRVXSHQ')"); } } else if (h.indexOf('springernature.com') > -1) { if (h.indexOf('beta-qa.springernature.com') > -1) { e.src = 'https://cmp.springernature.com/production_live/en/consent-bundle-49-43.js'; e.setAttribute('onload', "initGTM(window,document,'script','dataLayer','GTM-NK22KLS')"); } else { e.src = 'https://cmp.springernature.com/production_live/en/consent-bundle-49-43.js'; e.setAttribute('onload', "initGTM(window,document,'script','dataLayer','GTM-NK22KLS')"); } } else { e.src = '/oscar-static/js/cookie-consent-es5-bundle-cb57c2c98a.js'; e.setAttribute('data-consent', h); } s.insertAdjacentElement('afterend', e); } cc(); })(window, document, 'script'); </script> <link rel="canonical" href="https://link.springer.com/article/10.1007/s11263-024-02006-w"/> <script type="application/ld+json">{"mainEntity":{"headline":"Automated Detection of Cat Facial Landmarks","description":"The field of animal affective computing is rapidly emerging, and analysis of facial expressions is a crucial aspect. One of the most significant challenges that researchers in the field currently face is the scarcity of high-quality, comprehensive datasets that allow the development of models for facial expressions analysis. One of the possible approaches is the utilisation of facial landmarks, which has been shown for humans and animals. In this paper we present a novel dataset of cat facial images annotated with bounding boxes and 48 facial landmarks grounded in cat facial anatomy. We also introduce a landmark detection convolution neural network-based model which uses a magnifying ensemble method. 
Our model shows excellent performance on cat faces and is generalizable to human and other animals facial landmark detection.","datePublished":"2024-03-05T00:00:00Z","dateModified":"2024-03-05T00:00:00Z","pageStart":"3103","pageEnd":"3118","license":"http://creativecommons.org/licenses/by/4.0/","sameAs":"https://doi.org/10.1007/s11263-024-02006-w","keywords":["Landmarks","Detection","Ensemble models","Computer Imaging","Vision","Pattern Recognition and Graphics","Artificial Intelligence","Image Processing and Computer Vision","Pattern Recognition"],"image":["https://media.springernature.com/lw1200/springer-static/image/art%3A10.1007%2Fs11263-024-02006-w/MediaObjects/11263_2024_2006_Fig1_HTML.png","https://media.springernature.com/lw1200/springer-static/image/art%3A10.1007%2Fs11263-024-02006-w/MediaObjects/11263_2024_2006_Fig2_HTML.png","https://media.springernature.com/lw1200/springer-static/image/art%3A10.1007%2Fs11263-024-02006-w/MediaObjects/11263_2024_2006_Fig3_HTML.png","https://media.springernature.com/lw1200/springer-static/image/art%3A10.1007%2Fs11263-024-02006-w/MediaObjects/11263_2024_2006_Fig4_HTML.png","https://media.springernature.com/lw1200/springer-static/image/art%3A10.1007%2Fs11263-024-02006-w/MediaObjects/11263_2024_2006_Fig5_HTML.png","https://media.springernature.com/lw1200/springer-static/image/art%3A10.1007%2Fs11263-024-02006-w/MediaObjects/11263_2024_2006_Fig6_HTML.png","https://media.springernature.com/lw1200/springer-static/image/art%3A10.1007%2Fs11263-024-02006-w/MediaObjects/11263_2024_2006_Fig7_HTML.png","https://media.springernature.com/lw1200/springer-static/image/art%3A10.1007%2Fs11263-024-02006-w/MediaObjects/11263_2024_2006_Fig8_HTML.png","https://media.springernature.com/lw1200/springer-static/image/art%3A10.1007%2Fs11263-024-02006-w/MediaObjects/11263_2024_2006_Fig9_HTML.png"],"isPartOf":{"name":"International Journal of Computer Vision","issn":["1573-1405","0920-5691"],"volumeNumber":"132","@type":["Periodical","PublicationVolume"]},"publisher":{"name":"Springer US","logo":{"url":"https://www.springernature.com/app-sn/public/images/logo-springernature.png","@type":"ImageObject"},"@type":"Organization"},"author":[{"name":"George Martvel","url":"http://orcid.org/0009-0009-2602-2041","affiliation":[{"name":"University of Haifa","address":{"name":"Information Systems Department, University of Haifa, Haifa, Israel","@type":"PostalAddress"},"@type":"Organization"}],"email":"martvelge@gmail.com","@type":"Person"},{"name":"Ilan Shimshoni","affiliation":[{"name":"University of Haifa","address":{"name":"Information Systems Department, University of Haifa, Haifa, Israel","@type":"PostalAddress"},"@type":"Organization"}],"@type":"Person"},{"name":"Anna Zamansky","affiliation":[{"name":"University of Haifa","address":{"name":"Information Systems Department, University of Haifa, Haifa, Israel","@type":"PostalAddress"},"@type":"Organization"}],"@type":"Person"}],"isAccessibleForFree":true,"@type":"ScholarlyArticle"},"@context":"https://schema.org","@type":"WebPage"}</script> </head> <body class="" > <!-- Google Tag Manager (noscript) --> <noscript> <iframe src="https://www.googletagmanager.com/ns.html?id=GTM-MRVXSHQ" height="0" width="0" style="display:none;visibility:hidden"></iframe> </noscript> <!-- End Google Tag Manager (noscript) --> <!-- Google Tag Manager (noscript) --> <noscript data-test="gtm-body"> <iframe src="https://www.googletagmanager.com/ns.html?id=GTM-MRVXSHQ" height="0" width="0" style="display:none;visibility:hidden"></iframe> </noscript> <!-- End 
Google Tag Manager (noscript) --> <div class="u-visually-hidden" aria-hidden="true" data-test="darwin-icons"> <?xml version="1.0" encoding="UTF-8"?><!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd"><svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><symbol id="icon-eds-i-accesses-medium" viewBox="0 0 24 24"><path d="M15.59 1a1 1 0 0 1 .706.291l5.41 5.385a1 1 0 0 1 .294.709v13.077c0 .674-.269 1.32-.747 1.796a2.549 2.549 0 0 1-1.798.742H15a1 1 0 0 1 0-2h4.455a.549.549 0 0 0 .387-.16.535.535 0 0 0 .158-.378V7.8L15.178 3H5.545a.543.543 0 0 0-.538.451L5 3.538v8.607a1 1 0 0 1-2 0V3.538A2.542 2.542 0 0 1 5.545 1h10.046ZM8 13c2.052 0 4.66 1.61 6.36 3.4l.124.141c.333.41.516.925.516 1.459 0 .6-.232 1.178-.64 1.599C12.666 21.388 10.054 23 8 23c-2.052 0-4.66-1.61-6.353-3.393A2.31 2.31 0 0 1 1 18c0-.6.232-1.178.64-1.6C3.34 14.61 5.948 13 8 13Zm0 2c-1.369 0-3.552 1.348-4.917 2.785A.31.31 0 0 0 3 18c0 .083.031.161.09.222C4.447 19.652 6.631 21 8 21c1.37 0 3.556-1.35 4.917-2.785A.31.31 0 0 0 13 18a.32.32 0 0 0-.048-.17l-.042-.052C11.553 16.348 9.369 15 8 15Zm0 1a2 2 0 1 1 0 4 2 2 0 0 1 0-4Z"/></symbol><symbol id="icon-eds-i-altmetric-medium" viewBox="0 0 24 24"><path d="M12 1c5.978 0 10.843 4.77 10.996 10.712l.004.306-.002.022-.002.248C22.843 18.23 17.978 23 12 23 5.925 23 1 18.075 1 12S5.925 1 12 1Zm-1.726 9.246L8.848 12.53a1 1 0 0 1-.718.461L8.003 13l-4.947.014a9.001 9.001 0 0 0 17.887-.001L16.553 13l-2.205 3.53a1 1 0 0 1-1.735-.068l-.05-.11-2.289-6.106ZM12 3a9.001 9.001 0 0 0-8.947 8.013l4.391-.012L9.652 7.47a1 1 0 0 1 1.784.179l2.288 6.104 1.428-2.283a1 1 0 0 1 .722-.462l.129-.008 4.943.012A9.001 9.001 0 0 0 12 3Z"/></symbol><symbol id="icon-eds-i-arrow-bend-down-medium" viewBox="0 0 24 24"><path d="m11.852 20.989.058.007L12 21l.075-.003.126-.017.111-.03.111-.044.098-.052.104-.074.082-.073 6-6a1 1 0 0 0-1.414-1.414L13 17.585v-12.2C13 4.075 11.964 3 10.667 3H4a1 1 0 1 0 0 2h6.667c.175 0 .333.164.333.385v12.2l-4.293-4.292a1 1 0 0 0-1.32-.083l-.094.083a1 1 0 0 0 0 1.414l6 6c.035.036.073.068.112.097l.11.071.114.054.105.035.118.025Z"/></symbol><symbol id="icon-eds-i-arrow-bend-down-small" viewBox="0 0 16 16"><path d="M1 2a1 1 0 0 0 1 1h5v8.585L3.707 8.293a1 1 0 0 0-1.32-.083l-.094.083a1 1 0 0 0 0 1.414l5 5 .063.059.093.069.081.048.105.048.104.035.105.022.096.01h.136l.122-.018.113-.03.103-.04.1-.053.102-.07.052-.043 5.04-5.037a1 1 0 1 0-1.415-1.414L9 11.583V3a2 2 0 0 0-2-2H2a1 1 0 0 0-1 1Z"/></symbol><symbol id="icon-eds-i-arrow-bend-up-medium" viewBox="0 0 24 24"><path d="m11.852 3.011.058-.007L12 3l.075.003.126.017.111.03.111.044.098.052.104.074.082.073 6 6a1 1 0 1 1-1.414 1.414L13 6.415v12.2C13 19.925 11.964 21 10.667 21H4a1 1 0 0 1 0-2h6.667c.175 0 .333-.164.333-.385v-12.2l-4.293 4.292a1 1 0 0 1-1.32.083l-.094-.083a1 1 0 0 1 0-1.414l6-6c.035-.036.073-.068.112-.097l.11-.071.114-.054.105-.035.118-.025Z"/></symbol><symbol id="icon-eds-i-arrow-bend-up-small" viewBox="0 0 16 16"><path d="M1 13.998a1 1 0 0 1 1-1h5V4.413L3.707 7.705a1 1 0 0 1-1.32.084l-.094-.084a1 1 0 0 1 0-1.414l5-5 .063-.059.093-.068.081-.05.105-.047.104-.035.105-.022L7.94 1l.136.001.122.017.113.03.103.04.1.053.102.07.052.043 5.04 5.037a1 1 0 1 1-1.415 1.414L9 4.415v8.583a2 2 0 0 1-2 2H2a1 1 0 0 1-1-1Z"/></symbol><symbol id="icon-eds-i-arrow-diagonal-medium" viewBox="0 0 24 24"><path d="M14 3h6l.075.003.126.017.111.03.111.044.098.052.096.067.09.08c.036.035.068.073.097.112l.071.11.054.114.035.105.03.148L21 4v6a1 1 0 0 1-2 0V6.414l-4.293 4.293a1 1 0 0 
1-1.414-1.414L17.584 5H14a1 1 0 0 1-.993-.883L13 4a1 1 0 0 1 1-1ZM4 13a1 1 0 0 1 1 1v3.584l4.293-4.291a1 1 0 1 1 1.414 1.414L6.414 19H10a1 1 0 0 1 .993.883L11 20a1 1 0 0 1-1 1l-6.075-.003-.126-.017-.111-.03-.111-.044-.098-.052-.096-.067-.09-.08a1.01 1.01 0 0 1-.097-.112l-.071-.11-.054-.114-.035-.105-.025-.118-.007-.058L3 20v-6a1 1 0 0 1 1-1Z"/></symbol><symbol id="icon-eds-i-arrow-diagonal-small" viewBox="0 0 16 16"><path d="m2 15-.082-.004-.119-.016-.111-.03-.111-.044-.098-.052-.096-.067-.09-.08a1.008 1.008 0 0 1-.097-.112l-.071-.11-.031-.062-.034-.081-.024-.076-.025-.118-.007-.058L1 14.02V9a1 1 0 1 1 2 0v2.584l2.793-2.791a1 1 0 1 1 1.414 1.414L4.414 13H7a1 1 0 0 1 .993.883L8 14a1 1 0 0 1-1 1H2ZM14 1l.081.003.12.017.111.03.111.044.098.052.096.067.09.08c.036.035.068.073.097.112l.071.11.031.062.034.081.024.076.03.148L15 2v5a1 1 0 0 1-2 0V4.414l-2.96 2.96A1 1 0 1 1 8.626 5.96L11.584 3H9a1 1 0 0 1-.993-.883L8 2a1 1 0 0 1 1-1h5Z"/></symbol><symbol id="icon-eds-i-arrow-down-medium" viewBox="0 0 24 24"><path d="m20.707 12.728-7.99 7.98a.996.996 0 0 1-.561.281l-.157.011a.998.998 0 0 1-.788-.384l-7.918-7.908a1 1 0 0 1 1.414-1.416L11 17.576V4a1 1 0 0 1 2 0v13.598l6.293-6.285a1 1 0 0 1 1.32-.082l.095.083a1 1 0 0 1-.001 1.414Z"/></symbol><symbol id="icon-eds-i-arrow-down-small" viewBox="0 0 16 16"><path d="m1.293 8.707 6 6 .063.059.093.069.081.048.105.049.104.034.056.013.118.017L8 15l.076-.003.122-.017.113-.03.085-.032.063-.03.098-.058.06-.043.05-.043 6.04-6.037a1 1 0 0 0-1.414-1.414L9 11.583V2a1 1 0 1 0-2 0v9.585L2.707 7.293a1 1 0 0 0-1.32-.083l-.094.083a1 1 0 0 0 0 1.414Z"/></symbol><symbol id="icon-eds-i-arrow-left-medium" viewBox="0 0 24 24"><path d="m11.272 3.293-7.98 7.99a.996.996 0 0 0-.281.561L3 12.001c0 .32.15.605.384.788l7.908 7.918a1 1 0 0 0 1.416-1.414L6.424 13H20a1 1 0 0 0 0-2H6.402l6.285-6.293a1 1 0 0 0 .082-1.32l-.083-.095a1 1 0 0 0-1.414.001Z"/></symbol><symbol id="icon-eds-i-arrow-left-small" viewBox="0 0 16 16"><path d="m7.293 1.293-6 6-.059.063-.069.093-.048.081-.049.105-.034.104-.013.056-.017.118L1 8l.003.076.017.122.03.113.032.085.03.063.058.098.043.06.043.05 6.037 6.04a1 1 0 0 0 1.414-1.414L4.417 9H14a1 1 0 0 0 0-2H4.415l4.292-4.293a1 1 0 0 0 .083-1.32l-.083-.094a1 1 0 0 0-1.414 0Z"/></symbol><symbol id="icon-eds-i-arrow-right-medium" viewBox="0 0 24 24"><path d="m12.728 3.293 7.98 7.99a.996.996 0 0 1 .281.561l.011.157c0 .32-.15.605-.384.788l-7.908 7.918a1 1 0 0 1-1.416-1.414L17.576 13H4a1 1 0 0 1 0-2h13.598l-6.285-6.293a1 1 0 0 1-.082-1.32l.083-.095a1 1 0 0 1 1.414.001Z"/></symbol><symbol id="icon-eds-i-arrow-right-small" viewBox="0 0 16 16"><path d="m8.707 1.293 6 6 .059.063.069.093.048.081.049.105.034.104.013.056.017.118L15 8l-.003.076-.017.122-.03.113-.032.085-.03.063-.058.098-.043.06-.043.05-6.037 6.04a1 1 0 0 1-1.414-1.414L11.583 9H2a1 1 0 1 1 0-2h9.585L7.293 2.707a1 1 0 0 1-.083-1.32l.083-.094a1 1 0 0 1 1.414 0Z"/></symbol><symbol id="icon-eds-i-arrow-up-medium" viewBox="0 0 24 24"><path d="m3.293 11.272 7.99-7.98a.996.996 0 0 1 .561-.281L12.001 3c.32 0 .605.15.788.384l7.918 7.908a1 1 0 0 1-1.414 1.416L13 6.424V20a1 1 0 0 1-2 0V6.402l-6.293 6.285a1 1 0 0 1-1.32.082l-.095-.083a1 1 0 0 1 .001-1.414Z"/></symbol><symbol id="icon-eds-i-arrow-up-small" viewBox="0 0 16 16"><path d="m1.293 7.293 6-6 .063-.059.093-.069.081-.048.105-.049.104-.034.056-.013.118-.017L8 1l.076.003.122.017.113.03.085.032.063.03.098.058.06.043.05.043 6.04 6.037a1 1 0 0 1-1.414 1.414L9 4.417V14a1 1 0 0 1-2 0V4.415L2.707 8.707a1 1 0 0 1-1.32.083l-.094-.083a1 1 0 0 1 0-1.414Z"/></symbol><symbol 
id="icon-eds-i-article-medium" viewBox="0 0 24 24"><path d="M8 7a1 1 0 0 0 0 2h4a1 1 0 1 0 0-2H8ZM8 11a1 1 0 1 0 0 2h8a1 1 0 1 0 0-2H8ZM7 16a1 1 0 0 1 1-1h8a1 1 0 1 1 0 2H8a1 1 0 0 1-1-1Z"/><path d="M5.545 1A2.542 2.542 0 0 0 3 3.538v16.924A2.542 2.542 0 0 0 5.545 23h12.91A2.542 2.542 0 0 0 21 20.462V3.5A2.5 2.5 0 0 0 18.5 1H5.545ZM5 3.538C5 3.245 5.24 3 5.545 3H18.5a.5.5 0 0 1 .5.5v16.962c0 .293-.24.538-.546.538H5.545A.542.542 0 0 1 5 20.462V3.538Z" clip-rule="evenodd"/></symbol><symbol id="icon-eds-i-book-medium" viewBox="0 0 24 24"><path d="M18.5 1A2.5 2.5 0 0 1 21 3.5v12c0 1.16-.79 2.135-1.86 2.418l-.14.031V21h1a1 1 0 0 1 .993.883L21 22a1 1 0 0 1-1 1H6.5A3.5 3.5 0 0 1 3 19.5v-15A3.5 3.5 0 0 1 6.5 1h12ZM17 18H6.5a1.5 1.5 0 0 0-1.493 1.356L5 19.5A1.5 1.5 0 0 0 6.5 21H17v-3Zm1.5-15h-12A1.5 1.5 0 0 0 5 4.5v11.837l.054-.025a3.481 3.481 0 0 1 1.254-.307L6.5 16h12a.5.5 0 0 0 .492-.41L19 15.5v-12a.5.5 0 0 0-.5-.5ZM15 6a1 1 0 0 1 0 2H9a1 1 0 1 1 0-2h6Z"/></symbol><symbol id="icon-eds-i-book-series-medium" viewBox="0 0 24 24"><path fill-rule="evenodd" d="M1 3.786C1 2.759 1.857 2 2.82 2H6.18c.964 0 1.82.759 1.82 1.786V4h3.168c.668 0 1.298.364 1.616.938.158-.109.333-.195.523-.252l3.216-.965c.923-.277 1.962.204 2.257 1.187l4.146 13.82c.296.984-.307 1.957-1.23 2.234l-3.217.965c-.923.277-1.962-.203-2.257-1.187L13 10.005v10.21c0 1.04-.878 1.785-1.834 1.785H7.833c-.291 0-.575-.07-.83-.195A1.849 1.849 0 0 1 6.18 22H2.821C1.857 22 1 21.241 1 20.214V3.786ZM3 4v11h3V4H3Zm0 16v-3h3v3H3Zm15.075-.04-.814-2.712 2.874-.862.813 2.712-2.873.862Zm1.485-5.49-2.874.862-2.634-8.782 2.873-.862 2.635 8.782ZM8 20V6h3v14H8Z" clip-rule="evenodd"/></symbol><symbol id="icon-eds-i-calendar-acceptance-medium" viewBox="0 0 24 24"><path d="M17 2a1 1 0 0 1 1 1v1h1.5C20.817 4 22 5.183 22 6.5v13c0 1.317-1.183 2.5-2.5 2.5h-15C3.183 22 2 20.817 2 19.5v-13C2 5.183 3.183 4 4.5 4a1 1 0 1 1 0 2c-.212 0-.5.288-.5.5v13c0 .212.288.5.5.5h15c.212 0 .5-.288.5-.5v-13c0-.212-.288-.5-.5-.5H18v1a1 1 0 0 1-2 0V3a1 1 0 0 1 1-1Zm-.534 7.747a1 1 0 0 1 .094 1.412l-4.846 5.538a1 1 0 0 1-1.352.141l-2.77-2.076a1 1 0 0 1 1.2-1.6l2.027 1.519 4.236-4.84a1 1 0 0 1 1.411-.094ZM7.5 2a1 1 0 0 1 1 1v1H14a1 1 0 0 1 0 2H8.5v1a1 1 0 1 1-2 0V3a1 1 0 0 1 1-1Z"/></symbol><symbol id="icon-eds-i-calendar-date-medium" viewBox="0 0 24 24"><path d="M17 2a1 1 0 0 1 1 1v1h1.5C20.817 4 22 5.183 22 6.5v13c0 1.317-1.183 2.5-2.5 2.5h-15C3.183 22 2 20.817 2 19.5v-13C2 5.183 3.183 4 4.5 4a1 1 0 1 1 0 2c-.212 0-.5.288-.5.5v13c0 .212.288.5.5.5h15c.212 0 .5-.288.5-.5v-13c0-.212-.288-.5-.5-.5H18v1a1 1 0 0 1-2 0V3a1 1 0 0 1 1-1ZM8 15a1 1 0 1 1 0 2 1 1 0 0 1 0-2Zm4 0a1 1 0 1 1 0 2 1 1 0 0 1 0-2Zm-4-4a1 1 0 1 1 0 2 1 1 0 0 1 0-2Zm4 0a1 1 0 1 1 0 2 1 1 0 0 1 0-2Zm4 0a1 1 0 1 1 0 2 1 1 0 0 1 0-2ZM7.5 2a1 1 0 0 1 1 1v1H14a1 1 0 0 1 0 2H8.5v1a1 1 0 1 1-2 0V3a1 1 0 0 1 1-1Z"/></symbol><symbol id="icon-eds-i-calendar-decision-medium" viewBox="0 0 24 24"><path d="M17 2a1 1 0 0 1 1 1v1h1.5C20.817 4 22 5.183 22 6.5v13c0 1.317-1.183 2.5-2.5 2.5h-15C3.183 22 2 20.817 2 19.5v-13C2 5.183 3.183 4 4.5 4a1 1 0 1 1 0 2c-.212 0-.5.288-.5.5v13c0 .212.288.5.5.5h15c.212 0 .5-.288.5-.5v-13c0-.212-.288-.5-.5-.5H18v1a1 1 0 0 1-2 0V3a1 1 0 0 1 1-1Zm-2.935 8.246 2.686 2.645c.34.335.34.883 0 1.218l-2.686 2.645a.858.858 0 0 1-1.213-.009.854.854 0 0 1 .009-1.21l1.05-1.035H7.984a.992.992 0 0 1-.984-1c0-.552.44-1 .984-1h5.928l-1.051-1.036a.854.854 0 0 1-.085-1.121l.076-.088a.858.858 0 0 1 1.213-.009ZM7.5 2a1 1 0 0 1 1 1v1H14a1 1 0 0 1 0 2H8.5v1a1 1 0 1 1-2 0V3a1 1 0 0 1 1-1Z"/></symbol><symbol 
id="icon-eds-i-calendar-impact-factor-medium" viewBox="0 0 24 24"><path d="M17 2a1 1 0 0 1 1 1v1h1.5C20.817 4 22 5.183 22 6.5v13c0 1.317-1.183 2.5-2.5 2.5h-15C3.183 22 2 20.817 2 19.5v-13C2 5.183 3.183 4 4.5 4a1 1 0 1 1 0 2c-.212 0-.5.288-.5.5v13c0 .212.288.5.5.5h15c.212 0 .5-.288.5-.5v-13c0-.212-.288-.5-.5-.5H18v1a1 1 0 0 1-2 0V3a1 1 0 0 1 1-1Zm-3.2 6.924a.48.48 0 0 1 .125.544l-1.52 3.283h2.304c.27 0 .491.215.491.483a.477.477 0 0 1-.13.327l-4.18 4.484a.498.498 0 0 1-.69.031.48.48 0 0 1-.125-.544l1.52-3.284H9.291a.487.487 0 0 1-.491-.482c0-.121.047-.238.13-.327l4.18-4.484a.498.498 0 0 1 .69-.031ZM7.5 2a1 1 0 0 1 1 1v1H14a1 1 0 0 1 0 2H8.5v1a1 1 0 1 1-2 0V3a1 1 0 0 1 1-1Z"/></symbol><symbol id="icon-eds-i-call-papers-medium" viewBox="0 0 24 24"><g><path d="m20.707 2.883-1.414 1.414a1 1 0 0 0 1.414 1.414l1.414-1.414a1 1 0 0 0-1.414-1.414Z"/><path d="M6 16.054c0 2.026 1.052 2.943 3 2.943a1 1 0 1 1 0 2c-2.996 0-5-1.746-5-4.943v-1.227a4.068 4.068 0 0 1-1.83-1.189 4.553 4.553 0 0 1-.87-1.455 4.868 4.868 0 0 1-.3-1.686c0-1.17.417-2.298 1.17-3.14.38-.426.834-.767 1.338-1 .51-.237 1.06-.36 1.617-.36L6.632 6H7l7.932-2.895A2.363 2.363 0 0 1 18 5.36v9.28a2.36 2.36 0 0 1-3.069 2.25l.084.03L7 14.997H6v1.057Zm9.637-11.057a.415.415 0 0 0-.083.008L8 7.638v5.536l7.424 1.786.104.02c.035.01.072.02.109.02.2 0 .363-.16.363-.36V5.36c0-.2-.163-.363-.363-.363Zm-9.638 3h-.874a1.82 1.82 0 0 0-.625.111l-.15.063a2.128 2.128 0 0 0-.689.517c-.42.47-.661 1.123-.661 1.81 0 .34.06.678.176.992.114.308.28.585.485.816.4.447.925.691 1.464.691h.874v-5Z" clip-rule="evenodd"/><path d="M20 8.997h2a1 1 0 1 1 0 2h-2a1 1 0 1 1 0-2ZM20.707 14.293l1.414 1.414a1 1 0 0 1-1.414 1.414l-1.414-1.414a1 1 0 0 1 1.414-1.414Z"/></g></symbol><symbol id="icon-eds-i-card-medium" viewBox="0 0 24 24"><path d="M19.615 2c.315 0 .716.067 1.14.279.76.38 1.245 1.107 1.245 2.106v15.23c0 .315-.067.716-.279 1.14-.38.76-1.107 1.245-2.106 1.245H4.385a2.56 2.56 0 0 1-1.14-.279C2.485 21.341 2 20.614 2 19.615V4.385c0-.315.067-.716.279-1.14C2.659 2.485 3.386 2 4.385 2h15.23Zm0 2H4.385c-.213 0-.265.034-.317.14A.71.71 0 0 0 4 4.385v15.23c0 .213.034.265.14.317a.71.71 0 0 0 .245.068h15.23c.213 0 .265-.034.317-.14a.71.71 0 0 0 .068-.245V4.385c0-.213-.034-.265-.14-.317A.71.71 0 0 0 19.615 4ZM17 16a1 1 0 0 1 0 2H7a1 1 0 0 1 0-2h10Zm0-3a1 1 0 0 1 0 2H7a1 1 0 0 1 0-2h10Zm-.5-7A1.5 1.5 0 0 1 18 7.5v3a1.5 1.5 0 0 1-1.5 1.5h-9A1.5 1.5 0 0 1 6 10.5v-3A1.5 1.5 0 0 1 7.5 6h9ZM16 8H8v2h8V8Z"/></symbol><symbol id="icon-eds-i-cart-medium" viewBox="0 0 24 24"><path d="M5.76 1a1 1 0 0 1 .994.902L7.155 6h13.34c.18 0 .358.02.532.057l.174.045a2.5 2.5 0 0 1 1.693 3.103l-2.069 7.03c-.36 1.099-1.398 1.823-2.49 1.763H8.65c-1.272.015-2.352-.927-2.546-2.244L4.852 3H2a1 1 0 0 1-.993-.883L1 2a1 1 0 0 1 1-1h3.76Zm2.328 14.51a.555.555 0 0 0 .55.488l9.751.001a.533.533 0 0 0 .527-.357l2.059-7a.5.5 0 0 0-.48-.642H7.351l.737 7.51ZM18 19a2 2 0 1 1 0 4 2 2 0 0 1 0-4ZM8 19a2 2 0 1 1 0 4 2 2 0 0 1 0-4Z"/></symbol><symbol id="icon-eds-i-check-circle-medium" viewBox="0 0 24 24"><path d="M12 1c6.075 0 11 4.925 11 11s-4.925 11-11 11S1 18.075 1 12 5.925 1 12 1Zm0 2a9 9 0 1 0 0 18 9 9 0 0 0 0-18Zm5.125 4.72a1 1 0 0 1 .156 1.405l-6 7.5a1 1 0 0 1-1.421.143l-3-2.5a1 1 0 0 1 1.28-1.536l2.217 1.846 5.362-6.703a1 1 0 0 1 1.406-.156Z"/></symbol><symbol id="icon-eds-i-check-filled-medium" viewBox="0 0 24 24"><path d="M12 1c6.075 0 11 4.925 11 11s-4.925 11-11 11S1 18.075 1 12 5.925 1 12 1Zm5.125 6.72a1 1 0 0 0-1.406.155l-5.362 6.703-2.217-1.846a1 1 0 1 0-1.28 1.536l3 2.5a1 1 0 0 0 1.42-.143l6-7.5a1 1 0 0 
0-.155-1.406Z"/></symbol><symbol id="icon-eds-i-chevron-down-medium" viewBox="0 0 24 24"><path d="M3.305 8.28a1 1 0 0 0-.024 1.415l7.495 7.762c.314.345.757.543 1.224.543.467 0 .91-.198 1.204-.522l7.515-7.783a1 1 0 1 0-1.438-1.39L12 15.845l-7.28-7.54A1 1 0 0 0 3.4 8.2l-.096.082Z"/></symbol><symbol id="icon-eds-i-chevron-down-small" viewBox="0 0 16 16"><path d="M13.692 5.278a1 1 0 0 1 .03 1.414L9.103 11.51a1.491 1.491 0 0 1-2.188.019L2.278 6.692a1 1 0 0 1 1.444-1.384L8 9.771l4.278-4.463a1 1 0 0 1 1.318-.111l.096.081Z"/></symbol><symbol id="icon-eds-i-chevron-left-medium" viewBox="0 0 24 24"><path d="M15.72 3.305a1 1 0 0 0-1.415-.024l-7.762 7.495A1.655 1.655 0 0 0 6 12c0 .467.198.91.522 1.204l7.783 7.515a1 1 0 1 0 1.39-1.438L8.155 12l7.54-7.28A1 1 0 0 0 15.8 3.4l-.082-.096Z"/></symbol><symbol id="icon-eds-i-chevron-left-small" viewBox="0 0 16 16"><path d="M10.722 2.308a1 1 0 0 0-1.414-.03L4.49 6.897a1.491 1.491 0 0 0-.019 2.188l4.838 4.637a1 1 0 1 0 1.384-1.444L6.229 8l4.463-4.278a1 1 0 0 0 .111-1.318l-.081-.096Z"/></symbol><symbol id="icon-eds-i-chevron-right-medium" viewBox="0 0 24 24"><path d="M8.28 3.305a1 1 0 0 1 1.415-.024l7.762 7.495c.345.314.543.757.543 1.224 0 .467-.198.91-.522 1.204l-7.783 7.515a1 1 0 1 1-1.39-1.438L15.845 12l-7.54-7.28A1 1 0 0 1 8.2 3.4l.082-.096Z"/></symbol><symbol id="icon-eds-i-chevron-right-small" viewBox="0 0 16 16"><path d="M5.278 2.308a1 1 0 0 1 1.414-.03l4.819 4.619a1.491 1.491 0 0 1 .019 2.188l-4.838 4.637a1 1 0 1 1-1.384-1.444L9.771 8 5.308 3.722a1 1 0 0 1-.111-1.318l.081-.096Z"/></symbol><symbol id="icon-eds-i-chevron-up-medium" viewBox="0 0 24 24"><path d="M20.695 15.72a1 1 0 0 0 .024-1.415l-7.495-7.762A1.655 1.655 0 0 0 12 6c-.467 0-.91.198-1.204.522l-7.515 7.783a1 1 0 1 0 1.438 1.39L12 8.155l7.28 7.54a1 1 0 0 0 1.319.106l.096-.082Z"/></symbol><symbol id="icon-eds-i-chevron-up-small" viewBox="0 0 16 16"><path d="M13.692 10.722a1 1 0 0 0 .03-1.414L9.103 4.49a1.491 1.491 0 0 0-2.188-.019L2.278 9.308a1 1 0 0 0 1.444 1.384L8 6.229l4.278 4.463a1 1 0 0 0 1.318.111l.096-.081Z"/></symbol><symbol id="icon-eds-i-citations-medium" viewBox="0 0 24 24"><path d="M15.59 1a1 1 0 0 1 .706.291l5.41 5.385a1 1 0 0 1 .294.709v13.077c0 .674-.269 1.32-.747 1.796a2.549 2.549 0 0 1-1.798.742h-5.843a1 1 0 1 1 0-2h5.843a.549.549 0 0 0 .387-.16.535.535 0 0 0 .158-.378V7.8L15.178 3H5.545a.543.543 0 0 0-.538.451L5 3.538v8.607a1 1 0 0 1-2 0V3.538A2.542 2.542 0 0 1 5.545 1h10.046ZM5.483 14.35c.197.26.17.62-.049.848l-.095.083-.016.011c-.36.24-.628.45-.804.634-.393.409-.59.93-.59 1.562.077-.019.192-.028.345-.028.442 0 .84.158 1.195.474.355.316.532.716.532 1.2 0 .501-.173.9-.518 1.198-.345.298-.767.446-1.266.446-.672 0-1.209-.195-1.612-.585-.403-.39-.604-.976-.604-1.757 0-.744.11-1.39.33-1.938.222-.549.49-1.009.807-1.38a4.28 4.28 0 0 1 .992-.88c.07-.043.148-.087.232-.133a.881.881 0 0 1 1.121.245Zm5 0c.197.26.17.62-.049.848l-.095.083-.016.011c-.36.24-.628.45-.804.634-.393.409-.59.93-.59 1.562.077-.019.192-.028.345-.028.442 0 .84.158 1.195.474.355.316.532.716.532 1.2 0 .501-.173.9-.518 1.198-.345.298-.767.446-1.266.446-.672 0-1.209-.195-1.612-.585-.403-.39-.604-.976-.604-1.757 0-.744.11-1.39.33-1.938.222-.549.49-1.009.807-1.38a4.28 4.28 0 0 1 .992-.88c.07-.043.148-.087.232-.133a.881.881 0 0 1 1.121.245Z"/></symbol><symbol id="icon-eds-i-clipboard-check-medium" viewBox="0 0 24 24"><path d="M14.4 1c1.238 0 2.274.865 2.536 2.024L18.5 3C19.886 3 21 4.14 21 5.535v14.93C21 21.86 19.886 23 18.5 23h-13C4.114 23 3 21.86 3 20.465V5.535C3 4.14 4.114 3 5.5 3h1.57c.27-1.147 1.3-2 2.53-2h4.8Zm4.115 
4-1.59.024A2.601 2.601 0 0 1 14.4 7H9.6c-1.23 0-2.26-.853-2.53-2H5.5c-.27 0-.5.234-.5.535v14.93c0 .3.23.535.5.535h13c.27 0 .5-.234.5-.535V5.535c0-.3-.23-.535-.485-.535Zm-1.909 4.205a1 1 0 0 1 .19 1.401l-5.334 7a1 1 0 0 1-1.344.23l-2.667-1.75a1 1 0 1 1 1.098-1.672l1.887 1.238 4.769-6.258a1 1 0 0 1 1.401-.19ZM14.4 3H9.6a.6.6 0 0 0-.6.6v.8a.6.6 0 0 0 .6.6h4.8a.6.6 0 0 0 .6-.6v-.8a.6.6 0 0 0-.6-.6Z"/></symbol><symbol id="icon-eds-i-clipboard-report-medium" viewBox="0 0 24 24"><path d="M14.4 1c1.238 0 2.274.865 2.536 2.024L18.5 3C19.886 3 21 4.14 21 5.535v14.93C21 21.86 19.886 23 18.5 23h-13C4.114 23 3 21.86 3 20.465V5.535C3 4.14 4.114 3 5.5 3h1.57c.27-1.147 1.3-2 2.53-2h4.8Zm4.115 4-1.59.024A2.601 2.601 0 0 1 14.4 7H9.6c-1.23 0-2.26-.853-2.53-2H5.5c-.27 0-.5.234-.5.535v14.93c0 .3.23.535.5.535h13c.27 0 .5-.234.5-.535V5.535c0-.3-.23-.535-.485-.535Zm-2.658 10.929a1 1 0 0 1 0 2H8a1 1 0 0 1 0-2h7.857Zm0-3.929a1 1 0 0 1 0 2H8a1 1 0 0 1 0-2h7.857ZM14.4 3H9.6a.6.6 0 0 0-.6.6v.8a.6.6 0 0 0 .6.6h4.8a.6.6 0 0 0 .6-.6v-.8a.6.6 0 0 0-.6-.6Z"/></symbol><symbol id="icon-eds-i-close-medium" viewBox="0 0 24 24"><path d="M12 1c6.075 0 11 4.925 11 11s-4.925 11-11 11S1 18.075 1 12 5.925 1 12 1Zm0 2a9 9 0 1 0 0 18 9 9 0 0 0 0-18ZM8.707 7.293 12 10.585l3.293-3.292a1 1 0 0 1 1.414 1.414L13.415 12l3.292 3.293a1 1 0 0 1-1.414 1.414L12 13.415l-3.293 3.292a1 1 0 1 1-1.414-1.414L10.585 12 7.293 8.707a1 1 0 0 1 1.414-1.414Z"/></symbol><symbol id="icon-eds-i-cloud-upload-medium" viewBox="0 0 24 24"><path d="m12.852 10.011.028-.004L13 10l.075.003.126.017.086.022.136.052.098.052.104.074.082.073 3 3a1 1 0 0 1 0 1.414l-.094.083a1 1 0 0 1-1.32-.083L14 13.416V20a1 1 0 0 1-2 0v-6.586l-1.293 1.293a1 1 0 0 1-1.32.083l-.094-.083a1 1 0 0 1 0-1.414l3-3 .112-.097.11-.071.114-.054.105-.035.118-.025Zm.587-7.962c3.065.362 5.497 2.662 5.992 5.562l.013.085.207.073c2.117.782 3.496 2.845 3.337 5.097l-.022.226c-.297 2.561-2.503 4.491-5.124 4.502a1 1 0 1 1-.009-2c1.619-.007 2.967-1.186 3.147-2.733.179-1.542-.86-2.979-2.487-3.353-.512-.149-.894-.579-.981-1.165-.21-2.237-2-4.035-4.308-4.308-2.31-.273-4.497 1.06-5.25 3.19l-.049.113c-.234.468-.718.756-1.176.743-1.418.057-2.689.857-3.32 2.084a3.668 3.668 0 0 0 .262 3.798c.796 1.136 2.169 1.764 3.583 1.635a1 1 0 1 1 .182 1.992c-2.125.194-4.193-.753-5.403-2.48a5.668 5.668 0 0 1-.403-5.86c.85-1.652 2.449-2.79 4.323-3.092l.287-.039.013-.028c1.207-2.741 4.125-4.404 7.186-4.042Z"/></symbol><symbol id="icon-eds-i-collection-medium" viewBox="0 0 24 24"><path d="M21 7a1 1 0 0 1 1 1v12.5a2.5 2.5 0 0 1-2.5 2.5H8a1 1 0 0 1 0-2h11.5a.5.5 0 0 0 .5-.5V8a1 1 0 0 1 1-1Zm-5.5-5A2.5 2.5 0 0 1 18 4.5v12a2.5 2.5 0 0 1-2.5 2.5h-11A2.5 2.5 0 0 1 2 16.5v-12A2.5 2.5 0 0 1 4.5 2h11Zm0 2h-11a.5.5 0 0 0-.5.5v12a.5.5 0 0 0 .5.5h11a.5.5 0 0 0 .5-.5v-12a.5.5 0 0 0-.5-.5ZM13 13a1 1 0 0 1 0 2H7a1 1 0 0 1 0-2h6Zm0-3.5a1 1 0 0 1 0 2H7a1 1 0 0 1 0-2h6ZM13 6a1 1 0 0 1 0 2H7a1 1 0 1 1 0-2h6Z"/></symbol><symbol id="icon-eds-i-conference-series-medium" viewBox="0 0 24 24"><path fill-rule="evenodd" d="M4.5 2A2.5 2.5 0 0 0 2 4.5v11A2.5 2.5 0 0 0 4.5 18h2.37l-2.534 2.253a1 1 0 0 0 1.328 1.494L9.88 18H11v3a1 1 0 1 0 2 0v-3h1.12l4.216 3.747a1 1 0 0 0 1.328-1.494L17.13 18h2.37a2.5 2.5 0 0 0 2.5-2.5v-11A2.5 2.5 0 0 0 19.5 2h-15ZM20 6V4.5a.5.5 0 0 0-.5-.5h-15a.5.5 0 0 0-.5.5V6h16ZM4 8v7.5a.5.5 0 0 0 .5.5h15a.5.5 0 0 0 .5-.5V8H4Z" clip-rule="evenodd"/></symbol><symbol id="icon-eds-i-delivery-medium" viewBox="0 0 24 24"><path d="M8.51 20.598a3.037 3.037 0 0 1-3.02 0A2.968 2.968 0 0 1 4.161 19L3.5 19A2.5 2.5 0 0 1 1 16.5v-11A2.5 2.5 0 0 1 3.5 
<li class="c-breadcrumbs__item" id="breadcrumb1" itemprop="itemListElement" itemscope="" itemtype="https://schema.org/ListItem"> <a href="/journal/11263" class="c-breadcrumbs__link" itemprop="item" data-track="click_breadcrumb" data-track-context="article page" data-track-category="article" data-track-action="breadcrumbs" data-track-label="breadcrumb2"><span itemprop="name">International Journal of Computer Vision</span></a><meta itemprop="position" content="2"> <svg class="c-breadcrumbs__chevron" role="img" aria-hidden="true" focusable="false" width="10" height="10" viewBox="0 0 10 10"> <path d="m5.96738168 4.70639573 2.39518594-2.41447274c.37913917-.38219212.98637524-.38972225 1.35419292-.01894278.37750606.38054586.37784436.99719163-.00013556 1.37821513l-4.03074001 4.06319683c-.37758093.38062133-.98937525.38100976-1.367372-.00003075l-4.03091981-4.06337806c-.37759778-.38063832-.38381821-.99150444-.01600053-1.3622839.37750607-.38054587.98772445-.38240057 1.37006824.00302197l2.39538588 2.4146743.96295325.98624457z" fill-rule="evenodd" transform="matrix(0 -1 1 0 0 10)"/> </svg> </li> <li class="c-breadcrumbs__item" id="breadcrumb2" itemprop="itemListElement" itemscope="" itemtype="https://schema.org/ListItem"> <span itemprop="name">Article</span><meta itemprop="position" content="3"> </li> </ol> </nav> <h1 class="c-article-title" data-test="article-title" data-article-title="">Automated Detection of Cat Facial Landmarks</h1> <ul class="c-article-identifiers"> <li class="c-article-identifiers__item"> <a href="https://www.springernature.com/gp/open-research/about/the-fundamentals-of-open-access-and-open-research" data-track="click" data-track-action="open access" data-track-label="link" class="u-color-open-access" data-test="open-access">Open access</a> </li> <li class="c-article-identifiers__item"> Published: <time datetime="2024-03-05">05 March 2024</time> </li> </ul> <ul class="c-article-identifiers c-article-identifiers--cite-list"> <li class="c-article-identifiers__item"> <span data-test="journal-volume">Volume 132</span>, pages 3103–3118, (<span data-test="article-publication-year">2024</span>) </li> <li class="c-article-identifiers__item c-article-identifiers__item--cite"> <a href="#citeas" data-track="click" data-track-action="cite this article" data-track-category="article body" data-track-label="link">Cite this article</a> </li> </ul> <div class="app-article-masthead__buttons" data-test="download-article-link-wrapper" data-track-context="masthead"> <div class="c-pdf-container"> <div class="c-pdf-download u-clear-both u-mb-16"> <a href="/content/pdf/10.1007/s11263-024-02006-w.pdf" class="u-button u-button--full-width u-button--primary u-justify-content-space-between c-pdf-download__link" data-article-pdf="true" data-readcube-pdf-url="true" data-test="pdf-link" data-draft-ignore="true" data-track="content_download" data-track-type="article pdf download" data-track-action="download pdf" data-track-label="button" data-track-external download> <span class="c-pdf-download__text">Download PDF</span> <svg aria-hidden="true" focusable="false" width="16" height="16" class="u-icon"><use xlink:href="#icon-eds-i-download-medium"/></svg> </a> </div> </div> <p class="app-article-masthead__access"> <svg width="16" height="16" focusable="false" role="img" aria-hidden="true"><use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#icon-eds-i-check-filled-medium"></use></svg> You have full access to this <a 
href="https://www.springernature.com/gp/open-research/about/the-fundamentals-of-open-access-and-open-research" data-track="click" data-track-action="open access" data-track-label="link">open access</a> article</p> </div> </div> <div class="app-article-masthead__brand"> <a href="/journal/11263" class="app-article-masthead__journal-link" data-track="click_journal_home" data-track-action="journal homepage" data-track-context="article page" data-track-label="link"> <picture> <source type="image/webp" media="(min-width: 768px)" width="120" height="159" srcset="https://media.springernature.com/w120/springer-static/cover-hires/journal/11263?as=webp, https://media.springernature.com/w316/springer-static/cover-hires/journal/11263?as=webp 2x"> <img width="72" height="95" src="https://media.springernature.com/w72/springer-static/cover-hires/journal/11263?as=webp" srcset="https://media.springernature.com/w144/springer-static/cover-hires/journal/11263?as=webp 2x" alt=""> </picture> <span class="app-article-masthead__journal-title">International Journal of Computer Vision</span> </a> <a href="https://link.springer.com/journal/11263/aims-and-scope" class="app-article-masthead__submission-link" data-track="click_aims_and_scope" data-track-action="aims and scope" data-track-context="article page" data-track-label="link"> Aims and scope <svg width="16" height="16" focusable="false" role="img" aria-hidden="true" class="u-icon"><use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#icon-eds-i-arrow-right-medium"></use></svg> </a> <a href="https://www.editorialmanager.com/visi" class="app-article-masthead__submission-link" data-track="click_submit_manuscript" data-track-context="article masthead on springerlink article page" data-track-action="submit manuscript" data-track-label="link"> Submit manuscript <svg width="16" height="16" focusable="false" role="img" aria-hidden="true" class="u-icon"><use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#icon-eds-i-arrow-right-medium"></use></svg> </a> </div> </div> </div> </section> <div class="c-article-main u-container u-mt-24 u-mb-32 l-with-sidebar" id="main-content" data-component="article-container"> <main class="u-serif js-main-column" data-track-component="article body"> <div class="c-context-bar u-hide" data-test="context-bar" data-context-bar aria-hidden="true"> <div class="c-context-bar__container u-container"> <div class="c-context-bar__title"> Automated Detection of Cat Facial Landmarks </div> <div data-test="inCoD" data-track-context="sticky banner"> <div class="c-pdf-container"> <div class="c-pdf-download u-clear-both u-mb-16"> <a href="/content/pdf/10.1007/s11263-024-02006-w.pdf" class="u-button u-button--full-width u-button--primary u-justify-content-space-between c-pdf-download__link" data-article-pdf="true" data-readcube-pdf-url="true" data-test="pdf-link" data-draft-ignore="true" data-track="content_download" data-track-type="article pdf download" data-track-action="download pdf" data-track-label="button" data-track-external download> <span class="c-pdf-download__text">Download PDF</span> <svg aria-hidden="true" focusable="false" width="16" height="16" class="u-icon"><use xlink:href="#icon-eds-i-download-medium"/></svg> </a> </div> </div> </div> </div> </div> <div class="c-article-header"> <header> <ul class="c-article-author-list c-article-author-list--short" data-test="authors-list" data-component-authors-activator="authors-list"><li class="c-article-author-list__item"><a data-test="author-name" data-track="click" 
data-track-action="open author" data-track-label="link" href="#auth-George-Martvel-Aff1" data-author-popup="auth-George-Martvel-Aff1" data-author-search="Martvel, George" data-corresp-id="c1">George Martvel<svg width="16" height="16" focusable="false" role="img" aria-hidden="true" class="u-icon"><use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#icon-eds-i-mail-medium"></use></svg></a><span class="u-js-hide"> <a class="js-orcid" href="http://orcid.org/0009-0009-2602-2041"><span class="u-visually-hidden">ORCID: </span>orcid.org/0009-0009-2602-2041</a></span><sup class="u-js-hide"><a href="#Aff1">1</a></sup>, </li><li class="c-article-author-list__item"><a data-test="author-name" data-track="click" data-track-action="open author" data-track-label="link" href="#auth-Ilan-Shimshoni-Aff1" data-author-popup="auth-Ilan-Shimshoni-Aff1" data-author-search="Shimshoni, Ilan">Ilan Shimshoni</a><sup class="u-js-hide"><a href="#Aff1">1</a></sup> & </li><li class="c-article-author-list__item"><a data-test="author-name" data-track="click" data-track-action="open author" data-track-label="link" href="#auth-Anna-Zamansky-Aff1" data-author-popup="auth-Anna-Zamansky-Aff1" data-author-search="Zamansky, Anna">Anna Zamansky</a><sup class="u-js-hide"><a href="#Aff1">1</a></sup> </li></ul> <div data-test="article-metrics"> <ul class="app-article-metrics-bar u-list-reset"> <li class="app-article-metrics-bar__item"> <p class="app-article-metrics-bar__count"><svg class="u-icon app-article-metrics-bar__icon" width="24" height="24" aria-hidden="true" focusable="false"> <use xlink:href="#icon-eds-i-accesses-medium"></use> </svg>4529 <span class="app-article-metrics-bar__label">Accesses</span></p> </li> <li class="app-article-metrics-bar__item"> <p class="app-article-metrics-bar__count"><svg class="u-icon app-article-metrics-bar__icon" width="24" height="24" aria-hidden="true" focusable="false"> <use xlink:href="#icon-eds-i-altmetric-medium"></use> </svg>2 <span class="app-article-metrics-bar__label">Altmetric</span></p> </li> <li class="app-article-metrics-bar__item app-article-metrics-bar__item--metrics"> <p class="app-article-metrics-bar__details"><a href="/article/10.1007/s11263-024-02006-w/metrics" data-track="click" data-track-action="view metrics" data-track-label="link" rel="nofollow">Explore all metrics <svg class="u-icon app-article-metrics-bar__arrow-icon" width="24" height="24" aria-hidden="true" focusable="false"> <use xlink:href="#icon-eds-i-arrow-right-medium"></use> </svg></a></p> </li> </ul> </div> <div class="u-mt-32"> </div> </header> </div> <div data-article-body="true" data-track-component="article body" class="c-article-body"> <section aria-labelledby="Abs1" data-title="Abstract" lang="en"><div class="c-article-section" id="Abs1-section"><h2 class="c-article-section__title js-section-title js-c-reading-companion-sections-item" id="Abs1">Abstract</h2><div class="c-article-section__content" id="Abs1-content"><p>The field of animal affective computing is rapidly emerging, and analysis of facial expressions is a crucial aspect. One of the most significant challenges that researchers in the field currently face is the scarcity of high-quality, comprehensive datasets that allow the development of models for facial expressions analysis. One of the possible approaches is the utilisation of facial landmarks, which has been shown for humans and animals. In this paper we present a novel dataset of cat facial images annotated with bounding boxes and 48 facial landmarks grounded in cat facial anatomy. 
We also introduce a landmark detection convolution neural network-based model which uses a magnifying ensemble method. Our model shows excellent performance on cat faces and is generalizable to human and other animals facial landmark detection.</p></div></div></section> <div data-test="cobranding-download"> </div> <section aria-labelledby="inline-recommendations" data-title="Inline Recommendations" class="c-article-recommendations" data-track-component="inline-recommendations"> <h3 class="c-article-recommendations-title" id="inline-recommendations">Similar content being viewed by others</h3> <div class="c-article-recommendations-list"> <div class="c-article-recommendations-list__item"> <article class="c-article-recommendations-card" itemscope itemtype="http://schema.org/ScholarlyArticle"> <div class="c-article-recommendations-card__img"><img src="https://media.springernature.com/w215h120/springer-static/image/art%3A10.1007%2Fs11042-018-6482-7/MediaObjects/11042_2018_6482_Fig1_HTML.png" loading="lazy" alt=""></div> <div class="c-article-recommendations-card__main"> <h3 class="c-article-recommendations-card__heading" itemprop="name headline"> <a class="c-article-recommendations-card__link" itemprop="url" href="https://link.springer.com/10.1007/s11042-018-6482-7?fromPaywallRec=false" data-track="select_recommendations_1" data-track-context="inline recommendations" data-track-action="click recommendations inline - 1" data-track-label="10.1007/s11042-018-6482-7">Robust facial landmark extraction scheme using multiple convolutional neural networks </a> </h3> <div class="c-article-meta-recommendations" data-test="recommendation-info"> <span class="c-article-meta-recommendations__item-type">Article</span> <span class="c-article-meta-recommendations__date">23 August 2018</span> </div> </div> </article> </div> <div class="c-article-recommendations-list__item"> <article class="c-article-recommendations-card" itemscope itemtype="http://schema.org/ScholarlyArticle"> <div class="c-article-recommendations-card__img"><img src="https://media.springernature.com/w92h120/springer-static/cover-hires/book/978-3-030-13469-3?as=webp" loading="lazy" alt=""></div> <div class="c-article-recommendations-card__main"> <h3 class="c-article-recommendations-card__heading" itemprop="name headline"> <a class="c-article-recommendations-card__link" itemprop="url" href="https://link.springer.com/10.1007/978-3-030-13469-3_67?fromPaywallRec=false" data-track="select_recommendations_2" data-track-context="inline recommendations" data-track-action="click recommendations inline - 2" data-track-label="10.1007/978-3-030-13469-3_67">Facial Landmarks Detection Using a Cascade of Recombinator Networks </a> </h3> <div class="c-article-meta-recommendations" data-test="recommendation-info"> <span class="c-article-meta-recommendations__item-type">Chapter</span> <span class="c-article-meta-recommendations__date">© 2019</span> </div> </div> </article> </div> <div class="c-article-recommendations-list__item"> <article class="c-article-recommendations-card" itemscope itemtype="http://schema.org/ScholarlyArticle"> <div class="c-article-recommendations-card__img"><img src="https://media.springernature.com/w215h120/springer-static/image/art%3A10.1007%2Fs11263-019-01151-x/MediaObjects/11263_2019_1151_Fig1_HTML.jpg" loading="lazy" alt=""></div> <div class="c-article-recommendations-card__main"> <h3 class="c-article-recommendations-card__heading" itemprop="name headline"> <a class="c-article-recommendations-card__link" itemprop="url" 
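The abstract describes each dataset entry as an image annotated with a face bounding box and 48 anatomy-grounded facial landmarks, and the detector as a convolutional neural network using a magnifying ensemble method. As a minimal sketch only, the Python snippet below shows one plausible way such an annotation could be represented, together with a generic bounding-box-normalized mean error of the kind commonly used to score landmark detectors; the `CatFaceAnnotation` class, its field layout, and the normalization choice are illustrative assumptions and are not taken from the paper or its dataset release.

```python
# Hypothetical sketch: how a 48-landmark cat-face annotation might be stored and
# how a detector's output might be scored. Names and layout are assumptions for
# illustration, not the paper's actual dataset format or evaluation protocol.
from dataclasses import dataclass
import numpy as np


@dataclass
class CatFaceAnnotation:
    image_path: str
    bbox: tuple            # (x_min, y_min, x_max, y_max), assumed pixel coordinates
    landmarks: np.ndarray  # shape (48, 2): one (x, y) pair per facial landmark


def normalized_mean_error(pred: np.ndarray, gt: np.ndarray, bbox: tuple) -> float:
    """Mean Euclidean landmark error, normalized by the bounding-box diagonal.

    Normalizing by a face-size proxy (here the bbox diagonal) makes errors
    comparable across images of different resolutions; other normalizations,
    such as inter-ocular distance, are equally common in the literature.
    """
    x_min, y_min, x_max, y_max = bbox
    diag = np.hypot(x_max - x_min, y_max - y_min)
    per_point = np.linalg.norm(pred - gt, axis=1)  # distance per landmark
    return float(per_point.mean() / diag)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.uniform(0, 224, size=(48, 2))           # ground-truth landmarks
    pred = gt + rng.normal(scale=2.0, size=(48, 2))  # a slightly noisy prediction
    ann = CatFaceAnnotation("cat_0001.jpg", (0, 0, 224, 224), gt)
    print(f"NME: {normalized_mean_error(pred, ann.landmarks, ann.bbox):.4f}")
```

The snippet is only meant to make the annotation geometry concrete; the paper's own evaluation protocol and data format should be consulted for the authoritative definitions.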
href="https://link.springer.com/10.1007/s11263-019-01151-x?fromPaywallRec=false" data-track="select_recommendations_3" data-track-context="inline recommendations" data-track-action="click recommendations inline - 3" data-track-label="10.1007/s11263-019-01151-x">Deep, Landmark-Free FAME: Face Alignment, Modeling, and Expression Estimation </a> </h3> <div class="c-article-meta-recommendations" data-test="recommendation-info"> <span class="c-article-meta-recommendations__item-type">Article</span> <span class="c-article-meta-recommendations__date">13 February 2019</span> </div> </div> </article> </div> </div> </section> <script> window.dataLayer = window.dataLayer || []; window.dataLayer.push({ recommendations: { recommender: 'semantic', model: 'specter', policy_id: 'NA', timestamp: 1740138748, embedded_user: 'null' } }); </script> <section aria-labelledby="content-related-subjects" data-test="subject-content"> <h3 id="content-related-subjects" class="c-article__sub-heading">Explore related subjects</h3> <span class="u-sans-serif u-text-s u-display-block u-mb-24">Discover the latest articles, news and stories from top researchers in related subjects.</span> <ul class="c-article-subject-list" role="list"> <li class="c-article-subject-list__subject"> <a href="/subject/artificial-intelligence" data-track="select_related_subject_1" data-track-context="related subjects from content page" data-track-label="Artificial Intelligence">Artificial Intelligence</a> </li> </ul> </section> <div class="app-card-service" data-test="article-checklist-banner"> <div> <a class="app-card-service__link" data-track="click_presubmission_checklist" data-track-context="article page top of reading companion" data-track-category="pre-submission-checklist" data-track-action="clicked article page checklist banner test 2 old version" data-track-label="link" href="https://beta.springernature.com/pre-submission?journalId=11263" data-test="article-checklist-banner-link"> <span class="app-card-service__link-text">Use our pre-submission checklist</span> <svg class="app-card-service__link-icon" aria-hidden="true" focusable="false"><use xlink:href="#icon-eds-i-arrow-right-small"></use></svg> </a> <p class="app-card-service__description">Avoid common mistakes on your manuscript.</p> </div> <div class="app-card-service__icon-container"> <svg class="app-card-service__icon" aria-hidden="true" focusable="false"> <use xlink:href="#icon-eds-i-clipboard-check-medium"></use> </svg> </div> </div> <div class="main-content"> <section data-title="Introduction"><div class="c-article-section" id="Sec1-section"><h2 class="c-article-section__title js-section-title js-c-reading-companion-sections-item" id="Sec1"><span class="c-article-section__title-number">1 </span>Introduction</h2><div class="c-article-section__content" id="Sec1-content"><p>There is a huge body of work addressing automated human facial analysis, which has a plethora of applications in affective computing, healthcare, biometry, human-computer interaction and many other fields (Friesen & Ekman, <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 1978" title="Friesen, E., & Ekman, P. (1978). Facial action coding system: A technique for the measurement of facial movement. Palo Alto, 3(2), 5." 
href="/article/10.1007/s11263-024-02006-w#ref-CR28" id="ref-link-section-d3557877e356">1978</a>; Li & Deng, <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2020" title="Li, S., & Deng, W. (2020). Deep facial expression recognition: A survey. IEEE Transactions on Affective Computing, 13(3), 1195–1215." href="/article/10.1007/s11263-024-02006-w#ref-CR57" id="ref-link-section-d3557877e359">2020</a>). One of the cornerstones of most of the approaches is localization of keypoints, also known as fiducial points or facial landmarks (Wu & Ji, <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2019" title="Wu, Y., & Ji, Q. (2019). Facial landmark detection: A literature survey. International Journal of Computer Vision, 127(2), 115–142." href="/article/10.1007/s11263-024-02006-w#ref-CR94" id="ref-link-section-d3557877e362">2019</a>). Their location provides important information that can be used for face alignment, feature extraction, facial expression recognition, head pose estimation, eye gaze tracking and many more tasks (Akinyelu & Blignaut, <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2022" title="Akinyelu, A. A., & Blignaut, P. (2022). Convolutional neural network-based technique for gaze estimation on mobile devices. Frontiers in Artificial Intelligence, 4, 796825." href="/article/10.1007/s11263-024-02006-w#ref-CR2" id="ref-link-section-d3557877e365">2022</a>; Al-Eidan et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2020" title="Al-Eidan, R. M., Al-Khalifa, H. S., & Al-Salman, A. S. (2020). Deep-learning-based models for pain recognition: A systematic review. Applied Sciences, 10, 5984." href="/article/10.1007/s11263-024-02006-w#ref-CR3" id="ref-link-section-d3557877e368">2020</a>; Malek & Rossi, <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2021" title="Malek, S., & Rossi, S. (2021). Head pose estimation using facial-landmarks classification for children rehabilitation games. Pattern Recognition Letters, 152, 406–412." href="/article/10.1007/s11263-024-02006-w#ref-CR62" id="ref-link-section-d3557877e372">2021</a>). Automated facial action unit recognition systems also often use facial landmarks (Yang et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2019" title="Yang, J., Zhang, F., Chen, B., & Khan, S. U. (2019). Facial expression recognition based on facial action unit. In: 2019 tenth international green and sustainable computing conference (IGSC) (pp. 1–6). IEEE." href="/article/10.1007/s11263-024-02006-w#ref-CR100" id="ref-link-section-d3557877e375">2019</a>; Ma et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2021" title="Ma, J., Li, X., Ren, Y., Yang, R., & Zhao, Q. (2021). Landmark-based facial feature construction and action unit intensity prediction. Mathematical Problems in Engineering, 2021, 1–12." 
href="/article/10.1007/s11263-024-02006-w#ref-CR63" id="ref-link-section-d3557877e378">2021</a>).</p><p>The type and number of landmarks varies depending on the specific application at hand: for coarse tasks such as face recognition or gaze direction assessment a few landmarks may suffice, however for facial expression analysis, depending on the species, up to eighty landmarks may be needed, leading to the need for automation of landmark detection (Tarnowski et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2017" title="Tarnowski, P., Kołodziej, M., Majkowski, A., & Rak, R. J. (2017). Emotion recognition using facial expressions. Procedia Computer Science, 108, 1175–1184." href="/article/10.1007/s11263-024-02006-w#ref-CR88" id="ref-link-section-d3557877e384">2017</a>; Wu et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2018" title="Wu, W., Qian, C., Yang, S., Wang, Q., Cai, Y., & Zhou, Q. (2018). Look at boundary: A boundary-aware face alignment algorithm. In: Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2129–2138)." href="/article/10.1007/s11263-024-02006-w#ref-CR93" id="ref-link-section-d3557877e387">2018</a>). Despite its being seemingly a simple task, this computer vision problem has proven to be extremely challenging due to inherent face variability as well as effects of pose, expression, illumination, etc. Accordingly, a huge body of research addresses the automation of facial landmark detection and localization.</p><p>Research concerned with automation of animal behavior analysis has so far lagged behind the human domain. However, this is beginning to change, partly due to introduction of deep-learning platforms such as the DeepLabCut (Mathis et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2018" title="Mathis, A., Mamidanna, P., Cury, K. M., Abe, T., Murthy, V. N., Mathis, M. W., & Bethge, M. (2018). Deeplabcut: Markerless pose estimation of user-defined body parts with deep learning. Nature Neuroscience, 21(9), 1281." href="/article/10.1007/s11263-024-02006-w#ref-CR65" id="ref-link-section-d3557877e393">2018</a>), DeepPoseKit (Graving et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2019" title="Graving, J. M., Chae, D., Naik, H., Li, L., Koger, B., Costelloe, B. R., & Couzin, I. D. (2019). DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning. Elife, 8, 47994." href="/article/10.1007/s11263-024-02006-w#ref-CR30" id="ref-link-section-d3557877e396">2019</a>), which allow to automate animal motion tracking and recognition of keypoints located on animals’ body. However, recently an increasing number of works go ‘deeper’ than tracking addressing recognition of animals’ affective states, including emotions, stress and pain. A comprehensive survey of state-of-the-art of these methods is provided in Broome et al. (<a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2023" title="Broome, S., Feighelstein, M., Zamansky, A., Lencioni, C. G., Andersen, H. P., Pessanha, F., Mahmoud, M., Kjellström, H., & Salah, A. A. (2023). 
Going deeper than tracking: A survey of computer-vision based recognition of animal pain and emotions. International Journal of Computer Vision, 131(2), 572–590." href="/article/10.1007/s11263-024-02006-w#ref-CR9" id="ref-link-section-d3557877e399">2023</a>), focusing mainly on facial expressions. Since they are produced by all mammals, they are one of the most important channels of communication in this context (Hummel et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2020" title="Hummel, H. I., Pessanha, F., Salah, A. A., van Loon, T.J ., & Veltkamp, R. C. (2020). Automatic pain detection on horse and donkey faces. In: 2020 15th IEEE international conference on automatic face and gesture recognition (FG 2020) (pp. 793–800). IEEE." href="/article/10.1007/s11263-024-02006-w#ref-CR39" id="ref-link-section-d3557877e402">2020</a>; Paul & Mendl, <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2018" title="Paul, E. S., & Mendl, M. T. (2018). Animal emotion: Descriptive and prescriptive definitions and their implications for a comparative perspective. Applied Animal Behaviour Science, 205, 202–209." href="/article/10.1007/s11263-024-02006-w#ref-CR74" id="ref-link-section-d3557877e405">2018</a>). Facial analysis of animals presents immense challenges due to the great variability of textures, shapes, breeds and morphological structure in the realm of animals. This leads to the need for addressing the problem of automated facial landmark detection in animals.</p><p>This brings about unique challenges not present in the human domain, such as more complicated data collection protocols and ground truth establishment in the absence of verbal communication with animals. But most importantly, the variabilities across and within lead to the need of exploring effective and economic, in terms of computational resources and time, approaches for solving the landmark localization problem, taking into account the lack of the proper amount of data (Broome et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2023" title="Broome, S., Feighelstein, M., Zamansky, A., Lencioni, C. G., Andersen, H. P., Pessanha, F., Mahmoud, M., Kjellström, H., & Salah, A. A. (2023). Going deeper than tracking: A survey of computer-vision based recognition of animal pain and emotions. International Journal of Computer Vision, 131(2), 572–590." href="/article/10.1007/s11263-024-02006-w#ref-CR9" id="ref-link-section-d3557877e411">2023</a>) needed to solve such a problem by machine learning methods.</p><p>To address these gaps, this paper makes two main contributions. First, we introduce the Cat Facial Landmarks in the Wild (CatFLW) dataset containing 2091images of cats with 48 facial landmarks, annotated using an AI-assisted annotation method and grounded with the cat facial anatomy. Secondly, we present an Ensemble Landmark Detector (ELD), a baseline model which shows excellent performance on cats. 
The model also performs well on the human facial landmark WFLW dataset compared to state-of-the-art models, and scales to facial landmark detection in other animal species.

2 Related Works

2.1 AI-assisted Annotation

AI-assisted annotation of training data is widely used to reduce the time and cost of creating datasets for machine learning (Wu et al., 2022). The idea of the method is to iteratively annotate training data using the predictions of a machine learning model that is gradually retrained on the corrected data. Due to the gradual improvement of the assisting model's predictions and the use of algorithms and metrics for selecting the most suitable training data, human annotators spend less time while processing more data.
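For illustration, such an annotation loop can be sketched in a few lines of Python. The sketch below is not the pipeline of any cited framework; predict_fn, correct_fn, train_fn and rank_fn are hypothetical stand-ins for the landmark model, the human correction step, the (re)training routine and the informativeness score used to select the next batch.

```python
# Illustrative sketch of an AI-assisted (human-in-the-loop) annotation cycle.
# All callables are hypothetical placeholders, not components of any cited tool.

def ai_assisted_annotation(pool, predict_fn, correct_fn, train_fn, rank_fn,
                           rounds=6, batch_size=100):
    """Iteratively grow a human-verified training set from an unlabeled pool."""
    labeled = []                 # (image, landmarks) pairs verified by a human
    remaining = list(pool)       # images still waiting for annotation

    for _ in range(rounds):
        if not remaining:
            break
        # 1. Rank the remaining images by how informative they are expected
        #    to be for the model (e.g. predictive uncertainty) and take a batch.
        order = sorted(range(len(remaining)),
                       key=lambda i: rank_fn(remaining[i]), reverse=True)
        batch_idx = set(order[:batch_size])

        for i in sorted(batch_idx):
            image = remaining[i]
            proposal = predict_fn(image)             # 2. model proposes landmarks
            landmarks = correct_fn(image, proposal)  # 3. human only corrects them
            labeled.append((image, landmarks))

        remaining = [im for i, im in enumerate(remaining) if i not in batch_idx]

        # 4. Retrain on all verified data so the next round's proposals
        #    require fewer corrections.
        predict_fn = train_fn(labeled)

    return labeled, predict_fn
```

The rank_fn step is where active-learning criteria such as predictive uncertainty are typically plugged in.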
href="/article/10.1007/s11263-024-02006-w#ref-CR32" id="ref-link-section-d3557877e445">2015</a>), object detection (Yoo & Kweon, <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2019" title="Yoo, D., & Kweon, I. S. (2019) Learning loss for active learning. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 93–102)." href="/article/10.1007/s11263-024-02006-w#ref-CR102" id="ref-link-section-d3557877e448">2019</a>; Liu et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2021" title="Liu, Z., Ding, H., Zhong, H., Li, W., Dai, J., & He, C. (2021). Influence selection for active learning. In: Proceedings of the IEEE/CVF international conference on computer vision (pp. 9274–9283)." href="/article/10.1007/s11263-024-02006-w#ref-CR58" id="ref-link-section-d3557877e452">2021</a>; Aghdam et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2019" title="Aghdam, H. H., Gonzalez-Garcia, A., Weijer, J. v. d., & López, A. M. (2019). Active learning for deep detection neural networks. In: Proceedings of the IEEE/CVF international conference on computer vision (pp. 3672–3680)." href="/article/10.1007/s11263-024-02006-w#ref-CR1" id="ref-link-section-d3557877e455">2019</a>; Kellenberger et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2019" title="Kellenberger, B., Marcos, D., Lobry, S., & Tuia, D. (2019). Half a percent of labels is enough: Efficient animal detection in UAV imagery using deep CNNS and active learning. IEEE Transactions on Geoscience and Remote Sensing, 57(12), 9524–9533." href="/article/10.1007/s11263-024-02006-w#ref-CR44" id="ref-link-section-d3557877e458">2019</a>), segmentation (Sinha et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2019" title="Sinha, S., Ebrahimi, S., & Darrell, T. (2019). Variational adversarial active learning. In: Proceedings of the IEEE/CVF international conference on computer vision (pp. 5972–5981)." href="/article/10.1007/s11263-024-02006-w#ref-CR82" id="ref-link-section-d3557877e461">2019</a>), face recognition (Elhamifar et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2013" title="Elhamifar, E., Sapiro, G., Yang, A., & Sasrty, S. S. (2013). A convex optimization framework for active learning. In: Proceedings of the IEEE international conference on computer vision (pp. 209–216)." href="/article/10.1007/s11263-024-02006-w#ref-CR20" id="ref-link-section-d3557877e464">2013</a>) and landmark detection (Quan et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2022" title="Quan, Q., Yao, Q., Li, J., & Zhou, S. K. (2022). Which images to label for few-shot medical landmark detection? In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 20606–20616)." href="/article/10.1007/s11263-024-02006-w#ref-CR77" id="ref-link-section-d3557877e467">2022</a>).</p><p>In the field of animal landmark detection, similar methods are used to speed up the annotation process. 
In the field of animal landmark detection, similar methods are used to speed up the annotation process. In LEAP (Pereira et al., 2019), whose pipeline includes the gradual training of models for detecting animal poses, Pereira et al. annotated 32 landmarks on Drosophila. The authors demonstrate that over six iterations of AI-assisted annotation, the speed of annotating one image increased by a factor of 20. A similar pipeline is also used in Graving et al. (2019), in which the authors created DeepPoseKit, a universal tool for detecting the poses of insects and animals. Its functionality includes an optimized active learning process, which has made it a popular tool in the scientific community.

2.2 Landmarks in Animals

Facial and body landmarks are increasingly being used to assess various internal states of animals. Indeed, by measuring geometric relationships and analyzing the position of animal body parts, it is possible to establish correlations between the internal states of animals and their external expressions (Brown et al., 2013; Bierbach et al., 2017; Kain et al., 2013; Wiltschko et al., 2015).
Finlayson et al. (2016) evaluated the influence of different types of interaction with rats (Rattus norvegicus) on changes in their facial expressions. To do this, they processed 3,000 frames, measuring, among other metrics, the width and height of the eye and eyebrow and the angles of the ear and eyebrow, relying on the Rat Grimace Scale (Sotocina et al., 2011) in the choice of metrics. According to our estimates, computing these metrics would require 14 symmetrical facial landmarks.

Ferres et al. (2022) classify emotional states of dogs using data on their postures. The authors trained a DeepLabCut (Mathis et al., 2018) framework model on 13,809 dog images annotated with 24 body landmarks. The article does not report the accuracy of the landmark detection; however, four classifiers with different architectures trained on 360 instances of body landmark coordinates demonstrated classification accuracy above 62%.
href="/article/10.1007/s11263-024-02006-w#ref-CR67" id="ref-link-section-d3557877e526">2019</a>), propose to combine Sheep Pain Facial Expression Scale (SPFES) (McLennan et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2016" title="McLennan, K. M., Rebelo, C. J., Corke, M. J., Holmes, M. A., Leach, M. C., & Constantino-Casas, F. (2016). Development of a facial expression scale using footrot and mastitis as models of pain in sheep. Applied Animal Behaviour Science, 176, 19–26." href="/article/10.1007/s11263-024-02006-w#ref-CR68" id="ref-link-section-d3557877e529">2016</a>) with landmark-based computer vision models to evaluate pain in sheep. However, the authors emphasise that “availability of more labelled training datasets is key for future development in this area".</p><p>Gong et al. (<a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2022" title="Gong, C., Zhang, Y., Wei, Y., Du, X., Su, L., & Weng, Z. (2022). Multicow pose estimation based on keypoint extraction. PLoS ONE, 17(6), 0269259." href="/article/10.1007/s11263-024-02006-w#ref-CR29" id="ref-link-section-d3557877e536">2022</a>) used 16 body landmarks to further classify poses in cows. Using automatically detected landmarks the authors achieve more than 90% precision rate on pose classification.</p><p>Often facial landmarks are used for animals for facial alignment, and then other computer vision models use aligned images for animal identification or other purposes (Clapham et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2022" title="Clapham, M., Miller, E., Nguyen, M., & Van Horn, R. C. (2022). Multispecies facial detection for individual identification of wildlife: A case study across ursids. Mammalian Biology, 102(3), 943–955." href="/article/10.1007/s11263-024-02006-w#ref-CR14" id="ref-link-section-d3557877e542">2022</a>; Billah et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2022" title="Billah, M., Wang, X., Yu, J., & Jiang, Y. (2022). Real-time goat face recognition using convolutional neural network. Computers and Electronics in Agriculture, 194, 106730." href="/article/10.1007/s11263-024-02006-w#ref-CR7" id="ref-link-section-d3557877e545">2022</a>). This approach does not require much computational power, since it is limited to a small number of landmarks, but it significantly simplifies the processing for subsequent models, since it normalizes the data.</p><h3 class="c-article__sub-heading" id="Sec5"><span class="c-article-section__title-number">2.3 </span>Cat Facial Landmarks Applications</h3><p>Many studies (Bennett et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2017" title="Bennett, V., Gourkow, N., & Mills, D. S. (2017). Facial correlates of emotional behaviour in the domestic cat (felis catus). Behavioural Processes, 141, 342–350." href="/article/10.1007/s11263-024-02006-w#ref-CR5" id="ref-link-section-d3557877e556">2017</a>; Humphrey et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2020" title="Humphrey, T., Proops, L., Forman, J., Spooner, R., & McComb, K. (2020). 
Many studies (Bennett et al., 2017; Humphrey et al., 2020; Scott & Florkiewicz, 2023) related to the analysis of the internal state of cats have been inspired by or directly use the Facial Action Coding System for cats (CatFACS) (Caeiro et al., 2017). Deputte et al. (2021) studied more than 100 h of cat-cat and cat-human interactions to analyse the positions of ears and tails during these interactions. The authors performed the entire analysis manually without any automation, which led to significant time and labour costs. In the limitations section, the authors indicate that "a much larger number of data should have been obtained". Llewelyn and Kiddie (2022) explore the use of CatFACS as a welfare assessment tool for cats with cerebellar hypoplasia (CH), finding 16 action units which could infer the welfare of healthy and CH cats.
Cats are of specific interest in the context of pain, as they are one of the most challenging species in terms of pain assessment and management due to a reduced physiological tolerance and adverse effects of common veterinary analgesics (Lascelles & Robertson, 2010), a lack of strong consensus over key behavioural pain indicators (Merola & Mills, 2016) and human limitations in accurately interpreting feline facial expressions (Dawson et al., 2019). Three different manual pain assessment scales have been developed and validated for domestic cats: the UNESP-Botucatu multidimensional composite pain scale (Brondani et al., 2013), the Glasgow composite measure pain scale (CMPS) (Reid et al., 2017), and the Feline Grimace Scale (FGS) (Evangelista et al., 2019). The latter was further used in a comparative study in which humans' assignment of FGS scores to cats during real-time observations was compared with subsequent FGS scoring of the same cats from still images. It was shown that there was no significant difference between the scoring methods (Evangelista et al., 2020), indicating that facial images can be a reliable medium from which to assess pain.
Evangelista et al. (2019) used eight different metrics on cats' faces in order to establish the connection of facial movements with pain. These metrics included the distances between the ear tips and ear bases, eye height and width, muzzle height and width, as well as two ear angles. In order to measure these parameters, the authors had to manually annotate at least 24 facial landmarks in 51 images. According to their results, cats in pain had a smaller muzzle and more flattened ears.
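For illustration, metrics of this kind can be computed directly from landmark coordinates. The sketch below uses hypothetical (x, y) coordinates and landmark names; it is not the measurement code of Evangelista et al. (2019).

```python
import numpy as np

def distance(p, q):
    """Euclidean distance between two (x, y) landmarks."""
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

def angle(vertex, p, q):
    """Angle in degrees at `vertex` formed by the points p and q."""
    v1 = np.asarray(p) - np.asarray(vertex)
    v2 = np.asarray(q) - np.asarray(vertex)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical pixel coordinates of a few landmarks on a single cat face.
ear_tip = (60, 20)
ear_base_inner, ear_base_outer = (80, 70), (40, 70)
eye_top, eye_bottom = (75, 110), (75, 125)
eye_inner, eye_outer = (90, 118), (60, 118)
muzzle_left, muzzle_right = (85, 170), (135, 170)
muzzle_top, muzzle_bottom = (110, 150), (110, 190)

metrics = {
    "ear_tip_to_base": distance(ear_tip, ear_base_inner),
    "ear_angle": angle(ear_tip, ear_base_inner, ear_base_outer),
    "eye_height": distance(eye_top, eye_bottom),
    "eye_width": distance(eye_inner, eye_outer),
    "muzzle_height": distance(muzzle_top, muzzle_bottom),
    "muzzle_width": distance(muzzle_left, muzzle_right),
}
print(metrics)
```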
M., & Waller, B. M. (2017). Development and application of catfacs: Are human cat adopters influenced by cat facial expressions? Applied Animal Behaviour Science, 189, 66–78." href="/article/10.1007/s11263-024-02006-w#ref-CR11" id="ref-link-section-d3557877e622">2017</a>). The authors manually annotated the landmarks, and used statistical methods (PCA analysis) to establish a relationship between PC scores and a validated measure of pain in cats. Feighelstein et al. (<a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2022" title="Feighelstein, M., Shimshoni, I., Finka, L. R., Luna, S. P., Mills, D. S., & Zamansky, A. (2022). Automated recognition of pain in cats. Scientific Reports, 12(1), 9575." href="/article/10.1007/s11263-024-02006-w#ref-CR24" id="ref-link-section-d3557877e625">2022</a>) used the dataset and landmark annotations from Finka et al. (<a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2019" title="Finka, L. R., Luna, S. P., Brondani, J. T., Tzimiropoulos, Y., McDonagh, J., Farnworth, M. J., Ruta, M., & Mills, D. S. (2019). Geometric morphometrics for the study of facial expressions in non-human animals, using the domestic cat as an exemplar. Scientific Reports, 9(1), 1–12." href="/article/10.1007/s11263-024-02006-w#ref-CR26" id="ref-link-section-d3557877e628">2019</a>) to automate pain recognition in cats using machine learning models based on facial landmarks grouped into multivectors, reaching accuracy of above 72%. This indicates that the use of the 48 landmarks from Finka et al. (<a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2019" title="Finka, L. R., Luna, S. P., Brondani, J. T., Tzimiropoulos, Y., McDonagh, J., Farnworth, M. J., Ruta, M., & Mills, D. S. (2019). Geometric morphometrics for the study of facial expressions in non-human animals, using the domestic cat as an exemplar. Scientific Reports, 9(1), 1–12." href="/article/10.1007/s11263-024-02006-w#ref-CR26" id="ref-link-section-d3557877e631">2019</a>) can be a reliable method for pain recognition from cat facial images.</p><p>Holden et al. (<a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2014" title="Holden, E., Calvo, G., Collins, M., Bell, A., Reid, J., Scott, E., & Nolan, A. M. (2014). Evaluation of facial expression in acute pain in cats. Journal of Small Animal Practice, 55(12), 615–621." href="/article/10.1007/s11263-024-02006-w#ref-CR36" id="ref-link-section-d3557877e637">2014</a>) annotated 59 images with 78 facial landmarks and 80 distances between them to classify pain in cats. The authors claim that using facial landmarks and specific distance ratios, it is possible to determine pain in cats with an accuracy of 98%. It should be pointed out that not all the landmarks turned to be useful for determining pain, and collecting data on a larger number of annotated images could provide additional insights about pain markers, for example, dividing the studied cats by affecting medications in order to take into account their effects on the animal’s body.</p><p>In Vojtkovská et al. (<a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2020" title="Vojtkovská, V., Voslářová, E., & Večerek, V. (2020). 
Holden et al. (2014) annotated 59 images with 78 facial landmarks and 80 distances between them to classify pain in cats. The authors claim that, using facial landmarks and specific distance ratios, it is possible to determine pain in cats with an accuracy of 98%. It should be pointed out that not all the landmarks turned out to be useful for determining pain, and collecting data on a larger number of annotated images could provide additional insights about pain markers, for example by dividing the studied cats according to the administered medications in order to take their effects on the animal's body into account.

Vojtkovská et al. (2020) state that the analysis of action units and grimace scales can serve as an accurate way to determine pain in cats (Evangelista et al., 2019; Caeiro et al., 2017); however, "screenshots are obtained from videos, but their analysis is time-consuming. In practice, pain should be assessed immediately and easily (Evangelista et al., 2020)".

As can be seen, facial and body landmarks are widely used to analyze the external manifestations of the internal state of animals. For cats, such an analysis is especially important due to their morphological structure and the specifics of their reaction to medications. The connection of facial landmarks with the morphological structure of cats' faces allows linking geometric relations to action units and FACS. A tool that automatically detects facial landmarks for cats would significantly speed up and simplify the data collection process for the above-mentioned studies. Our study represents the first of its kind combination of a dataset of cat facial landmarks and a model that detects them with high accuracy, providing the most anatomically complete representation of the morphological features of domestic cats.

3 The CatFLW Dataset

To promote the development of automated facial landmark detectors for cats, we have created the CatFLW dataset, inspired by existing ones for humans and animals.
The main motivation for its creation was the relatively small number of similar datasets or the low number of facial landmarks in existing ones.

3.1 Related Datasets

Our dataset is based on the original dataset collected by Zhang et al. (2008), which contains 10,000 images of cats annotated with 9 facial landmarks, collected from flickr.com. It includes a wide variety of cat individuals of different breeds, ages and sexes in different conditions, which can provide good generalization when training computer vision models; however, some of its images depict several animals, have visual interference in front of the animal's face, or are cropped (according to our estimates, 10–15%). Figure 1 shows examples of such inapplicable images. It is also worth noting that a number of images contain inaccurate annotations with significant errors, which can also lead to incorrect operation of computer vision models trained on this data.

In Zhang et al. (2008), 9 landmarks are labeled for each image (two for the eyes, one for the nose and three for each ear), which is sufficient for detecting and analyzing general information about animal faces (tilt of the head or direction of movement), but is not enough for analyzing complex movements of facial muscles.
Fig. 1 Examples of non-suitable images from Zhang et al. (2008). Left to right: ear tip is out of the image, multiple cats, lower part of the face is occluded

Sun and Murata (2020) used the same dataset in their work, expanding the annotation to 15 facial landmarks. This annotation slightly better reflects the structure of cats' faces, containing two more landmarks for each eye, one landmark for the bridge of the nose and one for the mouth, in addition to the 9 landmarks presented in Zhang et al. (2008). However, to the best of our knowledge, only 1,706 out of the declared 10,000 images are publicly available.
(<a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2020" title="Khan, M. H., McDonagh, J., Khan, S., Shahabuddin, M., Arora, A., Khan, F. S., Shao, L., & Tzimiropoulos, G. (2020). Animalweb: A large-scale hierarchical dataset of annotated animal faces. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 6939–6948)." href="/article/10.1007/s11263-024-02006-w#ref-CR45" id="ref-link-section-d3557877e719">2020</a>) collected the AnimalWeb dataset, consisting of an impressive number of 21,900 images annotated with 9 landmarks. Despite the wide range of represented species, only approximately 450 images can be attributed to feline species. Moreover, the annotation system was chosen differently: here there are two landmarks for each eye, one for the nose and four for the mouth.</p><p>In Hewitt and Mahmoud (<a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2019" title="Hewitt, C., & Mahmoud, M. (2019). Pose-informed face alignment for extreme head pose variations in animals. In: 2019 8th international conference on affective computing and intelligent interaction (ACII) (pp. 1–6). IEEE." href="/article/10.1007/s11263-024-02006-w#ref-CR35" id="ref-link-section-d3557877e726">2019</a>), the authors collected a dataset of 850 images of sheep and annotated them with 25 facial landmarks. In our opinion, such a representation is the best in terms of the number of landmarks and their potential application at the moment.</p><div class="c-article-table" data-test="inline-table" data-container-section="table" id="table-1"><figure><figcaption class="c-article-table__figcaption"><b id="Tab1" data-test="table-caption">Table 1 Comparison of animal facial landmarks datasets</b></figcaption><div class="u-text-right u-hide-print"><a class="c-article__pill-button" data-test="table-link" data-track="click" data-track-action="view table" data-track-label="button" rel="nofollow" href="/article/10.1007/s11263-024-02006-w/tables/1" aria-label="Full size table 1"><span>Full size table</span><svg width="16" height="16" focusable="false" role="img" aria-hidden="true" class="u-icon"><use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#icon-eds-i-chevron-right-small"></use></svg></a></div></figure></div><p>Other relevant datasets, containing less than 9 facial landmarks, are mentioned in Table <a data-track="click" data-track-label="link" data-track-action="table anchor" href="/article/10.1007/s11263-024-02006-w#Tab1">1</a>, including those in Liu et al. (<a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2012" title="Liu, J., Kanazawa, A., Jacobs, D., & Belhumeur, P. (2012). Dog breed classification using part localization. In: Computer Vision–ECCV 2012: Proceedings of 12th European conference on computer vision, Florence, Italy, October 7-13, 2012, Part I 12 (pp. 172–185). Springer." href="/article/10.1007/s11263-024-02006-w#ref-CR59" id="ref-link-section-d3557877e1032">2012</a>), Cao et al. (<a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2019" title="Cao, J., Tang, H., Fang, H. -S., Shen, X., Lu, C., & Tai, Y. -W. (2019). Cross-domain adaptation for animal pose estimation. In: Proceedings of the IEEE/CVF international conference on computer vision (pp. 
9498–9507)." href="/article/10.1007/s11263-024-02006-w#ref-CR12" id="ref-link-section-d3557877e1035">2019</a>), Mougeot et al. (<a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2019" title="Mougeot, G., Li, D., & Jia, S. (2019). A deep learning approach for dog face verification and recognition. In: PRICAI 2019: Trends in artificial intelligence: proceedings of 16th Pacific rim international conference on artificial intelligence, Cuvu, Yanuca Island, Fiji, August 26-30, 2019, Part III 16 (pp. 418–430). Springer." href="/article/10.1007/s11263-024-02006-w#ref-CR71" id="ref-link-section-d3557877e1038">2019</a>), Yang et al. (<a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2016" title="Yang, H., Zhang, R., & Robinson, P. (2016). Human and sheep facial landmarks localisation by triplet interpolated features. In: 2016 IEEE winter conference on applications of computer vision (WACV) (pp. 1–8). IEEE." href="/article/10.1007/s11263-024-02006-w#ref-CR99" id="ref-link-section-d3557877e1041">2016</a>). For comparison, popular datasets for human facial landmark detection (Belhumeur et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2013" title="Belhumeur, P. N., Jacobs, D. W., Kriegman, D. J., & Kumar, N. (2013). Localizing parts of faces using a consensus of exemplars. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(12), 2930–2940." href="/article/10.1007/s11263-024-02006-w#ref-CR4" id="ref-link-section-d3557877e1045">2013</a>; Le et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2012" title="Le, V., Brandt, J., Lin, Z., Bourdev, L., & Huang, T. S. (2012). Interactive facial feature localization. In: Computer Vision–ECCV 2012: Proceedings of 12th European conference on computer vision, Florence, Italy, October 7-13, 2012, Part III 12 (pp. 679–692). Springer." href="/article/10.1007/s11263-024-02006-w#ref-CR52" id="ref-link-section-d3557877e1048">2012</a>) have several dozens of landmarks. There are also other datasets suitable for face detection and recognition of various animal species (Deb et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2018" title="Deb, D., Wiper, S., Gong, S., Shi, Y., Tymoszek, C., Fletcher, A., & Jain, A. K. (2018). Face recognition: Primates in the wild. In: 2018 IEEE 9th international conference on biometrics theory, applications and systems (BTAS) (pp. 1–10). IEEE." href="/article/10.1007/s11263-024-02006-w#ref-CR18" id="ref-link-section-d3557877e1051">2018</a>; Guo et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2020" title="Guo, S., Xu, P., Miao, Q., Shao, G., Chapman, C.A., Chen, X., He, G., Fang, D., Zhang, H., & Sun, Y., et al. (2020). Automatic identification of individual primates with deep learning techniques. Iscience, 23(8)." href="/article/10.1007/s11263-024-02006-w#ref-CR33" id="ref-link-section-d3557877e1054">2020</a>; Körschens et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2018" title="Körschens, M., Barz, B., & Denzler, J. (2018). 
Towards automatic identification of elephants in the wild. 
 arXiv:1812.04418
 
 ." href="/article/10.1007/s11263-024-02006-w#ref-CR46" id="ref-link-section-d3557877e1057">2018</a>; Chen et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2020" title="Chen, P., Swarup, P., Matkowski, W. M., Kong, A. W. K., Han, S., Zhang, Z., & Rong, H. (2020). A study on giant panda recognition based on images of a large proportion of captive pandas. Ecology and Evolution, 10(7), 3561–3573." href="/article/10.1007/s11263-024-02006-w#ref-CR13" id="ref-link-section-d3557877e1060">2020</a>), but they have no facial landmark annotations.</p><p>Two conclusions follow from the above: firstly, the validity of the assumption about the lack of datasets with a comprehensive number of facial landmarks for animals and cats in particular, and secondly, there are differences in the annotations between the datasets and in the choice of landmark positions in accordance with the morphological structure of the animal faces.</p><h3 class="c-article__sub-heading" id="Sec8"><span class="c-article-section__title-number">3.2 </span>The Annotation Process</h3><p>The CatFLW dataset consists of 2091images selected from the dataset in Zhang et al. (<a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2008" title="Zhang, W., Sun, J., & Tang, X. (2008). Cat head detection-how to effectively exploit shape and texture features. In: Computer vision–ECCV 2008: 10th european conference on computer vision, Marseille, France, October 12–18, 2008, Proceedings, Part IV 10 (pp. 802–816). Springer." href="/article/10.1007/s11263-024-02006-w#ref-CR103" id="ref-link-section-d3557877e1074">2008</a>) using the following inclusion criterion which optimize the training of landmark detection models: the image should contain a <b>single fully visible</b> cat face, where the cat is in non-laboratory conditions (‘in the wild’). We did not put constraints on the breeds and scale of cats and their faces within our guidelines when choosing images in order to maximize the diversification of our dataset. The image sizes range from <span class="mathjax-tex">\(240 \times 180\)</span> to <span class="mathjax-tex">\(1024 \times 1024\)</span>. Figure <a data-track="click" data-track-label="link" data-track-action="figure anchor" href="/article/10.1007/s11263-024-02006-w#Fig2">2</a> shows examples of images which were used in the dataset.</p><div class="c-article-section__figure js-c-reading-companion-figures-item" data-test="figure" data-container-section="figure" id="figure-2" data-title="Fig. 2"><figure><figcaption><b id="Fig2" class="c-article-section__figure-caption" data-test="figure-caption-text">Fig. 
Fig. 2 Images with bounding boxes and 48 facial landmarks from CatFLW

We should also consider the issue of partial occlusion of faces, where not all landmarks are visible but their locations can be estimated from the structure of the face. Such images were nevertheless discarded because convolutional neural networks largely rely on visual boundaries and textures: when trained on images with incompletely visible faces, they may acquire biases and behave incorrectly on subsequent predictions. In practice, when predicting on images where the face is partially occluded, such a model still “guesses" the positions of the hidden landmarks based on the geometric relationships between them.

After selection and filtering of the images, a bounding box and 48 facial landmarks were placed on each image using the Labelbox platform (Labelbox, 2023).

Bounding box. Each bounding box is defined by the coordinates of its upper left and lower right corners in the coordinate system of the image. In the current dataset, it is not derived from the facial landmarks (which is usually done by selecting the outermost landmarks and computing the difference in their coordinates), but is placed visually so that the entire face of the animal fits into the bounding box together with about 10% of the space around the face. This margin is added because, when studying detection models on the initial version of the dataset, we noticed that some of them tend to crop faces and choose smaller bounding boxes. When face detection is performed for the subsequent localization of facial landmarks, such clipping can remove important parts of the cat's face from the image, such as the tips of the ears or the mouth. This annotation still makes it possible to construct bounding boxes from the outermost landmarks if needed.
Landmarks. The 48 facial landmarks introduced in Finka et al. (2019) and used in Feighelstein et al. (2022) were manually placed on each image (shown in Fig. 3). These landmarks were specifically chosen for their relationship with the underlying musculature and anatomical features, and for their relevance to CatFACS (Caeiro et al., 2017). The landmarks are grouped into four semantic groups according to their physical location on the face: Lower Face (cheeks, whiskers and nose), Jaw (chin and mouth), Upper Face (eyes), and Ears. For each of them, a position guideline and, where possible, the associated facial muscles and action units are determined. The structure of the facial landmarks is thus tied quite strictly to the anatomical features of the cat's face, which makes it possible to associate morphological characteristics with the landmarks, their relative positions and their movement. Detailed information about the landmarks' placement can be found in the appendices to Finka et al. (2019).

Fig. 3 Example image from the CatFLW dataset with a bounding box and 48 facial landmarks. The brightness of the image is reduced for greater visibility of facial landmarks

Annotation Process. The landmark annotation had several stages: first, 10% of the images were annotated by an annotator with extensive experience in labeling facial landmarks for various animals. They were then annotated by a second expert following the same annotation instructions. The two sets of landmarks were compared to verify internal validity and reliability via the Intraclass Correlation Coefficient ICC2 (Shrout & Fleiss, 1979), reaching a strong agreement between the annotators with a score of 0.998.
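As a rough illustration of this reliability check, the sketch below computes a two-way random-effects, absolute-agreement, single-rater ICC (the ICC(2,1) of Shrout & Fleiss, 1979) from a matrix of ratings; the data layout (one row per landmark coordinate, one column per annotator) is our assumption rather than a detail given in the paper.

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` has shape (n_targets, n_raters), e.g. flattened landmark
    coordinates as targets and the two annotators as raters (assumed layout).
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    # Two-way ANOVA mean squares.
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((ratings - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
```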
Finally, the remaining images were annotated by the first annotator using the “human-in-the-loop" method described below. After the annotation, a review and correction of the landmarks and bounding boxes were performed.

3.3 AI-assisted Annotation

The concept behind the AI-assisted annotation approach is to systematically annotate training data by leveraging the predictions generated by a machine learning model. This model undergoes a continuous retraining process, each time increasing in accuracy and reducing the time required to annotate the subsequent training data.

Khan et al. (2020) spent approximately 5,408 man-hours annotating all the images in the AnimalWeb dataset (each annotation was obtained by taking the median value of the annotations of five or more error-prone volunteers). Roughly, this is 3 min for the annotation of one image (9 landmarks), or 20 s per landmark per person. Our annotation process took \(\sim \)140 h, which is about 4.16 min per image (48 landmarks) and only 5.2 s per landmark. This performance is achieved thanks to a semi-supervised annotation method, inspired by those in Graving et al. (2019) and Pereira et al. (2019), that uses the predictions of a gradually trained model as a basis for annotation.

Fig. 4 Left: The distribution of annotation time per image for different batches. The predictions of the first model (v1) significantly reduced the annotation time, while the subsequent refinement by the second model (v2) reduced the variance without changing the median value too much. Right: The percentage of images from the batch for which a predicted landmark was not shifted. The more accurate the model's predictions are, the less time is needed for annotation: correctly predicted landmarks do not need to be shifted

To assess the impact of AI on the annotation process, 3 batches were created from the data. The first consisted of 200 images annotated without AI assistance and 620 images from Finka et al. (2019) (due to the absence of consent from the authors, we used their dataset solely for pre-training purposes and excluded it from the training data in our subsequent experiments with the ELD model), 820 images in total. Then, using the AI-assisted annotation methodology, we annotated the second batch (910 images) using the ELD model (v1) trained on the first one. The average annotation time per image between the first and second batches was reduced by 35%, since it became necessary only to adjust the position of each landmark rather than to place it from scratch. In the second step of the process, we annotated the third batch (981 images) using predictions of our model (v2) trained on a combination of the first batch and a manually corrected version of the second (\(\sim \)1700 images in total).
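The iterative procedure can be summarised in a few lines of Python. The sketch below is purely illustrative: `train_fn`, `predict_fn` and `review_fn` are hypothetical caller-supplied stand-ins for model training, landmark prediction and manual correction in the annotation tool, not part of any released code.

```python
def annotate_in_the_loop(batches, seed_annotations, train_fn, predict_fn, review_fn):
    """Hypothetical sketch of the AI-assisted ("human-in-the-loop") annotation loop.

    batches[0] is assumed to be already covered by `seed_annotations`;
    train_fn(labels) -> model, predict_fn(model, image) -> landmarks and
    review_fn(image, landmarks) -> corrected landmarks are supplied by the caller.
    """
    labeled = list(seed_annotations)              # batch 1: fully manual labels
    model = train_fn(labeled)                     # model v1
    for batch in batches[1:]:                     # batches 2, 3, ...
        proposals = [predict_fn(model, img) for img in batch]
        corrected = [review_fn(img, pts)          # annotator shifts only wrong points
                     for img, pts in zip(batch, proposals)]
        labeled.extend(corrected)
        model = train_fn(labeled)                 # retrain (v2, ...) on all labels
    return labeled, model
```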
The results for the time spent annotating one image are shown in Fig. 4 (left). Increasing the accuracy of the predicted landmark positions reduces the time required to adjust them, sometimes to zero when the annotator considers a position to be completely correct. Figure 4 (right) shows the percentage of images in which a specific facial landmark was not shifted by the annotator: between the two versions of the model, this percentage approximately doubles.

4 Ensemble Landmark Detector

The problem of facial landmark regression is formulated as follows: given an image \({\textbf {x}} \in \mathbb {R}^{H \times W \times C}\), we need to construct a function \(f\) (in this case parameterized by a neural network) that maps the space \(\mathbb {R}^{H \times W \times C}\) into the space of \(K=48\) landmarks \(L^K\): \(f({\textbf {x}}) = [{\textbf {y}}_1({\textbf {x}}),\ldots ,{\textbf {y}}_K({\textbf {x}})] \in L^K\). Each landmark \({\textbf {y}}_i({\textbf {x}})\) is defined by two coordinates in the \({\textbf {x}}\) coordinate system.

4.1 Architecture

Grishchenko et al. (2020) point out that “using a single regression network for the entire face leads to degraded quality in regions that are perceptually more significant (e.g. lips, eyes)". Indeed, regression models, when trained correctly and in compliance with the bias-variance tradeoff, provide accurate results only for a relatively small output dimension. For large output dimensions, the model tends to “generalize", or average, the output vector: this explains why, when part of the face is obscured, the hidden landmarks are still determined from the overall geometry of the face and their position relative to other landmarks, rather than from the image itself. Moreover, in the case of the human face, the landmarks are relatively “firmly connected" with each other (with the exception of the eyes and the lower part of the face). As a result, even with an inaccurate prediction from the regression model, the landmarks end up approximately in the right places, giving a relatively small error overall.
In Grishchenko et al. (2020), the authors distinguish three regions of interest on the human face: the two eyes and the lips, since these have the greatest mobility. By analogy, we use magnification of regions to refine landmark coordinates on cats' faces, but in a cascade fashion. For our model, we use an ensemble architecture with five regions of interest, since in addition to the eyes and the whiskers area, cats are characterized by high ear mobility.

Face Detection. The first stage of landmark detection is localization and cropping of the cat's face from the input image. For this, the image is rescaled with padding to a resolution of \(224 \times 224\) and fed as input to the face detector. As a detector, we used an EfficientNetV2 (Tan & Le, 2021) model with a custom head. To do this, we disabled the include_top parameter when initializing the model and added three fully connected layers with ReLU and linear activation functions and sizes of 128, 64 and 4. The model's output is a vector of the 4 coordinates of the bounding box (upper left and lower right corners). Figure 5 shows the detector's structure.

Fig. 5 EfficientNetV2-based detector structure. The model takes a \(224 \times 224\) RGB image and outputs the coordinates vector (top left and bottom right corners of the bounding box)
After detecting and cropping the image by the bounding box, the resulting image is again rescaled to \(224 \times 224\) and passed on for region detection.

Regions Detection. In the next step, we used a model similar to the face detector, except that the output layer has a size of 10. With it, we detect the coordinates of the 5 centers of the regions of interest: the eyes, the nose (whiskers area) and the two ears. As training data, we used the average of the landmark coordinates belonging to each region, obtaining 5 landmarks from the 48.

Ensemble Detection. After determining the centers of the regions of interest, the \(224 \times 224\) image is aligned by the eyes to reduce the spread of roll tilt angles, and five fixed-size regions are cropped out of it. For the eyes, the region size is set to a quarter of the image side (\(56 \times 56\) pixels); for the nose and ears, to half of the image side (\(112 \times 112\) pixels). These sizes were chosen empirically so that, in most cases, all associated landmarks lie within the regions. We chose fixed region sizes for greater normalization of the resulting data: with automatically detected region bounding boxes, the cropped regions would be much more heterogeneous in terms of the position and cropping of the potentially relevant parts of the face (for example, in our pipeline the center of the eye is almost always at the center of the eye region). The regions are then resized to \(224 \times 224\) to match the input size of the EfficientNetV2 model.

The landmarks are then divided into groups according to their corresponding regions: 8 landmarks for each eye, 5 for each ear, and 22 for the nose and whiskers region. All landmarks are detected by an ensemble of five models, each of which has a structure similar to the detector, except that the size of the output layer equals the number of landmarks multiplied by two (the coordinates of the corresponding landmarks). Finally, the landmark coordinates are transferred back into the coordinate system of the original image and combined into a final vector of 96 coordinates (48 landmarks). The final structure of the Ensemble Landmark Detector is shown in Fig. 6.

Fig. 6 Ensemble Landmark Detector. The provided ensemble architecture and region centers could be changed depending on the face morphology
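The sketch below illustrates the geometric bookkeeping of this stage under our assumptions: region centers taken as the mean of each landmark group, fixed-size crops around those centers, and the mapping of region-local predictions back to face coordinates. The group index lists, the lack of border clipping and the omission of the eye-roll alignment are simplifications for illustration.

```python
import numpy as np

# Crop sizes on the 224x224 face image (as described in the text); the landmark
# index lists for each region are assumed to follow the dataset documentation.
REGION_SIZES = {"left_eye": 56, "right_eye": 56,
                "left_ear": 112, "right_ear": 112, "nose": 112}

def region_center(landmarks: np.ndarray, idx) -> np.ndarray:
    """Training target for the region detector: mean of the group's landmarks."""
    return landmarks[idx].mean(axis=0)

def crop_region(face: np.ndarray, center: np.ndarray, size: int):
    """Fixed-size crop around `center` from a 224x224 face image (no border clipping)."""
    x0 = int(round(center[0] - size / 2))
    y0 = int(round(center[1] - size / 2))
    patch = face[y0:y0 + size, x0:x0 + size]
    return patch, (x0, y0)            # offset needed to map predictions back

def to_face_coords(pred_224: np.ndarray, offset, size: int) -> np.ndarray:
    """Map landmarks predicted on the 224x224-resized patch back to face coordinates."""
    scale = size / 224.0
    return pred_224 * scale + np.asarray(offset, dtype=float)
```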
6"><figure><figcaption><b id="Fig6" class="c-article-section__figure-caption" data-test="figure-caption-text">Fig. 6</b></figcaption><div class="c-article-section__figure-content"><div class="c-article-section__figure-item"><a class="c-article-section__figure-link" data-test="img-link" data-track="click" data-track-label="image" data-track-action="view figure" href="/article/10.1007/s11263-024-02006-w/figures/6" rel="nofollow"><picture><source type="image/webp" srcset="//media.springernature.com/lw685/springer-static/image/art%3A10.1007%2Fs11263-024-02006-w/MediaObjects/11263_2024_2006_Fig6_HTML.png?as=webp"><img aria-describedby="Fig6" src="//media.springernature.com/lw685/springer-static/image/art%3A10.1007%2Fs11263-024-02006-w/MediaObjects/11263_2024_2006_Fig6_HTML.png" alt="figure 6" loading="lazy" width="685" height="379"></picture></a></div><div class="c-article-section__figure-description" data-test="bottom-caption" id="figure-6-desc"><p>Ensemble Landmark Detector. Provided ensemble architecture and region centers could be changed depending on the face morphology</p></div></div><div class="u-text-right u-hide-print"><a class="c-article__pill-button" data-test="article-link" data-track="click" data-track-label="button" data-track-action="view figure" href="/article/10.1007/s11263-024-02006-w/figures/6" data-track-dest="link:Figure6 Full size image" aria-label="Full size image figure 6" rel="nofollow"><span>Full size image</span><svg width="16" height="16" focusable="false" role="img" aria-hidden="true" class="u-icon"><use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#icon-eds-i-chevron-right-small"></use></svg></a></div></figure></div></div></div></section><section data-title="Experiments"><div class="c-article-section" id="Sec12-section"><h2 class="c-article-section__title js-section-title js-c-reading-companion-sections-item" id="Sec12"><span class="c-article-section__title-number">5 </span>Experiments</h2><div class="c-article-section__content" id="Sec12-content"><p>We present an ablation study of various versions of our model on CatFLW in order to (1) provide a baseline for detecting landmarks on our dataset, and (2) demonstrate the effectiveness of the proposed features of the ELD model.</p><p>To evaluate the accuracy of the models, we will use the commonly used Normalized Mean Error (NME) metric, which is defined as</p><div id="Equ1" class="c-article-equation"><div class="c-article-equation__content"><span class="mathjax-tex">$$\begin{aligned} NME = \frac{1}{M \cdot N} \sum _{i=1}^{N} \sum _{j=1}^{M} \frac{\left\| {x_i}^j - {x'_i}^j \right\| }{iod_i}, \end{aligned}$$</span></div></div><p>where <i>M</i> is the number of landmarks in the image, <i>N</i> is the number of images in the dataset, <span class="mathjax-tex">\(iod_i\)</span> is the inter-ocular distance (distance between the outer corners of the two eyes), <span class="mathjax-tex">\({x_i}^j\)</span> and <span class="mathjax-tex">\({x'_i}^j\)</span>—the coordinates of the predicted and the ground truth landmark respectively.</p><h3 class="c-article__sub-heading" id="Sec13"><span class="c-article-section__title-number">5.1 </span>Experimental Setup</h3><p>Since all parts of ELD are based on the same backbones, we used the identical training strategy for the face detector, the detector of the centers of regions of interest and each of the five models in the ensemble. 
5.1 Experimental Setup

Since all parts of the ELD are based on the same backbones, we used an identical training strategy for the face detector, the detector of the region-of-interest centers, and each of the five models in the ensemble. We trained our models for 300 epochs using the mean squared error loss and the ADAM optimizer with a starting learning rate of \(10^{-4}\) and a batch size of 16. We lowered the learning rate by a factor of ten each time the validation loss did not improve for 75 epochs, saving the model with the best validation loss.
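In Keras terms, this schedule corresponds roughly to the standard ReduceLROnPlateau and ModelCheckpoint callbacks; the sketch below is our reading of the setup, with the model and the datasets left as placeholders.

```python
import tensorflow as tf

def train(model: tf.keras.Model, train_ds, val_ds):
    """Training loop matching the described setup (MSE loss, ADAM, lr 1e-4)."""
    # The batch size of 16 is assumed to be set when building train_ds / val_ds.
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="mse")
    callbacks = [
        # Drop the learning rate 10x after 75 epochs without val-loss improvement.
        tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss",
                                             factor=0.1, patience=75),
        # Keep only the weights with the best validation loss.
        tf.keras.callbacks.ModelCheckpoint("best.h5", monitor="val_loss",
                                           save_best_only=True),
    ]
    return model.fit(train_ds, validation_data=val_ds,
                     epochs=300, callbacks=callbacks)
```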
We used augmentation of the training data, artificially doubling the size of the training dataset by applying each of the following methods with a 90% probability to each image-landmarks pair: random rotation, changes to color balance, brightness, contrast and sharpness, blur masks and random noise.
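A possible PIL-based implementation of these augmentations is sketched below; the parameter ranges and the landmark rotation convention (PIL rotates counter-clockwise about the image center with expand=False) are our assumptions rather than the paper's exact settings.

```python
import random
import numpy as np
from PIL import Image, ImageEnhance, ImageFilter

def augment(img: Image.Image, pts: np.ndarray, p: float = 0.9):
    """Randomly perturb an image-landmarks pair; pts has shape (48, 2)."""
    img, pts = img.copy(), pts.copy()
    if random.random() < p:                          # random rotation
        angle = random.uniform(-15, 15)              # assumed range, degrees
        img = img.rotate(angle)                      # CCW about the center, same size
        t = np.deg2rad(angle)
        c = np.array([img.width / 2, img.height / 2])
        d = pts - c                                  # rotate landmarks with the image
        pts = np.stack([d[:, 0] * np.cos(t) + d[:, 1] * np.sin(t),
                        -d[:, 0] * np.sin(t) + d[:, 1] * np.cos(t)], axis=1) + c
    for enhancer in (ImageEnhance.Color, ImageEnhance.Brightness,
                     ImageEnhance.Contrast, ImageEnhance.Sharpness):
        if random.random() < p:                      # photometric jitter
            img = enhancer(img).enhance(random.uniform(0.7, 1.3))
    if random.random() < p:                          # blur mask
        img = img.filter(ImageFilter.GaussianBlur(radius=random.uniform(0, 2)))
    if random.random() < p:                          # additive random noise
        arr = np.asarray(img).astype(np.float32)
        arr += np.random.normal(0, 8, arr.shape)
        img = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
    return img, pts
```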
 
 ." href="/article/10.1007/s11263-024-02006-w#ref-CR85" id="ref-link-section-d3557877e2128">2019</a>; Wang et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2019" title="Wang, X., Bo, L., & Fuxin, L. (2019). Adaptive wing loss for robust face alignment via heatmap regression. In: Proceedings of the IEEE/CVF international conference on computer vision (pp. 6971–6981)." href="/article/10.1007/s11263-024-02006-w#ref-CR91" id="ref-link-section-d3557877e2131">2019</a>; Li et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2020" title="Li, W., Lu, Y., Zheng, K., Liao, H., Lin, C., Luo, J., Cheng, C. -T., Xiao, J., Lu, L., & Kuo, C. -F., et al. (2020). Structured landmark detection via topology-adapting deep graph learning. In: Computer vision–ECCV 2020: Proceedings of the 16th European conference, Glasgow, UK, August 23–28, 2020, Part IX 16 (pp. 266–283). Springer." href="/article/10.1007/s11263-024-02006-w#ref-CR56" id="ref-link-section-d3557877e2134">2020</a>, <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2022" title="Li, H., Guo, Z., Rhee, S. -M., Han, S., & Han, J. -J. (2022). Towards accurate facial landmark detection via cascaded transformers. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 4176–4185)." href="/article/10.1007/s11263-024-02006-w#ref-CR54" id="ref-link-section-d3557877e2137">2022</a>; Jin et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2021" title="Jin, H., Liao, S., & Shao, L. (2021). Pixel-in-pixel net: Towards efficient facial landmark detection in the wild. International Journal of Computer Vision, 129, 3174–3194." href="/article/10.1007/s11263-024-02006-w#ref-CR41" id="ref-link-section-d3557877e2140">2021</a>; Huang et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2021" title="Huang, Y., Yang, H., Li, C., Kim, J., & Wei, F. (2021). Adnet: Leveraging error-bias towards normal direction in face alignment. In: Proceedings of the IEEE/CVF international conference on computer vision (pp. 3080–3090)." href="/article/10.1007/s11263-024-02006-w#ref-CR38" id="ref-link-section-d3557877e2144">2021</a>; Lan et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2021" title="Lan, X., Hu, Q., Chen, Q., Xue, J., & Cheng, J. (2021). Hih: Towards more accurate face alignment via heatmap in heatmap. 
Face Detection. Our goal in developing the CatFLW dataset and the ELD model was to build an end-to-end system capable of working in the wild, that is, on unannotated images. Since most facial landmark detection models do not focus on detecting faces (Sun et al., 2019; Wang et al., 2019; Li et al., 2020, 2022; Jin et al., 2021; Huang et al., 2021; Lan et al., 2021; Zhou et al., 2023), but use ready-made bounding boxes at the data preprocessing stage, we do not deviate from this pipeline later in this section, for the sake of a fair comparison. However, we provide results for landmark detection both when using our manually labeled bounding boxes and when using bounding boxes produced by the ELD facial detector. In the experiments we used a series of EfficientNetV2 models (Tan & Le, 2021) as backbones. For additional comparison, we also trained YOLOv8 [86], using the YOLOv8n version, trained for 300 epochs with the default hyperparameters.

From the direct comparison given in Table 2, we can see that the accuracy drops significantly when a facial detector is used. This is explained by the fact that with pre-cropped faces, the scale and approximate position of the landmarks in the image are preserved, whereas with a detector there may be variations in the scale of the detected faces, as well as “cutting off" of meaningful facial parts that potentially contain landmarks.

Table 2 Comparison of landmark detection using training data obtained by facial detector and with pre-cropped faces on the CatFLW test set

In the case of the YOLO model, we observe a higher final accuracy of landmark detection; however, this improvement comes with a disadvantage: in some cases the model does not detect a face bounding box at all, which makes it impossible to detect landmarks in those images. For our test set, we did not obtain landmarks for five images (2.3% of the total).
For comparison, this result is presented here without taking the missing images into account; however, we do not use the YOLO detector in the ELD, to ensure the validity of the experiments.

Regions of Interest. When choosing the size of the regions for ensemble detection, we used empirical values based on the following considerations: if the regions are too small, some information is cut off and the coordinates of the desired landmarks fall outside the region of interest; if they are too large, each region contains a lot of extraneous information, which ultimately reduces detection accuracy.

To test our assumptions, we trained ensemble models on regions of different sizes, changing one region size in each experiment (by default, we use the same region sizes for the pairs of eyes and ears). For convenience, we express the region sizes as a proportion of the rescaled face bounding box (\(224 \times 224\) pixels). The final NME metric is given over all landmarks, in order to measure the total contribution of the changes in each region. The results, confirming our suppositions, are shown in Table 3. Similar experiments were conducted for several types of backbones and showed analogous dependencies.

Table 3 Comparison of landmark detection using different region-of-interest size proportions relative to the face bounding box

Backbones. The EfficientNet models were chosen as the backbone for the ELD because of their balance between the number of parameters and accuracy. Since such backbones are rarely used on their own for regression problems, we could only refer to their accuracy on image classification datasets (Xie et al., 2017; Tan & Le, 2019; Liu et al., 2022) and to their use as backbones in other landmark detection models (Mathis et al., 2018, 2021). For the experiment, we selected several popular and effective backbones and measured the prediction accuracy of the ELD model as a whole, where each of its components uses the given backbone. The results are shown in Table 4.

Table 4 Evaluation of the total landmark detection error on CatFLW using different backbones

It can be seen that more complex backbones generally show better results (and sometimes more architecturally advanced models with fewer parameters demonstrate better accuracy), but the slight increase in accuracy comes at the cost of a significant increase in the number of parameters. Considering that the ELD contains six similar models, we do not provide further results for complex backbones due to their inefficiency for detection.

Data Size Impact. For machine learning models, the problem of “data hunger" is well known, that is, a strong dependence of the model's accuracy on the amount of training data. In Mathis et al. (2018), the authors show that the DeepLabCut model for detecting body landmarks reaches an acceptable error of a few pixels starting from hundreds of annotated images, after which the error keeps decreasing with more data, but much less significantly. A similar trend can be observed for the DeepPoseKit model (Graving et al., 2019).

We present a study of the effect of the training set size on the accuracy of the ELD (Fig. 7). Our model also demonstrates acceptable results starting from several hundred training images, although a further increase in data size still leads to a noticeable increase in accuracy.

Fig. 7 ELD's prediction error on the test set with different numbers of training examples
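The learning-curve experiment can be expressed compactly as below; `train_fn` and `eval_fn` are hypothetical placeholders for the training routine and the NME evaluation on the fixed test split, passed in as callables, and the subset sizes are illustrative.

```python
import numpy as np

def learning_curve(train_images, train_labels, test_set, train_fn, eval_fn,
                   sizes=(100, 250, 500, 1000, 1569), seed: int = 0):
    """Train on nested subsets of increasing size and record the test NME.

    train_fn(images, labels) -> model and eval_fn(model, test_set) -> float
    are caller-supplied; nested subsets keep the comparisons consistent.
    """
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(train_images))
    results = {}
    for n in sizes:
        idx = order[:n]
        model = train_fn([train_images[i] for i in idx],
                         [train_labels[i] for i in idx])
        results[n] = eval_fn(model, test_set)
    return results
```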
7</b></figcaption><div class="c-article-section__figure-content"><div class="c-article-section__figure-item"><a class="c-article-section__figure-link" data-test="img-link" data-track="click" data-track-label="image" data-track-action="view figure" href="/article/10.1007/s11263-024-02006-w/figures/7" rel="nofollow"><picture><source type="image/webp" srcset="//media.springernature.com/lw685/springer-static/image/art%3A10.1007%2Fs11263-024-02006-w/MediaObjects/11263_2024_2006_Fig7_HTML.png?as=webp"><img aria-describedby="Fig7" src="//media.springernature.com/lw685/springer-static/image/art%3A10.1007%2Fs11263-024-02006-w/MediaObjects/11263_2024_2006_Fig7_HTML.png" alt="figure 7" loading="lazy" width="685" height="529"></picture></a></div><div class="c-article-section__figure-description" data-test="bottom-caption" id="figure-7-desc"><p>ELD’s prediction error on the test set with different numbers of training examples</p></div></div><div class="u-text-right u-hide-print"><a class="c-article__pill-button" data-test="article-link" data-track="click" data-track-label="button" data-track-action="view figure" href="/article/10.1007/s11263-024-02006-w/figures/7" data-track-dest="link:Figure7 Full size image" aria-label="Full size image figure 7" rel="nofollow"><span>Full size image</span><svg width="16" height="16" focusable="false" role="img" aria-hidden="true" class="u-icon"><use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#icon-eds-i-chevron-right-small"></use></svg></a></div></figure></div><h3 class="c-article__sub-heading" id="Sec15"><span class="c-article-section__title-number">5.3 </span>Comparison with Other Models on CatFLW</h3><p>For a comparative accuracy evaluation of the ELD, we measured the perfomance of the several popular landmark detectors on the CatFLW. We have selected the following models: DeepPoseKit (DPK) (Graving et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2019" title="Graving, J. M., Chae, D., Naik, H., Li, L., Koger, B., Costelloe, B. R., & Couzin, I. D. (2019). DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning. Elife, 8, 47994." href="/article/10.1007/s11263-024-02006-w#ref-CR30" id="ref-link-section-d3557877e2992">2019</a>), LEAP (Pereira et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2019" title="Pereira, T. D., Aldarondo, D. E., Willmore, L., Kislin, M., Wang, S.S.-H., Murthy, M., & Shaevitz, J. W. (2019). Fast animal pose estimation using deep neural networks. Nature Methods, 16(1), 117–125." href="/article/10.1007/s11263-024-02006-w#ref-CR75" id="ref-link-section-d3557877e2995">2019</a>), DeepLabCut (DLC) (Mathis et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2018" title="Mathis, A., Mamidanna, P., Cury, K. M., Abe, T., Murthy, V. N., Mathis, M. W., & Bethge, M. (2018). Deeplabcut: Markerless pose estimation of user-defined body parts with deep learning. Nature Neuroscience, 21(9), 1281." href="/article/10.1007/s11263-024-02006-w#ref-CR65" id="ref-link-section-d3557877e2998">2018</a>, <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2021" title="Mathis, A., Biasi, T., Schneider, S., Yuksekgonul, M., Rogers, B., Bethge, M., & Mathis, M. W. (2021). 
Pretraining boosts out-of-domain robustness for pose estimation. In: Proceedings of the IEEE/CVF winter conference on applications of computer vision (pp. 1859–1868)." href="/article/10.1007/s11263-024-02006-w#ref-CR64" id="ref-link-section-d3557877e3001">2021</a>; Nath et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2019" title="Nath, T., Mathis, A., Chen, A. C., Patel, A., Bethge, M., & Mathis, M. W. (2019). Using deeplabcut for 3D markerless pose estimation across species and behaviors. Nature Protocols, 14(7), 2152–2176." href="/article/10.1007/s11263-024-02006-w#ref-CR72" id="ref-link-section-d3557877e3004">2019</a>) and Stacked Hourglass (Newell et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2016" title="Newell, A., Yang, K., & Deng, J. (2016). Stacked hourglass networks for human pose estimation, pp. 483–499. Springer." href="/article/10.1007/s11263-024-02006-w#ref-CR73" id="ref-link-section-d3557877e3008">2016</a>). Despite the fact that these models are usually used as pose detectors (Ye et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2022" title="Ye, S., Filippova, A., Lauer, J., Vidal, M., Schneider, S., Qiu, T., Mathis, A., & Mathis, M. W. (2022). Superanimal models pretrained for plug-and-play analysis of animal behavior. 
 arXiv:2203.07436
 
 ." href="/article/10.1007/s11263-024-02006-w#ref-CR101" id="ref-link-section-d3557877e3011">2022</a>), they are designed as general-purpose models capable of detecting various kinds of landmarks, including facial ones, and are widely used in the literature (Labuguen et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2019" title="Labuguen, R., Bardeloza, D. K., Negrete, S. B., Matsumoto, J., Inoue, K., & Shibata, T. (2019). Primate markerless pose estimation and movement analysis using deeplabcut. In: 2019 Joint 8th International Conference on Informatics, Electronics & Vision (ICIEV) and 2019 3rd international conference on imaging, vision & pattern recognition (icIVPR) (pp. 297–300). IEEE." href="/article/10.1007/s11263-024-02006-w#ref-CR49" id="ref-link-section-d3557877e3014">2019</a>; Mathis & Mathis, <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2020" title="Mathis, M. W., & Mathis, A. (2020). Deep learning tools for the measurement of animal behavior in neuroscience. Current Opinion in Neurobiology, 60, 1–11." href="/article/10.1007/s11263-024-02006-w#ref-CR66" id="ref-link-section-d3557877e3017">2020</a>; Zhan et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2021" title="Zhan, W., Zou, Y., He, Z., & Zhang, Z. (2021). Key points tracking and grooming behavior recognition of bactrocera minax (diptera: Trypetidae) via deeplabcut. Mathematical Problems in Engineering, 2021, 1–15." href="/article/10.1007/s11263-024-02006-w#ref-CR104" id="ref-link-section-d3557877e3020">2021</a>).</p><p>We trained the models on the CatFLW training set and evaluated them on the test set using the NME metric (in all cases, we used pre-cropped faces with a resolution of <span class="mathjax-tex">\(224 \times 224\)</span> according to the CatFLW’s bounding boxes). The training process was performed using the DeepPoseKit platform (Graving et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2019" title="Graving, J. M., Chae, D., Naik, H., Li, L., Koger, B., Costelloe, B. R., & Couzin, I. D. (2019). DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning. Elife, 8, 47994." href="/article/10.1007/s11263-024-02006-w#ref-CR30" id="ref-link-section-d3557877e3050">2019</a>). All models were trained for 300 epochs with a batch size of 16, mean squared error loss, the ADAM optimizer, and the optimal parameters for each model (as indicated in the corresponding papers). During preprocessing, we used a similar selection of landmark regions as in the ELD, as well as a similar approach to image and landmark augmentation.</p>
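<p>For concreteness, the minimal sketch below shows how such an NME evaluation can be computed with NumPy. It is not the authors’ code: the function names and array shapes are illustrative, and the normalization term is assumed here to be the diagonal of the face bounding box (the paper defines its own normalization factor in the metrics description).</p><pre><code>import numpy as np

def nme(pred, gt, norm):
    """Normalized Mean Error for a single image.

    pred, gt : (K, 2) arrays of predicted / ground-truth landmark coordinates.
    norm     : scalar normalization factor (here: face bounding-box diagonal).
    """
    per_point = np.linalg.norm(pred - gt, axis=1)  # Euclidean error per landmark
    return per_point.mean() / norm

def dataset_nme(preds, gts, boxes):
    """Average NME over a test set; boxes are (x1, y1, x2, y2) face boxes."""
    scores = []
    for p, g, (x1, y1, x2, y2) in zip(preds, gts, boxes):
        diag = np.hypot(x2 - x1, y2 - y1)  # assumed normalization term
        scores.append(nme(p, g, diag))
    return float(np.mean(scores))
</code></pre>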
<p>The results of the experiments are shown in Table <a data-track="click" data-track-label="link" data-track-action="table anchor" href="/article/10.1007/s11263-024-02006-w#Tab5">5</a>.</p><div class="c-article-table" data-test="inline-table" data-container-section="table" id="table-5"><figure><figcaption class="c-article-table__figcaption"><b id="Tab5" data-test="table-caption">Table 5 Comparison of landmark detection error on the CatFLW dataset using different detection models</b></figcaption><div class="u-text-right u-hide-print"><a class="c-article__pill-button" data-test="table-link" data-track="click" data-track-action="view table" data-track-label="button" rel="nofollow" href="/article/10.1007/s11263-024-02006-w/tables/5" aria-label="Full size table 5"><span>Full size table</span><svg width="16" height="16" focusable="false" role="img" aria-hidden="true" class="u-icon"><use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#icon-eds-i-chevron-right-small"></use></svg></a></div></figure></div><div class="c-article-table" data-test="inline-table" data-container-section="table" id="table-6"><figure><figcaption class="c-article-table__figcaption"><b id="Tab6" data-test="table-caption">Table 6 Evaluation of landmark detection on WFLW</b></figcaption><div class="u-text-right u-hide-print"><a class="c-article__pill-button" data-test="table-link" data-track="click" data-track-action="view table" data-track-label="button" rel="nofollow" href="/article/10.1007/s11263-024-02006-w/tables/6" aria-label="Full size table 6"><span>Full size table</span><svg width="16" height="16" focusable="false" role="img" aria-hidden="true" class="u-icon"><use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#icon-eds-i-chevron-right-small"></use></svg></a></div></figure></div><p>From these results, it is evident that our proposed ELD model demonstrates superior accuracy compared to the other models even with fewer parameters, and that its accuracy only improves further as the number of parameters increases, albeit at a significant cost in model size.</p><h3 class="c-article__sub-heading" id="Sec16"><span class="c-article-section__title-number">5.4 </span>WFLW Dataset</h3><p>We additionally evaluate our method on the popular human face WFLW (Wider Facial Landmarks in-the-Wild) dataset (Wu et al., <a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2018" title="Wu, W., Qian, C., Yang, S., Wang, Q., Cai, Y., & Zhou, Q. (2018). Look at boundary: A boundary-aware face alignment algorithm. In: Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2129–2138)." href="/article/10.1007/s11263-024-02006-w#ref-CR93" id="ref-link-section-d3557877e3815">2018</a>). It contains 7,500 training images and 2,500 test images annotated with 98 landmarks.</p><p>For ensemble detection, we disable our face detection model, since for this dataset the bounding boxes are conventionally derived from the landmarks themselves. We do not change the architecture of the model, besides dividing the human face into four regions of interest: the two eyes, the nose, and the mouth. We separately detect the 33 landmarks on the lower part of the face, as they are not suitable for our magnifying method. We used the same training parameters as for the corresponding models on CatFLW.</p>
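<p>The minimal sketch below illustrates one way such a region split can be expressed; it is not the authors’ implementation. The 98 WFLW points are grouped into eye, nose and mouth regions whose centers define the magnified crops, while the 33 contour (“jaw”) points are kept apart. The index ranges follow the commonly used WFLW annotation convention, and all names, the crop size, and the treatment of the eyebrow points are illustrative assumptions.</p><pre><code>import numpy as np

# Assumed WFLW 98-point index convention (standard for the dataset, not defined in this paper):
# 0-32 face contour ("jaw"), 33-50 eyebrows, 51-59 nose, 60-75 eyes, 76-95 mouth, 96-97 pupils.
REGIONS = {
    "left_eye":  list(range(60, 68)) + [96],
    "right_eye": list(range(68, 76)) + [97],
    "nose":      list(range(51, 60)),
    "mouth":     list(range(76, 96)),
}
JAW = list(range(0, 33))  # detected separately, without the magnifying step
# Eyebrow points (33-50) could be attached to the eye regions; omitted here for brevity.

def region_crops(coarse_pts, image, crop_size=64):
    """Cut a square patch around each region's center for the second-stage detectors.

    coarse_pts : (98, 2) array of first-stage landmark estimates.
    image      : (H, W, 3) array.
    """
    crops = {}
    h, w = image.shape[:2]
    for name, idx in REGIONS.items():
        cx, cy = coarse_pts[idx].mean(axis=0)          # center of the region
        half = crop_size // 2
        x1, y1 = int(max(cx - half, 0)), int(max(cy - half, 0))
        x2, y2 = int(min(cx + half, w)), int(min(cy + half, h))
        crops[name] = image[y1:y2, x1:x2]
    return crops
</code></pre>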
<p>We compare ELD with several state-of-the-art methods, as shown in Table <a data-track="click" data-track-label="link" data-track-action="table anchor" href="/article/10.1007/s11263-024-02006-w#Tab6">6</a>.</p><p>Some of the results for our model are reported without detecting the landmarks on the lower border of the face. Because the magnifying ensemble method cannot be applied to them, the detection accuracy on jaw landmarks is significantly worse than on the rest. Table <a data-track="click" data-track-label="link" data-track-action="table anchor" href="/article/10.1007/s11263-024-02006-w#Tab7">7</a> shows the NME for landmarks taken on individual parts of the face.</p><p>These results demonstrate that our model handles localized groups of landmarks better, while being significantly inferior on non-localized ones. When detecting a partial set of landmarks, our model demonstrates results comparable to the state-of-the-art.</p><h3 class="c-article__sub-heading" id="Sec17"><span class="c-article-section__title-number">5.5 </span>Complex Cases</h3><p>To further evaluate the effectiveness of the ELD, we conducted a qualitative analysis of its operation on particularly challenging images that are not present in the training dataset. By challenging images, we mean images of cats not from CatFLW in which the faces are partially occluded or heavily rotated, or in which several cats are present. We studied the performance of the model on 50 selected images that fit these criteria.</p><div class="c-article-section__figure js-c-reading-companion-figures-item" data-test="figure" data-container-section="figure" id="figure-8" data-title="Fig. 8"><figure><figcaption><b id="Fig8" class="c-article-section__figure-caption" data-test="figure-caption-text">Fig. 8</b></figcaption><div class="c-article-section__figure-content"><div class="c-article-section__figure-item"><a class="c-article-section__figure-link" data-test="img-link" data-track="click" data-track-label="image" data-track-action="view figure" href="/article/10.1007/s11263-024-02006-w/figures/8" rel="nofollow"><picture><source type="image/webp" srcset="//media.springernature.com/lw685/springer-static/image/art%3A10.1007%2Fs11263-024-02006-w/MediaObjects/11263_2024_2006_Fig8_HTML.png?as=webp"><img aria-describedby="Fig8" src="//media.springernature.com/lw685/springer-static/image/art%3A10.1007%2Fs11263-024-02006-w/MediaObjects/11263_2024_2006_Fig8_HTML.png" alt="figure 8" loading="lazy" width="685" height="228"></picture></a></div><div class="c-article-section__figure-description" data-test="bottom-caption" id="figure-8-desc"><p>ELD predictions on complex images. Left to right: multiple cats, partial occlusion, extreme head tilt angle. Images are taken from Unsplash (<a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2023" title="Unsplash. 
 https://unsplash.com
 
 ." href="/article/10.1007/s11263-024-02006-w#ref-CR89" id="ref-link-section-d3557877e3851">2023</a>)</p></div></div><div class="u-text-right u-hide-print"><a class="c-article__pill-button" data-test="article-link" data-track="click" data-track-label="button" data-track-action="view figure" href="/article/10.1007/s11263-024-02006-w/figures/8" data-track-dest="link:Figure8 Full size image" aria-label="Full size image figure 8" rel="nofollow"><span>Full size image</span><svg width="16" height="16" focusable="false" role="img" aria-hidden="true" class="u-icon"><use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#icon-eds-i-chevron-right-small"></use></svg></a></div></figure></div><p>A limitation of the current ELD architecture is that it detects only a single object per image, due to the fixed output of the facial detection model (regardless of whether the object is absent altogether or there are several of them). A possible solution is a YOLOv8-based detector, which allows both the detection of multiple objects and their classification.</p><p>A peculiarity of the ensemble structure is that its parts are not coherent with one another. Therefore, under partial occlusion or an extreme position of the cat’s face, ELD behaves differently from other models, which in such cases produce “average” predictions. For visible parts, landmarks are determined almost precisely, whereas for occluded parts the predictions may be relatively random with respect to the other landmarks (while still remaining consistent within their region).</p><p>Examples of the ELD’s performance on complex cases are shown in Fig. <a data-track="click" data-track-label="link" data-track-action="figure anchor" href="/article/10.1007/s11263-024-02006-w#Fig8">8</a>.</p><div class="c-article-table" data-test="inline-table" data-container-section="table" id="table-7"><figure><figcaption class="c-article-table__figcaption"><b id="Tab7" data-test="table-caption">Table 7 NME on different face parts using EfficientNetV2B1 as a backbone</b></figcaption><div class="u-text-right u-hide-print"><a class="c-article__pill-button" data-test="table-link" data-track="click" data-track-action="view table" data-track-label="button" rel="nofollow" href="/article/10.1007/s11263-024-02006-w/tables/7" aria-label="Full size table 7"><span>Full size table</span><svg width="16" height="16" focusable="false" role="img" aria-hidden="true" class="u-icon"><use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#icon-eds-i-chevron-right-small"></use></svg></a></div></figure></div><h3 class="c-article__sub-heading" id="Sec18"><span class="c-article-section__title-number">5.6 </span>Transfer to Other Cat Species</h3><p>To investigate the generalization abilities of our model, we additionally tested it on images of other <i>felidae</i> cat species. Due to the lack of annotated facial landmarks for other species, we were able to conduct only a qualitative study of the detection accuracy. During testing, we noted that, owing to the ensemble structure of the detector, several factors affect detection accuracy.</p><p>Firstly, the position of the detected bounding box and the five centers of the regions of interest varied greatly depending on the appearance of the cat. 
For example, for lions (<i>panthera leo</i>), a strong difference in detection was visible between male and female individuals, since the ELD trained on domestic cats often detected the mane as part of the face.</p><p>Secondly, the detection accuracy is influenced by specific morphological features of individuals from different subfamilies. For example, we noticed that the <i>pantherinae</i> subfamily has a flat lower jaw shape, which is unusual for domestic cats and therefore negatively affects the predictions of the corresponding model in the ensemble. On the other hand, the shape of the ears does not appear to greatly affect detection accuracy, at least visually, for both rounded and pointed ears.</p><p>Overall, the closer a species is anatomically to the domestic cat, the more acceptable the detection results are. Thus, for leopards (<i>panthera pardus</i>), jaguars (<i>panthera onca</i>), tigers (<i>panthera tigris</i>) and lions (<i>panthera leo</i>), which belong to a different subfamily than domestic cats (<i>felis catus</i>), detection visually works worse in the nose and mouth area. For more closely related species, such as the cheetah (<i>acinonyx jubatus</i>) or lynx (various <i>lynx</i> species), detection is more accurate due to greater external similarity. This suggests that, when trained on new data covering various feline species annotated with the same 48-landmark scheme, the ELD would achieve high detection accuracy, although new annotation schemes may be needed to account for the anatomical specifics of different species. Examples of the ELD’s performance on various cats are shown in Fig. <a data-track="click" data-track-label="link" data-track-action="figure anchor" href="/article/10.1007/s11263-024-02006-w#Fig9">9</a>.</p><div class="c-article-section__figure js-c-reading-companion-figures-item" data-test="figure" data-container-section="figure" id="figure-9" data-title="Fig. 9"><figure><figcaption><b id="Fig9" class="c-article-section__figure-caption" data-test="figure-caption-text">Fig. 9</b></figcaption><div class="c-article-section__figure-content"><div class="c-article-section__figure-item"><a class="c-article-section__figure-link" data-test="img-link" data-track="click" data-track-label="image" data-track-action="view figure" href="/article/10.1007/s11263-024-02006-w/figures/9" rel="nofollow"><picture><source type="image/webp" srcset="//media.springernature.com/lw685/springer-static/image/art%3A10.1007%2Fs11263-024-02006-w/MediaObjects/11263_2024_2006_Fig9_HTML.png?as=webp"><img aria-describedby="Fig9" src="//media.springernature.com/lw685/springer-static/image/art%3A10.1007%2Fs11263-024-02006-w/MediaObjects/11263_2024_2006_Fig9_HTML.png" alt="figure 9" loading="lazy" width="685" height="169"></picture></a></div><div class="c-article-section__figure-description" data-test="bottom-caption" id="figure-9-desc"><p>ELD predictions on different felidae. Left to right: tiger, cheetah, lion, lynx. Images are taken from Unsplash (<a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2023" title="Unsplash. 
 https://unsplash.com
 
 ." href="/article/10.1007/s11263-024-02006-w#ref-CR89" id="ref-link-section-d3557877e4023">2023</a>)</p></div></div><div class="u-text-right u-hide-print"><a class="c-article__pill-button" data-test="article-link" data-track="click" data-track-label="button" data-track-action="view figure" href="/article/10.1007/s11263-024-02006-w/figures/9" data-track-dest="link:Figure9 Full size image" aria-label="Full size image figure 9" rel="nofollow"><span>Full size image</span><svg width="16" height="16" focusable="false" role="img" aria-hidden="true" class="u-icon"><use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#icon-eds-i-chevron-right-small"></use></svg></a></div></figure></div></div></div></section><section data-title="Conclusions and Future Work"><div class="c-article-section" id="Sec19-section"><h2 class="c-article-section__title js-section-title js-c-reading-companion-sections-item" id="Sec19"><span class="c-article-section__title-number">6 </span>Conclusions and Future Work</h2><div class="c-article-section__content" id="Sec19-content"><p>The field of animal affective computing is only just beginning to emerge. One of the most significant obstacles that researchers in this field currently face is the scarcity of high-quality, comprehensive datasets, as highlighted in Broome et al. (<a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2023" title="Broome, S., Feighelstein, M., Zamansky, A., Lencioni, C. G., Andersen, H. P., Pessanha, F., Mahmoud, M., Kjellström, H., & Salah, A. A. (2023). Going deeper than tracking: A survey of computer-vision based recognition of animal pain and emotions. International Journal of Computer Vision, 131(2), 572–590." href="/article/10.1007/s11263-024-02006-w#ref-CR9" id="ref-link-section-d3557877e4044">2023</a>). The annotation structure of the proposed dataset is grounded in the anatomy of the feline face, which makes it possible to use the dataset not only for the detection of facial landmarks, but also for a deeper analysis of the internal state of cats. It is our hope that the contributions of this paper will support the ongoing efforts to advance the field, and specifically automated facial analysis for a variety of species.</p><p>The experiments show that, on the CatFLW dataset, our ELD model surpasses the existing popular landmark detection models in detection accuracy, while on average containing significantly more parameters and, as a result, being slower. As noted in Graving et al. (<a data-track="click" data-track-action="reference anchor" data-track-label="link" data-test="citation-ref" aria-label="Reference 2019" title="Graving, J. M., Chae, D., Naik, H., Li, L., Koger, B., Costelloe, B. R., & Couzin, I. D. (2019). DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning. Elife, 8, 47994." href="/article/10.1007/s11263-024-02006-w#ref-CR30" id="ref-link-section-d3557877e4050">2019</a>), when choosing among the available models, the researcher should weigh the accuracy-speed trade-off against the problem at hand. We emphasize that for studies where landmarks are used to detect the position of an animal in real time and a detection error of a few pixels is insignificant, most of the models we tested could be preferable. 
Conversely, in cases where experiments run over a prolonged period, the time spent on data processing is not critical, and detection accuracy directly affects further results, our model outperforms the others.</p><p>Because the magnifying method operates on semantically grouped landmarks, the ELD model can solve the landmark detection problem with high accuracy beyond cat faces alone, as demonstrated on the popular human WFLW dataset. A limitation of our approach is the need for landmarks to be closely grouped, which is not fulfilled, for example, for the jaw landmarks of that dataset. One possible solution is to further divide such “spaced” landmarks into groups, which, however, may entail a loss of detection efficiency.</p><p>An interesting extension of our model would be to replace the face detector with a more “dynamic” YOLOv8. As shown in the Face Detection section, such a detector provides greater detection accuracy on images without preprocessing, although it tends to skip some of them. In cases where this is not critical (for example, when processing video with a large number of frames), such a replacement can provide a high level of landmark detection while skipping images in which the cat’s face is missing or not fully visible. Studying such an application is one direction of our further work on the ELD. Future research also includes extending the CatFLW dataset with more images and improving the ELD model’s performance. To do this, we plan to explore new backbones, use new approaches to image augmentation, and possibly parallelize computations. We will also investigate how well the model generalizes to other species, such as dogs and monkeys.</p><p>We believe that the proposed approaches can serve as a guideline for similar solutions addressing the identification and the detection of emotions and internal states of various animals in an effective and non-invasive way.</p></div></div></section> </div> <section data-title="Data availability"><div class="c-article-section" id="data-availability-section"><h2 class="c-article-section__title js-section-title js-c-reading-companion-sections-item" id="data-availability">Data availability</h2><div class="c-article-section__content" id="data-availability-content"> <p>The dataset generated during and/or analysed during the current study is available from the corresponding author on reasonable request.</p> </div></div></section><div id="MagazineFulltextArticleBodySuffix"><section aria-labelledby="Bib1" data-title="References"><div class="c-article-section" id="Bib1-section"><h2 class="c-article-section__title js-section-title js-c-reading-companion-sections-item" id="Bib1">References</h2><div class="c-article-section__content" id="Bib1-content"><div data-container-section="references"><ul class="c-article-references" data-track-component="outbound reference" data-track-context="references section"><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR1">Aghdam, H. H., Gonzalez-Garcia, A., Weijer, J. v. d., & López, A. M. (2019). Active learning for deep detection neural networks. In: <i>Proceedings of the IEEE/CVF international conference on computer vision</i> (pp. 3672–3680).</p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR2">Akinyelu, A. A., & Blignaut, P. (2022). 
Convolutional neural network-based technique for gaze estimation on mobile devices. <i>Frontiers in Artificial Intelligence,</i> <i>4</i>, 796825.</p><p class="c-article-references__links u-hide-print"><a data-track="click_references" data-track-action="google scholar reference" data-track-value="google scholar reference" data-track-label="link" data-track-item_id="link" rel="nofollow noopener" aria-label="Google Scholar reference 2" href="http://scholar.google.com/scholar_lookup?&title=Convolutional%20neural%20network-based%20technique%20for%20gaze%20estimation%20on%20mobile%20devices&journal=Frontiers%20in%20Artificial%20Intelligence&volume=4&publication_year=2022&author=Akinyelu%2CAA&author=Blignaut%2CP"> Google Scholar</a> </p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR3">Al-Eidan, R. M., Al-Khalifa, H. S., & Al-Salman, A. S. (2020). Deep-learning-based models for pain recognition: A systematic review. <i>Applied Sciences,</i> <i>10</i>, 5984.</p><p class="c-article-references__links u-hide-print"><a data-track="click_references" data-track-action="google scholar reference" data-track-value="google scholar reference" data-track-label="link" data-track-item_id="link" rel="nofollow noopener" aria-label="Google Scholar reference 3" href="http://scholar.google.com/scholar_lookup?&title=Deep-learning-based%20models%20for%20pain%20recognition%3A%20A%20systematic%20review&journal=Applied%20Sciences&volume=10&publication_year=2020&author=Al-Eidan%2CRM&author=Al-Khalifa%2CHS&author=Al-Salman%2CAS"> Google Scholar</a> </p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR4">Belhumeur, P. N., Jacobs, D. W., Kriegman, D. J., & Kumar, N. (2013). Localizing parts of faces using a consensus of exemplars. <i>IEEE Transactions on Pattern Analysis and Machine Intelligence,</i> <i>35</i>(12), 2930–2940.</p><p class="c-article-references__links u-hide-print"><a data-track="click_references" data-track-action="google scholar reference" data-track-value="google scholar reference" data-track-label="link" data-track-item_id="link" rel="nofollow noopener" aria-label="Google Scholar reference 4" href="http://scholar.google.com/scholar_lookup?&title=Localizing%20parts%20of%20faces%20using%20a%20consensus%20of%20exemplars&journal=IEEE%20Transactions%20on%20Pattern%20Analysis%20and%20Machine%20Intelligence&volume=35&issue=12&pages=2930-2940&publication_year=2013&author=Belhumeur%2CPN&author=Jacobs%2CDW&author=Kriegman%2CDJ&author=Kumar%2CN"> Google Scholar</a> </p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR5">Bennett, V., Gourkow, N., & Mills, D. S. (2017). Facial correlates of emotional behaviour in the domestic cat (felis catus). 
<i>Behavioural Processes,</i> <i>141</i>, 342–350.</p><p class="c-article-references__links u-hide-print"><a data-track="click_references" data-track-action="google scholar reference" data-track-value="google scholar reference" data-track-label="link" data-track-item_id="link" rel="nofollow noopener" aria-label="Google Scholar reference 5" href="http://scholar.google.com/scholar_lookup?&title=Facial%20correlates%20of%20emotional%20behaviour%20in%20the%20domestic%20cat%20%28felis%20catus%29&journal=Behavioural%20Processes&volume=141&pages=342-350&publication_year=2017&author=Bennett%2CV&author=Gourkow%2CN&author=Mills%2CDS"> Google Scholar</a> </p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR6">Bierbach, D., Laskowski, K. L., & Wolf, M. (2017). Behavioural individuality in clonal fish arises despite near-identical rearing conditions. <i>Nature Communications,</i> <i>8</i>(1), 15361.</p><p class="c-article-references__links u-hide-print"><a data-track="click_references" data-track-action="google scholar reference" data-track-value="google scholar reference" data-track-label="link" data-track-item_id="link" rel="nofollow noopener" aria-label="Google Scholar reference 6" href="http://scholar.google.com/scholar_lookup?&title=Behavioural%20individuality%20in%20clonal%20fish%20arises%20despite%20near-identical%20rearing%20conditions&journal=Nature%20Communications&volume=8&issue=1&publication_year=2017&author=Bierbach%2CD&author=Laskowski%2CKL&author=Wolf%2CM"> Google Scholar</a> </p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR7">Billah, M., Wang, X., Yu, J., & Jiang, Y. (2022). Real-time goat face recognition using convolutional neural network. <i>Computers and Electronics in Agriculture,</i> <i>194</i>, 106730.</p><p class="c-article-references__links u-hide-print"><a data-track="click_references" data-track-action="google scholar reference" data-track-value="google scholar reference" data-track-label="link" data-track-item_id="link" rel="nofollow noopener" aria-label="Google Scholar reference 7" href="http://scholar.google.com/scholar_lookup?&title=Real-time%20goat%20face%20recognition%20using%20convolutional%20neural%20network&journal=Computers%20and%20Electronics%20in%20Agriculture&volume=194&publication_year=2022&author=Billah%2CM&author=Wang%2CX&author=Yu%2CJ&author=Jiang%2CY"> Google Scholar</a> </p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR8">Brondani, J. T., Mama, K. R., Luna, S. P., Wright, B. D., Niyom, S., Ambrosio, J., Vogel, P. R., & Padovani, C. R. (2013). Validation of the English version of the UNESP-Botucatu multidimensional composite pain scale for assessing postoperative pain in cats. 
<i>BMC Veterinary Research,</i> <i>9</i>(1), 1–15.</p><p class="c-article-references__links u-hide-print"><a data-track="click_references" data-track-action="google scholar reference" data-track-value="google scholar reference" data-track-label="link" data-track-item_id="link" rel="nofollow noopener" aria-label="Google Scholar reference 8" href="http://scholar.google.com/scholar_lookup?&title=Validation%20of%20the%20English%20version%20of%20the%20UNESP-Botucatu%20multidimensional%20composite%20pain%20scale%20for%20assessing%20postoperative%20pain%20in%20cats&journal=BMC%20Veterinary%20Research&volume=9&issue=1&pages=1-15&publication_year=2013&author=Brondani%2CJT&author=Mama%2CKR&author=Luna%2CSP&author=Wright%2CBD&author=Niyom%2CS&author=Ambrosio%2CJ&author=Vogel%2CPR&author=Padovani%2CCR"> Google Scholar</a> </p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR9">Broome, S., Feighelstein, M., Zamansky, A., Lencioni, C. G., Andersen, H. P., Pessanha, F., Mahmoud, M., Kjellström, H., & Salah, A. A. (2023). Going deeper than tracking: A survey of computer-vision based recognition of animal pain and emotions. <i>International Journal of Computer Vision,</i> <i>131</i>(2), 572–590.</p><p class="c-article-references__links u-hide-print"><a data-track="click_references" data-track-action="google scholar reference" data-track-value="google scholar reference" data-track-label="link" data-track-item_id="link" rel="nofollow noopener" aria-label="Google Scholar reference 9" href="http://scholar.google.com/scholar_lookup?&title=Going%20deeper%20than%20tracking%3A%20A%20survey%20of%20computer-vision%20based%20recognition%20of%20animal%20pain%20and%20emotions&journal=International%20Journal%20of%20Computer%20Vision&volume=131&issue=2&pages=572-590&publication_year=2023&author=Broome%2CS&author=Feighelstein%2CM&author=Zamansky%2CA&author=Lencioni%2CCG&author=Andersen%2CHP&author=Pessanha%2CF&author=Mahmoud%2CM&author=Kjellstr%C3%B6m%2CH&author=Salah%2CAA"> Google Scholar</a> </p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR10">Brown, A. E., Yemini, E. I., Grundy, L. J., Jucikas, T., & Schafer, W. R. (2013). A dictionary of behavioral motifs reveals clusters of genes affecting caenorhabditis elegans locomotion. <i>Proceedings of the National Academy of Sciences,</i> <i>110</i>(2), 791–796.</p><p class="c-article-references__links u-hide-print"><a data-track="click_references" data-track-action="google scholar reference" data-track-value="google scholar reference" data-track-label="link" data-track-item_id="link" rel="nofollow noopener" aria-label="Google Scholar reference 10" href="http://scholar.google.com/scholar_lookup?&title=A%20dictionary%20of%20behavioral%20motifs%20reveals%20clusters%20of%20genes%20affecting%20caenorhabditis%20elegans%20locomotion&journal=Proceedings%20of%20the%20National%20Academy%20of%20Sciences&volume=110&issue=2&pages=791-796&publication_year=2013&author=Brown%2CAE&author=Yemini%2CEI&author=Grundy%2CLJ&author=Jucikas%2CT&author=Schafer%2CWR"> Google Scholar</a> </p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR11">Caeiro, C. C., Burrows, A. M., & Waller, B. M. (2017). Development and application of catfacs: Are human cat adopters influenced by cat facial expressions? 
<i>Applied Animal Behaviour Science,</i> <i>189</i>, 66–78.</p><p class="c-article-references__links u-hide-print"><a data-track="click_references" data-track-action="google scholar reference" data-track-value="google scholar reference" data-track-label="link" data-track-item_id="link" rel="nofollow noopener" aria-label="Google Scholar reference 11" href="http://scholar.google.com/scholar_lookup?&title=Development%20and%20application%20of%20catfacs%3A%20Are%20human%20cat%20adopters%20influenced%20by%20cat%20facial%20expressions%3F&journal=Applied%20Animal%20Behaviour%20Science&volume=189&pages=66-78&publication_year=2017&author=Caeiro%2CCC&author=Burrows%2CAM&author=Waller%2CBM"> Google Scholar</a> </p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR12">Cao, J., Tang, H., Fang, H. -S., Shen, X., Lu, C., & Tai, Y. -W. (2019). Cross-domain adaptation for animal pose estimation. In: <i>Proceedings of the IEEE/CVF international conference on computer vision</i> (pp. 9498–9507).</p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR13">Chen, P., Swarup, P., Matkowski, W. M., Kong, A. W. K., Han, S., Zhang, Z., & Rong, H. (2020). A study on giant panda recognition based on images of a large proportion of captive pandas. <i>Ecology and Evolution,</i> <i>10</i>(7), 3561–3573.</p><p class="c-article-references__links u-hide-print"><a data-track="click_references" data-track-action="google scholar reference" data-track-value="google scholar reference" data-track-label="link" data-track-item_id="link" rel="nofollow noopener" aria-label="Google Scholar reference 13" href="http://scholar.google.com/scholar_lookup?&title=A%20study%20on%20giant%20panda%20recognition%20based%20on%20images%20of%20a%20large%20proportion%20of%20captive%20pandas&journal=Ecology%20and%20Evolution&volume=10&issue=7&pages=3561-3573&publication_year=2020&author=Chen%2CP&author=Swarup%2CP&author=Matkowski%2CWM&author=Kong%2CAWK&author=Han%2CS&author=Zhang%2CZ&author=Rong%2CH"> Google Scholar</a> </p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR14">Clapham, M., Miller, E., Nguyen, M., & Van Horn, R. C. (2022). Multispecies facial detection for individual identification of wildlife: A case study across ursids. <i>Mammalian Biology,</i> <i>102</i>(3), 943–955.</p><p class="c-article-references__links u-hide-print"><a data-track="click_references" data-track-action="google scholar reference" data-track-value="google scholar reference" data-track-label="link" data-track-item_id="link" rel="nofollow noopener" aria-label="Google Scholar reference 14" href="http://scholar.google.com/scholar_lookup?&title=Multispecies%20facial%20detection%20for%20individual%20identification%20of%20wildlife%3A%20A%20case%20study%20across%20ursids&journal=Mammalian%20Biology&volume=102&issue=3&pages=943-955&publication_year=2022&author=Clapham%2CM&author=Miller%2CE&author=Nguyen%2CM&author=Horn%2CRC"> Google Scholar</a> </p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR15">Collins, B., Deng, J., Li, K., & Fei-Fei, L. (2008). Towards scalable dataset construction: An active learning approach. 
In: <i>Proceedings of computer vision–ECCV 2008: 10th European conference on computer vision, Marseille, France, October 12-18, 2008, Part I 10</i> (pp. 86–98). Springer.</p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR16">Dapogny, A., Bailly, K., & Cord, M. (2019). Decafa: Deep convolutional cascade for face alignment in the wild. In: <i>Proceedings of the IEEE/CVF international conference on computer vision</i> (pp. 6893–6901).</p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR17">Dawson, L. C., Cheal, J., Niel, L., & Mason, G. (2019). Humans can identify cats’ affective states from subtle facial expressions. <i>Animal Welfare,</i> <i>28</i>(4), 519–531.</p><p class="c-article-references__links u-hide-print"><a data-track="click_references" data-track-action="google scholar reference" data-track-value="google scholar reference" data-track-label="link" data-track-item_id="link" rel="nofollow noopener" aria-label="Google Scholar reference 17" href="http://scholar.google.com/scholar_lookup?&title=Humans%20can%20identify%20cats%E2%80%99%20affective%20states%20from%20subtle%20facial%20expressions&journal=Animal%20Welfare&volume=28&issue=4&pages=519-531&publication_year=2019&author=Dawson%2CLC&author=Cheal%2CJ&author=Niel%2CL&author=Mason%2CG"> Google Scholar</a> </p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR18">Deb, D., Wiper, S., Gong, S., Shi, Y., Tymoszek, C., Fletcher, A., & Jain, A. K. (2018). Face recognition: Primates in the wild. In: <i>2018 IEEE 9th international conference on biometrics theory, applications and systems (BTAS)</i> (pp. 1–10). IEEE.</p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR19">Deputte, B. L., Jumelet, E., Gilbert, C., & Titeux, E. (2021). Heads and tails: An analysis of visual signals in cats, felis catus. <i>Animals,</i> <i>11</i>(9), 2752.</p><p class="c-article-references__links u-hide-print"><a data-track="click_references" data-track-action="google scholar reference" data-track-value="google scholar reference" data-track-label="link" data-track-item_id="link" rel="nofollow noopener" aria-label="Google Scholar reference 19" href="http://scholar.google.com/scholar_lookup?&title=Heads%20and%20tails%3A%20An%20analysis%20of%20visual%20signals%20in%20cats%2C%20felis%20catus&journal=Animals&volume=11&issue=9&publication_year=2021&author=Deputte%2CBL&author=Jumelet%2CE&author=Gilbert%2CC&author=Titeux%2CE"> Google Scholar</a> </p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR20">Elhamifar, E., Sapiro, G., Yang, A., & Sasrty, S. S. (2013). A convex optimization framework for active learning. In: <i>Proceedings of the IEEE international conference on computer vision</i> (pp. 209–216).</p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR21">Evangelista, M. C., Benito, J., Monteiro, B. P., Watanabe, R., Doodnaught, G. M., Pang, D. S., & Steagall, P. V. (2020). Clinical applicability of the feline grimace scale: Real-time versus image scoring and the influence of sedation and surgery. 
<i>PeerJ,</i> <i>8</i>, 8967.</p><p class="c-article-references__links u-hide-print"><a data-track="click_references" data-track-action="google scholar reference" data-track-value="google scholar reference" data-track-label="link" data-track-item_id="link" rel="nofollow noopener" aria-label="Google Scholar reference 21" href="http://scholar.google.com/scholar_lookup?&title=Clinical%20applicability%20of%20the%20feline%20grimace%20scale%3A%20Real-time%20versus%20image%20scoring%20and%20the%20influence%20of%20sedation%20and%20surgery&journal=PeerJ&volume=8&publication_year=2020&author=Evangelista%2CMC&author=Benito%2CJ&author=Monteiro%2CBP&author=Watanabe%2CR&author=Doodnaught%2CGM&author=Pang%2CDS&author=Steagall%2CPV"> Google Scholar</a> </p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR22">Evangelista, M. C., Watanabe, R., Leung, V. S., Monteiro, B. P., O’Toole, E., Pang, D. S., & Steagall, P. V. (2019). Facial expressions of pain in cats: The development and validation of a feline grimace scale. <i>Scientific Reports,</i> <i>9</i>(1), 1–11.</p><p class="c-article-references__links u-hide-print"><a data-track="click_references" data-track-action="google scholar reference" data-track-value="google scholar reference" data-track-label="link" data-track-item_id="link" rel="nofollow noopener" aria-label="Google Scholar reference 22" href="http://scholar.google.com/scholar_lookup?&title=Facial%20expressions%20of%20pain%20in%20cats%3A%20The%20development%20and%20validation%20of%20a%20feline%20grimace%20scale&journal=Scientific%20Reports&volume=9&issue=1&pages=1-11&publication_year=2019&author=Evangelista%2CMC&author=Watanabe%2CR&author=Leung%2CVS&author=Monteiro%2CBP&author=O%E2%80%99Toole%2CE&author=Pang%2CDS&author=Steagall%2CPV"> Google Scholar</a> </p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR23">Feighelstein, M., Henze, L., Meller, S., Shimshoni, I., Hermoni, B., Berko, M., Twele, F., Schütter, A., Dorn, N., Kästner, S., et al. (2023). Explainable automated pain recognition in cats. <i>Scientific Reports,</i> <i>13</i>(1), 8973.</p><p class="c-article-references__links u-hide-print"><a data-track="click_references" data-track-action="google scholar reference" data-track-value="google scholar reference" data-track-label="link" data-track-item_id="link" rel="nofollow noopener" aria-label="Google Scholar reference 23" href="http://scholar.google.com/scholar_lookup?&title=Explainable%20automated%20pain%20recognition%20in%20cats&journal=Scientific%20Reports&volume=13&issue=1&publication_year=2023&author=Feighelstein%2CM&author=Henze%2CL&author=Meller%2CS&author=Shimshoni%2CI&author=Hermoni%2CB&author=Berko%2CM&author=Twele%2CF&author=Sch%C3%BCtter%2CA&author=Dorn%2CN&author=K%C3%A4stner%2CS"> Google Scholar</a> </p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR24">Feighelstein, M., Shimshoni, I., Finka, L. R., Luna, S. P., Mills, D. S., & Zamansky, A. (2022). Automated recognition of pain in cats. 
<i>Scientific Reports,</i> <i>12</i>(1), 9575.</p><p class="c-article-references__links u-hide-print"><a data-track="click_references" data-track-action="google scholar reference" data-track-value="google scholar reference" data-track-label="link" data-track-item_id="link" rel="nofollow noopener" aria-label="Google Scholar reference 24" href="http://scholar.google.com/scholar_lookup?&title=Automated%20recognition%20of%20pain%20in%20cats&journal=Scientific%20Reports&volume=12&issue=1&publication_year=2022&author=Feighelstein%2CM&author=Shimshoni%2CI&author=Finka%2CLR&author=Luna%2CSP&author=Mills%2CDS&author=Zamansky%2CA"> Google Scholar</a> </p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR25">Ferres, K., Schloesser, T., & Gloor, P. A. (2022). Predicting dog emotions based on posture analysis using deeplabcut. <i>Future Internet,</i> <i>14</i>(4), 97.</p><p class="c-article-references__links u-hide-print"><a data-track="click_references" data-track-action="google scholar reference" data-track-value="google scholar reference" data-track-label="link" data-track-item_id="link" rel="nofollow noopener" aria-label="Google Scholar reference 25" href="http://scholar.google.com/scholar_lookup?&title=Predicting%20dog%20emotions%20based%20on%20posture%20analysis%20using%20deeplabcut&journal=Future%20Internet&volume=14&issue=4&publication_year=2022&author=Ferres%2CK&author=Schloesser%2CT&author=Gloor%2CPA"> Google Scholar</a> </p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR26">Finka, L. R., Luna, S. P., Brondani, J. T., Tzimiropoulos, Y., McDonagh, J., Farnworth, M. J., Ruta, M., & Mills, D. S. (2019). Geometric morphometrics for the study of facial expressions in non-human animals, using the domestic cat as an exemplar. <i>Scientific Reports,</i> <i>9</i>(1), 1–12.</p><p class="c-article-references__links u-hide-print"><a data-track="click_references" data-track-action="google scholar reference" data-track-value="google scholar reference" data-track-label="link" data-track-item_id="link" rel="nofollow noopener" aria-label="Google Scholar reference 26" href="http://scholar.google.com/scholar_lookup?&title=Geometric%20morphometrics%20for%20the%20study%20of%20facial%20expressions%20in%20non-human%20animals%2C%20using%20the%20domestic%20cat%20as%20an%20exemplar&journal=Scientific%20Reports&volume=9&issue=1&pages=1-12&publication_year=2019&author=Finka%2CLR&author=Luna%2CSP&author=Brondani%2CJT&author=Tzimiropoulos%2CY&author=McDonagh%2CJ&author=Farnworth%2CMJ&author=Ruta%2CM&author=Mills%2CDS"> Google Scholar</a> </p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR27">Finlayson, K., Lampe, J. F., Hintze, S., Würbel, H., & Melotti, L. (2016). Facial indicators of positive emotions in rats. 
<i>PLoS ONE,</i> <i>11</i>(11), 0166446.</p><p class="c-article-references__links u-hide-print"><a data-track="click_references" data-track-action="google scholar reference" data-track-value="google scholar reference" data-track-label="link" data-track-item_id="link" rel="nofollow noopener" aria-label="Google Scholar reference 27" href="http://scholar.google.com/scholar_lookup?&title=Facial%20indicators%20of%20positive%20emotions%20in%20rats&journal=PLoS%20ONE&volume=11&issue=11&publication_year=2016&author=Finlayson%2CK&author=Lampe%2CJF&author=Hintze%2CS&author=W%C3%BCrbel%2CH&author=Melotti%2CL"> Google Scholar</a> </p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR28">Friesen, E., & Ekman, P. (1978). Facial action coding system: A technique for the measurement of facial movement. <i>Palo Alto,</i> <i>3</i>(2), 5.</p><p class="c-article-references__links u-hide-print"><a data-track="click_references" data-track-action="google scholar reference" data-track-value="google scholar reference" data-track-label="link" data-track-item_id="link" rel="nofollow noopener" aria-label="Google Scholar reference 28" href="http://scholar.google.com/scholar_lookup?&title=Facial%20action%20coding%20system%3A%20A%20technique%20for%20the%20measurement%20of%20facial%20movement&journal=Palo%20Alto&volume=3&issue=2&publication_year=1978&author=Friesen%2CE&author=Ekman%2CP"> Google Scholar</a> </p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR29">Gong, C., Zhang, Y., Wei, Y., Du, X., Su, L., & Weng, Z. (2022). Multicow pose estimation based on keypoint extraction. <i>PLoS ONE,</i> <i>17</i>(6), 0269259.</p><p class="c-article-references__links u-hide-print"><a data-track="click_references" data-track-action="google scholar reference" data-track-value="google scholar reference" data-track-label="link" data-track-item_id="link" rel="nofollow noopener" aria-label="Google Scholar reference 29" href="http://scholar.google.com/scholar_lookup?&title=Multicow%20pose%20estimation%20based%20on%20keypoint%20extraction&journal=PLoS%20ONE&volume=17&issue=6&publication_year=2022&author=Gong%2CC&author=Zhang%2CY&author=Wei%2CY&author=Du%2CX&author=Su%2CL&author=Weng%2CZ"> Google Scholar</a> </p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR30">Graving, J. M., Chae, D., Naik, H., Li, L., Koger, B., Costelloe, B. R., & Couzin, I. D. (2019). DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning. 
<i>Elife,</i> <i>8</i>, 47994.</p><p class="c-article-references__links u-hide-print"><a data-track="click_references" data-track-action="google scholar reference" data-track-value="google scholar reference" data-track-label="link" data-track-item_id="link" rel="nofollow noopener" aria-label="Google Scholar reference 30" href="http://scholar.google.com/scholar_lookup?&title=DeepPoseKit%2C%20a%20software%20toolkit%20for%20fast%20and%20robust%20animal%20pose%20estimation%20using%20deep%20learning&journal=Elife&volume=8&publication_year=2019&author=Graving%2CJM&author=Chae%2CD&author=Naik%2CH&author=Li%2CL&author=Koger%2CB&author=Costelloe%2CBR&author=Couzin%2CID"> Google Scholar</a> </p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR31">Grishchenko, I., Ablavatski, A., Kartynnik, Y., Raveendran, K., & Grundmann, M. (2020). Attention mesh: High-fidelity face mesh prediction in real-time. <a href="http://arxiv.org/abs/2006.10962" data-track="click_references" data-track-action="external reference" data-track-value="external reference" data-track-label="http://arxiv.org/abs/2006.10962">arXiv:2006.10962</a>.</p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR32">Gu, Y., Jin, Z., & Chiu, S. C. (2015). Active learning combining uncertainty and diversity for multi-class image classification. <i>IET Computer Vision,</i> <i>9</i>(3), 400–407.</p><p class="c-article-references__links u-hide-print"><a data-track="click_references" data-track-action="google scholar reference" data-track-value="google scholar reference" data-track-label="link" data-track-item_id="link" rel="nofollow noopener" aria-label="Google Scholar reference 32" href="http://scholar.google.com/scholar_lookup?&title=Active%20learning%20combining%20uncertainty%20and%20diversity%20for%20multi-class%20image%20classification&journal=IET%20Computer%20Vision&volume=9&issue=3&pages=400-407&publication_year=2015&author=Gu%2CY&author=Jin%2CZ&author=Chiu%2CSC"> Google Scholar</a> </p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR33">Guo, S., Xu, P., Miao, Q., Shao, G., Chapman, C.A., Chen, X., He, G., Fang, D., Zhang, H., & Sun, Y., et al. (2020). Automatic identification of individual primates with deep learning techniques. <i>Iscience, 23</i>(8).</p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR34">He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. <i>CVPR</i>.</p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR35">Hewitt, C., & Mahmoud, M. (2019). Pose-informed face alignment for extreme head pose variations in animals. In: <i>2019 8th international conference on affective computing and intelligent interaction (ACII)</i> (pp. 1–6). IEEE.</p></li><li class="c-article-references__item js-c-reading-companion-references-item"><p class="c-article-references__text" id="ref-CR36">Holden, E., Calvo, G., Collins, M., Bell, A., Reid, J., Scott, E., & Nolan, A. M. (2014). Evaluation of facial expression in acute pain in cats. 
Acknowledgements

The research was supported by the Data Science Research Center at the University of Haifa. We thank Ephantus Kanyugi for his contribution with data annotation and management.
We thank Yaron Yossef and Nareed Farhat for their technical support.

Funding

Open access funding provided by University of Haifa.

Author information

Authors and Affiliations

Information Systems Department, University of Haifa, Haifa, Israel
George Martvel, Ilan Shimshoni & Anna Zamansky
rel="nofollow">PubMed</a><span class="u-hide"> </span><a class="c-article-identifiers__item" href="http://scholar.google.co.uk/scholar?as_q=&num=10&btnG=Search+Scholar&as_epq=&as_oq=&as_eq=&as_occt=any&as_sauthors=%22Ilan%20Shimshoni%22&as_publication=&as_ylo=&as_yhi=&as_allsubj=all&hl=en" data-track="click" data-track-action="author link - scholar" data-track-label="link" rel="nofollow">Google Scholar</a></span></p></div></div></li><li id="auth-Anna-Zamansky-Aff1"><span class="c-article-authors-search__title u-h3 js-search-name">Anna Zamansky</span><div class="c-article-authors-search__list"><div class="c-article-authors-search__item c-article-authors-search__list-item--left"><a href="/search?dc.creator=Anna%20Zamansky" class="c-article-button" data-track="click" data-track-action="author link - publication" data-track-label="link" rel="nofollow">View author publications</a></div><div class="c-article-authors-search__item c-article-authors-search__list-item--right"><p class="search-in-title-js c-article-authors-search__text">You can also search for this author in <span class="c-article-identifiers"><a class="c-article-identifiers__item" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=search&term=Anna%20Zamansky" data-track="click" data-track-action="author link - pubmed" data-track-label="link" rel="nofollow">PubMed</a><span class="u-hide"> </span><a class="c-article-identifiers__item" href="http://scholar.google.co.uk/scholar?as_q=&num=10&btnG=Search+Scholar&as_epq=&as_oq=&as_eq=&as_occt=any&as_sauthors=%22Anna%20Zamansky%22&as_publication=&as_ylo=&as_yhi=&as_allsubj=all&hl=en" data-track="click" data-track-action="author link - scholar" data-track-label="link" rel="nofollow">Google Scholar</a></span></p></div></div></li></ol></div><h3 class="c-article__sub-heading" id="corresponding-author">Corresponding author</h3><p id="corresponding-author-list">Correspondence to <a id="corresp-c1" href="mailto:martvelge@gmail.com">George Martvel</a>.</p></div></div></section><section data-title="Additional information"><div class="c-article-section" id="additional-information-section"><h2 class="c-article-section__title js-section-title js-c-reading-companion-sections-item" id="additional-information">Additional information</h2><div class="c-article-section__content" id="additional-information-content"><p>Communicated by Helge Rhodin.</p><h3 class="c-article__sub-heading">Publisher's Note</h3><p>Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.</p></div></div></section><section data-title="Rights and permissions"><div class="c-article-section" id="rightslink-section"><h2 class="c-article-section__title js-section-title js-c-reading-companion-sections-item" id="rightslink">Rights and permissions</h2><div class="c-article-section__content" id="rightslink-content"> <p><b>Open Access</b> This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. 
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article
Cite this article

Martvel, G., Shimshoni, I. & Zamansky, A. Automated Detection of Cat Facial Landmarks. Int J Comput Vis 132, 3103–3118 (2024). https://doi.org/10.1007/s11263-024-02006-w

Received: 29 August 2023
Accepted: 14 January 2024
Published: 05 March 2024
Issue Date: August 2024
DOI: https://doi.org/10.1007/s11263-024-02006-w
class="c-article__sub-heading">Share this article</h3><p class="c-article-share-box__description">Anyone you share the following link with will be able to read this content:</p><button class="js-get-share-url c-article-share-box__button" type="button" id="get-share-url" data-track="click" data-track-label="button" data-track-external="" data-track-action="get shareable link">Get shareable link</button><div class="js-no-share-url-container u-display-none" hidden=""><p class="js-c-article-share-box__no-sharelink-info c-article-share-box__no-sharelink-info">Sorry, a shareable link is not currently available for this article.</p></div><div class="js-share-url-container u-display-none" hidden=""><p class="js-share-url c-article-share-box__only-read-input" id="share-url" data-track="click" data-track-label="button" data-track-action="select share url"></p><button class="js-copy-share-url c-article-share-box__button--link-like" type="button" id="copy-share-url" data-track="click" data-track-label="button" data-track-action="copy share url" data-track-external="">Copy to clipboard</button></div><p class="js-c-article-share-box__additional-info c-article-share-box__additional-info"> Provided by the Springer Nature SharedIt content-sharing initiative </p></div></div><h3 class="c-article__sub-heading">Keywords</h3><ul class="c-article-subject-list"><li class="c-article-subject-list__subject"><span><a href="/search?query=Landmarks&facet-discipline="Computer%20Science"" data-track="click" data-track-action="view keyword" data-track-label="link">Landmarks</a></span></li><li class="c-article-subject-list__subject"><span><a href="/search?query=Detection&facet-discipline="Computer%20Science"" data-track="click" data-track-action="view keyword" data-track-label="link">Detection</a></span></li><li class="c-article-subject-list__subject"><span><a href="/search?query=Ensemble%20models&facet-discipline="Computer%20Science"" data-track="click" data-track-action="view keyword" data-track-label="link">Ensemble models</a></span></li></ul><div data-component="article-info-list"></div></div></div></div></div></section> </div> </main> <div class="c-article-sidebar u-text-sm u-hide-print l-with-sidebar__sidebar" id="sidebar" data-container-type="reading-companion" data-track-component="reading companion"> <aside> <div class="app-card-service" data-test="article-checklist-banner"> <div> <a class="app-card-service__link" data-track="click_presubmission_checklist" data-track-context="article page top of reading companion" data-track-category="pre-submission-checklist" data-track-action="clicked article page checklist banner test 2 old version" data-track-label="link" href="https://beta.springernature.com/pre-submission?journalId=11263" data-test="article-checklist-banner-link"> <span class="app-card-service__link-text">Use our pre-submission checklist</span> <svg class="app-card-service__link-icon" aria-hidden="true" focusable="false"><use xlink:href="#icon-eds-i-arrow-right-small"></use></svg> </a> <p class="app-card-service__description">Avoid common mistakes on your manuscript.</p> </div> <div class="app-card-service__icon-container"> <svg class="app-card-service__icon" aria-hidden="true" focusable="false"> <use xlink:href="#icon-eds-i-clipboard-check-medium"></use> </svg> </div> </div> <div data-test="collections"> <aside> <div class="c-article-associated-content__container"> <h2 class="c-article-associated-content__title u-h3 u-mb-24 u-visually-hidden">Associated Content</h2> <div 
class="c-article-associated-content__collection collection u-mb-24"> <p class="c-article-associated-content__collection-label u-sans-serif u-text-bold u-mb-8">Part of a collection:</p> <h3 class="c-article-associated-content__collection-title u-mt-0 u-h3 u-mb-8" itemprop="name headline"> <a href="/journal/11263/topicalCollection/AC_28d2412bfdfbbd8a143d75526b0407ab" data-track="click" data-track-action="view collection" data-track-label="link">Special Issue on Computer Vision Approaches for Animal Tracking and Modeling 2023</a> </h3> </div> </div> </aside> <script> window.dataLayer = window.dataLayer || []; window.dataLayer[0] = window.dataLayer[0] || {}; window.dataLayer[0].content = window.dataLayer[0].content || {}; window.dataLayer[0].content.collections = 'AC_28d2412bfdfbbd8a143d75526b0407ab'; </script> </div> <div data-test="editorial-summary"> </div> <div class="c-reading-companion"> <div class="c-reading-companion__sticky" data-component="reading-companion-sticky" data-test="reading-companion-sticky"> <div class="c-reading-companion__panel c-reading-companion__sections c-reading-companion__panel--active" id="tabpanel-sections"> <div class="u-lazy-ad-wrapper u-mt-16 u-hide" data-component-mpu><div class="c-ad c-ad--300x250"> <div class="c-ad__inner"> <p class="c-ad__label">Advertisement</p> <div id="div-gpt-ad-MPU1" class="div-gpt-ad grade-c-hide" data-pa11y-ignore data-gpt data-gpt-unitpath="/270604982/springerlink/11263/article" data-gpt-sizes="300x250" data-test="MPU1-ad" data-gpt-targeting="pos=MPU1;articleid=s11263-024-02006-w;"> </div> </div> </div> </div> </div> <div class="c-reading-companion__panel c-reading-companion__figures c-reading-companion__panel--full-width" id="tabpanel-figures"></div> <div class="c-reading-companion__panel c-reading-companion__references c-reading-companion__panel--full-width" id="tabpanel-references"></div> </div> </div> </aside> </div> </div> </article> <div class="app-elements"> <div class="eds-c-header__expander eds-c-header__expander--search" id="eds-c-header-popup-search"> <h2 class="eds-c-header__heading">Search</h2> <div class="u-container"> <search class="eds-c-header__search" role="search" aria-label="Search from the header"> <form method="GET" action="//link.springer.com/search" data-test="header-search" data-track="search" data-track-context="search from header" data-track-action="submit search form" data-track-category="unified header" data-track-label="form" > <label for="eds-c-header-search" class="eds-c-header__search-label">Search by keyword or author</label> <div class="eds-c-header__search-container"> <input id="eds-c-header-search" class="eds-c-header__search-input" autocomplete="off" name="query" type="search" value="" required> <button class="eds-c-header__search-button" type="submit"> <svg class="eds-c-header__icon" aria-hidden="true" focusable="false"> <use xlink:href="#icon-eds-i-search-medium"></use> </svg> <span class="u-visually-hidden">Search</span> </button> </div> </form> </search> </div> </div> <div class="eds-c-header__expander eds-c-header__expander--menu" id="eds-c-header-nav"> <h2 class="eds-c-header__heading">Navigation</h2> <ul class="eds-c-header__list"> <li class="eds-c-header__list-item"> <a class="eds-c-header__link" href="https://link.springer.com/journals/" data-track="nav_find_a_journal" data-track-context="unified header" data-track-action="click find a journal" data-track-category="unified header" data-track-label="link" > Find a journal </a> </li> <li class="eds-c-header__list-item"> <a 
class="eds-c-header__link" href="https://www.springernature.com/gp/authors" data-track="nav_how_to_publish" data-track-context="unified header" data-track-action="click publish with us link" data-track-category="unified header" data-track-label="link" > Publish with us </a> </li> <li class="eds-c-header__list-item"> <a class="eds-c-header__link" href="https://link.springernature.com/home/" data-track="nav_track_your_research" data-track-context="unified header" data-track-action="click track your research" data-track-category="unified header" data-track-label="link" > Track your research </a> </li> </ul> </div> <footer > <div class="eds-c-footer" > <div class="eds-c-footer__container"> <div class="eds-c-footer__grid eds-c-footer__group--separator"> <div class="eds-c-footer__group"> <h3 class="eds-c-footer__heading">Discover content</h3> <ul class="eds-c-footer__list"> <li class="eds-c-footer__item"><a class="eds-c-footer__link" href="https://link.springer.com/journals/a/1" data-track="nav_journals_a_z" data-track-action="journals a-z" data-track-context="unified footer" data-track-label="link">Journals A-Z</a></li> <li class="eds-c-footer__item"><a class="eds-c-footer__link" href="https://link.springer.com/books/a/1" data-track="nav_books_a_z" data-track-action="books a-z" data-track-context="unified footer" data-track-label="link">Books A-Z</a></li> </ul> </div> <div class="eds-c-footer__group"> <h3 class="eds-c-footer__heading">Publish with us</h3> <ul class="eds-c-footer__list"> <li class="eds-c-footer__item"><a class="eds-c-footer__link" href="https://link.springer.com/journals" data-track="nav_journal_finder" data-track-action="journal finder" data-track-context="unified footer" data-track-label="link">Journal finder</a></li> <li class="eds-c-footer__item"><a class="eds-c-footer__link" href="https://www.springernature.com/gp/authors" data-track="nav_publish_your_research" data-track-action="publish your research" data-track-context="unified footer" data-track-label="link">Publish your research</a></li> <li class="eds-c-footer__item"><a class="eds-c-footer__link" href="https://www.springernature.com/gp/open-research/about/the-fundamentals-of-open-access-and-open-research" data-track="nav_open_access_publishing" data-track-action="open access publishing" data-track-context="unified footer" data-track-label="link">Open access publishing</a></li> </ul> </div> <div class="eds-c-footer__group"> <h3 class="eds-c-footer__heading">Products and services</h3> <ul class="eds-c-footer__list"> <li class="eds-c-footer__item"><a class="eds-c-footer__link" href="https://www.springernature.com/gp/products" data-track="nav_our_products" data-track-action="our products" data-track-context="unified footer" data-track-label="link">Our products</a></li> <li class="eds-c-footer__item"><a class="eds-c-footer__link" href="https://www.springernature.com/gp/librarians" data-track="nav_librarians" data-track-action="librarians" data-track-context="unified footer" data-track-label="link">Librarians</a></li> <li class="eds-c-footer__item"><a class="eds-c-footer__link" href="https://www.springernature.com/gp/societies" data-track="nav_societies" data-track-action="societies" data-track-context="unified footer" data-track-label="link">Societies</a></li> <li class="eds-c-footer__item"><a class="eds-c-footer__link" href="https://www.springernature.com/gp/partners" data-track="nav_partners_and_advertisers" data-track-action="partners and advertisers" data-track-context="unified footer" data-track-label="link">Partners 
and advertisers</a></li> </ul> </div> <div class="eds-c-footer__group"> <h3 class="eds-c-footer__heading">Our imprints</h3> <ul class="eds-c-footer__list"> <li class="eds-c-footer__item"><a class="eds-c-footer__link" href="https://www.springer.com/" data-track="nav_imprint_Springer" data-track-action="Springer" data-track-context="unified footer" data-track-label="link">Springer</a></li> <li class="eds-c-footer__item"><a class="eds-c-footer__link" href="https://www.nature.com/" data-track="nav_imprint_Nature_Portfolio" data-track-action="Nature Portfolio" data-track-context="unified footer" data-track-label="link">Nature Portfolio</a></li> <li class="eds-c-footer__item"><a class="eds-c-footer__link" href="https://www.biomedcentral.com/" data-track="nav_imprint_BMC" data-track-action="BMC" data-track-context="unified footer" data-track-label="link">BMC</a></li> <li class="eds-c-footer__item"><a class="eds-c-footer__link" href="https://www.palgrave.com/" data-track="nav_imprint_Palgrave_Macmillan" data-track-action="Palgrave Macmillan" data-track-context="unified footer" data-track-label="link">Palgrave Macmillan</a></li> <li class="eds-c-footer__item"><a class="eds-c-footer__link" href="https://www.apress.com/" data-track="nav_imprint_Apress" data-track-action="Apress" data-track-context="unified footer" data-track-label="link">Apress</a></li> </ul> </div> </div> </div> <div class="eds-c-footer__container"> <nav aria-label="footer navigation"> <ul class="eds-c-footer__links"> <li class="eds-c-footer__item"> <button class="eds-c-footer__link" data-cc-action="preferences" data-track="dialog_manage_cookies" data-track-action="Manage cookies" data-track-context="unified footer" data-track-label="link"><span class="eds-c-footer__button-text">Your privacy choices/Manage cookies</span></button> </li> <li class="eds-c-footer__item"> <a class="eds-c-footer__link" href="https://www.springernature.com/gp/legal/ccpa" data-track="nav_california_privacy_statement" data-track-action="california privacy statement" data-track-context="unified footer" data-track-label="link">Your US state privacy rights</a> </li> <li class="eds-c-footer__item"> <a class="eds-c-footer__link" href="https://www.springernature.com/gp/info/accessibility" data-track="nav_accessibility_statement" data-track-action="accessibility statement" data-track-context="unified footer" data-track-label="link">Accessibility statement</a> </li> <li class="eds-c-footer__item"> <a class="eds-c-footer__link" href="https://link.springer.com/termsandconditions" data-track="nav_terms_and_conditions" data-track-action="terms and conditions" data-track-context="unified footer" data-track-label="link">Terms and conditions</a> </li> <li class="eds-c-footer__item"> <a class="eds-c-footer__link" href="https://link.springer.com/privacystatement" data-track="nav_privacy_policy" data-track-action="privacy policy" data-track-context="unified footer" data-track-label="link">Privacy policy</a> </li> <li class="eds-c-footer__item"> <a class="eds-c-footer__link" href="https://support.springernature.com/en/support/home" data-track="nav_help_and_support" data-track-action="help and support" data-track-context="unified footer" data-track-label="link">Help and support</a> </li> <li class="eds-c-footer__item"> <a class="eds-c-footer__link" href="https://link.springer.com/legal-notice" data-track="nav_legal_notice" data-track-action="legal notice" data-track-context="unified footer" data-track-label="link">Legal notice</a> </li> <li class="eds-c-footer__item"> <a 
class="eds-c-footer__link" href="https://support.springernature.com/en/support/solutions/articles/6000255911-subscription-cancellations" data-track-action="cancel contracts here">Cancel contracts here</a> </li> </ul> </nav> <div class="eds-c-footer__user"> <p class="eds-c-footer__user-info"> <span data-test="footer-user-ip">8.222.208.146</span> </p> <p class="eds-c-footer__user-info" data-test="footer-business-partners">Not affiliated</p> </div> <a href="https://www.springernature.com/" class="eds-c-footer__link"> <img src="/oscar-static/images/logo-springernature-white-19dd4ba190.svg" alt="Springer Nature" loading="lazy" width="200" height="20"/> </a> <p class="eds-c-footer__legal" data-test="copyright">© 2025 Springer Nature</p> </div> </div> </footer> </div> </body> </html>