The present study examined differences in visual search and locomotor behavior in a group of skilled 10- to 12-year-old football players. The participants watched video clips of a 4-versus-4 position game, presented on a large screen, and were asked to take part in the game by choosing the best position for receiving the ball passed by one of the players in the clip. Participants' visual search and locomotor behavior were recorded continuously throughout the presentation of each clip. A within-group comparison was made based on the participants' interception score, i.e., how often they moved to the correct receiving position. The findings show that the high-score group looked more at the ball area, while the players in the low-score group concentrated on the receiving player and on the hips/upper-body region of the passing player. The players in the high-score group also covered a significantly greater distance than the low-score group. It was concluded that differences in visual search and locomotor behavior can be used as indicators for identifying talented junior football players.
In foul decision-making by football referees, visual search is important for gathering task-specific information to determine whether a foul has occurred. Yet, little is known about the visual search behaviours underpinning excellent on-field decisions. The aim of this study was to examine the on-field visual search behaviour of elite and sub-elite football referees when calling a foul during a match. In doing so, we have also compared the accuracy and gaze behaviour for correct and incorrect calls. Elite and sub-elite referees (elite: N = 5, M_age ± SD = 29.8 ± 4.7 yrs, M_experience ± SD = 14.8 ± 3.7 yrs; sub-elite: N = 9, M_age ± SD = 23.1 ± 1.6 yrs, M_experience ± SD = 8.4 ± 1.8 yrs) officiated an actual football game while wearing a mobile eye-tracker, with on-field visual search behaviour compared between skill levels when calling a foul (N_elite = 66; N_sub-elite = 92). Results revealed that elite referees relied on a higher search rate (more fixations of shorter duration) compared to sub-elites, but with no differences in where they allocated their gaze, indicating that elites searched faster but did not necessarily direct gaze towards different locations. Correct decisions were associated with higher gaze entropy (i.e. less structure). In relying on more structured gaze patterns when making incorrect decisions, referees may fail to pick up information specific to the foul situation. Referee development programmes might therefore benefit from training that challenges the speed of information pickup while avoiding pre-determined gaze patterns, in order to improve the interpretation of fouls and increase the decision-making performance of referees.
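To make the entropy measure concrete: gaze transition entropy can be computed from a fixation sequence coded into discrete AOI labels. The sketch below is a minimal illustration, not the study's implementation; the AOI names and the function name are assumptions.

```python
import numpy as np
from collections import Counter

def gaze_transition_entropy(aoi_sequence):
    """Shannon entropy (bits) of first-order transitions between AOIs.

    Higher entropy = less structured scanning; lower entropy = more
    stereotyped, pre-determined gaze patterns.
    """
    transitions = list(zip(aoi_sequence, aoi_sequence[1:]))
    counts = Counter(transitions)
    total = sum(counts.values())
    probs = np.array([c / total for c in counts.values()])
    return float(-np.sum(probs * np.log2(probs)))

# Illustrative fixation sequences coded into hypothetical AOI labels
structured = ["ball", "attacker", "ball", "attacker", "ball", "attacker"]
varied = ["ball", "defender", "contact_zone", "attacker", "ball", "goal"]

print(gaze_transition_entropy(structured))  # low: repetitive A-B scanning
print(gaze_transition_entropy(varied))      # higher: less structured search
```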
Numerous laboratory-based studies have recorded eye movements in participants with varying expertise while they watched video projections in the lab. Although research in the lab offers advantages in terms of internal validity, reliability and ethical considerations, its ecological validity is often questionable. Therefore, the current study compared visual search in 13 adult cyclists while they cycled a real bicycle path and while they watched a film clip of the same road. Dwell time on five Areas of Interest (AOIs) was analysed. Dwell time (%) in the lab and in real life was comparable only for the low-quality bicycle path. Both in real life and in the lab, gaze was predominantly directed towards the road. Since gaze behaviour in the lab and in real life tends to become comparable with increasing task complexity (road quality), it is concluded that under certain task constraints laboratory experiments using video clips can provide valuable information about gaze behaviour in real life.
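For readers unfamiliar with the dwell-time measure: dwell time (%) per AOI is the summed fixation duration on that AOI as a share of total fixation time. A minimal sketch in Python; the AOI labels and data are illustrative, not the study's.

```python
import pandas as pd

# Illustrative fixation log: one row per fixation, with its duration
# and the AOI it landed on (AOI names are assumptions, not the study's).
fixations = pd.DataFrame({
    "aoi": ["road", "road", "verge", "traffic", "road", "signage"],
    "duration_ms": [320, 510, 180, 240, 600, 150],
})

# Dwell time (%) per AOI = summed fixation duration on the AOI,
# divided by total fixation time across all AOIs.
dwell_pct = (fixations.groupby("aoi")["duration_ms"].sum()
             / fixations["duration_ms"].sum() * 100)
print(dwell_pct.round(1))
```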
This study explored how dressage judges focus their attention on different parts of horse-rider performances during competitions. Using eye-tracking technology, we analyzed where judges look and how long they focus on specific areas. We included twenty judges with varying levels of experience and recorded their eye movements as they assessed Grand Prix dressage tests on video. We found that all judges looked mostly at the front of the horse rather than at the rider or other parts of the horse. However, advanced-level judges paid more attention to the horse's feet, while judges officiating at the lower levels of the sport looked more at the rider. These patterns suggest that judges concentrate on a few highly relevant areas, depending on the underlying criteria for evaluating performances. Understanding judges' visual patterns and how they interpret what they see can help improve judging, making it more accurate and transparent, ensuring more consistent evaluations in competition and improving equine welfare.
In daily interaction with horses, humans primarily rely on facial expression as a non-verbal equine cue for emotional information. Difficulties in correctly recognizing these signals might arise due to the species-specificity of facial cues, possibly leading to diminished equine welfare and health. This study aimed to explore human visual search patterns when assessing equine facial expressions indicative of various pain levels, utilizing eye-tracking technology. One hundred and eight individuals (N = 108), classified into three groups (affinity with horses (N = 60), pet owners with no affinity with horses (N = 32), and individuals with no affinity with animals (N = 16)), participated in the study; with their eye movements recorded using eye-tracking glasses, they evaluated four photos of horses with different levels of pain. Error scores, calculated by comparing participant scores to Gold Standard Visual Analogue Score levels, and fixation metrics (number and duration of fixations) were analysed across the four photos, participant groups and Areas of Interest (AOIs): eyes, ears, nostrils, and mouth. Statistical analysis utilized linear mixed models. Highlighting the critical role of the eyes as key indicators of pain, the findings showed that the eyes played a significant role in assessing equine emotional states, as all groups focused on them longer and more frequently than on other facial features. Participants also showed a consistent pattern in how they looked at a horse's face, first focusing on the eyes, then the ears, and finally the nose/mouth region, indicating a horse-specific scanning pattern. Moderate pain was assessed with similar accuracy across all groups, indicating that these signals are broadly recognizable. Nevertheless, non-equestrians had difficulty recognizing the absence of pain, possibly highlighting the role of experience in interpreting subtle equine expressions. The study's limitations, such as variability in assessment conditions, may have impacted the findings. Future work could further investigate why humans follow this visual search pattern and whether they recognize the significance of a horse's ears. Additionally, emphasis should be placed on developing targeted training interventions to improve equine pain recognition, potentially benefiting equine welfare and health.
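As a hedged illustration of the analysis step, a linear mixed model with a random intercept per participant (to account for repeated measures across the four photos) can be fitted with statsmodels; the column names, file name and model formula below are assumptions, not the study's actual specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant x photo, with
# columns error_score, group (equestrian/pet_owner/no_animals),
# photo (pain level), and participant (ID). File name is a placeholder.
df = pd.read_csv("pain_assessment.csv")

# Fixed effects test group and photo (and their interaction); the
# random intercept per participant captures repeated measures.
model = smf.mixedlm("error_score ~ group * photo", df,
                    groups=df["participant"])
result = model.fit()
print(result.summary())
```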
Autonomous driving on public roads requires localization precise to within a few centimeters. Even the best GNSS-based localization systems cannot always reach this level of precision, especially in urban environments, where the signal is disturbed by surrounding buildings and artifacts. Recent works have shown the advantage of using maps as a precise, robust, and reliable means of localization. Typical approaches use the set of current readings from the vehicle's sensors to estimate its position on the map. The approach presented in this paper exploits a short-range visual lane-marking detector and a dead-reckoning system to construct a registry of the detected back lane markings corresponding to the last 240 m driven. This registry is used to search the map for the most similar section and thereby determine the vehicle's localization in the map reference frame. Additional filtering is applied to obtain a more robust localization estimate. The accuracy obtained is high enough to allow autonomous driving on a narrow road. The system uses a low-cost sensor architecture, and the algorithm is light enough to run on a low-power embedded platform.
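A minimal sketch of the map-search step described above: the registry of lane-marking detections over the last 240 m, ordered by dead-reckoning arc length, is slid along a precomputed map signature to find the best-matching section. Variable names, the feature choice and the sum-of-squared-differences criterion are assumptions for illustration; the paper's additional filtering stage is not shown.

```python
import numpy as np

def best_map_offset(registry, map_signature):
    """Find the map position whose section best matches the driven registry.

    registry:      1-D array of lane-marking features (e.g. lateral offset
                   of the detected back lane marking) sampled at fixed
                   arc-length steps over the last ~240 m, ordered by
                   dead-reckoning distance (names/shapes are assumptions).
    map_signature: the same feature precomputed along the whole mapped road.
    Returns the start index in the map minimizing the sum of squared errors.
    """
    n = len(registry)
    errors = [np.sum((map_signature[i:i + n] - registry) ** 2)
              for i in range(len(map_signature) - n + 1)]
    return int(np.argmin(errors))

# Toy example: a 1000-sample map; the vehicle's noisy 240-sample registry
# was "driven" starting at map index 412, and the search recovers it.
rng = np.random.default_rng(0)
map_sig = rng.normal(size=1000)
reg = map_sig[412:412 + 240] + rng.normal(scale=0.05, size=240)
print(best_map_offset(reg, map_sig))  # -> 412
```

In the paper's pipeline a filtering stage (not sketched here) would smooth successive matches into a robust pose estimate.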
Introduction: Visuospatial neglect (VSN) is common after stroke and can seriously hamper everyday life. One of the most commonly used and highly recommended rehabilitation methods is Visual Scanning Training (VST), which requires a great deal of repetition, making the treatment intensive and less appealing to the patient. The use of eHealth in healthcare can broaden treatment options and improve patient satisfaction, treatment efficacy and effectiveness. One solution to motivational issues might be Augmented Reality (AR), which offers new opportunities for increasing natural interaction with the environment during treatment of VSN. Aim: To develop an AR-based scanning training program that improves visuospatial search strategies in individuals affected by VSN. Method: We used a Design Research approach, characterized by the iterative and incremental use of prototypes as research instruments together with a strong human-centered focus. Several design thinking methods were used to explore which design elements the AR game should incorporate. Seven patients with visuospatial neglect, eight occupational therapists, a game design professional and seven other healthcare professionals participated in this research by means of co-creation, each contributing their own perspective. Results: Fundamental design choices for an AR game for VSN patients included the factors extrinsic motivation, nostalgia, metaphors, direct feedback, independent movement, object contrast, search elements and competition. Designing for extrinsic motivation was considered the most important choice because, owing to reduced self-awareness, the target group often does not fully understand and accept the consequences of VSN. Conclusion: This study produced a prototype AR game for people with VSN after stroke. The AR game and the method used illustrate the promising role of AR tools in geriatric rehabilitation, specifically those aimed at increasing the independence of patients with VSN after stroke.
In the Netherlands, over 40% of nursing home residents are estimated to have visual impairments, which result in the loss of basic visual abilities. The degree to which the nursing home environment fits residents' activities and social participation is referred to as environmental fit. To raise professional awareness of environmental fit, an Environmental Observation tool for the Visually Impaired was developed. This tool targets aspects of the nursing home environment such as 'light', the use of 'colours and contrasts' and 'furnishing and obstacles'. The objective of this study was to validate the content of the observation tool so as to obtain a tool applicable in practice. Following the content validity approach, we invited a total of eight experts, six eye-care professionals and two building-engineering researchers, to judge the relevance of the items. The Item Content Validity approach was applied to determine which items to retain and which to reject. The content validity procedure reduced the number of items from 63 to 52. The definitive 52-item tool contains 21 items for Corridors, 17 for the Common Room, and 14 for the Bathroom. All items of the definitive tool received an Item-Content Validity Index of 0.875, and the tool a Scale-Content Validity Index of 0.71. Applying the content validity index at scale and item level resulted in a tool that can be applied in nursing homes. The tool might serve as a starting point for discussion among professional caregivers on environmental interventions for visually impaired older adults in nursing homes.
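For context, the Item-Content Validity Index (I-CVI) is conventionally the proportion of experts rating an item 3 or 4 on a 4-point relevance scale, and the scale-level average (S-CVI/Ave) is the mean of the I-CVIs; with eight experts, an I-CVI of 0.875 corresponds to 7 of 8 experts judging an item relevant. A minimal sketch with illustrative ratings (not the study's data):

```python
import numpy as np

# ratings: experts x items on the usual 4-point relevance scale
# (1 = not relevant ... 4 = highly relevant). Values are illustrative.
ratings = np.array([
    [4, 3, 2, 4],
    [4, 4, 1, 3],
    [3, 4, 2, 4],
    [4, 3, 2, 4],
    [4, 4, 1, 4],
    [3, 4, 2, 3],
    [4, 3, 1, 4],
    [4, 4, 2, 4],
])

# I-CVI per item: share of the 8 experts rating it 3 or 4.
i_cvi = (ratings >= 3).mean(axis=0)
# S-CVI/Ave: mean of the item-level indices.
s_cvi = i_cvi.mean()
# Items below a commonly used cutoff (often 0.78) are candidates
# for rejection; here the third item scores 0.0 and would be dropped.
print(i_cvi, round(s_cvi, 2))
```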
We propose a novel deception detection system based on Rapid Serial Visual Presentation (RSVP). One motivation for the new method is to present stimuli on the fringe of awareness, such that it is more difficult for deceivers to confound the deception test using countermeasures. The proposed system is able to detect identity deception (by using the first names of participants) with a 100% hit rate (at an alpha level of 0.05). To achieve this, we extended the classic Event-Related Potential (ERP) techniques (such as peak-to-peak) by applying Randomisation, a form of Monte Carlo resampling, which we used to detect deception at an individual level. In order to make the deployment of the system simple and rapid, we utilised data from three electrodes only: Fz, Cz and Pz. We then combined data from the three electrodes using Fisher's method so that each participant was assigned a single p-value, which represents the combined probability that a specific participant was being deceptive. We also present subliminal salience search as a general method to determine what participants find salient by detecting breakthrough into conscious awareness using EEG.
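A hedged sketch of the two statistical ingredients named above: a randomisation (permutation) test on per-trial peak-to-peak amplitudes, and Fisher's method for combining the per-electrode p-values into one per-participant probability. All amplitudes and p-values below are illustrative, and the function is a simplified stand-in for the authors' procedure.

```python
import numpy as np
from scipy.stats import combine_pvalues

def permutation_pvalue(probe, irrelevant, n_perm=10000, seed=0):
    """One-sided randomisation test: is the mean peak-to-peak amplitude
    for the probe stimulus larger than expected under random relabelling
    of trials? Inputs are per-trial amplitudes (arrays; shapes assumed)."""
    rng = np.random.default_rng(seed)
    observed = probe.mean() - irrelevant.mean()
    pooled = np.concatenate([probe, irrelevant])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = pooled[:len(probe)].mean() - pooled[len(probe):].mean()
        count += diff >= observed
    return (count + 1) / (n_perm + 1)

# Illustrative per-trial amplitudes at one electrode
amp_probe = np.array([8.1, 7.4, 9.0, 8.6])
amp_ctrl = np.array([5.2, 6.1, 4.8, 5.9, 6.3])
p_fz = permutation_pvalue(amp_probe, amp_ctrl)

# One p-value per electrode (Fz, Cz, Pz), combined with Fisher's method
# into a single per-participant probability of deception.
p_cz, p_pz = 0.10, 0.04  # illustrative values for the other electrodes
stat, p_combined = combine_pvalues([p_fz, p_cz, p_pz], method="fisher")
print(p_combined < 0.05)  # flag the participant at alpha = 0.05
```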
This article interrogates platform-specific bias in the contemporary algorithmic media landscape through a comparative study of the representation of pregnancy on the Web and social media. Online visual materials such as social media content related to pregnancy are not devoid of bias, nor are they very diverse. The case study is a cross-platform analysis of social media imagery for the topic of pregnancy, through which distinct visual platform vernaculars emerge. The authors describe two visualization methods that can support comparative analysis of such visual vernaculars: the image grid and the composite image. While platform-specific perspectives range from lists of pregnancy tips on Pinterest, to pregnancy information and social support systems on Twitter, to pregnancy humour on Reddit, each of the platforms presents a predominantly White, able-bodied and heteronormative perspective on pregnancy.
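As a rough illustration of the two visualization methods: an image grid tiles thumbnails for side-by-side comparison of per-platform image sets, while a composite image averages a set pixel-wise to foreground its dominant colours and compositions. The sketch below uses Pillow and NumPy with placeholder file names and is not the authors' tooling.

```python
import numpy as np
from PIL import Image

def image_grid(paths, cols, thumb=(120, 120)):
    """Tile thumbnails into one grid image for side-by-side comparison."""
    thumbs = [Image.open(p).convert("RGB").resize(thumb) for p in paths]
    rows = -(-len(thumbs) // cols)  # ceiling division
    grid = Image.new("RGB", (cols * thumb[0], rows * thumb[1]), "white")
    for i, t in enumerate(thumbs):
        grid.paste(t, ((i % cols) * thumb[0], (i // cols) * thumb[1]))
    return grid

def composite_image(paths, size=(400, 400)):
    """Average all images pixel-wise into a single composite."""
    stack = np.stack([np.asarray(Image.open(p).convert("RGB").resize(size),
                                 dtype=np.float64) for p in paths])
    return Image.fromarray(stack.mean(axis=0).astype(np.uint8))

# Usage (file names are placeholders for per-platform image collections):
# image_grid(["pinterest_001.jpg", "pinterest_002.jpg"], cols=10).save("grid.png")
# composite_image(["reddit_001.jpg", "reddit_002.jpg"]).save("composite.png")
```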