Objectives: The purpose of this study was to provide data on the long-term efficacy and safety of left atrial appendage occlusion (LAAO) in patients with atrial fibrillation (AF) and chronic thrombocytopenia (cTCP). Methods: Between January 2016 and December 2018, a total of 32 AF patients with thrombocytopenia (platelet count 0.9). Major bleeding (12.50 vs. 3.75%, p = 0.065) and minor bleeding (15.63 vs. 1.25%, p = 0.002) were more frequent in cTCP patients, although the difference in major bleeding did not reach statistical significance. Moreover, thrombocytopenia was identified as an independent predictor of any bleeding event (OR 8.150, 95% CI 2.579-25.757, p < 0.001), while an inverse relationship between higher absolute platelet count and stroke events was revealed (OR 1.015, 95% CI 1.002-1.029, p = 0.022). However, both groups showed a significant reduction in observed annualized rates of non-procedural complications compared with the predicted values: in the cTCP and control groups, clinical thromboembolism was reduced by 100 and 74.32%, and major bleeding by 42.47 and 71.67%, respectively. Conclusion: Our preliminary results indicate that LAAO using the Watchman device could be a safe and effective means of preventing stroke in AF patients with or without thrombocytopenia, but bleeding complications should be monitored intensively in cTCP patients.

Over the past years, extensive research has been dedicated to developing robust platforms and data-driven dialog models to support long-term human-robot interactions. However, little is known about how people's perception of robots and engagement with them develop over time, or how these can be assessed accurately through implicit and continuous measurement techniques. In this paper, we explore this by involving participants in three interaction sessions with multiple days of zero exposure in between. Each session consists of a joint task with a robot as well as two short social chats with it, before and after the task. We measure participants' gaze patterns with a wearable eye-tracker and gauge their perception of the robot and their engagement with it and the joint task using questionnaires. Results disclose that gaze aversion in a social chat is an indicator of a robot's uncanniness, and that the more people gaze at the robot in a joint task, the worse they perform. In contrast with most HRI literature, our results show that gaze toward an object of shared attention, rather than gaze toward a robotic partner, is the most meaningful predictor of engagement in a joint task. Furthermore, the analyses of gaze patterns in repeated interactions disclose that people's mutual gaze in a social chat develops congruently with their perceptions of the robot over time. These are key findings for the HRI community, as they entail that gaze behavior can be used as an implicit measure of people's perception of robots in a social chat and of their engagement and task performance in a joint task.
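As a concrete illustration of the gaze measure just described, the following is a minimal sketch of how the share of gaze devoted to a robot versus an object of shared attention could be computed from eye-tracker output. The fixation log, its column names, and the AOI labels are hypothetical illustrations, not taken from the study:

```python
import pandas as pd

# Hypothetical fixation log exported from a wearable eye-tracker:
# one row per fixation, with a duration and an annotated area of
# interest (AOI) such as "robot", "task_object", or "elsewhere".
fixations = pd.DataFrame({
    "duration_ms": [220, 540, 180, 900, 310, 640],
    "aoi": ["robot", "task_object", "elsewhere",
            "task_object", "robot", "task_object"],
})

# Proportion of total fixation time spent on each AOI; the share of
# gaze on the object of shared attention is the candidate predictor
# of engagement in the joint task.
gaze_share = (
    fixations.groupby("aoi")["duration_ms"].sum()
    / fixations["duration_ms"].sum()
)
print(gaze_share.round(3))
```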
Robots increasingly act as our social counterparts in domains such as healthcare and retail. For these human-robot interactions (HRI) to be effective, the question arises whether we trust robots the same way we trust humans. We investigated whether the determinants competence and warmth, known to influence interpersonal trust development, also influence trust development in HRI, and what role anthropomorphism plays in this interrelation. In two online studies with a 2 × 2 between-subjects design, we investigated the role of robot competence (Study 1) and robot warmth (Study 2) in trust development in HRI. Each study explored the role of robot anthropomorphism in the respective interrelation. Videos showing an HRI were used for manipulations of robot competence (through varying gameplay competence) and robot anthropomorphism (through verbal and non-verbal design cues and the robot's presentation within the study introduction) in Study 1 (n = 155), as well as robot warmth (through varying compatibility of intentions) in Study 2. The findings support a combined consideration of these variables in future studies. These insights deepen the understanding of key variables and their interaction in trust dynamics in HRI and suggest possibly relevant design factors for enabling appropriate trust levels and a resulting desirable HRI. Methodological and conceptual limitations underline the benefits of a more robot-specific approach in future research.

The COVID-19 pandemic has had a widespread effect across the globe. The effect on health-care workers and the vulnerable populations they serve has been of particular concern. Near-complete lockdown has been a common strategy to reduce the spread of the pandemic in environments such as live-in care facilities. Robotics is a promising area of research that can assist in reducing the spread of COVID-19 while also avoiding the need for complete physical isolation. The research presented in this paper demonstrates a speech-controlled, self-sanitizing robot that enables the delivery of items from a visitor to a resident of a care facility. The system is automated to reduce the burden on facility staff, and it is controlled entirely through hands-free audio interaction in order to reduce transmission of the virus. We demonstrate an end-to-end delivery test and an in-depth evaluation of the speech interface. We also recorded a speech dataset with two conditions: the talker wearing a face mask and the talker not wearing a face mask. We then used this dataset to evaluate the speech recognition system, which enabled us to test the effect of face masks on speech recognition interfaces in the context of autonomous systems.

Most people touch their faces unconsciously, for instance to scratch an itch or to rest their chin in their hands. To reduce the spread of the novel coronavirus (COVID-19), public health officials recommend against touching one's face, as the virus is transmitted through mucous membranes in the mouth, nose, and eyes. Students, office workers, medical personnel, and people on trains have been found to touch their faces between 9 and 23 times per hour. This paper introduces FaceGuard, a system that utilizes deep learning to predict hand movements that result in touching the face and provides sensory feedback to stop the user from doing so. The system uses an inertial measurement unit (IMU) to obtain features that characterize hand movements involving face touching. Time-series data can be classified efficiently by a 1D convolutional neural network (1D-CNN) with minimal feature engineering, since the 1D-CNN filters automatically extract temporal features from the IMU data. Thus, a 1D-CNN-based prediction model is developed and trained with data from 4,800 trials recorded from 40 participants.
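To make the modeling step concrete, here is a minimal sketch of a 1D-CNN of the kind described, written in Python with Keras. The window length, channel count, architecture details, and random placeholder data are all assumptions for illustration; the paper's actual network and training setup may differ:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Assumed input: windows of 128 IMU samples with 6 channels
# (3-axis accelerometer + 3-axis gyroscope). These dimensions
# are illustrative, not taken from the paper.
WINDOW, CHANNELS = 128, 6

model = keras.Sequential([
    layers.Input(shape=(WINDOW, CHANNELS)),
    # 1D convolutions slide along the time axis and learn temporal
    # filters directly from raw IMU data, replacing manual features.
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # P(movement ends at the face)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Random placeholder data standing in for the 4,800 recorded trials.
X = np.random.randn(4800, WINDOW, CHANNELS).astype("float32")
y = np.random.randint(0, 2, size=(4800, 1))
model.fit(X, y, epochs=1, batch_size=64, verbose=0)
```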
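Returning to the speech-interface evaluation of the delivery robot described above: a standard way to compare recognition accuracy between the masked and unmasked conditions is word error rate (WER). The abstract does not name its metric, so the following is a generic, self-contained sketch:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Illustrative comparison of one transcript against a recognizer output.
print(wer("deliver the package to room five", "deliver package to room five"))
```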