Through a qualitative examination, this study investigates the moral evaluations of Dutch care professionals regarding healthcare robots for eldercare, framed in terms of the biomedical ethical principles and (non-)utility. Results showed that care professionals primarily focused on maleficence (potential harm done by the robot), stemming from diminished human contact. Worries about potential maleficence were more pronounced among intermediately educated than among higher educated professionals. However, both groups deemed companion robots more beneficial than monitoring and assistive devices, which were considered potentially harmful both physically and psychologically. Perceived utility was not related to the professionals' moral stances, countering prevailing views. Increasing patients' autonomy through robot care was not part of the discussion, and justice as a moral evaluation was rarely mentioned. Awareness of the care professionals' point of view is important for policymakers, educational institutes, and developers of healthcare robots, so that designs can be tailored to the wants of older adults as well as the needs of the much-undervalued eldercare professionals.
Background: Healthcare professionals encounter ethical dilemmas and concerns in their practice. More research is needed to understand these ethical problems and to know how to educate professionals to respond to them. Research objective: To describe ethical dilemmas and concerns at work from the perspectives of Finnish and Dutch healthcare professionals studying at the master’s level. Research design: Exploratory, qualitative study that used the text of student online discussions of ethical dilemmas at work as data. Method: Participants’ online discussions were analyzed using inductive content analysis. Participants: The sample consisted of 49 students at master’s level enrolled in professional ethics courses at universities in Finland and the Netherlands. Ethical considerations: Permission for conducting the study was granted by both universities of applied sciences. All students provided their informed consent for the use of their assignments as research data. Findings: Participants described 51 problematic work situations. Among these, 16 were found to be ethical dilemmas; the remainder were work issues with an ethical concern that did not meet the criteria of a dilemma. The most common problems resulted from concerns about quality care, the safety of healthcare professionals, patients’ rights, and working with too few staff and inadequate resources. Discussion: The results indicated that participants were concerned about providing quality care and raised numerous questions about how to provide it in challenging situations. The results also show that it was difficult for students to differentiate ethical dilemmas from other ethical work concerns. Conclusion: Online discussions among healthcare providers give them an opportunity to relate ethical principles to real ethical dilemmas and problems in their work, as well as to critically analyze ethical issues. We found that discussions containing health professionals’ descriptions of ethical dilemmas and concerns provide important information and recommendations not only for education and practice but also for health policy.
This presentation reports on the status of an assessment tool for Moral Authorship that is being developed for teachers, and discusses its reliability and validation. Moral Authorship refers to the ability of teachers to observe, identify, articulate, and reflect on moral aspects of their work in a thoughtful and dialogical way. The assessment tool is based on the concept of Moral Authorship, which describes moral meaning-making in a narrative way and distinguishes six tasks as points of attention to identify topics of concern that arise when reflecting on the development of one’s morality (Gertsen, Schaap & Bakker, 2017). Paper presented at the AME 2017 Conference.
The robot assistant is a new, promising technology to support teachers in primary education and to improve learning outcomes. In this research we develop a moral theory for deploying these robot assistants in education.

Goal: With this research we develop a theory on the morally responsible deployment of robot assistants in education, combining qualitative and quantitative data.

Results: This research is ongoing. Below is an overview of the results so far.

Smakman, M. (2019). De robotdocent komt eraan, maar hoe? AG Connect, January/February 2019, pp. 70-73.

Smakman, M., & Konijn, E. A. (2019). Robot Tutors: Welcome or Ethically Questionable? In M. Merdan, W. Lepuschitz, G. Koppensteiner, R. Balogh, & D. Obdržálek (Eds.), Robotics in Education ‐ Current Research and Innovations. Vienna, Austria: Springer. [in press]

Smakman, M., & Konijn, E. A. (2019, February 7). Onderwijsrobots: van harte welkom of ethisch onverantwoord? Presented at Robots en AI in het onderwijs, Den Haag, The Netherlands.

Smakman, M., & Konijn, E. A. (2019, January 31). Moral challenges and opportunities for educational robots. Presented at the workshop How do we work with educational robots?, De Waag, Amsterdam, The Netherlands.

Goudzwaard, M., Smakman, M., & Konijn, E. A. Robots are Good for Profit: A Business Perspective on Robots in Education. [accepted] 9th Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics.

Smakman, M., & Konijn, E. A. (2019, February). Moral Considerations Regarding Robots in Education: A Systematic Literature Review. Paper presented at Etmaal van de Communicatiewetenschap, 7-8 February 2019, Nijmegen, The Netherlands.

Smakman, M., & Konijn, E. A. (2018, December). Considerations on moral values regarding robot tutors. Presented at the Symposium on Robots for Language Learning, 12-13 December 2018, Koç University, Istanbul, Turkey.

Smakman, M. (2018, February). Moral concerns regarding robot tutors, a review. Poster presented at the ATEE 2018 Winter Conference – Technology and Innovative Learning, Utrecht, The Netherlands.

Duration: 01 January 2017 - 01 January 2022

Approach: This research uses the Value Sensitive Design (VSD) methodology. VSD is a method for taking moral values into account when designing and deploying technology. First, this research focuses on identifying the relevant (moral) values. Through various focus groups with, among others, parents, teachers, government, and robot builders, these values are further elaborated. Next, we weigh the values by presenting them to diverse groups. Finally, we draw up guidelines for how robots can be deployed in a responsible manner.