Similar Literature
20 similar documents found (search time: 31 ms)
1.
The growing proportion of elderly people in society, together with recent advances in robotics, makes the use of robots in elder care increasingly likely. We outline developments in the areas of robot applications for assisting the elderly and their carers, for monitoring their health and safety, and for providing them with companionship. Despite the possible benefits, we raise and discuss six main ethical concerns associated with: (1) the potential reduction in the amount of human contact; (2) an increase in the feelings of objectification and loss of control; (3) a loss of privacy; (4) a loss of personal liberty; (5) deception and infantilisation; (6) the circumstances in which elderly people should be allowed to control robots. We conclude by balancing the care benefits against the ethical costs. If introduced with foresight and careful guidelines, robots and robotic technology could improve the lives of the elderly, reducing their dependence and creating more opportunities for social interaction.

2.
This paper offers an ethical framework for the development of robots as home companions that are intended to address the isolation and reduced physical functioning of frail older people with capacity, especially those living alone in a noninstitutional setting. Our ethical framework gives autonomy priority in a list of purposes served by assistive technology in general, and carebots in particular. It first introduces the notion of “presence” and draws a distinction between humanoid multi-function robots and non-humanoid robots to suggest that the former provide a more sophisticated presence than the latter. It then looks at the difference between lower-tech assistive technological support for older people and its benefits, and contrasts these with what robots can offer. This provides some context for the ethical assessment of robotic assistive technology. We then consider what might need to be added to presence to produce care from a companion robot that deals with older people’s reduced functioning and isolation. Finally, we outline and explain our ethical framework. We discuss how it combines sometimes conflicting values that the design of a carebot might incorporate, if informed by an analysis of the different roles that can be served by a companion robot.

3.
Current uses of robots in classrooms are reviewed and used to characterise four scenarios: (s1) Robot as Classroom Teacher; (s2) Robot as Companion and Peer; (s3) Robot as Care-eliciting Companion; and (s4) Telepresence Robot Teacher. The main ethical concerns associated with robot teachers are identified as: privacy; attachment, deception, and loss of human contact; and control and accountability. These are discussed in terms of the four identified scenarios. It is argued that classroom robots are likely to impact children’s privacy, especially when they masquerade as their friends and companions, when sensors are used to measure children’s responses, and when records are kept. Social robots designed to appear as if they understand and care for humans necessarily involve some deception (itself a complex notion), and could increase the risk of reduced human contact. Children could form attachments to robot companions (s2 and s3) or robot teachers (s1), and this could have a deleterious effect on their social development. There are also concerns about the ability, and the use, of robots to control or make decisions about children’s behaviour in the classroom. It is concluded that there are good reasons not to welcome fully fledged robot teachers (s1), and that robot companions (s2 and s3) should be given a cautious welcome at best. The limited circumstances in which robots could be used in the classroom to improve the human condition by offering otherwise unavailable educational experiences are discussed.

4.
We investigated how people react emotionally to working with robots in three scenario-based role-playing survey experiments collected in 2019 and 2020 from the United States (Study 1: N = 1003; Study 2: N = 969; Study 3: N = 1059). Participants were randomly assigned to groups and asked to write a short post about a scenario in which we manipulated the number of robot teammates or the size of the social group (work team vs. organization). Emotional content of the corpora was measured using six sentiment analysis tools, and socio-demographic and other factors were assessed through survey questions and LIWC lexicons and further analyzed in Study 4. The results showed that people are less enthusiastic about working with robots than with humans. Our findings suggest these more negative reactions stem from feelings of oddity in an unusual situation and the lack of social interaction.
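The core analysis step described above, scoring the emotional content of short free-text posts, can be sketched with a toy lexicon-based scorer. The word lexicon and the averaging rule here are illustrative assumptions; the study itself used six established sentiment tools, including LIWC, not this code.

```python
# Toy lexicon-based sentiment scorer: each known word carries a valence,
# and a post's score is the mean valence of its matched words.
# The lexicon below is a made-up illustration, not any published tool.
LEXICON = {
    "enjoy": 1.0, "great": 1.0, "excited": 1.0, "helpful": 0.5,
    "odd": -0.5, "strange": -0.5, "worried": -1.0, "frustrating": -1.0,
}

def sentiment_score(text: str) -> float:
    """Mean lexicon valence of the words in `text`; 0.0 if no word matches."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits = [LEXICON[w] for w in words if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

posts = [
    "I would enjoy working with a helpful robot teammate.",
    "Working with robots feels strange and I am worried about it.",
]
scores = [sentiment_score(p) for p in posts]
```

Real tools add negation handling, intensifiers, and much larger validated lexicons; the averaging idea, however, is the same.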

5.
In order to investigate how the use of robots may impact everyday tasks, twelve participants in our study interacted with a University of Hertfordshire Sunflower robot over a period of 8 weeks in the university's Robot House. Participants performed two constrained tasks, one physical and one cognitive, four times over this period. Participant responses were recorded using a variety of measures including the System Usability Scale and the NASA Task Load Index. The use of the robot had an impact on the experienced workload of the participants differently for the two tasks, and this effect changed over time. In the physical task, there was evidence of adaptation to the robot's behavior. For the cognitive task, the use of the robot was experienced as more frustrating in the later weeks.

6.
The development of autonomous, robotic weaponry is progressing rapidly. Many observers agree that banning the initiation of lethal activity by autonomous weapons is a worthy goal. Some disagree with this goal, on the grounds that robots may equal and exceed the ethical conduct of human soldiers on the battlefield. Those who seek arms-control agreements limiting the use of military robots face practical difficulties. One such difficulty concerns defining the notion of an autonomous action by a robot. Another challenge concerns how to verify and monitor the capabilities of rapidly changing technologies. In this article we describe concepts from our previous work about autonomy and ethics for robots and apply them to military robots and robot arms control. We conclude with a proposal for a first step toward limiting the deployment of autonomous weapons capable of initiating lethal force.

7.
With the development of artificial intelligence technology, autonomous intelligent robots have begun to enter people's everyday lives. The rise of "robot ethics" abroad is an ethical reflection on precisely this development. The research object of robot ethics, the "robot," has a specific meaning, and its domains of application span labor and services, military and security, education and research, entertainment, healthcare, the environment, personal care, and emotional companionship. Among these, safety issues, legal and ethical issues, and social issues constitute the three major problem domains of robot-ethics research.

8.
It should not be a surprise in the near future to encounter either a personal or a professional service robot in our homes and/or our workplaces: according to the International Federation for Robots, there will be approximately 35 million service robots at work by 2018. Given that individuals will interact and even cooperate with these service robots, their design and development demand ethical attention. With this in mind, I suggest the use of an approach for incorporating ethics into the design process of robots known as Care Centered Value Sensitive Design (CCVSD). Although this approach was originally and intentionally designed for the healthcare domain, the aim of this paper is to present a preliminary study of how personal and professional service robots might also be evaluated using the CCVSD approach. The normative foundations for CCVSD come from its reliance on the care ethics tradition and in particular the use of care practices for: (1) structuring the analysis and, (2) determining the values of ethical import. To apply CCVSD outside of healthcare one must show that the robot has been integrated into a care practice. Accordingly, the practice into which the robot is to be used must be assessed and shown to meet the conditions of a care practice. By investigating the foundations of the approach I hope to show why it may be applicable for service robots and further to give examples of current robot prototypes that can and cannot be evaluated using CCVSD.

9.
This paper explores the relationship between dignity and robot care for older people. It highlights the disquiet that is often expressed about failures to maintain the dignity of vulnerable older people, but points out some of the contradictory uses of the word ‘dignity’. Certain authors have resolved these contradictions by identifying different senses of dignity; contrasting the inviolable dignity inherent in human life to other forms of dignity which can be present to varying degrees. The capability approach (CA) is introduced as a different but tangible account of what it means to live a life worthy of human dignity. It is used here as a framework for the assessment of the possible effects of eldercare robots on human dignity. The CA enables the identification of circumstances in which robots could enhance dignity by expanding the set of capabilities that are accessible to frail older people. At the same time, it is also possible within its framework to identify ways in which robots could have a negative impact, by impeding the access of older people to essential capabilities. It is concluded that the CA has some advantages over other accounts of dignity, but that further work and empirical study are needed in order to adapt it to the particular circumstances and concerns of those in the latter part of their lives.

10.
Robot ethics encompasses ethical questions about how humans should design, deploy, and treat robots; machine morality encompasses questions about what moral capacities a robot should have and how these capacities could be computationally implemented. Publications on both of these topics have doubled twice in the past 10 years but have often remained separate from one another. In an attempt to better integrate the two, I offer a framework for what a morally competent robot would look like (normally considered machine morality) and discuss a number of ethical questions about the design, use, and treatment of such moral robots in society (normally considered robot ethics). Instead of searching for a fixed set of criteria of a robot’s moral competence I identify the multiple elements that make up human moral competence and probe the possibility of designing robots that have one or more of these human elements, which include: moral vocabulary; a system of norms; moral cognition and affect; moral decision making and action; moral communication. Juxtaposing empirical research, philosophical debates, and computational challenges, this article adopts an optimistic perspective: if robotic design truly commits to building morally competent robots, then those robots could be trustworthy and productive partners, caretakers, educators, and members of the human community. Moral competence does not resolve all ethical concerns over robots in society, but it may be a prerequisite to resolve at least some of them.

11.
This paper examines one particular problem of values in cloud computing: how individuals can take advantage of the cloud to store data without compromising their privacy and autonomy. Through the creation of Lockbox, an encrypted cloud storage application, we explore how designers can use reflection in designing for human values to maintain both privacy and usability in the cloud.

12.
Central to the ethical concerns raised by the prospect of increasingly autonomous military robots are issues of responsibility. In this paper we examine different conceptions of autonomy within the discourse on these robots to bring into focus what is at stake when it comes to the autonomous nature of military robots. We argue that due to the metaphorical use of the concept of autonomy, the autonomy of robots is often treated as a black box in discussions about autonomous military robots. When the black box is opened up and we see how autonomy is understood and ‘made’ by those involved in the design and development of robots, the responsibility questions change significantly.

13.
Responsible Robotics is about developing robots in ways that take their social implications into account, which includes conceptually framing robots and their role in the world accurately. We are now in the process of incorporating robots into our world and we are trying to figure out what to make of them and where to put them in our conceptual, physical, economic, legal, emotional and moral world. How humans think about robots, especially humanoid social robots, which elicit complex and sometimes disconcerting reactions, is not predetermined. The animal–robot analogy is one of the most commonly used in attempting to frame interactions between humans and robots and it also tends to push in the direction of blurring the distinction between humans and machines. We argue that, despite some shared characteristics, when it comes to thinking about the moral status of humanoid robots, legal liability, and the impact of treatment of humanoid robots on how humans treat one another, analogies with animals are misleading.

14.
[Purpose/Significance] Social networks have permeated people's daily lives. Users remain concerned about the privacy of the personal data they provide, yet they continue to use social networks; the reasons for this phenomenon need to be explained. [Method/Process] Drawing on social influence theory, this study analyzes the privacy costs of social network use (privacy concerns) and its privacy benefits (behavioral inducements: interpersonal relationship management, self-presentation, and subjective norms) to build a privacy trade-off model, examines users' privacy trade-off behavior, and identifies the motivation for continued social network use despite privacy concerns. [Results/Conclusions] Users provide personal data and continue to use social networks not because they ignore privacy concerns, but because the positive influence of behavioral inducements (including interpersonal relationship management and self-presentation) outweighs the negative influence of privacy concerns.

15.
This article addresses some of the most important legal and ethical issues posed by robot companions. Firstly, we clarify that robots are to be deemed objects, and more precisely products. This, on the one hand, excludes the legitimacy of all such considerations involving robots as bearers of their own rights and obligations, and forces a functional approach in the analysis. Secondly, pursuant to these methodological considerations we address the most relevant ethical and legal concerns, ranging from the risk of dehumanization and isolation of the user, to privacy and liability concerns, as well as financing of the diffusion of this—still expensive—technology. Solutions are briefly sketched, in order to provide the reader with sufficient indications of what strategies could and should be implemented, already in the design phase, as well as what kind of intervention ought to be favored and expected by national and European legislators. The recent Report with Recommendations to the Commission on Civil Law Rules on Robotics of January 24, 2017 by the European Parliament is specifically taken into account.

16.
Users increasingly use mobile devices to engage in social activity and commerce, enabling new forms of data collection by firms and marketers. User privacy expectations for these new forms of data collection remain unclear. A particularly difficult challenge is meeting expectations for contextual integrity, as user privacy expectations vary depending upon data type collected and context of use. This article illustrates how fine-grained, contextual privacy expectations can be measured. It presents findings from a factorial vignette survey that measured the impact of diverse real-world contexts (e.g., medical, navigation, music), data types, and data uses on user privacy expectations. Results demonstrate that individuals’ general privacy preferences are of limited significance for predicting their privacy judgments in specific scenarios. Instead, the results present a nuanced portrait of the relative importance of particular contextual factors and information uses, and demonstrate how those contextual factors can be found and measured. The results also suggest that current common activities of mobile application companies, such as harvesting and reusing location data, images, and contact lists, do not meet users’ privacy expectations. Understanding how user privacy expectations vary according to context, data types, and data uses highlights areas requiring stricter privacy protections by governments and industry.
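The factorial vignette design described above can be sketched as a full crossing of factor levels, where each vignette combines one context, one data type, and one data use. The factor levels and sentence template below are illustrative assumptions, not the study's actual instrument.

```python
# Sketch of a factorial vignette design: cross every context, data type,
# and data use to enumerate the distinct scenarios shown to respondents.
from itertools import product

contexts = ["medical", "navigation", "music"]       # illustrative levels
data_types = ["location", "images", "contact list"]
data_uses = ["improve the service", "share with advertisers"]

vignettes = [
    f"A {c} app collects your {d} to {u}."
    for c, d, u in product(contexts, data_types, data_uses)
]
# Full crossing: 3 contexts x 3 data types x 2 uses = 18 distinct vignettes
```

In practice each respondent rates a random subset of vignettes, and a regression over the factor levels then estimates how much each contextual factor shifts privacy judgments.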

17.
Does cruel behavior towards robots lead to vice, whereas kind behavior does not lead to virtue? This paper presents a critical response to Sparrow’s argument that there is an asymmetry in the way we (should) think about virtue and robots. It discusses how much we should praise virtue as opposed to vice, how virtue relates to practical knowledge and wisdom, how much illusion is needed for it to be a barrier to virtue, the relation between virtue and consequences, the moral relevance of the reality requirement and the different ways one can deal with it, the risk of anthropocentric bias in this discussion, and the underlying epistemological assumptions and political questions. This response is not only relevant to Sparrow’s argument or to robot ethics but also touches upon central issues in virtue ethics.

18.
In this paper we discuss the social and ethical issues that arise as a result of digitization based on six dominant technologies: Internet of Things, robotics, biometrics, persuasive technology, virtual & augmented reality, and digital platforms. We highlight the many developments in the digitizing society that appear to be at odds with six recurring themes revealed by our analysis of the scientific literature on the dominant technologies: privacy, autonomy, security, human dignity, justice, and balance of power. This study shows that the new wave of digitization is putting pressure on these public values. In order to effectively shape the digital society in a socially and ethically responsible way, stakeholders need to have a clear understanding of what such issues might be. Supervision has been developed the most in the areas of privacy and data protection. For other ethical issues concerning digitization, such as discrimination, autonomy, human dignity and unequal balance of power, supervision is not as well organized.

19.
黎常, 金杨华. 《科研管理》 (Science Research Management), 2021, 42(8): 9-16
While profoundly changing the way human society produces and lives, artificial intelligence has also given rise to many ethical dilemmas and challenges; establishing new norms of technology ethics so that AI better serves humanity has become a theme of concern for society as a whole. From the perspective of technology ethics, this paper reviews domestic and international research on the ethical issues arising in AI fields such as robotics, algorithms, big data, and autonomous driving, including moral agency, the allocation of responsibility, technical safety, discrimination and fairness, and privacy and data protection, as well as the ethical governance of AI technology. It then identifies directions for future research: establishing ethical principles and governance systems in the Chinese context, interdisciplinary collaboration in AI ethics research, integrating theoretical analysis with practical cases, and the division and coordination of ethical roles among multiple actors.

20.
This paper provides an analysis of the current and potential ethical implications of RFID technology for the library and information professions. These issues are analysed as a series of ethical dilemmas, or hard-to-resolve competing ethical obligations, which the librarian has in relationship to information objects, library users and the wider social and political environment or state. A process model of the library is used as a framework for the discussion to illustrate the relationship between the different participants in the library system and it is argued that ethical analysis should involve the identification of future developments as well as current issues. The analysis shows that RFIDs do currently pose some dilemmas for librarians in terms of the conflicts between efficient service, privacy of users and an obligation to protect the safety of society as a whole, and that these are likely to become more problematic as the technology develops. This paper is part 2 of a series of papers on RFIDs and the library and information professions.
