Similar Literature
20 similar documents found.
1.
Robots in current fields of application lack consciousness, mental states, and feelings, the affective conditions of emotion; they merely follow rule-governed behavior according to programs set by humans. Whether a robot can be counted as an artificial moral agent (AMA) seems to depend on whether it possesses such emotional factors, since morality and emotion are closely connected. However, behaviorism and expressivism hold that even machines lacking emotion should receive moral consideration. Judging from the practical applications of robots, whether they take the role of cognitively impaired beings, of servants, or of property, all of them have a corresponding moral status and should receive ethical care in different ways. With the development of artificial intelligence, we argue that in the future we will be able to build AMA robots that possess emotions.

2.
The morality of virtual representations and the enactment of prohibited activities within single-player gamespace (e.g., murder, rape, paedophilia) continues to be debated and, to date, a consensus is not forthcoming. Various moral arguments have been presented (e.g., virtue theory and utilitarianism) to support the moral prohibition of virtual enactments, but their applicability to gamespace is questioned. In this paper, I adopt a meta-ethical approach to moral utterances about virtual representations, and ask what it means when one declares that a virtual interaction ‘is morally wrong’. In response, I present constructive ecumenical expressivism to (i) explain what moral utterances should be taken to mean, (ii) argue that they mean the same when referring to virtual and non-virtual interactions and (iii), given (ii), explain why consensus with regard to virtual murder, rape and paedophilia is not forthcoming even though such consensus is readily found with regard to their non-virtual equivalents.  相似文献   

3.
Robot ethics encompasses ethical questions about how humans should design, deploy, and treat robots; machine morality encompasses questions about what moral capacities a robot should have and how these capacities could be computationally implemented. Publications on both of these topics have doubled twice in the past 10 years but have often remained separate from one another. In an attempt to better integrate the two, I offer a framework for what a morally competent robot would look like (normally considered machine morality) and discuss a number of ethical questions about the design, use, and treatment of such moral robots in society (normally considered robot ethics). Instead of searching for a fixed set of criteria of a robot’s moral competence I identify the multiple elements that make up human moral competence and probe the possibility of designing robots that have one or more of these human elements, which include: moral vocabulary; a system of norms; moral cognition and affect; moral decision making and action; moral communication. Juxtaposing empirical research, philosophical debates, and computational challenges, this article adopts an optimistic perspective: if robotic design truly commits to building morally competent robots, then those robots could be trustworthy and productive partners, caretakers, educators, and members of the human community. Moral competence does not resolve all ethical concerns over robots in society, but it may be a prerequisite to resolve at least some of them.  相似文献   

4.
If, as a number of writers have predicted, the computers of the future will possess intelligence and capacities that exceed our own then it seems as though they will be worthy of a moral respect at least equal to, and perhaps greater than, human beings. In this paper I propose a test to determine when we have reached that point. Inspired by Alan Turing’s (1950) original “Turing test”, which argued that we would be justified in conceding that machines could think if they could fill the role of a person in a conversation, I propose a test for when computers have achieved moral standing by asking when a computer might take the place of a human being in a moral dilemma, such as a “triage” situation in which a choice must be made as to which of two human lives to save. We will know that machines have achieved moral standing comparable to a human when the replacement of one of these people with an artificial intelligence leaves the character of the dilemma intact. That is, when we might sometimes judge that it is reasonable to preserve the continuing existence of a machine over the life of a human being. This is the “Turing Triage Test”. I argue that if personhood is understood as a matter of possessing a set of important cognitive capacities then it seems likely that future AIs will be able to pass this test. However this conclusion serves as a reductio of this account of the nature of persons. I set out an alternative account of the nature of persons, which places the concept of a person at the centre of an interdependent network of moral and affective responses, such as remorse, grief and sympathy. I argue that according to this second, superior, account of the nature of persons, machines will be unable to pass the Turing Triage Test until they possess bodies and faces with expressive capacities akin to those of the human form.  相似文献   

5.
6.
Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. Moral concerns for agents such as intelligent search machines are relatively simple, while highly intelligent and autonomous artifacts with significant impact and complex modes of agency must be equipped with more advanced ethical capabilities. Systems like cognitive robots are being developed that are expected to become part of our everyday lives in future decades. Thus, it is necessary to ensure that their behaviour is adequate. In an analogy with artificial intelligence, which is the ability of a machine to perform activities that would require intelligence in humans, artificial morality is considered to be the ability of a machine to perform activities that would require morality in humans. The capacity for artificial (artifactual) morality, such as artifactual agency, artifactual responsibility, artificial intentions, artificial (synthetic) emotions, etc., comes in varying degrees and depends on the type of agent. As an illustration, we address the assurance of safety in modern High Reliability Organizations through responsibility distribution. In the same way that the concept of agency is generalized in the case of artificial agents, the concept of moral agency, including responsibility, is generalized too. We propose to look at artificial moral agents as having functional responsibilities within a network of distributed responsibilities in a socio-technological system. This does not take away the responsibilities of the other stakeholders in the system, but facilitates an understanding and regulation of such networks. It should be pointed out that the process of development must assume an evolutionary form with a number of iterations, because the emergent properties of artifacts must be tested in real-world situations with agents of increasing intelligence and moral competence. We see this paper as a contribution to macro-level Requirements Engineering through discussion and analysis of general requirements for the design of ethical robots.

7.
This article discusses mechanisms and principles for the assignment of moral responsibility to intelligent robots, with special focus on military robots. We introduce autonomous power as a new concept and use it to identify the type of robots that call for moral consideration. It is furthermore argued that autonomous power, and in particular the ability to learn, is decisive for the assignment of moral responsibility to robots. As technological development will lead to robots with increasing autonomous power, we should be prepared for a future in which people blame robots for their actions. It is important, already today, to investigate the mechanisms that control human behavior in this respect. The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots. Independent of the responsibility issue, the moral quality of robots’ behavior should be seen as one of many performance measures by which we evaluate robots. How to design ethics-based control systems should be carefully investigated already now. From a consequentialist view, it would indeed be highly immoral to develop robots capable of performing acts involving life and death without including some kind of moral framework.

8.
Can a player be held morally responsible for the choices that she makes within a videogame? Do the moral choices that the player makes reflect in any way on the player’s actual moral sensibilities? Many videogames offer players the options to make numerous choices within the game, including moral choices. But the scope of these choices is quite limited. I attempt to analyze these issues by drawing on philosophical debates about the nature of free will. Many philosophers worry that, if our actions are predetermined, then we cannot be held morally responsible for them. However, Harry Frankfurt’s compatibilist account of free will suggests that an agent can be held morally responsible for actions that she wills, even if the agent is not free to act otherwise. Using Frankfurt’s analysis, I suggest that videogames represent deterministic worlds in which players lack the ability to freely choose what they do, and yet players can be held morally responsible for some of their actions, specifically those actions that the player wants to do. Finally, I offer some speculative comments on how these considerations might impact our understanding of the player’s moral psychology as it relates to the ethics of imagined fictional events.  相似文献   

9.
This paper raises three objections to the argument presented by Ostritsch in The amoralist challenge to gaming and the gamer’s moral obligation, in which the amoralist’s mantra “it’s just a game” is viewed as an illegitimate rebuttal of all moral objections to (typically violent) video games. The first objection focuses on Ostritsch’s ‘strong sense’ of player enjoyment, which I argue is too crude, given the moral work it is meant to be doing. Next, I question the legitimacy of Ostritsch’s claim that certain video games are immoral. I examine what is involved in making this claim and what would be required for a normative position to be established: none of which is addressed by Ostritsch. Finally, I challenge the legitimacy of his claim that players are obliged not to play certain video games in certain ways (i.e., games endorsing immorality as ‘fun games’). I distinguish between immoral and suberogatory actions, arguing that the latter is in fact more applicable to cases Ostritsch has in mind, and that one is not obliged not to engage in these actions.  相似文献   

10.
In a recent and provocative essay, Christopher Bartel attempts to resolve the gamer’s dilemma. The dilemma, formulated by Morgan Luck, goes as follows: there is no principled distinction between virtual murder and virtual pedophilia. So, we’ll have to give up either our intuition that virtual murder is morally permissible—seemingly leaving us over-moralizing our gameplay—or our intuition that acts of virtual pedophilia are morally troubling—seemingly leaving us under-moralizing our gameplay. Bartel’s attempted resolution relies on establishing the following three theses: (1) virtual pedophilia is child pornography, (2) the consumption of child pornography is morally wrong, and (3) virtual murder is not murder. Relying on Michael Rea’s definition of pornography, I argue that we should reject thesis one, but since Bartel’s moral argument in thesis two does not actually rely on thesis one, his resolution is not thereby undermined. Still, even if we grant that there are adequate resources internal to Bartel’s account to technically resolve the gamer’s dilemma, his reasoning is still unsatisfying. This is so because Bartel follows Neil Levy in arguing that virtual pedophilia is wrong because it harms women. While I grant Levy’s account, I argue that this is the wrong kind of reason to resolve the gamer’s dilemma because it is indirect. What we want to know is what is wrong with virtual child pornography itself. Finally, I suggest alternate moral resources for resolving the gamer’s dilemma that are direct in a way that Bartel’s resources are not.

11.
I examine the nature of human-robot pet relations that appear to involve genuine affective responses on behalf of humans towards entities, such as robot pets, that, on the face of it, do not seem to be deserving of these responses. Such relations have often been thought to involve a certain degree of sentimentality, the morality of which has in turn been the object of critical attention (Sparrow in Ethics Inf Technol 78:346–359, 2002; Blackford in Ethics Inf Technol 14:41–51, 2012). In this paper, I dispel the claim that sentimentality is involved in this type of relation. My challenge draws on literature in the philosophy of art and in cognitive science that attempts to solve the so-called paradox of fictional emotions, i.e., the seemingly paradoxical way in which we respond emotionally to fictional or imaginary characters and events. If sentimentality were not at issue, neither would its immorality be. For the sake of argument, however, I assume in the remaining part of the paper that sentimentality is indeed at play and bring to the fore aspects of its badness or viciousness that have not yet been discussed in connection with robot pets. I conclude that not even these aspects of sentimentality are at issue here. Yet, I argue that there are other reasons to be worried about the widespread use of ersatz companionship technology, reasons that have to do with the potential loss of valuable, self-defining forms of life.

12.
Floridi and Sanders’ seminal work, “On the morality of artificial agents”, has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as “artificial agents.” Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. We also argue that adopting certain levels of abstraction out of context can be dangerous when the level of abstraction obscures the humans who constitute computer systems. We arrive at this critique of Floridi and Sanders by examining the debate over the moral status of computer systems using the notion of interpretive flexibility. We frame the debate as a struggle over the meaning and significance of computer systems that behave independently, and not as a debate about the ‘true’ status of autonomous systems. Our analysis leads to the conclusion that while levels of abstraction are useful for particular purposes, when it comes to agency and responsibility, computer systems should be conceptualized and identified in ways that keep them tethered to the humans who create and deploy them.

13.
Anonymising technologies are cyber-tools that protect people from online surveillance, hiding who they are, what information they have stored and what websites they are looking at. Whether it is anonymising online activity through ‘TOR’ and its onion routing, 256-bit encryption on communications sent or smartphone auto-deletes, the user’s identity and activity is protected from the watchful eyes of the intelligence community. This represents a clear challenge to intelligence actors as it denies them access to information that many would argue plays a vital part in locating and preventing threats from being realised. Moreover, such technology offers more than ordinary information protections as it erects ‘warrant-proof’ spaces: technological black boxes that remain protected no matter what some authority might deem legitimately searchable, because there are very limited or non-existent means of forcing one’s way in. However, it will be argued here that not only is using such anonymising technology and its extra layer of protection people’s right, but it is ethically mandatory. That is, due to the en masse surveillance—from both governments and corporations—coupled with people’s limited awareness and ability to comprehend such data collections, anonymising technology should be built into the fabric of cyberspace to provide a minimal set of protections over people’s information, and in doing so force the intelligence community to develop more targeted forms of data collection.

14.
This paper will address the question of the morality of technology. I believe this is an important question for our contemporary society, in which technology, especially information technology, is increasingly becoming the default mode of social ordering. I want to suggest that the conventional manner of conceptualising the morality of technology is inadequate – even dangerous. The conventional view of technology is that technology represents technical means to achieve social ends. Thus, the moral problem of technology, from this perspective, is the way in which the given technical means are applied to particular (good or bad) social ends. In opposition to this I want to suggest that the separation between technical means and social ends assumed by this approach is inappropriate. It only serves to hide the most important political and ethical dimensions of technology. I want to suggest that the morality of technology is much more embedded and implicit than such a view would suggest. In order to critique this approach I will draw on phenomenology and the more recent work of Bruno Latour. With these intellectual resources in mind I will propose disclosive ethics as a way to make the morality of technology visible. I will give a brief account of this approach and show how it might guide our understanding of the ethics and politics of technology by considering two examples of contemporary information technology: search engines and plagiarism detection systems.

15.
There has been increasing attention in sociology and internet studies to the topic of ‘digital remains’: the artefacts users of social network services (SNS) and other online services leave behind when they die. But these artefacts also pose philosophical questions regarding what impact, if any, these artefacts have on the ontological and ethical status of the dead. One increasingly pertinent question concerns whether these artefacts should be preserved, and whether deletion counts as a harm to the deceased user and therefore provides pro tanto reasons against deletion. In this paper, I build on previous work invoking a distinction between persons and selves to argue that SNS offer a particularly significant material instantiation of persons. The experiential transparency of the SNS medium allows for genuine co-presence of SNS users, and also assists in allowing persons (but not selves) to persist as ethical patients in our lifeworld after biological death. Using Blustein’s “rescue from insignificance” argument for duties of remembrance, I argue that this persistence function supplies a nontrivial (if defeasible) obligation not to delete these artefacts. Drawing on Luciano Floridi’s account of “constitutive” information, I further argue that the “digital remains” metaphor is surprisingly apt: these artefacts in fact enjoy a claim to moral regard akin to that of corpses.  相似文献   

16.
We argue that a command and control system can undermine a commander’s moral agency if it causes him/her to process information in a purely syntactic manner, or if it precludes him/her from ascertaining the truth of that information. Our case is based on the resemblance between a commander’s circumstances and those of the protagonist in Searle’s Chinese Room, together with a careful reading of Aristotle’s notions of ‘compulsory’ and ‘ignorance’. We further substantiate our case by considering the Vincennes Incident, in which the crew of a warship mistakenly shot down a civilian airliner. To support a combat commander’s moral agency, designers should strive for systems that help commanders and command teams to think and manipulate information at the level of meaning. ‘Down conversions’ of information from meaning to symbols must be adequately recovered by ‘up conversions’, and commanders must be able to check that their sensors are working and are being used correctly. Meanwhile, ethicists should establish a mechanism that tracks the potential moral implications of choices in a system’s design and intended operation. Finally, we highlight a gap in normative ethics, in that we have ways to deny moral agency, but not to affirm it.

17.
Currently, the central questions in the philosophical debate surrounding the ethics of automated warfare are: (1) is the development and use of autonomous lethal robotic systems for military purposes consistent with (existing) international laws of war and received just war theory? and (2) does the creation and use of such machines improve the moral caliber of modern warfare? However, both of these approaches have significant problems, and thus we need to start exploring alternative approaches. In this paper, I ask whether autonomous robots ought to be programmed to be pacifists. The answer arrived at is “Yes”: if we decide to create autonomous robots, they ought to be pacifists. This is to say that robots ought not to be programmed to willingly and intentionally kill human beings, or, by extension, participate in or promote warfare, as something that predictably involves the killing of humans. Insofar as we are the ones that will be determining the content of the robot’s value system, we ought to program robots to be pacifists, rather than ‘warists’. This is (in part) because we ought to be pacifists, and creating and programming machines to be “autonomous lethal robotic systems” directly violates this normative demand on us. There are no mitigating reasons to program lethal autonomous machines to contribute to or participate in warfare. Even if the use of autonomous lethal robotic systems could be consistent with received just war theory and the international laws of war, and even if their involvement could make warfare less inhumane in certain ways, these reasons do not compensate for the ubiquitous harms characteristic of modern warfare. In this paper, I provide four main reasons why autonomous robots ought to be pacifists, most of which do not depend on the truth of pacifism. The strong claim being argued for here is that automated warfare ought not to be pursued. The weaker claim is that automated warfare ought not to be pursued unless it is the most pacifist option available at the time, other alternatives have been reasonably explored, and we are simultaneously promoting a (long-term) pacifist agenda in (many) other ways. Thus, the more ambitious goal of this paper is to convince readers that automated warfare is something that we ought not to promote or pursue, while the more modest—and I suspect, more palatable—goal is to spark sustained critical discussion about the assumptions underlying the drive towards automated warfare, and to generate legitimate consideration of its pacifist alternatives, in theory, policy, and practice.

18.
Trusting Virtual Trust
Can trust evolve on the Internet between virtual strangers? Recently, Pettit answered this question in the negative. Focusing on trust in the sense of ‘dynamic, interactive, and trusting’ reliance on other people, he distinguishes between two forms of trust: primary trust rests on the belief that the other is trustworthy, while the more subtle secondary kind of trust is premised on the belief that the other cherishes one’s esteem, and will, therefore, reply to an act of trust in kind (‘trust-responsiveness’). Based on this theory Pettit argues that trust between virtual strangers is impossible: they lack all evidence about one another, which prevents the imputation of trustworthiness and renders the reliance on trust-responsiveness ridiculous. I argue that this argument is flawed, both empirically and theoretically. In several virtual communities amazing acts of trust between pure virtuals have been observed. I propose that these can be explained as follows. On the one hand, social cues, reputation, reliance on third parties, and participation in (quasi-) institutions allow imputing trustworthiness to varying degrees. On the other, precisely trust-responsiveness is also relied upon, as a necessary supplement to primary trust. In virtual markets, esteem as a fair trader is coveted while it contributes to building up one’s reputation. In task groups, a hyperactive style of action may be adopted which amounts to assuming (not: inferring) trust. Trustors expect that their virtual co-workers will reply in kind while such an approach is to be considered the most appropriate in cyberspace. In non-task groups, finally, members often display intimacies while they are confident someone else ‘out there’ will return them. This is facilitated by the one-to-many, asynchronous mode of communication within mailing lists.  相似文献   

19.
According to the amoralist, computer games cannot be subject to moral evaluation because morality applies to reality only, and games are not real but “just games”. This challenges our everyday moralist intuition that some games are to be met with moral criticism. I discuss and reject the two most common answers to the amoralist challenge and argue that the amoralist is right in claiming that there is nothing intrinsically wrong in simply playing a game. I go on to argue for the so-called “endorsement view”, according to which there is nevertheless a sense in which games themselves can be morally problematic, viz. when they not only represent immoral actions but also endorse a morally problematic worldview. Based on the endorsement view, I argue against full-blown amoralism by claiming that gamers do have a moral obligation when playing certain games, even if their moral obligation is not categorically different from that of readers and moviegoers.

20.
Research Policy, 2023, 52(1): 104607
This paper examines how automation and digitalisation influence the way everyday scientific work practices are organised and conducted. Drawing on a practice-based study of the field of synthetic biology, the paper uses ethnographic, interview and survey data to offer a sociomaterial and relational perspective of technological change. As automation and digitalisation are deployed in research settings, our results show the emergence and persistence of what we call ‘mundane knowledge work’, including practices of checking, sharing and standardising data; and preparing, repairing and supervising laboratory robots. While these are subsidiary practices that are often invisible in comparison to scientific outputs used to measure performance, we find that mundane knowledge work constitutes a fundamental part of automated and digitalised biosciences, shaping scientists' working time and responsibilities. Contrary to expectations of the removal of such work by automation and digitalisation, we show that mundane work around data and robots persists through ‘amplification’ and ‘diversification’ processes. We argue that the persistence of mundane knowledge work suggests a digitalisation paradox in the context of everyday labour: while robotics and advanced data analytics aim at simplifying work processes, they also contribute to increasing their complexity in terms of number and diversity of tasks in creative, knowledge-intensive professions.  相似文献   
