
Single-choice question

    In the beginning of the movie I, Robot, a robot has to decide whom to save after two cars plunge into the water—Del Spooner or a child. Even though Spooner screams “Save her! Save her!” the robot rescues him because it calculates that he has a 45 percent chance of survival compared to Sarah’s 11 percent. The robot’s decision and its calculated approach raise an important question: would humans make the same choice, and which choice would we want our robotic counterparts to make?

    Isaac Asimov evaded the whole notion of morality in devising his three laws of robotics, which hold that 1. robots cannot harm humans or allow humans to come to harm; 2. robots must obey humans; and 3. robots must engage in self-preservation, unless doing so conflicts with laws 1 or 2. These laws are programmed into Asimov’s robots—they don’t have to think, judge, or value. They don’t have to like humans or believe that hurting them is wrong or bad. They simply don’t do it.

    The robot who rescues Spooner’s life in I, Robot follows Asimov’s zeroth law: robots cannot harm humanity (as opposed to individual humans) or allow humanity to come to harm—an expansion of the first law that allows robots to determine what’s in the greater good. Under the first law, a robot could not harm a dangerous gunman, but under the zeroth law, a robot could kill the gunman to save others.

    Whether it’s possible to program a robot with safeguards such as Asimov’s laws is debatable. A word such as “harm” is vague (what about emotional harm? Is replacing a human employee harm?), and abstract concepts present coding problems. The robots in Asimov’s fiction expose complications and loopholes in the three laws, and even when the laws work, robots still have to assess situations.

    Assessing situations can be complicated. A robot has to identify the players, conditions, and possible outcomes for various scenarios. It’s doubtful that a computer program can do that—at least, not without some undesirable results. A roboticist at the Bristol Robotics Laboratory programmed a robot to save human proxies called “H-bots” from danger. When one H-bot headed for danger, the robot successfully pushed it out of the way. But when two H-bots became imperiled, the robot choked 42 percent of the time, unable to decide which to save and letting them both “die.” The experiment highlights the importance of morality. How can a robot decide whom to save or what’s best for humanity, especially if it can’t calculate survival odds?

47. What does the author think of Asimov’s three laws of robotics?

A) They are apparently divorced from reality.
B) They did not follow the coding system of robotics.
C) They laid a solid foundation for robotics.
D) They did not take moral issues into consideration.

Answer:

D

Explanation:

47. D) They did not take moral issues into consideration.

First, pick out the key words in the question, “the author think of” and “Asimov’s three laws of robotics,” and use them to locate the relevant sentence in the passage: the first sentence of paragraph 2. That sentence states that Isaac Asimov evaded the whole notion of morality in devising his three laws of robotics. Then check the options. A) “They are apparently divorced from reality”: the three laws clearly do have a basis in reality, so this is wrong. B) “They did not follow the coding system of robotics”: the information later in the same paragraph tells us that these laws are programmed into Asimov’s robots, so this is ruled out. C) “They laid a solid foundation for robotics”: the statement itself is not wrong, but it is not the author’s point, so it is ruled out. D) “They did not take moral issues into consideration”: this matches the key sentence, so it is correct.
