
Multiple-Choice Question

    The AlphaGo program’s victory is an example of how smart computers have become.

    But can artificial intelligence (AI) machines act ethically, meaning can they be honest and fair?

    One example of AI is driverless cars. They are already on California roads, so it is not too soon to ask whether we can program a machine to act ethically. As driverless cars improve, they will save lives. They will make fewer mistakes than human drivers do. Sometimes, however, they will face a choice between lives. Should the cars be programmed to avoid hitting a child running across the road, even if that will put their passengers at risk? What about making a sudden turn to avoid a dog? What if the only risk is damage to the car itself, not to the passengers?

    Perhaps there will be lessons to learn from driverless cars, but they are not super-intelligent beings. Teaching ethics to a machine even more intelligent than we are will be the bigger challenge.

    About the same time as AlphaGo’s triumph, Microsoft’s ‘chatbot’ took a bad turn. The software, named Taylor, was designed to answer messages from people aged 18-24. Taylor was supposed to be able to learn from the messages she received. She was designed to slowly improve her ability to handle conversations, but some people were teaching Taylor racist ideas. When she started saying nice things about Hitler, Microsoft turned her off and deleted her ugliest messages.

    AlphaGo’s victory and Taylor’s defeat happened at about the same time. This should be a warning to us. It is one thing to use AI within a game with clear rules and clear goals. It is something very different to use AI in the real world. The unpredictability of the real world may bring to the surface a troubling software problem.

    Eric Schmidt is one of the bosses of Google, which owns AlphaGo. He thinks AI will be positive for humans. He said people will be the winner, whatever the outcome. Advances in AI will make human beings smarter, more able and “just better human beings”.

54. What do we learn about Microsoft’s ‘chatbot’ Taylor?

A. She could not distinguish good from bad.
B. She could turn herself off when necessary.
C. She was not made to handle novel situations.
D. She was good at performing routine tasks.

Answer:

A

Explanation:

A. The phrase Microsoft’s ‘chatbot’ Taylor in the question stem points to the fifth paragraph of the passage. That paragraph says Taylor was designed to slowly improve her ability to handle conversations, but when some people taught her racist ideas, she began saying nice things about Hitler. The author uses this example to show that the chatbot could not distinguish good from bad: whatever people taught her, she learned. So A is correct. The last sentence of the paragraph says that Microsoft turned her off when she sent ugly messages; it does not say she could turn herself off, so B is wrong. The passage states that she could “slowly improve her ability to handle conversations”, which suggests Taylor could adapt to new situations, so C is wrong. Option D has no basis in the passage and can be eliminated.


Copyright notice: Unless otherwise stated, all articles on this site are licensed under CC BY-NC-SA 4.0. Please credit the original source when reposting.
