Today we take a close look at the reading passage from Cambridge IELTS 16 Test 4 Passage 3.
Cambridge IELTS 16 Test 4 Passage 3: Reading Passage
Part A
Artificial intelligence (AI) can already predict the future. Police forces are using it to map when and where crime is likely to occur. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.
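To make the "AI as forecaster" idea concrete, here is a minimal sketch of the kind of risk model the passage alludes to. Everything in it is hypothetical: the synthetic data, the feature names and the choice of logistic regression are illustrative assumptions of mine, not details of any real policing or medical system.

```python
# Minimal sketch of a risk "forecaster" (synthetic data; illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic patient records: [age, systolic blood pressure, cholesterol]
X = rng.normal(loc=[55.0, 130.0, 200.0], scale=[10.0, 15.0, 30.0], size=(500, 3))

# Made-up ground truth: risk rises with all three features, plus noise
risk = 0.04 * (X[:, 0] - 55) + 0.02 * (X[:, 1] - 130) + 0.01 * (X[:, 2] - 200)
y = (risk + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Forecast the probability of an adverse event for a new patient
patient = np.array([[63.0, 145.0, 230.0]])
print(f"Predicted risk: {model.predict_proba(patient)[0, 1]:.2f}")
```

The point of the sketch is only that a model of this kind outputs a probability, which is exactly the sort of forecast the passage says AI is "almost always better" at producing than we are.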
Many decisions in our lives require a good forecast, and AI is almost always better at forecasting than we are. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong.
If we want AI to really benefit people, we need to find a way to get people to trust it. To do that, we need to understand why people are so reluctant to trust AI in the first place.
Part B
Take the case of Watson for Oncology, one of technology giant IBM’s supercomputer programs. Their attempt to promote this program to cancer doctors was a PR disaster. The AI promised to deliver top-quality recommendations on the treatment of 12 cancers that accounted for 80% of the world’s cases. But when doctors first interacted with Watson, they found themselves in a rather difficult situation. On the one hand, if Watson provided guidance about a treatment that coincided with their own opinions, physicians did not see much point in Watson’s recommendations. The supercomputer was simply telling them what they already knew, and these recommendations did not change the actual treatment.
On the other hand, if Watson generated a recommendation that contradicted the experts’ opinion, doctors would typically conclude that Watson wasn’t competent. And the machine wouldn’t be able to explain why its treatment was plausible, because its machine-learning algorithms were simply too complex to be fully understood by humans. Consequently, this caused even more suspicion and disbelief, leading many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise.
Part C
This is just one example of people’s lack of confidence in AI and their reluctance to accept what AI has to offer. Trust in other people is often based on our understanding of how others think and having experience of their reliability. This helps create a psychological feeling of safety. AI, on the other hand, is still fairly new and unfamiliar to most people. Even if it can be technically explained (and that’s not always the case), AI’s decision-making process is usually too difficult for most people to comprehend. And interacting with something we don’t understand can cause anxiety and give us a sense that we’re losing control.
Many people are also simply not familiar with many instances of AI actually working, because it often happens in the background. Instead, they are acutely aware of instances where AI goes wrong. Embarrassing AI failures receive a disproportionate amount of media attention, emphasising the message that we cannot rely on technology. Machine learning is not foolproof, in part because the humans who design it aren’t.
Part D
Feelings about AI run deep. In a recent experiment, people from a range of backgrounds were given various sci-fi films about AI to watch and then asked questions about automation in everyday life. It was found that, regardless of whether the film they watched depicted AI in a positive or negative light, simply watching a cinematic vision of our technological future polarised the participants’ attitudes. Optimists became more extreme in their enthusiasm for AI and sceptics became even more guarded.
This suggests people use relevant evidence about AI in a biased manner to support their existing attitudes, a deep-rooted human tendency known as “confirmation bias”. As AI is represented more and more in media and entertainment, it could lead to a society split between those who benefit from AI and those who reject it. More pertinently, refusing to accept the advantages offered by AI could place a large group of people at a serious disadvantage.
Part E
Fortunately, we already have some ideas about how to improve trust in AI. Simply having previous experience with AI can significantly improve people’s opinions about the technology, as was found in the study mentioned above. Evidence also suggests the more you use other technologies such as the internet, the more you trust them.
Another solution may be to reveal more about the algorithms which AI uses and the purposes they serve. Several high-profile social media companies and online marketplaces already release transparency reports about government requests and surveillance disclosures. A similar practice for AI could help people have a better understanding of the way algorithmic decisions are made.
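As a toy illustration of what "revealing more about the algorithms" might look like, the sketch below prints a small transparency summary for the hypothetical risk model from Part A. The report fields are my assumptions, loosely inspired by the "model card" idea; the passage itself prescribes no particular format.

```python
# Toy "transparency report" for a trained model. The fields are assumptions
# of mine; the passage does not specify what such a report should contain.
def transparency_report(model, feature_names, purpose):
    return {
        "purpose": purpose,
        "model_type": type(model).__name__,
        # Coefficients show which inputs push the prediction up or down
        "feature_weights": dict(zip(feature_names, model.coef_[0].round(3))),
        "intercept": round(float(model.intercept_[0]), 3),
    }

# Continuing the earlier sketch, `model` is the fitted LogisticRegression
print(transparency_report(
    model,
    ["age", "systolic_bp", "cholesterol"],
    purpose="Estimate probability of an adverse cardiac event",
))
```

Even a summary this small tells a user what the system is for and which inputs drive its decisions, which is the kind of visibility the paragraph argues for.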
Part F
Research suggests that allowing people some control over AI decision-making could also improve trust and enable AI to learn from human experience. For example, one study showed that when people were allowed the freedom to slightly modify an algorithm, they felt more satisfied with its decisions, more likely to believe it was superior and more likely to use it in the future.
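A minimal sketch of the kind of "slight modification" such a study might allow, continuing the earlier example: the user gets a bounded dial over the model's decision threshold, so the final call reflects their judgement as well as the algorithm's. The bounded-threshold design is my assumption; the passage does not say which part of the algorithm participants could adjust.

```python
# Sketch of bounded user control over an algorithmic decision.
# Which knob users could turn is an assumption; the passage doesn't say.
def decide(model, x, user_adjustment=0.0, max_adjustment=0.1):
    """Classify x, letting the user nudge the 0.5 threshold slightly."""
    # Clamp the adjustment so the human can tweak, not override, the model
    adjustment = max(-max_adjustment, min(max_adjustment, user_adjustment))
    threshold = 0.5 + adjustment
    probability = model.predict_proba(x)[0, 1]
    return probability >= threshold, probability, threshold

# A cautious doctor lowers the alert threshold slightly
flag, p, t = decide(model, patient, user_adjustment=-0.05)
print(f"risk={p:.2f}, threshold={t:.2f}, flag={flag}")
```

The clamp is the design point: the user's input genuinely changes the outcome, but within limits, which matches the study's finding that even slight control increased satisfaction and trust.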
We don’t need to understand the intricate inner workings of AI systems, but if people are given a degree of responsibility for how they are implemented, they will be more willing to accept AI into their lives.