Experts: Human Participation Helps Prevent AI Mistakes (October 21, 2022)
Security experts say artificial intelligence (AI) systems used by businesses can make serious, costly mistakes. But one way to avoid such mistakes is for companies to employ humans to closely watch the AI.
One example of AI problems that can affect businesses happened early in the COVID-19 pandemic. It involved the credit scoring company Fair Isaac Corporation, which is known as FICO.
FICO is used by about two-thirds of the world's largest banks to help make lending decisions. The company's systems are also used to identify possible cases of credit fraud.
FICO officials recently told Reuters news agency that one of the company's AI systems misidentified a large number of credit card fraud cases. At the time, the pandemic had caused a large increase in online shopping. The AI tool considered the rise in online shopping to be the result of fraudulent activity.
As a result, the AI system told banks to deny millions of purchase attempts from online buyers. The incident happened just as people were hurrying to buy products that were in short supply in stores.
But FICO told Reuters that in the end, very few buyers had their purchase requests denied. This is because a group of experts the company employs to observe, or monitor, its AI systems recognized the false fraud identifications. The workers made temporary adjustments to avoid an AI-ordered block on spending.
FICO says the expert team is quickly informed about any unusual buying activity that the AI systems might misidentify.
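The human override FICO describes can be pictured as a review queue: when monitors notice a wave of suspect flags, they switch the system into a review mode so that high-scoring purchases go to people instead of being blocked outright. The sketch below is illustrative only; the class, threshold, and scores are invented for this example and are not FICO's actual system.

```python
from dataclasses import dataclass, field

@dataclass
class FraudMonitor:
    """Illustrative sketch of human-in-the-loop fraud review.

    Normally, transactions scoring above block_threshold are blocked.
    When human monitors set review_mode (a temporary adjustment, as in
    the article), those transactions are queued for review instead.
    """
    block_threshold: float = 0.9
    review_mode: bool = False
    review_queue: list = field(default_factory=list)

    def decide(self, tx_id: str, fraud_score: float) -> str:
        if fraud_score < self.block_threshold:
            return "approve"
        if self.review_mode:
            # Temporary adjustment: route to humans rather than block spending.
            self.review_queue.append(tx_id)
            return "review"
        return "block"

monitor = FraudMonitor()
monitor.review_mode = True          # experts notice a surge of false positives
print(monitor.decide("tx1", 0.95))  # review, not block
print(monitor.decide("tx2", 0.40))  # approve
```

The key design point is that the model's score is kept, but the *action* taken on a high score is under human control, so a misbehaving model cannot deny purchases at scale on its own.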
But these kinds of corporate teams are not that common, Reuters reports. Last year, FICO and the business advisory company McKinsey & Company carried out separate studies on the subject. They found that most organizations involved in the study were not closely watching their AI-based programs.
Experts say AI systems mainly make mistakes when real-world situations differ from the situations used in creating the intelligence. In FICO's case, it said its software expected more in-person than online shopping. This led the system to identify a greater share of financial activity as problematic.
Seasonal differences, data-quality changes or extremely unusual events – such as the pandemic – can lead to a series of bad AI predictions.
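The failure mode described here, real-world data drifting away from the data used to create the system, can be caught with even a crude statistical check. The sketch below uses made-up numbers for the daily share of online (versus in-person) purchases and raises an alert when the live average moves many training standard deviations away from the training average.

```python
import statistics

def drift_alert(train_values, live_values, z_threshold=3.0):
    """Crude drift signal: how far has the live mean moved from the
    training mean, measured in training standard deviations?"""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    live_mu = statistics.mean(live_values)
    z = abs(live_mu - mu) / sigma
    return z > z_threshold, z

# Hypothetical daily share of purchases made online:
train = [0.30, 0.32, 0.28, 0.31, 0.29, 0.30, 0.33]   # pre-pandemic era
pandemic = [0.55, 0.60, 0.58, 0.62, 0.57]            # lockdown era

alert, z = drift_alert(train, pandemic)  # alert is True: a large shift
```

A real monitoring setup would track many features and use more robust statistics, but the principle is the same: flag the shift to humans *before* trusting the model's output on data it was never built for.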
Aleksander Madry is the director of the Center for Deployable Machine Learning at the Massachusetts Institute of Technology. He told Reuters the pandemic must have been a "wake-up call" for businesses not closely monitoring their AI systems. This is because AI mistakes can cause huge problems for businesses that do not effectively manage the systems.
"That's what really stops us currently from this dream of AI revolutionizing everything," Madry said.
The issue has taken on new urgency because the European Union plans to pass a new AI law as soon as next year. The law would require companies to do some observation of their AI systems. Earlier this month, the U.S. administration also proposed new guidelines aimed at protecting citizens from the harmful effects of AI. In the guidelines, U.S. officials called for observers to ensure AI system "performance does not fall below an acceptable level over time."
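The guidelines' idea of performance not falling "below an acceptable level over time" maps naturally onto a sliding-window accuracy monitor. The floor and window size below are invented for illustration; they are not values from any regulation or guideline.

```python
from collections import deque

class PerformanceWatch:
    """Illustrative sketch: track accuracy over the most recent labeled
    cases and raise an alarm when it drops below a chosen floor."""

    def __init__(self, floor: float = 0.90, window: int = 10):
        self.floor = floor
        self.outcomes = deque(maxlen=window)  # True = prediction was correct

    def record(self, correct: bool) -> bool:
        """Record one labeled outcome; return True if the windowed
        accuracy has fallen below the acceptable floor."""
        self.outcomes.append(correct)
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.floor
```

Because the window slides, a burst of recent mistakes triggers the alarm even if the system performed well historically, which is exactly the "over time" concern the guidelines raise.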