包阅 Guided Reading Summary
1. Keywords: AI, algorithms, explainability, decision-making, restrictions
2. Summary: This article considers the concerns raised by the growing deployment of AI tools and argues that the right response is not to restrict the use of algorithms and AI, but to require that software decisions be explainable. It uses a case study to show the harm of unexplained decisions, notes that current AI systems struggle to explain their decisions, argues that human decisions must be explainable independently of AI suggestions, and suggests that AI can serve as a tool for understanding decision-making processes.
3. Key points:
– The deployment of AI tools raises concerns about how its decisions affect our lives, for example in social media feeds and business decisions.
– This has led to calls to restrict the use of algorithms, such as limiting how social media networks generate feeds for children.
– The author argues that restricting algorithms and AI is not the right target; instead, software decisions should be required to be explainable.
– A car-rental overcharge illustrates the harm caused by unexplained decisions.
– If social media feeds are to be regulated, companies should have to explain to users why posts appear in their feeds.
– Current AI tools struggle to explain their decisions, which justifies restricting their use until explainability improves.
– Not every software decision needs a detailed explanation, but explainability should be considered when looking into disputes.
– AI can suggest options, but the human decision must still be explainable.
– AI can serve as a tool for better understanding decision-making processes.
Article URL: https://martinfowler.com/articles/2024-restrict-algorithm.html
Source: martinfowler.com
Author: Martin Fowler
Published: 2024/7/30 15:57
Language: English
Word count: 805
Estimated reading time: 4 minutes
Score: 89
Tags: AI transparency, explainable AI, algorithmic decision-making, human-computer interaction, regulatory approaches
The original article follows
The steady increase in the deployment of AI tools has left a lot of people concerned about how software makes decisions that affect our lives. One example is the “algorithmic” feeds in social media that promote posts that drive engagement. A more serious impact can come from business decisions, such as how much premium to charge for car insurance. This can extend to legal decisions, such as suggesting sentencing guidelines to judges.
Faced with these concerns, there is often a movement to restrict the use of algorithms, such as recent activity in New York to restrict how social media networks generate feeds for children. Should we draw up more laws to fence in the rampaging algorithms?
In my view, restricting the use of algorithms and AI here isn’t the right target. A regulation that says a social media company should forego its “algorithm” for a reverse-chronological feed misses the fact that a reverse-chronological feed is itself an algorithm. Software decision-making can lead to bad outcomes even without a hint of AI in the bits.
The general principle should be that decisions made by software must be explainable.
When a decision is made that affects my life, I need to understand what led to that decision. Perhaps the decision was based on incorrect information. Perhaps there is a logical flaw in the decision-making process that I need to question and escalate. I may need to better understand the decision process so that I can alter my actions to get better outcomes in the future.
A couple of years ago I rented a car from Avis. I returned the car to the same airport that I rented it from, yet was charged an additional one-way fee that was over 150% of the cost of the rental. Naturally I objected to this, but was just told that my appeal against the fee was denied, and the customer service agent was not able to explain the decision. As well as the time and annoyance this caused me, it also cost Avis my future custom. (And thanks to the intervention of American Express, they had to refund that fee anyway.) That bad customer outcome was caused by opacity: refusing to explain their decision meant they weren’t able to realize they had made an error until they had probably incurred more costs than the fee itself. I suspect the error could be blamed on software, but it was probably too early for AI. The mechanism of the decision-making wasn’t the issue; the opacity was.
So if I’m looking to regulate social media feeds, rather than ban AI-driven algorithms, I would say that social media companies should be able to show the user why a post appears in their feed, and why it appears in the position it does. The reverse-chronological feed algorithm can do this quite trivially; any “more sophisticated” feed should be similarly explainable.
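As a rough illustration (my own sketch, with hypothetical names such as Post and FeedItem, not anything the article specifies), an explainable feed could attach to every item the reason it was selected and the reason it sits at its position; the reverse-chronological case shows how cheaply that explanation comes when the rule is explicit:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical structures for the sketch.
@dataclass
class Post:
    post_id: str
    author: str
    created_at: datetime

@dataclass
class FeedItem:
    post: Post
    position: int
    explanation: str  # why this post appears, and why at this position

def reverse_chronological_feed(posts: list[Post]) -> list[FeedItem]:
    """The 'trivial' case: the explanation falls directly out of the rule."""
    ordered = sorted(posts, key=lambda p: p.created_at, reverse=True)
    return [
        FeedItem(
            post=p,
            position=i + 1,
            explanation=f"You follow {p.author}; posts are shown newest first "
                        f"(posted {p.created_at.isoformat()}).",
        )
        for i, p in enumerate(ordered)
    ]
```

Whatever ranking a “more sophisticated” feed uses, the claim is that it should be able to fill in the same explanation field for every position it assigns.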
This, of course, is the rub for our AI systems. With explicit logic we can, at least in principle, explain a decision by examining the source code and relevant data. Such explanations are beyond most current AI tools. For me this is a reasonable rationale to restrict their usage, at least until developments to improve the explainability of AI bear fruit. (Such restrictions would, of course, happily incentivize the development of more explainable AI.)
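To make the contrast concrete, here is a minimal sketch (my own example, echoing the rental-fee story above; the names RentalReturn and assess_one_way_fee are hypothetical) of explicit logic that produces its own explanation trail: every rule that fires is recorded along with the data it examined, so the decision can be reconstructed afterwards.

```python
from dataclasses import dataclass, field

@dataclass
class RentalReturn:
    pickup_location: str
    return_location: str
    rental_cost: float

@dataclass
class FeeDecision:
    one_way_fee: float = 0.0
    trace: list[str] = field(default_factory=list)  # human-readable audit trail

def assess_one_way_fee(r: RentalReturn) -> FeeDecision:
    decision = FeeDecision()
    decision.trace.append(
        f"Pickup={r.pickup_location!r}, return={r.return_location!r}"
    )
    if r.return_location == r.pickup_location:
        decision.trace.append("Same location: no one-way fee applies.")
    else:
        decision.one_way_fee = round(1.5 * r.rental_cost, 2)
        decision.trace.append(
            f"Different locations: one-way fee of {decision.one_way_fee} charged "
            f"(150% of rental cost {r.rental_cost})."
        )
    return decision
```

With a trace like this, a customer service agent could at least point to which rule fired and on what data; an opaque statistical model offers no equivalent artefact to inspect, which is the gap that work on explainable AI is trying to close.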
This is not to say that we should have laws saying that all software decisions need detailed explanations. It would be excessive for me to demand a full pricing justification for every hotel room I want to book. But we should consider explainability as a vital principle when looking into disputes. If a friend of mine consistently sees different prices than I do for the same goods, then we are in a position where justification is needed.
One consequence of this limitation is that AI can suggest options for a human to decide, but the human decider must be able to explain their reasoning irrespective of the computer suggestion. Computer prompting always introduces the danger that a person may just do what the computer says, but our principle should make clear that is not a justifiable response. (Indeed, we should consider it a smell if a human agrees with computer suggestions too often.)
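One way to operationalize this (a sketch under my own assumptions; the article doesn’t prescribe any mechanism) is to record the human decision separately from the AI suggestion, require a rationale that stands on its own, and track how often the human simply agrees, so that rubber-stamping shows up as the “smell” mentioned above.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical structures: the suggestion may come from an opaque model,
# but the human decision must carry its own rationale.

@dataclass
class AiSuggestion:
    recommended_option: str
    model_name: str

@dataclass
class HumanDecision:
    chosen_option: str
    rationale: str                          # must stand on its own
    suggestion: Optional[AiSuggestion] = None

def record_decision(chosen: str, rationale: str,
                    suggestion: Optional[AiSuggestion] = None) -> HumanDecision:
    if not rationale.strip():
        raise ValueError("every decision needs a human-supplied rationale")
    return HumanDecision(chosen, rationale, suggestion)

def agreement_rate(decisions: list[HumanDecision]) -> float:
    """High values are the 'smell': the human may just be doing what the computer says."""
    with_suggestion = [d for d in decisions if d.suggestion is not None]
    if not with_suggestion:
        return 0.0
    agreed = sum(1 for d in with_suggestion
                 if d.chosen_option == d.suggestion.recommended_option)
    return agreed / len(with_suggestion)
```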
I’ve often felt that the best use of an opaque but effective AI model is as a tool to better understand a decision making process, possibly replacing it with more explicit logic. We’ve already seen expert players of go studying the computer’s play in order to improve their understanding of the game and thus their own strategies. Similar thinking uses AI to help understand tangled legacy systems. We rightly fear that AI may lead to more opaque decision making, but perhaps with the right incentives we can use AI tools as stepping stones to greater human knowledge.