The Uses and Need for Interpretable AI
A popular topic in the deep learning research space is interpretable AI: AI where you can understand why it made a decision. It's a growing field as AI creeps into more industries, where not just accuracy matters but also the reasoning behind a decision. This is especially important where the consequences of a decision are critical, like medicine and law. A Wired article highlighted this with an AI that flagged patients for check-ups while the doctors had no clue why it flagged them. Some doctors took it upon themselves to learn a bit about the AI so they could guess what it was "thinking" when it made a decision. Not having the AI's reasoning led to disagreements between the nurses and doctors, so an explainable AI would have helped in that scenario.
Interpretable AI can also help discover bias in the AI. When AI was used in criminal law, many models tended to make harsher judgements of Black people than of white people. An explainable AI would have made it much clearer that the model was being driven by a person's race. Done correctly, explainable AI can give us peace of mind: instead of a black box like now, which just spits out a number, it can list the variables that affected the person's sentence and tell us why it spat out that number.
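To make the "list the variables" idea concrete, here is a minimal sketch in Python. It uses an inherently interpretable model (logistic regression) on synthetic data with made-up feature names; the only point is that the per-variable contribution to a decision can be read off directly, so a heavily weighted race feature would be visible immediately.

```python
# Minimal sketch: a linear "risk score" model whose weights can be read
# directly, unlike a black box. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["prior_offenses", "age", "employment", "race_encoded"]
rng = np.random.default_rng(0)

# Fake training data standing in for historical sentencing records.
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Instead of only a score, list how much each variable pushed the decision.
person = X[0]
contributions = model.coef_[0] * person
for name, value in sorted(zip(feature_names, contributions),
                          key=lambda pair: -abs(pair[1])):
    print(f"{name:>15}: {value:+.2f}")
```

If `race_encoded` keeps showing up near the top of that list, you have found your bias without needing to reverse-engineer anything.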
To be frank, I only have a surface-level understanding of the topic. I have only read one or two light papers, so more reading will be needed, and implementation too. But I think interpretable AI can be very useful for many AI systems, as I explained above.
In one video I watched, the speaker said she and her team used interpretable models to debug their models and were able to increase accuracy significantly. We could do the same. Debugging deep learning models is hard because of their black-box nature; an interpretable model can shine a light on them and help us improve them even more. In an unreleased post, I wrote about how interpretable AI can make the recommendation systems used by the major tech companies more transparent, leading to more understanding by users and other stakeholders like regulators. This can help people identify harmful rabbit holes like conspiracy videos and anti-vax content.
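On the debugging point: I don't know exactly what her team's workflow looked like, but a common version of this is to run a model-agnostic explanation such as permutation importance and look for features that matter far more than they plausibly should. A rough sketch with synthetic data and a deliberately leaked feature:

```python
# Minimal sketch: using permutation importance to debug a model.
# A suspiciously dominant feature (here a leaked copy of the label)
# jumps out immediately. All data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 4))
y = (X[:, 0] > 0).astype(int)
X[:, 3] = y + rng.normal(scale=0.01, size=400)  # accidental label leakage

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
# feature_3 dominating everything is the clue that the data is broken.
```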
With a better understanding of why a tech service is recommending them stuff, users can take action to change the situation or keep it as it is. Using that information, the tech company could also add features to stop a user from falling too deep into an echo chamber, like adjusting the suggested videos toward more moderate content, or toward videos with different views from the echo chamber the user is in. Or maybe nudges saying, "you've been watching the same type of videos for a while, try something else."
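As a toy illustration of the nudge idea, something like this could sit on top of whatever the explanation system reports about a user's recent recommendations. The topic labels and the threshold are made up; the real signal would come from the interpretable model, not a hand-written counter.

```python
# Minimal sketch: nudge a user when one topic dominates their recent history.
from collections import Counter

def echo_chamber_nudge(recent_topics, threshold=0.8):
    """Return a nudge message if a single topic dominates the recent history."""
    if not recent_topics:
        return None
    topic, count = Counter(recent_topics).most_common(1)[0]
    if count / len(recent_topics) >= threshold:
        return (f"You've been watching a lot of '{topic}' videos lately. "
                "Want to try something different?")
    return None

history = ["conspiracy"] * 9 + ["cooking"]
print(echo_chamber_nudge(history))
```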
Interpretability can also help identify videos that are going viral in certain areas, especially if the area is problematic. If you see a video in conspiracy-theory land gaining traction, you can see how and why the algorithm is recommending it. From there, the tech company can decide to apply a circuit breaker to the video or let it run its course.[1] This may be better than trying to outright ban topics on your service, given the whack-a-mole effect.
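A circuit breaker here could be as simple as a rate-of-growth check on content the interpretable system has flagged. This is purely a sketch with invented numbers and thresholds, not how any platform actually does it:

```python
# Minimal sketch of a recommendation "circuit breaker": if a flagged video's
# recommendations grow faster than a threshold hour over hour, stop
# amplifying it and send it for human review. Numbers are illustrative only.
def should_circuit_break(hourly_rec_counts, flagged, growth_limit=3.0):
    """Trip the breaker when recommendations grow too fast on flagged content."""
    if not flagged or len(hourly_rec_counts) < 2:
        return False
    previous, current = hourly_rec_counts[-2], hourly_rec_counts[-1]
    return previous > 0 and current / previous >= growth_limit

# A flagged video whose recommendations quadrupled in the last hour.
print(should_circuit_break([200, 250, 1000], flagged=True))  # True
```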
Obviously, almost all of this is automated, so the insights taken from the interpretable AI will need to be transferred into another system and factored back into the model. I don't know how one would implement that, though.
Explainable AI can also help tech companies' moderation teams. The AI can tell moderators why it decided to ban a piece of content, and on an appeal the moderator can explain to the user why they were accidentally banned and how to avoid it happening again. The moderator can also tell the AI it was wrong, so the AI gets better at its job next time around.
When videos get removed from YouTube, the platform does not tend to offer a good explanation of why. It usually sends a PR/legal email saying you violated the terms and conditions, but creators do not know which terms were violated, and some resort to complaining on Twitter to get a response from the company. While I think YouTube is partly vague because of its legal situation, a transparent AI could help YouTube show creators why the decision was made. Right now, YouTube itself may not know what happened, due to the black-box nature of the algorithm.
Interpretable AI will not solve all of technology's problems. Frankly, a lot of the problems come from a lack of government regulation and oversight. In many areas of technology there are no ground rules, so tech companies are going in blind and people are upset about whatever they do. It would be great if the legal situation changed so that YouTube could tell creators exactly how a video violated its terms of service, instead of the current cat-and-mouse game. That assumes government officials even know what they are talking about when it comes to technology. Right now I think the European Union has the best understanding, even though some of its initiatives are flawed. In the USA, the government is only now waking up to the fact that it needs to rein in big tech, but I'm not confident it has a good understanding of the technology it is dealing with. You can see this in some of the viral videos from the original Mark Zuckerberg hearings, where congressmen asked questions that lacked a basic understanding of what the internet and Facebook even are, never mind how the government should deal with data or incentivise companies to have transparent AI.
[1] Tech companies already do this to fight misinformation, but interpretable AI can make the process easier.