We need to worry about what AI can do now, not AGI
Whenever something interesting happens in the deep learning world, a few media articles always appear claiming the end is nigh, and that it's only a matter of time before we have a HAL 9000 scenario. But anybody with a little knowledge of the field knows that's not happening. With each breakthrough, the AI has not gained superhuman reasoning abilities, regardless of how it may look. This is not to say we should be sleeping on the question of AGI, but it's a question we don't have to deal with right now. Instead, we should be focusing on how AI is affecting our lives today.
Machine learning is used by all the major software companies: Google, Microsoft, and Apple. But these systems are not the AGI mentioned earlier. They are narrow AI, in which a machine learning model has been trained to complete one specific task, like recommending movies, surfacing engaging content, or returning good search results. This is where most AI progress is being made. We now have AI that can classify images with 95% accuracy, but because it is narrow AI, that same model can't do anything else, like play chess. (Getting a model trained on one task to perform another is the research area called transfer learning.) And as these machine learning models get more powerful, the consequences of their mistakes become even more serious.
Take Facebook, a company that everyone loves to hate, whether because people don't like The Zucc or because they are worried about Facebook's questionable handling of data. Facebook uses AI to promote content in people's news feeds, so that content is shown to the users most likely to engage with it. In the West, this has led to accusations that Facebook promotes polarising and inflammatory content to its user base, and that it spreads misinformation. A less talked about but much more serious case is Facebook in Myanmar. For context, Myanmar has gone through a strong wave of violence in which the Rohingya Muslims are being driven out of the country. Facebook is relevant because most Burmese internet users are on Facebook, and the platform has been used to promote hate speech in Myanmar, leading to more violence.
These are the unintended consequences of deploying a machine learning model. There are also many questions about how Facebook operates around it, like outsourcing content moderation to moderators who don't understand Burmese, and who are low-paid or lack the incentives to deal with bad posts correctly. This is what can happen when a machine learning model runs without any oversight or regulation.
This is a very nuanced issue. In the West (America), it is framed as a question of free speech versus speech with consequences, and Facebook is not accused of helping to promote violence there; the concerns are less serious, like data privacy violations or the promotion of partisan content. Even the partisan/polarising content is more nuanced than it looks. There is research suggesting that social media may not be the sole factor, and that polarisation may be mainly an older-generation phenomenon: in America, the cable news networks typically set the narrative and then push their content onto social media.
Amazon had to scrap an AI used for hiring after it was found to discriminate against women and people of colour. The developers obviously did not do this intentionally, but the AI learned from its training data that the candidates Amazon had historically hired tended to be white and male. That is not a fair algorithm, as it does not give everyone an equal chance.
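To make the mechanism concrete, here is a minimal sketch of how a model picks up bias from skewed training data. The résumés, tokens, and hiring labels are entirely made up for illustration; this is not Amazon's actual system, just a toy frequency-based "model".

```python
# Toy illustration: a model trained on historically skewed hiring data
# learns to penalise tokens associated with women. All data is invented.
from collections import Counter

# (résumé tokens, was_hired) — past hires in this toy dataset skew male.
training_data = [
    ({"java", "chess_club"}, True),
    ({"python", "chess_club"}, True),
    ({"java", "football"}, True),
    ({"python", "womens_chess_club"}, False),
    ({"java", "womens_chess_club"}, False),
]

# "Training": score each token by how often it co-occurs with a hire.
hired_counts, total_counts = Counter(), Counter()
for tokens, hired in training_data:
    for token in tokens:
        total_counts[token] += 1
        hired_counts[token] += hired

def token_score(token: str) -> float:
    """Fraction of résumés containing this token that led to a hire."""
    return hired_counts[token] / total_counts[token]

# The gendered token gets a low score purely because past hires were
# mostly male — the bias is learned from the data, not programmed in.
print(token_score("chess_club"))         # → 1.0
print(token_score("womens_chess_club"))  # → 0.0
```

No one wrote a rule that penalises women; the model simply reproduced the pattern in its training data, which is exactly why "the developers did not intend it" is no defence.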
Another example comes from Facebook (and other social media companies too): when disinformation spreads through the user base, real-life consequences follow. The famous case is anti-vaxxers, parents denying their children vaccination because they believe it is harmful, which has led to measles outbreaks in numerous areas, for a disease that was supposed to be eradicated. More recently, disinformation about the coronavirus has become popular, leading people to ignore their local health guidelines. But as I mentioned earlier, this is a very nuanced issue. People saying that the coronavirus is fake or is caused by 5G make a good case for getting banned. But people discussing emerging science, or areas where the evidence is not yet clear, make it hard for tech companies to impose a blanket ban. For example, at the beginning of the pandemic, the evidence for face masks was underwhelming; later, as the pandemic went on, more research showed that masks are an effective way to deal with the virus. Before face masks became official guidance, there were people promoting them anyway. Should those people have been banned?
This shows the problem of regulating speech on social media platforms, and those are just the philosophical and moral problems. Even if you solved all of them overnight and built an AI that correctly identified problematic speech 99% of the time, you would still have problems because of the sheer scale involved: scanning hundreds of millions of pieces of content at a 1% error rate means millions of mistakes. So dealing with a huge amount of data is extra difficult. On top of that, AI tends to be a black box: we know the inputs and the outputs, but not how the machine learning model arrived at its decision. Addressing this is the growing field of interpretable AI, which aims to give us an idea of why a machine learning model makes the decisions it does.
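The scale problem above is just arithmetic, and it is worth seeing the numbers. The volume figure below is a hypothetical stand-in for "hundreds of millions of posts", not a real platform statistic.

```python
# Error volume at scale: even a very accurate classifier produces a
# huge absolute number of mistakes on platform-sized data.

def expected_errors(items_scanned: int, accuracy: float) -> int:
    """Expected number of items the model misclassifies."""
    return round(items_scanned * (1 - accuracy))

daily_posts = 300_000_000  # hypothetical daily moderation volume
for accuracy in (0.99, 0.999, 0.9999):
    errors = expected_errors(daily_posts, accuracy)
    print(f"{accuracy:.2%} accurate -> {errors:,} mistakes per day")
# → 99.00% accurate -> 3,000,000 mistakes per day
# → 99.90% accurate -> 300,000 mistakes per day
# → 99.99% accurate -> 30,000 mistakes per day
```

Even at 99.99% accuracy, a platform at this scale would make tens of thousands of moderation mistakes every day, each one a wrongly banned user or a harmful post left up.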
All of this is why we have to deal with AI right now, not AGI, which may or may not happen and is in any case far in the future. I once heard a saying that worrying about AGI is like worrying about overpopulation on Mars: it might be a problem one day, but it is not something we need to deal with now. The machine learning models we have today already have far-reaching consequences, from polarising content to public health problems and violence. It is better to fix the issues we are seeing right now than to spend our time on a future that may never happen, at the expense of the problems of the present.
Sources
- https://www.fast.ai/2018/04/19/facebook/
- I can’t find the original source (I remember reading it in a Stratechery article) on social media and polarisation, so here are some links discussing it: https://www.ox.ac.uk/news/2018-02-21-social-media-and-internet-not-cause-political-polarisation, https://www.brown.edu/Research/Shapiro/pdfs/age-polars.pdf