Machine learning for a moon base
This blog post is going to be slightly off the beaten path. It is still in the realm of technology and science, which is the stuff I normally write about, but I decided to put a twist on it. Recently I was watching a few videos about space, namely how we would build a moon base, and I wondered how AI could fit into the mix. So this blog post asks that question.
Before we land humans on the moon, we need to find sites that are suitable for human occupation. Luckily we are already doing this, with satellites and rovers scanning the surface of the moon for areas that will be nice to humans: places close to water, and areas that are suitable to be built on. The water is important because we humans need water to live, and taking water from Earth to the moon increases the price of the rocket launch because of the extra weight. Also, water can be used to make rocket fuel on the moon, by extracting hydrogen and oxygen, which reduces the cost of the rocket launch.
The satellites take radar and other measurements of the surface, looking at different signals. Certain types of results indicate water (or ice, as the moon is pretty cold). As that data comes in, humans on Earth can plot it onto a map of the moon. Maybe ML can speed up that process, by having the moon mapped beforehand and using coordinates to plot the water spots onto the map. Some results are better than others, so the ML model can attach a probability score to how likely it is that there is water at each spot. That can help space agencies plan future missions, as they can pick the spot with the best chance of water.
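As a rough sketch of that idea, a classifier can attach a water probability to each candidate spot. Everything here is illustrative: the feature names and data are made up, not real lunar measurements.

```python
# Hypothetical sketch: scoring candidate landing sites for the
# likelihood of water ice. All features and data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic training data: each row is a surveyed spot with made-up
# (radar_reflectance, neutron_count, surface_temp); label 1 = ice found.
X_train = rng.random((200, 3))
y_train = (X_train[:, 0] + X_train[:, 1] > 1.0).astype(int)  # toy labelling rule

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Score new candidate sites: probability of water at each spot.
candidate_sites = rng.random((5, 3))
water_prob = model.predict_proba(candidate_sites)[:, 1]
best_site = int(water_prob.argmax())
print(f"Site {best_site} has the highest water probability: {water_prob[best_site]:.2f}")
```

A mission planner could then rank sites by that probability, rather than treating every detection as equally certain.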
Most likely, rovers will scout an area before humans land there, so AI will be involved. Like Curiosity, they may come with pre-loaded instructions: check if the ground is suitable for 3D printing (as there is a need to use the materials on the moon to create the moon base), verify if there is water in the area, track how much radiation hits the area. Tons of stuff to make sure humans are safe when they land.
People say that robots may build most of the moon base before humans land, which makes sense. So other robots will be doing the 3D printing and fetching supplies like water, so that when humans land they have a safe place to sleep in. The robots will probably still be there doing the work of creating more buildings for the moon base, or creating other things like bricks. Humans, I guess, will start work on making the moon base usable: adding wiring and lights, setting up the internet, and setting up a vertical farm so they have fresh food to eat.
Capitalism and personality worship
I watched a YouTube video about Elon fanboys. The video was from my favourite angry YouTuber, Buckley. He said the worst fanbases on the internet are BTS stans and Elon fanboys, which is something I can wholeheartedly get behind.
I listen to K-pop, but I hate the drama of K-pop stans. I love the innovation that Elon Musk's companies are doing, as a person with a great interest in entrepreneurship and science. But the Elon fanboys are annoying. Some make YouTube videos about his companies, and some of those are very good, but after a while it gets boring fast, as every video is the host explaining how much of a genius Elon Musk is. When I subscribed to any of these channels, every decision made by Tesla got a video about how the announcement was a 200 IQ play, and how we mere souls could not comprehend it. The only one I'm still subscribed to is HyperChange, as he is a fantastic financial analyst. While I don't watch all his videos, for the reasons I mentioned above, he does good interviews about industries of the future like battery mining or electric cars.
I always wondered why Elon has fanboys. I understand why people view him as aspirational; that's due to his world-beating achievements. But that does not explain the rabid fanbase online, where any slight criticism of Elon is a personal attack, and people go out of their way to show you how you're not as smart or rich as Elon.
I think this has to do with our values in society. In our capitalistic world, people who have wealth are viewed favourably. And as money is the standard yardstick for measuring people's worth, we look up to the people with the highest yardstick. As people want to improve their own yardstick, they try to emulate some of the details, which makes them feel like they are doing something about it. But it tends to be superficial actions, not the work of starting a business and researching how to make a product that will sell.
This reminds me of the quote that humans make gods in their image. As capitalism is one of the dominant forces of our society today, the gods we look up to will be the ones who embody its values. People have argued, rightly, that celebrities are modern-day gods, due to the excessive love and adulation they receive, like massive crowds and autograph hunting. We do not treat them as people but as objects to fawn over.
What I think separates Elon Musk from other rich, famous people is that he has tangible proof that he is changing the future, which people look up to, and which had been lacking for a long time, probably since the financial crisis. So people feel he is the only one helping us make a better future, and anybody that goes against that is a bad person. And making a better future via capitalism is important in many people's eyes. Hence the worship of his work ethic and his wealth.
His marketing does a good job of getting people interested in what he's doing. And it may be slightly polarising: people with a propensity for Elon's antics will love him; people that don't will turn away. But like religion, you need to wrap it under the banner of the common good, and have a good way to clearly define in-groups and out-groups.
Celebrity worship started to rise with popular TV (see Neil Postman); this is just the latest iteration. Social media turns people into polarising characters, which develops a cult following around the character. Elon's businesses tend to be highly visual: electric cars and rockets. In an image-based society that gains traction, as they are highly visible items; a faster car is more visual than a faster computer. This holds even within Elon's own companies. With PayPal, he wasn't as famous as he is now, as PayPal was just a payment processor, which is fantastic but not as visual compared to his current companies, and it did not have a cult following.
Improving accuracy when adding new data to a machine learning model
“ML model having recall of .97 and precision of .93 and accuracy of .95 on test data, but on completely new data it doesn't give good results. What could be the possible reason?” – from Reddit
I have seen this too many times. Your model looks perfect, with high scores and somewhat low inference time. But then you add new data to test how it would fare, and the results are poor at best. So you start to wonder: what's wrong with my model? Or maybe it's my data?
This is a case of overfitting: the model has over-learned the data from its training phase.
To fix this, you want to make sure your data is set up correctly. Split your dataset into training data and testing data, and, depending on your preference, add a validation set as well.
Now start training your model using the training data. It should learn enough to develop a general pattern of the data.
Now check using the test data. If your test score is good, then half of the problem is solved. You then want to use the validation dataset to help tune your hyperparameters.
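A minimal sketch of that split, using scikit-learn on a synthetic dataset. The proportions here (60/20/20) are just one common choice, not a rule:

```python
# Sketch of a train/validation/test split, plus the classic
# overfitting check: comparing train vs test scores.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)

# First carve out the test set, then split the rest into train/validation.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# A large gap between train and test accuracy is the classic sign of overfitting.
print(f"train accuracy: {model.score(X_train, y_train):.2f}")
print(f"test accuracy:  {model.score(X_test, y_test):.2f}")
```

If the train score is near perfect while the test score lags well behind, the model has memorised the training data rather than learned a general pattern.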
If the new data is still giving poor results, then you may want to hunt for mistakes in the model or data.
First things first: simplify your model. Find the simplest model that can deal with your problem.
Second, turn off any extra features like batch normalization or dropout.
Third, verify your input data is correct.
On a separate note, make sure the new data you're adding to the model is correct as well. Sometimes we make minor mistakes, like forgetting to apply the same pre-processing when using a separate piece of data.
Doing this should remove any bits of your model that are adversely affecting your results. Checking the input data and the test data is a simple double-check; an error in the data can go unnoticed and may be affecting your model, and this gives you a chance to spot those errors.
Hopefully, by doing the steps above, the issue should be fixed. If not, go to the article I linked above and work through all the steps in that article.
How to find similar words in your Excel table or database
I was reading a post about a person who had a problem mapping data from an Excel table to a database. It can be tedious to transfer data between a “cats” field and a “cat” field.
While I'm not an expert in NLP at all, from googling around it seems it can somewhat be done.
First, you want to move the words you have into a separate text file.
If you have past data, put it into two separate files: one for the original data and one for the destination data.
For example:
original data: mDepthTo
destination data: Depth_To
Then comes pre-processing. You want to remove miscellaneous characters and punctuation, so get rid of a couple of those underscores. To make life easier for yourself, turn the data into a uniform case. The NLTK library is good at this.
After that, you want to encode those words into vectors. Try TF-IDF. You can use it with scikit-learn, so you don't need to install any extra modules.
A brief explanation of TF-IDF
“TF-IDF is a statistical measure that evaluates how relevant a word is to a document in a collection of documents. This is done by multiplying two metrics: how many times a word appears in a document, and the inverse document frequency of the word across a set of documents.” – https://monkeylearn.com/blog/what-is-tf-idf/
Now we want to work out the similarity between the vectors. You can use cosine similarity, as that's the most common technique. Again, scikit-learn has this, so you can try it out easily.
Cosine similarity is a metric used to measure how similar the documents are irrespective of their size. Mathematically, it measures the cosine of the angle between two vectors projected in a multi-dimensional space.
Now we have made some progress on word similarity, as you can compare the words in your two text files.
For testing, you may want to save some examples so you can use them to evaluate the NLP model. Maybe you can create a custom metric for yourself, measuring how closely the model was able to match the destination data.
Most of the ideas came from this Medium article; I just tried to adapt them to this problem.
You should check it out. They know what they are talking about when it comes to NLP.
Summary:
1. Save data into separate text files
2. Pre-process the data. (Punctuation, odd characters etc)
3. Encode data with TF-IDF
4. Get word similarity with cosine similarity
5. Create a metric to check whether the model maps data correctly
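The steps above can be sketched in a few lines of scikit-learn. The field names “mDepthTo”/“Depth_To” and “cats”/“cat” are the examples from this post; the character n-gram settings are my own assumption, chosen because short field names don't split into many whole words:

```python
# Sketch of steps 2-4: normalise the field names, encode them with
# TF-IDF over character n-grams, then match on cosine similarity.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

original = ["mDepthTo", "cats"]
destination = ["Depth_To", "cat"]

def preprocess(name):
    # Step 2: strip punctuation/underscores and lowercase.
    return re.sub(r"[^a-z0-9]", "", name.lower())

# Step 3: character n-grams suit single field names better than whole words.
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(2, 3))
vectors = vectorizer.fit_transform([preprocess(n) for n in original + destination])

orig_vecs = vectors[: len(original)]
dest_vecs = vectors[len(original):]

# Step 4: for each original field, pick the most similar destination field.
similarity = cosine_similarity(orig_vecs, dest_vecs)
for i, name in enumerate(original):
    match = destination[similarity[i].argmax()]
    print(f"{name} -> {match}")
# prints "mDepthTo -> Depth_To" and "cats -> cat"
```

Step 5 would then be comparing these predicted matches against your saved examples to see how often the mapping is right.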
Personal datastores
There has been talk about having people own their own data, the same data used by tech companies. This sounds like an interesting idea. I read in The Economist that there may be an EU internal market for data, in which EU citizens can choose where their data should be stored; so if a German person wants their data stored in France, they can. With the increasing popularity of governments wanting to regulate tech companies, ownership of one's data may be on the table. I'm going to be honest, I haven't read too much on the issue, so I'm just spitballing.
Personal datastores can be very useful, especially for interoperability, which means users can move their data to different competitors. The EU has been thinking of mandating something like this. But it's difficult, because what data should be used for what? I think I have written about this before somewhere, but TikTok is different from Facebook, and YouTube is different from Twitter, so the data being moved is not one-to-one nor the same. How would one fix that? I don't know.
I think allowing data to be exported is something future tech companies will find useful, as they can design their products to take advantage of the feature.
Also, a lot of data about you is held by third-party companies, like data brokers and other advertising companies. How would you get your data from them? They don't have a website where you can access information about you. The data comes in pieces, which tech companies stitch together to get a whole picture of you. This makes it very difficult to collect all the data about you that is out there.
Also, while I think having a datastore is a great idea, most people frankly won't care about it; they just want to watch good YouTube videos and see good memes on Facebook. But I do think people should have a choice in how their data is used, and whether they can transfer it to other services. As we are the ones being monetised, it makes sense that we should have more control over our information, and maybe be compensated for our data. Maybe this can change some of the incentives for tech companies: a company may feel less of a need to hoover up data, or move to a business model where monetising data through adverts is less useful.
If the EU does go ahead with this, I hope they write the laws correctly, to avoid bad unintended consequences and not hurt the smaller tech companies. The EU focuses mainly on curbing American tech companies, sometimes forgetting that there are other tech companies that need to follow the same laws. GDPR is a great start for giving people more options with their data, but it is flawed. Big tech companies can pay major fines, and appeals cost a fortune; a small tech company can't afford the lawyers that Facebook or Google has, or even the money to pay the fine. This is why some websites were blocked for EU users for a long time: they knew they couldn't follow the GDPR guidelines to a tee.
Beneath all the hype, self-driving vehicles may prove useful, but not in the way you think
I was watching an episode of the Bloomberg Quicktake series called Hello World. If you haven't watched the series yet, start now.
In this episode, they talked about self-driving spraying vehicles: big contraptions which drove around a farm and sprayed the plants while moving along. The vehicle uses normal self-driving sensors, like LIDAR, cameras, and GPS, and for extra comfort it has a detailed map of the farm. Due to the automation, one person can manage five of the vehicles using their laptop. If there is a problem, the vehicle sends a notification to the device, so the human can go check up on it.
After the episode ended, they did a quick review of it, and Ashlee talked about how he hadn't believed something like that existed. He only knew about it because some person on Instagram gave him information about the company.
An interesting mention about the encounter: compared to Silicon Valley people, they were less braggadocious about how they made their product and about their achievements, and they didn't hype to the moon how much their device can do. The second thing is that they are selling these devices around the US, and soon internationally. This is fantastic compared to other self-driving cars from the tech companies, which have sold close to none, with billions of dollars to play with.
So self-driving vehicles are useful, just not in the traditional way. Not in the driving-you-to-the-grocery-shop-and-back way, but in the collecting-blueberries-on-a-farm way, as in the documentary.
In my opinion, they did not talk enough about the engineering feat of the vehicle. They needed to create a vehicle that sprayed on demand, an engine that could power it, and all the other sensors on the device working together. That, I think, is better engineering work than the tech companies, who stick a few cameras on the top of a car and call it a day. The video showed they manufacture the vehicle in-house; the only tech company that does that is Tesla. So this company is likely way ahead of many companies in Silicon Valley.
The only difference? They don't spend millions of dollars on hype and marketing, but on innovating in their sector.
And only people in the agriculture space will know about it. Speaking of agriculture technology: in the end interview, the host talked about a friend of hers that used robots to milk cows. She said the robot used lasers, and the robot works out how to milk the cow from there.
Ashlee gave another example: he knew of a company in Idaho which had a robot collect rocks for them. This is important because of the sheer amount of land; over time the ground churns up rocks, so they need to remove the rocks from the land to plant the crops properly.
From what I can see there is a lot of movement going on in the AgriTech space. They don't generate the same amount of hype as a software Silicon Valley firm, but they may do even more important stuff, compared to an app that helps you get snacks to your house cheaper.
Cloud comparison idea
While training your ML models, have you ever wondered how much the same training session would have cost on a different cloud provider?
If you read this blog, you have likely used a cloud provider before. Maybe to train your models or deploy your projects.
Cloud providers always tell you they are the best place to train your models. But is that actually true?
Companies like Oracle tout that they are cheaper than AWS. Google Cloud boasts about its features. Same with Azure.
But are the features actually the same thing, or fundamentally different?
I have an idea that may answer that question.
I’m thinking of creating a product that compares the costs and features between the major cloud service providers.
Imagine seeing a simple table which gives you the prices across cloud providers.
Or a simple table to compare features across providers.
(Figure 2: http://comparecloud.in/)
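A first version of that table could be as simple as a small DataFrame. To be clear, every number below is a placeholder for illustration, not real cloud pricing, and the GPU type is just an example:

```python
# Sketch of the comparison-table idea. All prices are PLACEHOLDER
# numbers for illustration only, not real cloud pricing.
import pandas as pd

gpu_prices = pd.DataFrame(
    {
        "provider": ["AWS", "Google Cloud", "Azure", "Oracle"],
        "gpu_type": ["V100", "V100", "V100", "V100"],
        "usd_per_hour": [3.00, 2.50, 3.10, 2.40],  # placeholder rates
    }
)

# Estimate what a 10-hour training run would cost on each provider.
gpu_prices["training_run_usd"] = gpu_prices["usd_per_hour"] * 10
print(gpu_prices.sort_values("training_run_usd"))
```

The real work of the product would be keeping the price column accurate and mapping equivalent features across providers, which is the hard part the table hides.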
If you like this idea, and you would pay for a product like this, send me an email and we can talk about pre-sales.
Why ML-Ops is the next topic I will be learning
While I'm still developing my neural transfer project, a skill that I'm lacking very dearly is publishing models and projects. I have released other projects, but they are not ML projects. So I want to make a project in the near future where I can collect data from users to improve the product. A lot of skills regarding implementing deep learning models are not advertised that much. There are some courses that I'm thinking of learning from: Full Stack Deep Learning, and a YouTube course from Papers with Code. Learning some continuous integration and continuous deployment skills would be nice too. While notebooks are great, they are only accessible to fellow nerds; if I want to share my work with the wider public, then sharing it via a website or app may be better.
ML-Ops is a new field, which probably explains why it hasn't been getting much attention until recently. I guess there is now a critical mass of people who can make decent models but are starting to learn that implementing those models in real life is a bit of a pain, so they want to learn how to deploy those models more effectively. For me, I'm still learning how to create a good model consistently. While I don't think I'm bad, I'm not sure if I can hold my own. I don't know, it may be some type of impostor syndrome: when I make models I use a template from somewhere, like example code from a PyTorch tutorial. While I mostly know what's going on, sometimes I'm a bit lost. I think the answer, like Jeremy Howard says, is to train more models.
While this issue should go away soon, I haven't been spending much time on my projects, meaning there is less time to iterate and learn. I need to speed up the iteration process of my learning, meaning I want to be creating more models. Right now it's probably one personal project a month. A lot of university work has slowed me down, but major deadlines have passed, so I should have more time for my projects.
For ML-Ops, I guess I will be using things like GitHub Actions, which is a tool I tend to see a lot when people show screenshots on Twitter. I guess there are other tools that I don't know about. I think deep learning education online is still weighted towards the research side, which is fine if you want to do research, and a lot of research is very interesting, but there is less focus on implementing and deploying those models. I think I wrote a blog post about this a while back, where I said I want to focus more on deploying products. I did not do that for the green tea and oolong tea classifier, but I should with future projects.
ML-Ops may be an important skill for me to learn if I want to start making software I can sell in the near future. It is unlikely to be a SaaS product, because SaaS products take a lot of time and energy, and, most importantly, I have no experience selling products online. So it is better to get that experience under my belt first, before I do anything crazy. The product could be a small ML model that people pay a small fee to use. I don't know; right now I'm just thinking out loud.
While learning deep learning is cool, if I do want to show my wares, then they should be accessible to non-technical people. A lot of the projects that go viral tend to be highly visual, like Toonify, and/or offer an easy way to play with the model, like a website or app. ThisPersonDoesNotExist.com was a website that produced fake people from GANs; all the user had to do was refresh the webpage and a new fake person appeared. If the knowledge is stored in a notebook, only a few people can access it. Even worse, a research paper, where a very small number of people can understand what's going on.
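As a sketch of what "accessible via a website" could mean in practice, here is a minimal Flask endpoint wrapping a model. The `predict` function is a stand-in, not my real tea classifier, and the route name is my own choice:

```python
# Minimal sketch: wrapping a model behind a web endpoint so
# non-technical users can reach it without a notebook.
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(text):
    # Placeholder for a real model's inference call (stand-in logic only).
    return {"label": "green_tea" if "green" in text.lower() else "oolong_tea"}

@app.route("/predict", methods=["POST"])
def predict_route():
    data = request.get_json()
    return jsonify(predict(data["text"]))

# app.run(port=5000)  # uncomment to serve locally
```

A front-end page or app would then just POST to `/predict`, and the CI/CD side of ML-Ops is what keeps an endpoint like this deployed and updated as the model improves.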
A lot of the problems when it comes to me learning deep learning come down to simply increasing my output. I just need to pump out more stuff. PyTorch has allowed me to finish projects more easily, but I still need to do more work.
ML-Ops is a topic I don't think I can just read up on, so it's likely I will need to apply it to many of the projects that I will be working on in the future. I can't imagine using some MNIST dataset and deploying that to GitHub; ML-Ops frankly is for real deep learning projects. I don't know how long learning the topic will take, but I will be happy to add that skill to my arsenal, as it means I can deploy ML models to the web in good condition, with users even feeding feedback back into the model. That can be the backbone of many great products in the future.
Or maybe I'll just get distracted again and start learning about a new topic. To be honest, AR is looking very interesting right now, so I may work on that pretty soon.
How video and social media affect public discourse
Olden day speeches were serious
While reading more of Neil Postman's book Amusing Ourselves to Death, I came across his discussion of the power of print media in American public life, where conversations and speeches were done in a literary tone. A great example is the Lincoln vs Douglas debates, which lasted for 12 hours and were nothing like the speeches we see today; the whole debate sounded like an essay, and even the comebacks were pre-written. Postman noted that the language was complex because the speakers made major assumptions about the audience, requiring an understanding of the political issues of the time. Many of the jokes or statements made by the speakers would not be understood without knowledge of the political context. Nowadays speeches are very simplified so they can be turned into clips for television. If you gave a speech in the 19th-century style, people would find you boring and would not know what you're talking about.
Can you understand a complex topic in less than 5 minutes?
With social media, I wonder how it's affecting discourse right now. You have likely seen it on one of your feeds: a short video with subtitles at the bottom, talking about whatever political topic. While watching them, you tend to notice that they omit a few details, which may be important to the topic at hand. In less than three minutes, the video is supposed to tell you how simple a complex political issue is, only to tell you how obvious the solution is. The problem comes when the analysis is devoid of nuance, and most of the goal is to make you high on emotion.
I don’t know how much reasonable information you can pack into a three-minute video. I guess people decide how much emotion they can pack into a video instead. Social media forces you to play by a certain rule set, incentivising creators to play on people's emotions more. Social media companies want engagement on their platform, because it makes people come back more often and stay longer. This means social media companies like emotional content, as it drives up engagement, so we have a forcing function pushing users towards more emotional content. This is a far cry from the Lincoln debates. Also, video is easier to share: a person can watch a video and share it in less than five minutes, compared to text, which may take a while to read and comprehend.
Before video, you used to read a person's whole argument. Now we only get 30-second clips of an incident, so at best we get a surface-level understanding of it; at worst we come out misinformed, as we don't have enough context to get a full understanding of the event. But after watching the video we are very confident about the situation, so we develop a strong Dunning-Kruger effect. When talking to other people, we tend to be emotional about the event, because that's how we got the information, and we lack the context to understand the problem, so we tend to talk past each other.
The algorithms may be more powerful than the content
While I haven’t finished reading the whole book, Neil Postman does talk about the issue of television, and how its visual form takes priority above everything else. The same can be said for social media videos, in which the visuals that entice users to click matter most. This is why you see insane thumbnails for videos: they need to capture the attention of the user. Even within the video, the person may make emotionally charged statements, because they still want to keep the user's attention and stop them from clicking away. On TV at least you have broadcasting laws, but on the internet, social media companies are given a wide berth. In a way that is a good thing, but in other ways not so much, as mass misinformation can be shared without much of a fact-check along the way. The emotionally charged nature of misinformation means people are willing to ignore fact-checking, and will actively discredit the correct information, as it goes against their worldview.
I think most of the progress being made to stop misinformation in video is not the fact-check panels at the bottom of the video, but adjustments to the algorithms themselves. For example, many of the social media companies will now slow down the distribution of content if it's going viral and the content is misinformation. This stops it from reaching a large number of people, and is done in many ways. YouTube slowed down the distribution of conspiracy theory videos by stopping them from entering the suggested content panel, meaning those videos find it difficult to reach new users outside of the person's subscriber base.
Adjustments to the social media algorithms also force creators to evolve with the changes. For a long time, controversial topics (mainly current affairs) got demonetised on YouTube, meaning creators couldn't run ads on the platform. So creators either started pivoting towards more family-friendly content, or opened up Patreon accounts so they could be funded directly by their fans. This led to creators making exclusive content for Patreon, as there they don't need to worry about getting demonetised or banned from YouTube.
Facebook is known to do similar things, like slowing down the distribution of serious misinformation; this was done for coronavirus information. Social media companies would rather do this because it's much harder to cry censorship over it. While people cry about shadowbanning (stopping content from getting distribution, as I explained above), it's much harder to prove, and most of the time it's just that people's content was not good enough for users to share. That does not sound nice, so people resort to the shadowban cope.
While Neil Postman did not predict social media, I think his book is very relevant to us now. It's not just the content affecting our discourse, but the algorithms behind it as well.
The Uses and Need for Interpretable AI
A popular topic in the deep learning research space is interpretable AI, meaning AI where you can understand why it made a decision. This is a growing field, as AI is creeping into lots of industries where not just accuracy matters, but the method or reasoning behind the decision as well. This is most important where the consequences of the decision are critical, like medicine and law. A Wired article highlighted this with an AI that was flagging patients for check-ups, while the doctors had no clue why the AI did so. Some doctors took it upon themselves to learn some things about the AI, to guess what it was thinking when it made a decision. Not having reasoning from the AI led to some disagreements between the nurses and doctors, so having an explainable AI would help in that scenario.
Interpretable AI can also help discover bias in the AI. When AI was used in the criminal law realm, lots of models tended to make harsher judgements of black people compared to white people. Having an explainable AI would have made it much clearer that the model was being driven by the race of a person. Explainable AI can give us peace of mind if done correctly, as it can list the variables that affected the sentence of the person. Instead of a black box like now, where it just spits out a number, it could tell us why it spat out that number.
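One simple technique in this space is permutation importance, which shows how much each input feature drives a model's predictions. As a sketch of the bias-discovery idea (the data is synthetic and the feature names are purely illustrative, not a real sentencing dataset):

```python
# Sketch of spotting a feature that secretly drives a model's
# decisions, using permutation importance on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Synthetic data: columns stand in for [prior_offences, age, postcode_code],
# and the outcome is secretly driven by the postcode column alone.
features = rng.random((n, 3))
labels = (features[:, 2] > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(features, labels, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the score drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(["prior_offences", "age", "postcode_code"], result.importances_mean):
    print(f"{name}: {score:.3f}")
# A high score for a proxy feature like postcode is a red flag for bias.
```

If the model is leaning almost entirely on a feature that acts as a proxy for a protected attribute, this kind of check surfaces it, which is exactly the "list the variables that affected the decision" idea above.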
I'm going to be frank: I only have a surface-level understanding of the topic. I have only read one or two light papers, so more reading will be needed, and also implementation. But I think interpretable AI can be very useful for many AI systems, like I explained above.
In one video I watched, the speaker said she and her team used interpretable models to debug models, and were able to increase the accuracy of a model significantly. So we may be able to do stuff like that. Debugging deep learning models is hard, due to their black-box nature; an interpretable model can help us shine a light on these models, helping us improve them even more. In an unreleased post, I wrote about how interpretable AI can help make the recommendation systems used by the major tech companies more transparent, leading to more understanding by users and other stakeholders like regulators. This can help people identify harmful rabbit holes like conspiracy videos and anti-vax content.
With a better understanding of why a tech service is recommending you stuff, the user can take action towards changing the situation or keeping it as it is. Maybe using that information, the tech company can add features to stop a user from falling too deep into an echo chamber, like adjusting the suggested videos towards more moderate content, or videos with different views than the echo chamber the user is in. Or maybe have nudges saying, “you’ve been watching the same type of videos for a while, try something else.”
Also, it can help identify videos that are going viral in certain areas, especially if the area is problematic. So if you see a video in conspiracy theory land gaining traction, you can see how and why the algorithm is recommending it. From there the tech company can decide to do a circuit breaker on the video, or let it run its course.[1] This may be better than trying to outright ban topics on your service, due to the whack-a-mole effect.
Obviously, almost all of this is automated, so the insights taken from the interpretable AI will need to be transferred into another system and factored into the model. I don't know how one would implement that, though.
An explainable AI can help moderation teams at tech companies, as the AI can tell the moderators why it decided to ban a piece of content. If there is an appeal, the moderator can then explain to the user why they were accidentally banned, and how to avoid it happening again. Also, the moderator can tell the AI that it was wrong, so the AI can get better at its job next time around.
When YouTube videos get removed from the platform, YouTube does not tend to offer a good explanation of why. It normally sends some PR/legal email saying you violated the terms and conditions, but creators do not know which terms and conditions were violated. Some YouTube creators resort to complaining on Twitter to get a response from the company. While I think YouTube is partly vague because of some legal situation, I think having a transparent AI could help: YouTube could show creators why the situation is the way it is. Right now, YouTube may not even know what happened, due to the black-box nature of the algorithm.
Interpretable AI will not solve all of technology's problems. A lot of the problems are frankly a lack of government regulation and oversight, as in many areas of technology there are no ground rules. So the technology companies are going in blind, and people are upset about whatever the tech companies do. If the legal situation changed so that YouTube could tell its creators why they violated its terms of service, that would be great, instead of the current cat-and-mouse game. That depends on government officials even knowing what they are talking about when it comes to technology. Right now I think the European Union has the best understanding, though I think some of its initiatives are flawed. In the USA, the government is only now waking up to the fact that it needs to rein in big tech, but I'm not confident it has a good understanding of the technology it is dealing with. You can see some of the viral videos of the original Mark Zuckerberg hearings, where the congressmen asked questions that lacked a basic understanding of what the internet and Facebook even are, never mind how the government should deal with data, or incentivise companies to have transparent AI.
[1] Tech companies already do this to fight misinformation, but an interpretable model can make the process easier.