The use of applied technology
I was listening to a podcast by Lex Fridman, an AI researcher, interviewing Jeremy Howard, the founder of fast.ai. I found the conversation very interesting.
Theory Vs Practice
In the middle of the podcast, Jeremy Howard states that most deep learning research is of little practical use. Scientists need to work on areas their colleagues are familiar with, so they tend to cluster around the same fields of research, many of which have little practical value.
One example he gave is active learning, where a user gives explicit feedback to the model by labelling data. This is an important topic when you have a custom dataset, because you need a way to tell the model exactly what you are looking for.
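To make that concrete, here is a rough sketch of a pool-based active learning loop with uncertainty sampling. This is my own toy illustration, not anything from the podcast: the scikit-learn model, the digits dataset, and the batch of ten labels per round are all placeholder choices.

```python
# A minimal sketch of pool-based active learning with uncertainty sampling.
# The model, dataset, and batch size are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)

# Start with a tiny labelled seed set; everything else is an unlabelled pool.
rng = np.random.default_rng(0)
labelled = list(rng.choice(len(X), size=20, replace=False))
pool = [i for i in range(len(X)) if i not in labelled]

model = LogisticRegression(max_iter=1000)

for round_ in range(5):
    model.fit(X[labelled], y[labelled])

    # Score the pool by uncertainty: a low top-class probability means the
    # model is unsure about that example.
    probs = model.predict_proba(X[pool])
    uncertainty = 1.0 - probs.max(axis=1)

    # Pick the 10 most uncertain examples and "ask the human" to label them.
    # This dataset already has labels, so we simply reveal them.
    query = np.argsort(uncertainty)[-10:]
    for idx in sorted(query, reverse=True):
        labelled.append(pool.pop(idx))

    print(f"round {round_}: {len(labelled)} labelled examples")
```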
An interesting idea they talked about is that relying on large datasets can hinder your creativity. It makes people think they need a large amount of resources (e.g. multiple GPUs) to train deep learning models, and it shuts out people who assume they cannot use deep learning without those resources. They also pointed out that much of the recent progress in the field did not involve datacentre-level resources; batch normalisation, dropout, and the rectified linear unit (ReLU) are examples.
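Those three building blocks are small enough to drop into a toy model. The sketch below is my own illustration using Keras, just to show how they appear in code; the layer sizes, dropout rate, and input shape are arbitrary choices, not anything recommended in the podcast.

```python
# A small Keras model using ReLU, batch normalisation, and dropout.
# Sizes are placeholders (e.g. 28x28 greyscale images, 10 classes).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128),
    tf.keras.layers.BatchNormalization(),   # batch normalisation
    tf.keras.layers.Activation("relu"),     # ReLU
    tf.keras.layers.Dropout(0.3),           # dropout
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```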
In the podcast, Jeremy was asked what he learned from teaching the fast.ai course. He replied that anyone can learn deep learning, with the main factor being tenacity. Lex then asked how a person becomes a deep learning expert. Jeremy replied: train lots of models. He later added that you want to train lots of models in your own domain area.
Jeremy stated that deep learning is already a powerful tool, so working out how to get incremental improvements in a few well-studied areas is not as valuable as applying deep learning to other fields to solve problems and gain new insights.
The use of domain expertise
This makes me think about the history of computers and the internet, where many of the gains did not come just from the technology getting faster. Yes, faster computers and faster internet are helpful: from IBM mainframes to personal computers, from dial-up internet to fibre optic. But the bigger change is that this progress allowed most people to get their hands on the technology and solve their own problems with it.
Accountants use computers to keep track of a company's financial information. People post video tutorials on YouTube so others can learn new skills online. Artists share their art on Instagram for an interested audience.
A lot of backend work was done to make these examples possible, but it would not have been very useful if people had not picked up these tools to solve their own problems.
Think about modern smartphones. They get faster and gain new features every year, and phone makers like Apple and Samsung love to dance around on stage telling us about it. But have any big gains come from these innovations? No. Most people already have a fast-enough smartphone that solves their personal problems. Why does a customer need a double-sided phone, or a phone that bends? Hint: they don't.
When the first smartphones came out, they arrived with excitement because they solved problems in ways nothing had before. Simply getting them into people's hands created a lot of value on its own: people thought of new ways a smartphone could be used, and phone makers added features that adjusted to those habits. This created a positive feedback loop of innovation.
I read about a radiologist who trained a machine learning model to look for fractures in X-rays, using Google's no-code machine learning tool, which shows that solving a problem does not need to be complicated. Another example comes from a video I watched, where a civil engineer trained a machine learning model to spot broken pipes in inspection footage, something civil engineers must do a lot at the beginning of their careers. She made the process far more efficient and solved a problem plaguing the industry, saving young civil engineers from watching hours of footage to find a cracked pipe.
My story about learning theory but not practice
Back to machine learning. Even though I was able to learn a lot, I think I only scratched the surface of what is possible with deep learning. I got a basic understanding, but I did not make many custom projects; I was mostly just adjusting TensorFlow and YouTube tutorial examples.
The custom projects I did start fell apart or never got finished, which means my knowledge of deep learning is mainly theoretical rather than practical. I can explain the data pipeline needed to train a model, but I would struggle to build one myself. I can explain what a GAN is, but I would struggle to make one. I know what an image classifier is, but can I send you a link to a custom-made example?
For most of these questions the answer is no, but I want to change that. Recently I have been reading and watching material about how machine learning can help tackle climate change, and it struck me how much potential there is to address the world's most pressing issues. Many of them can be helped with just an out-of-the-box machine learning model, not a cutting-edge model with ten million bells and whistles. I want to be part of that change and make a difference by solving problems in the real world.
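To show what I mean by an out-of-the-box model, here is roughly what a custom image classifier can look like with transfer learning in Keras. This is a sketch, not a finished project: the folder layout (data/train and data/val with one sub-folder per class), the image size, and the MobileNetV2 backbone are all assumptions I picked for illustration.

```python
# A minimal sketch of an "out of the box" image classifier: a data pipeline
# reading labelled images from folders, plus a pretrained backbone with a
# small head trained on top. Paths and sizes are placeholder assumptions.
import tensorflow as tf

# Data pipeline: one sub-folder per class under data/train and data/val.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=(224, 224), batch_size=32)

# Reuse a pretrained backbone and freeze it; only the head gets trained.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

num_classes = len(train_ds.class_names)
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```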
Jeremy says deep learning is a high-leverage skill. I believe we can use that tool for good, and to do so we must apply it to the problems we are dealing with right now. As with the examples of the internet and computers, the amount of value created by normal people learning how to use those tools is enormous. The same will be true of deep learning.