Hidden Carbon Emissions: Discovering How Companies Impact the Environment
What is the financial supply chain?
(Written around Nov 2022; the gas market has changed a lot since.)
This is the 2nd in the series of reports about the fintech space that Aaron McCreary kindly gave me.
This report introduces the concept of financed emissions: when a company’s balance sheet is used to fund fossil fuel projects, the resulting emissions can be missed in its sustainability reports. It’s best to think of them as a type of Scope 3 emissions.
The paper expands this concept into the financial supply chain. A company may not purposely want to invest in fossil fuel companies. But the company gives its money to a bank, where the money is used on its behalf. That bank may use the company’s money to provide loans to a new coal mine or gas field. This is where financed emissions come in.
If you’re interested in how this dynamic plays out at the customer level, check out Climate Town’s video: What Your Bank Really Does With Your Money | Climate Town
The paper highlights some companies:
PayPal reported 24 KtCO2e of emissions, while its financed emissions were 1,345 KtCO2e, a difference of 5,512%.
Disney reported 1,190 KtCO2e of emissions, while its financed emissions were 2,011 KtCO2e, a difference of 169%.
Why Fossil Fuels are not Going Away in the Short Term
My issue with the paper is that it assumes fossil fuel investments are all bad, and that even if you accept this, organisations will still have an incentive to invest in fossil fuels. After the Ukraine war, the lack of energy became very acute for many countries. Germany needed to increase supply ASAP. While it added renewables, it also added coal and natural gas, as this was the only way for the nation to keep the lights on. Germany’s companies were happy to talk about Net Zero; now that the Russian gas taps have been turned off, they are being directed by the government to invest in fossil fuels alongside more renewables.
Because of this, the government is providing incentives to build more LNG terminals. If you’re a bank, how are you going to say no to that?
In the short term, investing in gas is unavoidable. But can steps be taken to reduce the damage of these projects? Yes.
Policy could help, like making sure these LNG terminals can be converted to hydrogen terminals in the near future. This idea has to be government-backed in my opinion.
The biggest issue here is that corporations have bigger incentives than saving the world, AKA making more money. This is where policy comes in, as suggested later in the report. In shareholder capitalism, the only legal obligation is to make more money for shareholders; “fiduciary duty” is the phrase most commonly used.
In my opinion, the report slightly sidesteps this issue. A CFO is not going to invest in sustainable projects if the next quarter puts their job on the line.
Corporations do have other incentives that may affect shareholder value, like a government breathing down their neck, whether for legitimate reasons like antitrust or because the government needs a political favour from them.
The 4 Buckets for Solutions
The paper lists out 4 mental models for solutions:
Select: identify financial institutions and products that are environmentally sustainable and socially equitable from the existing landscape.
Engage: their existing finance providers in their financial supply chain on climate and sustainability, making clear requests and incentivizing good practice.
Innovate: develop innovative new products, mechanisms, incentive schemes, data insights, behavioural drivers, etc., that enable companies to accelerate the decarbonization of their financial supply chains.
Advocate: push for climate-aligned financial regulation and policy that will increasingly drive the financial system toward progressive sustainable products and services.
This is a good mental model for thinking about various climate fintech ideas. The buckets have a wider scope compared to the climate fintech paper by New Energy Nexus, which delved into the sub-categories and touched on some of the tech used in the solutions.
The 1st bucket, “Select”, suggests finding solutions already on the market. You don’t have to reinvent the wheel, so finding products you can use for your company, ideally off the shelf, is great.
The “Engage” bucket, similar to Select, looks at existing providers in the market and how you can improve their processes for sustainable solutions. This can mean implementing frameworks like SASB, or promoting sustainability internally in the company.
The “Innovate” bucket talks about creating new products and business models. If you’re interested in founding a start-up, this bucket will be of the most interest to you. I recommend checking out the last paper to read more.
The “Advocate” bucket is where the big stick and carrot of government are used. There could be carbon taxes or climate disclosure rules. The Inflation Reduction Act is a classic example of the carrot approach. (Fancy billions of dollars for your new factory? Uncle Sam has got you covered 👍.) NGOs like GFANZ provide industry guidance for decarbonising.
Suggestions from the report
The first solution from the report was “Demand emissions reporting and transparency”.
This idea has been screamed from the rooftops in the industry, for good reason. Climate data can be very shaky at times, and the more reliable it is, the better.
The financial industry needs these tools if it wants to decarbonise. You can’t act on your financed emissions if you don’t know they exist or don’t track them.
Luckily, the carbon management space is one of the most popular areas of climate fintech. Multiple products offer companies a way to check the emissions of their supply chain, and with better tech this can become more reliable. For example, TransitionZero was able to track the real-time emissions of coal plants in China using satellite imagery and AI. There should be many more cool examples like this as time goes on.
Green bonds are another solution suggested by the report: one of the ways companies can get financing for their green projects.
The report mentions:
“it is difficult for bond purchasers to determine the true impact of green bonds. To ensure green bonds are delivering a measurable climate benefit, companies can issue a mandate to hold their money in a green bond with specific terms and ask issuer financial institutions to bid on the funds”
If you’re interested, check out this video by CNBC explaining green bonds.
“Fuel green demand”
Creating new financial products that let customers make carbon-friendly decisions was suggested by the report. It provided ideas like “sustainability-linked loans [that] offer discounted lending for taking climate-aligned action”. These ideas can be a win-win for a tech company that needs to use customer data to provide products.
“Move the money where you can”
This is a simple and important suggestion. Corporations move billions of dollars, and many climate-friendly banks don’t have the scale or infrastructure in place to handle that. The report suggests partly moving corporate money into simpler vehicles like philanthropy funds.
Patagonia took this advice to the extreme by making the whole company a trust where all profits go to environmentalism and fighting climate change. What a baller move!
Most corporations can’t do that. But with smart accounting practices, you could make a lot of progress. The authors mention that companies can cut the footprint of every dollar spent by 60% this way.
Conclusion
The report gives a great rundown of the climate fintech space, touching on the challenges and opportunities in it. I mentioned the issue of big banks still having an incentive to invest in fossil fuels, explaining why that’s the biggest bottleneck and how it can be solved. The report introduces the concept of “financed emissions”, a new way of looking into Scope 3 emissions using financial data.
The paper shows that corporations could make much more progress on their climate goals if more work were put into preventing their holdings from funding fossil fuel projects.
The $3 Trillion Showdown to Save the Planet
I was introduced to this report by Aaron McCreary from Doconomy after asking for resources to learn about climate fintech. The report gives a rundown of the trends and categories of the industry.
When it comes to climate change, finance is something we don’t think about much. But the IEA says we need $2 trillion to fund the transition, and that money needs to come from somewhere. Hence climate fintech comes in to help financial institutions and customers move into more sustainable areas.
All climate projects, large or small, will require funding. Creating tech to solve this issue will be important.
The report talks about the types of tools being used to create these solutions. I do wish the whitepaper went more into the technical details of the tools mentioned, but I understand it’s a whitepaper, not a PhD thesis.
Blockchain is being used more in climate fintech, because blockchain tech allows you to verify transactions and contracts digitally without a 3rd party. There is still a lot of froth in the space, as people are still working out how to use the tech, alongside plain old grifters.
From what I noticed in the report, many of the solutions are vertical-based, meaning a solution will be built with, say, insurance companies or asset managers in mind.
I wonder about the various bottlenecks affecting these climate fintechs. Is it collecting data to build the ML models, or selling the products to would-be customers? The famous product risk vs market risk conundrum.
Stakeholders in climate fintech
The report laid out the ecosystem by mapping various stakeholders into 3 main buckets:
Private capital (Central banks, Investment Banks, Retail and Commercial Banks).
Asset Managers (Assets Managers, Passive funds and Indices, Wealth Managers).
Asset Owners (Insurance companies, Sovereign Wealth Funds, Pension funds).
Your fintech startup will be helping one of these stakeholders, either helping them invest in climate projects directly or evaluating the assets they already have.
The other stakeholders don’t directly invest in these climate projects but help the climate fintech startups. These are venture capital, individual investors, accelerators, and universities.
The surprising popularity of risk analysis
Risk analysis is an area I found interesting, due to the amount of active interest in the field. I had thought risk analysis was a solved problem for the climate world, useful only to a small number of insurance companies. But insurance companies and consultancies have been picking up these startups at an increasing rate.
From the whitepaper:
“Big players are actively acquiring startups. For example, Moody’s recently acquired minor stake in SynTao Green Finance in China and Four Twenty Seven in the US. Major acquisitions were also observed in other regions in both 2019 and 2020; in addition to the previously mentioned MSCI acquisition of Carbon Delta, Bain & Company acquired Ecovadis in Europe, Morningstar acquired Sustainalytics, and BlackRock formed a strategic partnership with Rhodium Group”
In hindsight, it makes sense, as asset owners and insurance companies look to value their assets in a changing world.
The insurance industry is worth $6 trillion, and many other companies need help evaluating their assets against rising sea levels and wildfires. Many houses in California are now worthless because fires strike them every year; no insurance company wants to cover that.
Risk analysis is built on the rise of satellite imagery and AI. These trends allow companies to collect precise geographic data and model that data into something useful.
Jupiter Intel, a company mentioned in the report, evaluates climate risk under different temperature scenarios (i.e. 1.5C vs 2C). With high-resolution satellite images, they can see effects within a few metres. This allows a company to take action to mitigate the climate risk for each of its assets.
Companies like First Street Foundation can use climate models and satellite imagery to put a wildfire risk score on each household in an area; homebuyers can then make their own decisions from there.
The whitepaper mentions that climate risk can be broken down into different areas.
Transition risk: how do the changes of a net-zero world affect the company?
This includes policy and legal risk, technology risk, market risk, and reputation risk. You can wrap all these into the ESG category, which may explain why consultancies are buying these risk analysis companies.
This is where the practice of carbon accounting comes in. Companies will need to reduce their emissions, going through the supply chain while dealing with multiple risks: legal risk, like complying with a carbon tax, and reputation risk, since not fulfilling your very public net-zero pledge is an embarrassment.
The other main category is physical risk: the risk you instinctively think about when it comes to climate. What is the likelihood that this house is underwater in 10 years? Or what is the likelihood that this house is turned to ash next summer?
The whitepaper shows the EU taxonomy version of these definitions:
Transition Risk relates to the process of transitioning to a lower-carbon economy
Physical Climate Risk relates to the physical impacts of climate change
Physical climate risk can be bucketed into several areas, as laid out in the NGFS physical climate risk assessment (source: https://www.ngfs.net/sites/default/files/media/2022/09/02/ngfs_physical_climate_risk_assessment.pdf).
You can also think of risk using this equation:
Risk = hazard x exposure x vulnerability
These startups help clients work out all areas of this equation, with simulations and data.
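To make the equation concrete, here is a toy sketch in Python. The interpretation (hazard as an annual event probability, exposure as asset value, vulnerability as the fraction of value lost) and the numbers are my own assumptions for illustration, not from the report:

def climate_risk(hazard, exposure, vulnerability):
    # hazard: likelihood/severity of the event (e.g. annual flood probability)
    # exposure: value in harm's way (e.g. property value in pounds)
    # vulnerability: fraction of that value lost if the event hits
    return hazard * exposure * vulnerability

# e.g. a 10% annual flood chance on a £400,000 property that would lose 60% of its value
expected_annual_loss = climate_risk(0.10, 400_000, 0.6)
print(expected_annual_loss)  # 24000.0 -> £24,000 expected annual loss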
The whitepaper showed this workflow for risk analysis:
Collection >> Processing >> Aggregation >> Solutions.
This workflow is not unique to risk analysis; it is used by many ML-based companies.
My guess is that a lot of the value is in the models, less so the data. A lot of the value comes from knowing which assets are at risk and what to do about them. Having a dataset of at-risk areas is helpful, but a prediction of severity and likelihood is the most useful information for a company, and this is where the models come in.
Predicting Flooding with Python
Getting Rainfall Data and Cleaning
For this project, I will make a model that shows long-term flooding risk in an area, relating to climate change and machine learning, which I have been writing a lot about recently. The idea was to predict whether an area has a higher risk of flooding in 10 years. The general plan was to get rainfall data, work out whether the rainfall exceeded the land elevation, and if so, count the area as flooded.
To get started, I had to find rainfall data. Luckily, it was not too hard, but the question was which rainfall data I wanted to use. First, I found the national rainfall data (UK), which looked very helpful. But as the analysis would be done on a geographic basis, I decided to use London rainfall data. When I got the rainfall data, it looked like this:
Some of the columns gave information about soil moisture, which was not relevant to the project, so I had to get rid of them. Also, as it was a geographic analysis, I decided to pick the column closest to the general location I wanted to map. So I picked Lower Lee rainfall, as I would analyse East London.
To complete the data wrangling, I used pandas. No surprise there. To start, I had to get rid of the first row in the dataframe, as it worked as a second header. This makes sense, as the data was meant for an Excel spreadsheet.
I used this to get rid of the first row:
df = df[1:]
After that, I had to get rid of the locations I was not going to use. So I used pandas’ iloc indexing to slice out a significant number of columns from the dataframe.
df = df.drop(df.iloc[:, 1:6], axis=1)
After that, I used the dataframe drop function to get rid of the columns by name.
df = df.drop(['Roding', 'Lower Lee.1', 'North Downs South London.1', 'Roding.1'], axis=1)
Now, before I show you the other steps, I ran into some errors when trying to analyse or manipulate the contents of the dataframe. To fix them, I changed the date column into a pandas DateTime, with the option of parsing the day first (since pandas defaults to the American date format). Then I changed the Lower Lee column into a float type; this had to be done because the header row I sliced off earlier had left the columns with non-numeric data types. After all of this, I could go back to further analysis.
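A minimal sketch of those two fixes, assuming the columns are named 'Date' and 'Lower Lee' (the exact names in my dataframe may differ):

import pandas as pd

# Parse dates day-first, since the UK data uses day/month/year
df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)
# The leftover header row made this column non-numeric, so cast it back to float
df['Lower Lee'] = df['Lower Lee'].astype(float)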
To make the analysis more manageable, I decided to sum up the rainfall on a monthly basis rather than a daily one. Otherwise I would have had to deal with a lot of extra rows, and having monthly rainfall makes it easier to see changes at a glance. To do this, I had to group the dataframe into monthly data. This is something I was stuck on for a while, but I was able to find a solution.
Initially, I created a new dataframe that grouped the DateTime column by month (this is why I changed the datatype earlier). Then I used the dataframe aggregate function to sum the values. After that, I used the unstack function, which pivots the index labels, then reset_index(level=[0,1]) to revert the multi-index into a single-index dataframe. Finally, I dropped the level_0 column and renamed the remaining columns to date and rain.
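A simpler equivalent that produces the same monthly sums, using pd.Grouper instead of the unstack/reset_index dance (column names assumed from above), would look roughly like this:

# Sum the daily rainfall into calendar months
monthly_df = (df.groupby(pd.Grouper(key='Date', freq='M'))
                .agg({'Lower Lee': 'sum'})
                .reset_index())
monthly_df.columns = ['date', 'rain']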
Analysing the Data
One of the major issues that popped up was the data type of the date column. After tonnes of digging around on Stack Overflow, I found the solution was to convert it to a timestamp and then back into a DateTime format. I think the monthly grouping messed up the data type, which is why I had to change it again.
A minor thing I had to adjust was the index. When I first plotted the graphs, the forecast did not show the date, only an increasing number. The tutorial’s notebook had the date as the index, so I changed my dataframe so that the index contains the dates; that way, when the forecast is plotted, the dates show on the x-axis.
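A sketch of both fixes, assuming the monthly dataframe from above; the variable name new_index_df_new_index matches the one used in the model-fitting code later:

# Round-trip through a string timestamp to get a clean DateTime column,
# then use the dates as the index so forecasts plot against them
monthly_df['date'] = pd.to_datetime(monthly_df['date'].astype(str))
new_index_df_new_index = monthly_df.set_index('date')
new_index_df_new_index.columns = ['Rain']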
Now for the analysis. This is a time-series analysis, as we are doing forecasting. I found this article here, which I followed, and used the statsmodels package, which provides models for statistical analysis. First, we did a decomposition, which separates the series into trend, seasonal, and residual components.
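A minimal sketch of that step, assuming the monthly dataframe from above and a 12-month seasonal cycle:

import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose

# Split the monthly rainfall into trend + seasonal + residual parts
decomposition = seasonal_decompose(new_index_df_new_index['Rain'], model='additive', period=12)
decomposition.plot()
plt.show()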
Next, the tutorial asks us to check if the time series is stationary. In the article, it's defined as “A time series is stationary when its statistical properties such as mean, variance, and autocorrelation are constant over time. In other words, the time series is stationary when it is not dependent on time and not have a trend or seasonal effects.”
To check if the data is stationary, we used autocorrelation function (ACF) and partial autocorrelation function (PACF) plots. If there is a quick cut-off in the plots, the data is stationary. The ACF and PACF describe how time series values depend on their past values.
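A sketch of those plots with statsmodels, again assuming the dataframe from above:

from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

fig, axes = plt.subplots(2, 1, figsize=(12, 8))
plot_acf(new_index_df_new_index['Rain'], lags=24, ax=axes[0])   # autocorrelation
plot_pacf(new_index_df_new_index['Rain'], lags=24, ax=axes[1])  # partial autocorrelation
plt.show()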
Now we use another Python package called pmdarima, which will help decide the model.
import pmdarima as pm
model = pm.auto_arima(new_index_df_new_index['Rain'], d=1, D=1,
m=12, trend='c', seasonal=True,
start_p=0, start_q=0, max_order=6, test='adf',
stepwise=True, trace=True)
All of the settings were taken from the tutorial. I will let the tutorial explain the numbers:
“Inside auto_arima function, we will specify d=1 and D=1 as we differentiate once for the trend and once for seasonality, m=12 because we have monthly data, and trend='C' to include constant and seasonal=True to fit a seasonal-ARIMA. Besides, we specify trace=True to print status on the fits. This helps us to determine the best parameters by comparing the AIC scores.”
After that, I split the data into train and test batches.
train_x = new_index_df_new_index[:int(0.85*(len(new_index_df_new_index)))]
test_x = new_index_df_new_index[int(0.85*(len(new_index_df_new_index))):]
When splitting the data for the first time, I used scikit-learn’s train_test_split function, but this led to some major errors later on when plotting the data, so I used the tutorial’s method instead.
Then we trained a SARIMAX model based on the parameters produced earlier.
from statsmodels.tsa.statespace.sarimax import SARIMAX
model = SARIMAX(train_x['Rain'],
order=(2,1,0),seasonal_order=(2,1,0,12))
results = model.fit()
results.summary()
Plotting the forecast
Now we can start work on forecasting as we now have a trained model.
forecast_object = results.get_forecast(steps=len(test_x))
mean = forecast_object.predicted_mean
conf_int = forecast_object.conf_int()
dates = mean.index
These variables help us plot the forecast. The forecast is as long as the test dataset; the mean is the average prediction; the confidence interval gives us a range where the numbers likely lie; and dates provide an index so we can plot over time.
plt.figure(figsize=(16,8))
df = new_index_df_new_index
plt.plot(df.index, df, label='real')
plt.plot(dates, mean, label='predicted')
plt.fill_between(dates, conf_int.iloc[:,0], conf_int.iloc[:,1], alpha=0.2)
plt.legend()
plt.show()
This is an example of an in-sample forecast. Now let’s see how to make an out-of-sample forecast.
pred_f = results.get_forecast(steps=60)
pred_ci = pred_f.conf_int()
ax = df.plot(label='Rain', figsize=(14, 7))
pred_f.predicted_mean.plot(ax=ax, label='Forecast')
ax.fill_between(pred_ci.index,
pred_ci.iloc[:, 0],
pred_ci.iloc[:, 1], color='k', alpha=.25)
ax.set_xlabel('Date')
ax.set_ylabel('Monthly Rain in lower lee')
plt.legend()
plt.show()
This is forecasting 60 months into the future.
Now that we have forecast data, I needed to work out which areas could get flooded.
Getting Elevation Data
To work out which areas are at risk of flooding, I had to find elevation data. After googling around, I found that the UK government provides elevation data for the country using LIDAR. While I was able to download the data, I realised I had no way to view it in Python, and I might have had to pay for and learn a new program called ArcGIS, which I did not want to do.
So I found a simpler alternative: the Google Maps Elevation API, where you can get the elevation of an area using coordinates. I accessed it using the Python requests package.
import requests
r = requests.get('https://maps.googleapis.com/maps/api/elevation/json?locations=39.7391536,-104.9847034&key={}'.format(key))
r.json()
{'results': [{'elevation': 1608.637939453125,
'location': {'lat': 39.7391536, 'lng': -104.9847034},
'resolution': 4.771975994110107}],
'status': 'OK'}
Now we need to work out when the point will get flooded. Using the rainfall data, we compare the rainfall against the elevation; if the rainfall exceeds the elevation, the place is counted as underwater.
import json
r = requests.get('https://maps.googleapis.com/maps/api/elevation/json?locations=51.528771,0.155324&key={}'.format(key))
json_data = r.json()
print(json_data['results'])
elevation = json_data['results'][0]['elevation']
print('elevation: ', elevation)
rainfall_dates = []
for index, values in mean.iteritems():
    print(index)
    rainfall_dates.append(index)
print(rainfall_dates)
for i in mean:
    # print('Date: ', dates_rain)
    print('Predicted Rainfall:', i)
    print('Rainfall vs elevation:', elevation - i)
    print('\n')
Predicted Rainfall: 8.427437412467206
Rainfall vs elevation: -5.012201654639448
Predicted Rainfall: 40.91480530998025
Rainfall vs elevation: -37.499569552152494
Predicted Rainfall: 26.277342698245548
Rainfall vs elevation: -22.86210694041779
Predicted Rainfall: 16.720892909866357
Rainfall vs elevation: -13.305657152038599
As we can see, if the monthly rainfall dropped all in one day, the area would get flooded.
diff_rain_ls = []
for f, b in zip(rainfall_dates, mean):
    print('Date:', f)
    print('Predicted Rainfall:', b)
    diff_rain = elevation - b
    diff_rain_ls.append(diff_rain)
    print('Rainfall vs elevation:', elevation - b)
    print('\n')
    # print(f, b)
This allows me to compare the dates with rainfall vs elevation difference.
df = pd.DataFrame(list(zip(rainfall_dates, diff_rain_ls)),
                  columns=['Date', 'diff'])
df.plot(kind='line',x='Date',y='diff')
plt.show()
I did the same thing with the 60-month forecast.
# assumed: mean_60 is the predicted mean of the 60-month forecast (pred_f) from earlier
mean_60 = pred_f.predicted_mean

rainfall_dates_60 = []
for index, values in mean_60.iteritems():
    print(index)
    rainfall_dates_60.append(index)

diff_rain_ls_60 = []
for f, b in zip(rainfall_dates_60, mean_60):
    print('Date:', f)
    print('Predicted Rainfall:', b)
    diff_rain_60 = elevation - b
    diff_rain_ls_60.append(diff_rain_60)
    print('Rainfall vs elevation:', elevation - b)
    print('\n')
In the long term, the forecast says there will be less flooding. This is likely because the data collection is not perfect and the timespan is short.
How the Project Fell Short
While I was able to work out the amount of rainfall needed to flood an area, I did not meet the goal of showing it on a map. I could not work out the LIDAR data from earlier, and the Google Maps packages for Jupyter notebooks did not work. So I only had the coordinates and the rainfall amounts.
I wanted to make something like this:
For the reasons I mentioned earlier, I could not do it. The idea was to have the map zoomed in on the local area while showing underwater properties and land.
I think that’s the main bottleneck: getting elevation data onto a map that can be manipulated in Python. With that, I could create a script that colours in areas with a low elevation.
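A hypothetical sketch of the kind of thing I had in mind, using the folium package (which renders maps in Jupyter); this is not what I actually built, and the points and threshold are made up for illustration:

import folium

# Hypothetical (lat, lng, elevation-in-metres) points; real ones would come
# from the Elevation API loop earlier
points = [(51.5288, 0.1553, 13.4), (51.5301, 0.1601, 2.1)]
flood_threshold = 5  # metres; an arbitrary cut-off for illustration

m = folium.Map(location=[51.5288, 0.1553], zoom_start=14)
for lat, lng, elev in points:
    colour = 'red' if elev < flood_threshold else 'green'  # red = at-risk
    folium.CircleMarker([lat, lng], radius=8, color=colour,
                        fill=True, tooltip='{} m'.format(elev)).add_to(m)
m.save('flood_risk_map.html')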
Why you should NOT use this model
While I learnt some stuff with this project, I think there are some major issues with how I decided which areas are at risk. Just calculating monthly rainfall and comparing it with the elevation is arbitrary. How well does monthly rainfall correlate with a real flood, where the rain can pour 10X harder? This is something I started to notice once I got going with the project. Floods in the UK happen from flash flooding, where a month’s worth of rain pours in one day, so there will only be some correlation with normal rainfall. Real flood mappers use other data points, like simulating the physics of the water to see how it will flow and affect the area (hydrology); other inputs can include temperature and snow. Even the data I did have could have been better; the longest national rainfall record only went back to the 70s. I think I did a good job picking the local rain gauge from the dataset (Lower Lee), but I wonder if it would have been better to take the average or sum of all the gauges to get a general idea of rainfall across the city.
So, aside from the fact that I did not map the flooding, this risk assessment is woefully inaccurate.
If you liked reading this article, please check out my other blog posts:
Failing to implement my first paper
How I created an API that can work out your shipping emissions
Forecasting extreme weather events
As I write this blog post, wildfires are ripping through the west coast of America, the biggest wildfires on record. This morning I was watching videos of the destruction the fires left, seeing distraught families come back to their homes in rubble. Many people could not claim insurance, as the companies bowed out earlier due to the risk of wildfires. In northern California, a major industry is wine production, and I saw farmers rushing to pick grapes before the fire got to them. Normally a farmer can wait until the grapes are ripe, but this time they had to collect all they could get. In one Californian town, a famous winery that was a local landmark was turned to rubble. The fires have burned over a million acres of land.
While one of the wildfires started because people wanted to do a baby reveal party with pyrotechnics, the local climate did not help. Dry weather made the vegetation work as fuel for the fire, and strong winds caused it to spread rapidly. As the climate gets warmer, higher temperatures will be more normal, and experts say we need to get used to this as the new normal. Meanwhile, in Colorado, temperatures dropped to around 2C with snow, just after a record-breaking heatwave of 38.3C. The contrast between an inferno in Oregon and California and below-freezing temperatures in Colorado is jarring.
As our weather system is complex and interconnected, people are saying that the changing weather is connected to typhoons in Asia affecting the local jet stream.
This is where short-term forecasting comes in for extreme weather events like these. According to the climate.ai paper, people have been using ML techniques for a while to improve forecasting accuracy. Daily weather forecasts need to be produced and tested every day, so they can be as accurate as possible before heading to the morning shows. Maybe ML can help analyse data from around the world to predict weather events like this, letting us see the connection between an event on another continent and how it would affect your local area, which is important.
I watched a video about a Californian tech company using satellite data to assess the risk of wildfires, making assessments more precise. Insurance companies use conventional maps and block out whole neighbourhoods without going house by house. This may be the future of insuring people in fire-prone areas, as wildfires like these are forcing people to move out of the state entirely. But from watching the videos, extra money will need to be put into wildfire protection. One person in a town that burned down kept his house standing thanks to his wildfire mitigation techniques, which included water hoses around the home, battery and solar power backup in case the grid got disconnected, and fire protection liquid on the windows. This setup, while very effective, looked like it cost a lot of money, probably in the ballpark of more than 60 grand, so these solutions are not available to everyone.
Climate Analytics
Recently I have been writing a lot about how we can use machine learning to help with climate change. A lot of the ideas come from the climate.ai paper, which tries to bridge the gap between the machine learning community and industries dealing with climate change. One of the areas I found interesting, which lots of people don’t talk much about, is climate analytics, where data is collected about the climate and then used to make financial decisions. Due to the large scale of climate change, almost all countries and most industries will be affected, so it makes sense to ensure people do not lose out on their investments because of climate change.
Uses of collecting climate data
All of this will require a lot of data, which machine learning is suited for. There may be some drawbacks, but I do think it will be useful. There are lots of areas where data can help financial investments. One example is flood risk, where focusing on long-term risk will be useful for insurance companies so they can avoid large payouts. Wildfire risk is very similar: fires can burn through rural and suburban areas with a lot of woodland, costing landowners a lot of money, especially if the land is actively used to create an income, like raising cattle.
How data can be collected
The data can be collected in many ways. One increasingly popular area is remote sensing, where satellite imagery is used to collect data about an area. Satellites can collect data in wavelengths not visible to the human eye, so we can view gases, monitor vegetation, and track other elements. Remote sensing could in future be used to enforce regulations. Right now, a lot of satellite data is coarse, with each image covering hundreds of metres, so you can’t be too precise in tracking a specific site. But as the technology gets better, we will be able to pinpoint areas of high emissions and see whether a company is following regulations, though this leads to some privacy concerns.
For urban areas, tracking movement using smartphones has been very useful, as it lets people track the usage of public transport and other services. Using information like that, we can create incentives for people to use less carbon-heavy transportation than cars. This can also help companies invest in new transport methods by looking at supply and demand.
Energy is the most obvious example. Right now, a lot of energy companies are going bankrupt due to unprofitable energy sources, mainly coal. Because of COVID-19, other fossil fuel companies had to downsize, as lockdowns reduced the demand for many energy products, mainly oil. This has forced lots of energy companies to chart a zero-carbon future so they survive the transition.