PV Solar Installation in the US: who has installed solar panels, and who will be next?

Project idea

Photovoltaic (PV) solar panels, which convert solar energy into electricity, are one of the most attractive options for homeowners. Studies have shown that by 2015 about 4.8 million homeowners in the United States had installed solar panels, and the solar energy market continues to grow rapidly. For homeowners, the estimated cost and potential savings of going solar are the most pressing questions. At the same time, there is tremendous commercial potential in the solar energy business, and visualizing the long-term tendency of the market is vital for solar energy companies' survival. This visualization could be realized by examining the following aspects:

  1. Who has installed PV panels, and what are the characteristics of those households, e.g. age, household income, education level, current utility rate, race, home location, local PV resource, and the incentives and tax credits available to those who have installed PV panels?
  2. What does the pattern of solar panel installation look like across the nation, and at what rate is it growing? Which households are the most likely to install solar panels in the future?

The expected primary output of this proposal is a web map application. It will contain two major functions. The first shows households the cost and expected return of going solar based on their home geolocation. The second provides interactive maps that show companies where their future customers are located and how the market is growing.

Initial outputs


The cost and payback period of PV solar installation: Why not go solar!

NetCost

Incentive programs and tax credits bring down the cost of solar panel installation. These are the average costs for each state.

Monthly Saving

Going solar would reduce homeowners' spending on their electricity bills.

Payback Years

Payback years vary from state to state, depending on incentives and costs. A high cost does not necessarily mean a longer payback period, because the payback also depends on the state's current electricity rate and its subsidy/incentive schemes. The higher the current electricity rate, and the larger the state incentives, the sooner you recoup the installation cost.
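As a rough sketch of that trade-off (the numbers below are hypothetical, not values from the datasets used here), the payback period is simply the net cost divided by the annual electricity savings:

```r
# Rough payback-period sketch with hypothetical numbers, not values
# from the Open PV or NREL datasets.
net_cost       <- 15000   # installed cost after incentives and tax credits (USD)
monthly_saving <- 125     # estimated saving on the electricity bill (USD/month)

payback_years <- net_cost / (monthly_saving * 12)
payback_years   # 10 years in this example
```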

How many PV panels have been installed and where?

Number of Solar Installations

The map shows the number of solar panel installations registered in NREL's Open PV Project for each state. I was able to collect about 500,000 installations from the Open PV Project. The data is organized by zip code, so I merged it with the "zipcode" package in R. My R code is available in my GitHub project.
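A minimal sketch of that merge in R is below; "openpv_installs.csv" and its column names are hypothetical stand-ins for the Open PV export.

```r
# Sketch of the zip-code merge; the CSV file and its "zipcode" column
# are hypothetical stand-ins for the Open PV export.
library(zipcode)
data(zipcode)   # data frame with zip, city, state, latitude, longitude

openpv <- read.csv("openpv_installs.csv", stringsAsFactors = FALSE)
openpv$zipcode <- clean.zipcodes(openpv$zipcode)   # pad to 5 digits, drop bad codes

pv_geo <- merge(openpv, zipcode, by.x = "zipcode", by.y = "zip")
head(pv_geo)    # each installation now carries city, state, and lat/long
```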

Other statistical facts: American homeowners who installed solar panels generally have a household income about $25,301.5 higher than the national average. Their homes are located in places with higher electricity rates, about 4 cents/kWh above the national average, and with more solar energy resource, about 1.42 kWh/m2/day above the national average.

Two interactive maps were produced in RStudio with the "leaflet" package.

Solar Installation_screen shot1

An overview of solar panel installations in the United States.

Solar Installation_screen shot2

According to the data registered in the Open PV Project, residents on the West Coast have installed about 32,000 solar panel systems, most of them in California. When you zoom in closely, you can easily browse the details of installation locations around San Francisco.

Solar Installation_screen shot3

Another interesting location is the District of Columbia (Washington, D.C.) area. The East Coast has less solar energy resource than the West Coast, especially compared with California, yet solar panel installations by homeowners around the D.C. area are also very high. From the maps above, we can see why: the cost of installation there is much lower, and the payback period is much shorter than in other parts of the country. It would be fascinating to dig out more information on the factors behind their motivation to install. You can zoom in to much more detailed locations for each installation on this interactive map.

However, some areas, like D.C. and San Francisco, have much larger populations than other parts of the US, which naturally means more installations. An installation rate per 10,000 people is a more appropriate measure. Therefore, I produced another interactive map showing the installation rate per 10,000 people; the bigger the circle, the higher the installation rate.
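A minimal sketch of how such a rate map can be built with the "leaflet" package follows; the toy data frame below stands in for the real zip-code level data, and its values are made up.

```r
# Sketch of the per-10,000-people rate map; the two rows below are
# made-up placeholders for the real zip-code level data.
library(leaflet)
library(dplyr)

pv_by_zip <- data.frame(
  zip        = c("94110", "20001"),
  longitude  = c(-122.42, -77.01),
  latitude   = c(37.75, 38.91),
  installs   = c(420, 180),
  population = c(70000, 40000)
)

pv_by_zip <- pv_by_zip %>%
  mutate(rate_per_10k = installs / population * 10000)

leaflet(pv_by_zip) %>%
  addTiles() %>%
  addCircleMarkers(
    lng = ~longitude, lat = ~latitude,
    radius = ~sqrt(rate_per_10k) * 3,   # circle size scales with the rate
    stroke = FALSE, fillOpacity = 0.5,
    label  = ~zip
  )
```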

Solar Installation_screen shot4

The highest installation rate in the country is in Ladera Ranch, located in southern Orange County, California, though the reason behind it is not clear and more analysis is needed.

Solar Installation_screen shot5

Buckland, MA has the highest installation rate on the East Coast. I can't explain the motivation behind it yet either; further analysis of household characteristics would be helpful. These two interactive maps were uploaded to my GitHub repository, where you can also see the R code I wrote to process the data.

Public Data Sources

To answer these two questions, about 1.67 GB of data were downloaded and scraped from multiple sources:
(1). Electricity rates by zip code;

(2). A 10 km resolution solar energy resource map, as an ESRI shapefile, was downloaded from the National Renewable Energy Laboratory (NREL); it was later extracted by zip code polygons downloaded from ESRI ArcGIS Online (a minimal R sketch of this overlay appears after this list).

(3). Current solar panel installation data was scraped from NREL's Open PV Project website, a collection of installations by zip code (registration is required to access the data). The dataset includes the zip code, cost, size, and state of each installation.

(4). Household income, education, and population for each zip code were obtained from the US Census.

(5). The average cost of solar installation for each state was scraped from the websites Current Cost of Solar Panels and Why Solar Energy? More datasets for this proposal will be downloaded from the Department of Energy on GitHub via API.
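As mentioned in item (2), here is a minimal sketch of that overlay in R using the sf package; the file names and the field names (ZIP, ANN_GHI) are hypothetical.

```r
# Sketch of overlaying zip code polygons with the NREL solar resource
# shapefile; file and field names are hypothetical.
library(sf)
library(dplyr)

solar <- st_read("nrel_solar_resource_10km.shp")   # solar resource polygons
zips  <- st_read("us_zipcodes.shp")                # zip code polygons

# Attach the solar resource values to each zip code polygon, then average
solar_by_zip <- st_join(zips, solar, join = st_intersects) %>%
  st_drop_geometry() %>%
  group_by(ZIP) %>%
  summarise(mean_solar = mean(ANN_GHI, na.rm = TRUE))
```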

Note: I cannot guarantee the accuracy of the analysis. My results are based on two days of data mining, wrangling, and analysis. The quality of the analysis depends heavily on the quality of the data and on how well I understood the datasets in such a limited time. Further validation of the analysis and datasets is needed.

To contact the author, please find me at https://geoyi.org or email me at geospatialanalystyi@gmail.com.

Finally got my GitHub account, plus some other useful resources on Git, GitHub, and RStudio

I finally got my portfolio ready for my data science and GIS specialist job search. Many of my friends in data science have suggested that having a GitHub account would be helpful. GitHub is a site that hosts and manages code for programmers around the world. It works even better when colleagues work on the same project with you, because it tracks the edits and contributions each person makes to the code.

I've started to host some of the code I developed in the past on my GitHub account. I use R and Python for data analysis and data visualization, Python for mapping and GIS work, and HTML, CSS, and JavaScript for web application development. I've always been curious why other people's README files look so much better than my own; by the way, a README helps other programmers read your files and code more easily. Some of my big data friends also shared a super helpful site that teaches you, step by step, how to use Git and link R and R Markdown in RStudio to GitHub. It's very easy to follow.

Anyway, shoot me an email at geospatialanalystyi@gmail.com if you need any other instructions on it.

github

Find out your survival rate in the Titanic tragedy

I believe all of us have watched the movie Titanic by James Cameron (1997), so after a good sobbing, let's find out whether we would have survived the Titanic. The Titanic dataset is a superstar dataset in data science that people use for all sorts of survival machine learning. Today we are going to use R to find out who actually survived, and what their age, sex, and social status were.

The sinking of the RMS Titanic occurred on the night of 14 April through to the morning of 15 April 1912 in the North Atlantic Ocean, four days into the ship's maiden voyage from Southampton to New York City.

titanic boat

(image from Google)

  1. What is in the dataset?

We have 1308 passengers in the data. The data includes:

survival: Survival (0 = No; 1 = Yes);

pclass: Passenger Class (1 = 1st; 2 = 2nd; 3 = 3rd);

name: Name;

sex: Sex;

age: Age;

sibsp: Number of Siblings/Spouses Aboard;

parch: Number of Parents/Children Aboard;

ticket: Ticket Number;

fare: Passenger Fare;

cabin: Cabin;

embarked: Port of Embarkation (C = Cherbourg; Q = Queenstown; S = Southampton).

titanic dataset

What the dataset looks like.

2. Running R and packages.

I have uploaded my R code to my GitHub account; you can find it on GitHub.
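As a minimal sketch of the setup, assuming the data sits in a local CSV ("titanic3.csv" is a hypothetical file name; the same data is also available in several R packages):

```r
# Load packages and the passenger data; "titanic3.csv" is a hypothetical
# file name for the 1,308-passenger dataset described above.
library(ggplot2)
library(dplyr)

titanic <- read.csv("titanic3.csv", stringsAsFactors = FALSE)

str(titanic)                             # check the variables listed above
table(titanic$survived, titanic$pclass)  # survival counts by passenger class
```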

3. Results.

Rplot01

This graph shows who was on the Titanic: there were more male passengers than female, especially in third class.

Rplot02

This graph shows the survival comparison. The left panel shows the people who did not survive, and the right panel shows the survival counts (how many people survived). The death rate for third-class passengers was very high :-(. Female passengers had a high survival rate, especially in first class.

Rplot03

This is also a death-and-survival comparison, but with age on the y-axis. Females had the highest survival rate overall, but in third class, females tended to have to be much younger to survive the tragedy. Now you know that Jack's death in the movie Titanic wasn't just dramatic tragedy; as a young, third-class male he really did face a much higher risk of losing his life in the sinking.
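A minimal ggplot2 sketch of plots along these lines, assuming the titanic data frame loaded in the sketch above with columns pclass, sex, age, and survived:

```r
# Passenger composition by class and sex (similar to the first plot)
ggplot(titanic, aes(x = factor(pclass), fill = sex)) +
  geom_bar(position = "dodge") +
  labs(x = "Passenger class", y = "Count")

# Survival vs. age by sex and class (similar to the last plot)
ggplot(titanic, aes(x = sex, y = age, colour = factor(survived))) +
  geom_jitter(width = 0.2, alpha = 0.5) +
  facet_wrap(~ pclass) +
  labs(colour = "Survived (0 = No, 1 = Yes)")
```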

Data visualization is very straightforward, isn't it? Here is a TED talk I found, 'The beauty of data visualization' by David McCandless. It's really inspiring if you are ever interested in data visualization.

A baby step towards my interactive map application using Leaflet JavaScript

Picture1

This is how my national GDP interactive map looks on localhost. You can watch my first-ever video recording of this interactive map on YouTube, which gives a brief introduction to the map and to the code, written in HTML, CSS, and JavaScript. This was a very simple example I made to test some of my ideas.

If you remember my last blog post, I presented an interactive map hosted via ESRI ArcGIS Online. After my data was successfully uploaded, I found several issues I didn't like about it:

  1. Even though ESRI ArcGIS Online has a super nice format for visualizing spatial data in a pretty way, the data loading from the site is very slow, AND IT COULD BE VERY EXPENSIVE. I am on my 60-day free trial at this point, and I believe that if I want to use the server and do data analysis on ArcGIS Online, I will have to buy their credits;
  2. The way the data is presented is restricted to certain formats, depending on which web map format you select from ESRI.

I use quite a bit of R, and I know there are two R packages, Shiny and Leaflet for R, that might help me develop the idea. I was so thrilled to find these packages; I felt a bright light shining on my road, pointing to the destination I want to head to, and I found a perfect example of what my web map application could look like, especially the American Super Zipcodes case. It has not only an interactive map, but it also shows statistical results on the right side of the map as you zoom in and out. It's too cool.
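To give a sense of how the two packages fit together, here is a minimal Shiny + Leaflet sketch; the gdp data frame and its values are made-up placeholders, not the data behind the example above.

```r
# Minimal Shiny app with a Leaflet map; "gdp" is a made-up placeholder.
library(shiny)
library(leaflet)

gdp <- data.frame(country = c("A", "B"),
                  lng = c(-100, 10), lat = c(40, 50),
                  gdp_trillion = c(18, 3))

ui <- fluidPage(
  leafletOutput("map"),
  tableOutput("stats")            # summary table shown alongside the map
)

server <- function(input, output, session) {
  output$map <- renderLeaflet({
    leaflet(gdp) %>%
      addTiles() %>%
      addCircleMarkers(~lng, ~lat, radius = ~gdp_trillion, label = ~country)
  })
  output$stats <- renderTable(gdp)
}

shinyApp(ui, server)
```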

But I was also disappointed when I found out that developing a web application with Shiny and Leaflet for R would not be free, because I would still need a server to host my data and app once they are ready to share. At this point, however, I only need to test my ideas.

I gave up on the two methods above and also checked out Mapbox Studio and CartoDB, two of the most popular online interactive mapping and visualization platforms. They are aimed at developers (though you can use them without a coding background), but I want features that require coding in JavaScript. The Leaflet JavaScript library is the last and best option for me: it gives me enough freedom to build the functions and features for my application, and even the interactive analytical tools I could put up there. Now I also find that D3 might be even more attractive, because it is a bigger JS library that supports not only interactive maps but also other kinds of online interactive data visualization.

I got a lot of help from browsing through some YouTube videos (that's the reason I recorded a video myself, hoping it could help another struggling beginner like me), and I learned quite a lot of new things, like GeoJSON and geojson-vt. GeoJSON is a geodata format for JavaScript, roughly the equivalent of a shapefile in ArcGIS and QGIS. If your dataset is bigger than about 1 MB, loading the data on your website slows down, so the creator of the Leaflet JS library wrote a vector-tile JS library (geojson-vt) to speed up the loading of large GeoJSON data.

Here is my HTML, CSS, and JavaScript code for the application you see in the video. You can also find me and my code on GitHub.

 

My web map application is online

Hi friends,

I've been working on a web application on rubber cultivation and its risks for the Chinese Ministry of Commerce, which will be out soon, and I just want to share the simplified version of the web map here. I only have a few layers here, though; more to come.

Screen shot for web map application

Web map application by Zhuangfang Yi: Current rubber cultivation area (ha) in tropical Asia

This web map aims to tell investors that rubber cultivation is not just about clearing the land/forests, planting trees, and then waiting to tap the trees and sell the latex. There are far more risks in planting and cultivating rubber trees, including natural disasters and the cultural and economic conflicts between foreign investors and host countries.

We also found that the minimum price of rubber latex needed for livelihood sustainability can be as high as 3 USD/kg. I define the minimum price as the price at which an investor/household can cover the costs of establishing and managing their rubber plantations. When the actual rubber price is lower than the minimum price, there is no profit in having the rubber plantations. The minimum price for running a rubber plantation varies from country to country. I ran the analysis for eight countries in Asia, including China, Laos, Myanmar, Cambodia, Vietnam, Malaysia, and Indonesia. The minimum price depends on the minimum wage, labour availability, the costs of plantation establishment and management, and the average rubber latex productivity over the life span of the rubber trees. The cut-off price ranges from 1.2 USD/kg to 3.6 USD/kg.

For example, if the market price of rubber is now 2 USD/kg, a country whose cut-off price is 3 USD/kg won't make any profit; investors in that country would instead lose at least 1 USD/kg on every kilogram of rubber latex they sell.
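That comparison is just the market price minus the country's cut-off price; here is a tiny sketch with placeholder cut-off values (not the estimates from the analysis):

```r
# Profit (or loss) per kg of latex = market price - country cut-off price.
# The cut-off values below are placeholders, not results from the analysis.
market_price <- 2.0                               # USD/kg
cutoff <- c(country_A = 1.2, country_B = 3.0)     # USD/kg

profit_per_kg <- market_price - cutoff
profit_per_kg   # country_A: 0.8 (profit); country_B: -1.0 (loss per kg sold)
```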

 

Why "#" is important? Data mining and streaming from Twitter using Python

To be able to extract big data from Twitter, you have to register a Twitter app to get API credentials.

I installed Python 3.5 and edited my Windows 8.1 environment variables from the 'advanced system settings'. I downloaded Tweepy (for extracting data from Twitter using Python), but Tweepy would not install from my computer's Command Prompt. It reminded me that I have to be logged in as the administrator to install Tweepy. Of course, right?! Sometimes you just lose the battle by doing something not very smart. I logged back in as administrator and the problem was solved.


Marco Bonzanini has written a full series of seven blog posts about how to mine data from Twitter, if you are ever interested in doing big data analysis.