aggregating by census tract

See it on Tableau

the goal:
I want to find the median value of the lead samples, in parts per billion (PPB), by census tract. To do that I will need to find the census tract associated with each latitude/longitude pair and then take the median of all observations within that tract.

the data:
I have a dataframe that looks like this:

lon lat PPB
-77.034018 38.9138584 0

the code:

step 1.

I will use the FCC's Block API. I can send it a latitude and longitude and it will return the census block (among other things). You really only need a few lines of code, but below I wrote a class so the user can pick out exactly the data they want. The class is initialized with two parameters, latitude and longitude, which are inserted into the URL payload courtesy of the requests module. The response object r is created, and its JSON is parsed into a new object y. The remaining methods in the class simply extract individual data elements from y (and data() returns the full JSON).

import requests
import json

class censusData:
    """Look up census geography for a point using the FCC Block API."""

    def __init__(self, lat, lon, showall=True):
        url = 'http://data.fcc.gov/api/block/find?format=json'
        payload = {'latitude': lat, 'longitude': lon, 'showall': showall}
        self.r = requests.get(url, params=payload)  # call the API
        self.y = self.r.json()                      # parsed JSON response

    def block(self):
        # 15-digit block FIPS (state + county + tract + block)
        return str(self.y['Block']['FIPS'])

    def county(self):
        return str(self.y['County']['name'])

    def state(self):
        return str(self.y['State']['name'])

    def intersection(self):
        # with showall=True the API can return several intersecting blocks
        # for a point on a boundary; keep just the digits (the FIPS) of each
        records = []
        for b in self.y['Block']['intersection']:
            record = ''.join(filter(lambda x: x.isdigit(), str(b)))
            records.append(record)
        return records

    def data(self):
        # the full JSON response as a string
        return json.dumps(self.y)

step 2: using the class
I can now call the API simply, like so:

censusData(28.35975,-81.421988).block()
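
The other accessors work the same way; a quick illustration (what each call returns depends on whatever the FCC API sends back for that point):

point = censusData(28.35975, -81.421988)
print(point.block())   # the 15-digit block FIPS string
print(point.county())  # county name
print(point.state())   # state name
print(point.data())    # the full JSON response as a string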

but how can we apply it to a large dataframe?
hint: use apply

import pandas as pd
from census import censusData  # the class above, saved as census.py

# bring in the data
df = pd.read_csv("data.csv")

# this is when apply is your greatest friend: apply the lookup to every
# row of the dataframe, no messy for loop needed
df['Tract'] = df.apply(lambda x: censusData(x['lat'], x['lon']).block(), axis=1)

now I have a dataframe that looks like this:

lon lat PPB Tract
-77.034018 38.9138584 0 110010081002007
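
The Tract column actually holds the full 15-digit block FIPS; its first 11 digits (state + county + tract) are the tract code, so the final aggregation, the median PPB per tract, is one groupby away. A minimal sketch (the output filename is just an example):

# truncate the block FIPS to the 11-digit tract code, then take the median PPB per tract
df['Tract'] = df['Tract'].astype(str).str[:11]
medians = df.groupby('Tract')['PPB'].median().reset_index()
medians.to_csv("lead_by_tract.csv", index=False)  # example output file for the viz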

web scraping aircraft crashes

See it on Tableau

the goal:
I want to visualize all plane crashes that led to human fatalities.

the data:

The best data I could find was at https://aviation-safety.net/
The data is not in a very consumable structure; it is posted as static HTML tables throughout the website. I will need to iterate through every unique aircraft accident on the site, extract all the data, and put it into a structure that will help me visualize it. This is a perfect job for Python!

the code:


from bs4 import BeautifulSoup as bs
import urllib2
import pandas as pd
import requests

def getPlaneData(yearStart, yearEnd):
    fs = pd.DataFrame()
    for year in range(yearStart, yearEnd + 1):
        lista = []
        # grab the first two index pages for this year and collect every link on them
        for x in range(1, 3):
            firstLink = 'http://www.aviation-safety.net/database/dblist.php?Year=' + str(year) + "&lang=&page=" + str(x)
            r = requests.get(firstLink)
            soup = bs(r.text, 'html.parser')
            for link in soup.find_all('a', href=True):
                lista.append(link['href'])
        # keep only the links that point at an individual accident record, de-duplicated
        content = list(set([h for h in lista if h.startswith('/database/r')]))
        # main loop: fetch each accident page and extract the first data table on it
        for a in content:
            link = 'http://www.aviation-safety.net' + a
            req = urllib2.Request(link)
            req.add_unredirected_header('User-Agent', 'Custom User-Agent')
            html2 = urllib2.urlopen(req).read()
            table = bs(html2, 'html.parser')
            try:
                tab = table.find_all('table')[0]
            except IndexError:
                continue  # no data table on this page, skip it
            records = []
            for tr in tab.findAll('tr'):
                trs = tr.findAll('td')
                if len(trs) < 2:
                    continue  # skip header rows and rows without a label/value pair
                records.append([trs[0].text, trs[1].text])
            if not records:
                continue
            # pivot the two-column label/value table into a single row for this accident
            df = pd.DataFrame(data=records)
            df.set_index(df[0], inplace=True)
            df = pd.DataFrame(df.loc[:, 1])
            df = pd.DataFrame.transpose(df)
            fs = fs.append(df)
    return fs

an aside about the code
The function takes two parameters, yearStart and yearEnd, and loops over every year in that range. For each year it requests the first two index pages, extracts the HTML, and collects every link on them. It then keeps only the hrefs that start with "/database/r" (the individual accident records) and de-duplicates them into a list called content. The main loop requests each page in content and extracts the first data table: tab = table.find_all('table')[0]. Each two-column label/value table is then pivoted into a single row and appended to one big dataframe.
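
Calling the scraper is then just a matter of picking a year range and saving the result; a quick sketch (the year range and filename here are only examples):

crashes = getPlaneData(2000, 2015)  # scrape accidents for 2000 through 2015
crashes.to_csv("aviation_accidents.csv", index=False, encoding="utf-8")
print(crashes.shape)                # rows = accidents scraped, columns = fields found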

the viz:
here

I want to visualize all the data in one chart, and a sorted stacked bar chart seemed like the best approach. I sorted first by the category of crash and then by the number of fatalities. This lets the reader compare fatalities by category and then quickly find the largest plane crash in any given year. It shows the overall trends while still allowing a granular read, all in one chart.
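
Roughly, the sort behind the stacked bars looks like the sketch below; the column names ('Year', 'Category', 'Fatalities') are assumptions about the cleaned-up data, not the raw scraped field names:

# sort within each year by crash category, then by fatalities (largest first)
ordered = crashes.sort_values(by=['Year', 'Category', 'Fatalities'],
                              ascending=[True, True, False])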