Exploring campaign finance data

blog
data engineering
case study
duckdb
performance
Author

Nick Crews

Published

March 24, 2023

Hi! My name is Nick Crews, and I’m a data engineer who works with public campaign finance data.

In this post, I’ll walk through how I use Ibis to explore public campaign contribution data from the Federal Election Commission (FEC). We’ll do some loading, cleaning, featurizing, and visualization. There will be filtering, sorting, grouping, and aggregation.

Downloading The Data

from pathlib import Path
from zipfile import ZipFile
from urllib.request import urlretrieve

# Download and unzip the 2018 individual contributions data
url = "https://cg-519a459a-0ea3-42c2-b7bc-fa1143481f74.s3-us-gov-west-1.amazonaws.com/bulk-downloads/2018/indiv18.zip"
zip_path = Path("indiv18.zip")
csv_path = Path("indiv18.csv")

if not zip_path.exists():
    urlretrieve(url, zip_path)

if not csv_path.exists():
    with ZipFile(zip_path) as zip_file, csv_path.open("w") as csv_file:
        for line in zip_file.open("itcont.txt"):
            csv_file.write(line.decode())

Loading the data

Now that we have our raw data in a .csv format, let’s load it into Ibis, using the DuckDB backend.

Note that a 4.3 GB .csv would be near the limit of what pandas could handle on my laptop with 16GB of RAM. In pandas, typically every time you perform a transformation on the data, a copy of the data is made. I could only do a few transformations before I ran out of memory.
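To make that concrete, here is a hypothetical sketch of the kind of pandas session that runs out of memory. It is illustrative only, not code from my actual analysis; it assumes the pipe-delimited, headerless FEC format, where column 14 holds the transaction amount:

import pandas as pd

# The whole ~4.3 GB file gets materialized in RAM at once
df = pd.read_csv("indiv18.csv", sep="|", header=None)
# Each transformation below materializes another full copy
positive = df[df[14] > 0]
ordered = positive.sort_values(14)
# A few more steps like this and a 16 GB laptop starts swapping.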

With Ibis, this problem is solved in two different ways.

First, because they are designed to work with very large datasets, many (all?) SQL backends support out-of-core operations. The data lives on disk and is only streamed into memory when needed, then written back to disk as the operation proceeds.

Second, unless you explicitly ask for it, Ibis uses lazy evaluation. When you ask for a result, only that result is computed, and it is not persisted in memory. Only the original source data is persisted; everything else is derived from it on the fly.
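Here is a minimal sketch of what lazy evaluation means in practice, using a hypothetical unbound table rather than our real data:

import ibis

# An unbound table: just a schema, no data behind it
t = ibis.table([("amount", "int64")], name="donations")
expr = t.amount.sum()  # builds an expression tree; nothing is computed
print(type(expr))      # an Ibis scalar expression, not a Python number
# Work only happens when an expression is executed against a backend,
# e.g. expr.execute() on a table that is bound to real data.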

import ibis
from ibis import _

ibis.options.interactive = True

# The raw .csv file doesn't have column names, so we will add them in the next step.
raw = ibis.read_csv(csv_path)
raw
[Preview of the raw table elided: since the file has no header row, all 21 columns come through with auto-generated names and string/int64 types. Each record holds a committee ID (e.g. C00401224), amendment and report codes, entity type, the contributor’s name, city, state, zip, employer, and occupation, a MMDDYYYY transaction date, the amount, and assorted memo/ID fields. We’ll attach real column names next.]
# For a more comprehensive description of the columns and their meaning, see
# https://www.fec.gov/campaign-finance-data/contributions-individuals-file-description/
columns = {
    "CMTE_ID": "keep",  # Committee ID
    "AMNDT_IND": "drop",  # Amendment indicator. A = amendment, N = new, T = termination
    "RPT_TP": "drop",  # Report type (monthly, quarterly, etc)
    "TRANSACTION_PGI": "keep",  # Primary/general indicator
    "IMAGE_NUM": "drop",  # Image number
    "TRANSACTION_TP": "drop",  # Transaction type
    "ENTITY_TP": "keep",  # Entity type
    "NAME": "drop",  # Contributor name
    "CITY": "keep",  # Contributor city
    "STATE": "keep",  # Contributor state
    "ZIP_CODE": "drop",  # Contributor zip code
    "EMPLOYER": "drop",  # Contributor employer
    "OCCUPATION": "drop",  # Contributor occupation
    "TRANSACTION_DT": "keep",  # Transaction date
    "TRANSACTION_AMT": "keep",  # Transaction amount
    # Other ID. For individual contributions will be null. For contributions from
    # other FEC committees, will be the committee ID of the other committee.
    "OTHER_ID": "drop",
    "TRAN_ID": "drop",  # Transaction ID
    "FILE_NUM": "drop",  # File number, unique number assigned to each report filed with the FEC
    "MEMO_CD": "drop",  # Memo code
    "MEMO_TEXT": "drop",  # Memo text
    "SUB_ID": "drop",  # Submission ID. Unique number assigned to each transaction.
}

# Attach the real FEC column names positionally, then select only the keepers
renaming = dict(zip(raw.columns, columns.keys()))
to_keep = [k for k, v in columns.items() if v == "keep"]
kept = raw.relabel(renaming)[to_keep]
kept
┏━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓
┃ CMTE_ID   ┃ TRANSACTION_PGI ┃ ENTITY_TP ┃ CITY         ┃ STATE  ┃ TRANSACTION_DT ┃ TRANSACTION_AMT ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩
│ string    │ string          │ string    │ string       │ string │ string         │ int64           │
├───────────┼─────────────────┼───────────┼──────────────┼────────┼────────────────┼─────────────────┤
│ C00401224 │ P               │ IND       │ OCOEE        │ FL     │ 05182017       │              10 │
│ C00401224 │ P               │ IND       │ OCOEE        │ FL     │ 05192017       │              15 │
│ C00401224 │ P               │ IND       │ CAPE NEDDICK │ ME     │ 05132017       │              35 │
│ C00401224 │ P               │ IND       │ CAPE NEDDICK │ ME     │ 05152017       │              35 │
│ C00401224 │ P               │ IND       │ CAPE NEDDICK │ ME     │ 05192017       │               5 │
│ C00401224 │ P               │ IND       │ CAPE NEDDICK │ ME     │ 05242017       │              15 │
│ C00401224 │ P               │ IND       │ CAPE NEDDICK │ ME     │ 05292017       │             100 │
│ C00401224 │ P               │ IND       │ CAPE NEDDICK │ ME     │ 05302017       │              35 │
│ C00401224 │ P               │ IND       │ ANNA MSRIA   │ FL     │ 05162017       │              25 │
│ C00401224 │ P               │ IND       │ ANNA MSRIA   │ FL     │ 05232017       │              25 │
│ …         │ …               │ …         │ …            │ …      │ …              │               … │
└───────────┴─────────────────┴───────────┴──────────────┴────────┴────────────────┴─────────────────┘
# 21 million rows
kept.count()

┌──────────┐
│ 21730730 │
└──────────┘

Huh, what’s up with those timings? Previewing the head only took a fraction of a second, but finding the number of rows took 10 seconds.

That’s because duckdb is scanning the .csv file on the fly every time we access it. So we only have to read the first few lines to get that preview, but we have to read the whole file to get the number of rows.

Note that this isn’t a feature of Ibis, but a feature of Duckdb. This what I think is one of the strengths of Ibis: Ibis itself doesn’t have to implement any of the optimimizations or features of the backends. Those backends can focus on what they do best, and Ibis can get those things for free.

So, let’s tell duckdb to actually read in the file to its native format so later accesses will be faster. This will be a ~20 seconds that we’ll only have to pay once.

kept = kept.cache()
kept
┏━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓
┃ CMTE_ID   ┃ TRANSACTION_PGI ┃ ENTITY_TP ┃ CITY         ┃ STATE  ┃ TRANSACTION_DT ┃ TRANSACTION_AMT ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩
│ string    │ string          │ string    │ string       │ string │ string         │ int64           │
├───────────┼─────────────────┼───────────┼──────────────┼────────┼────────────────┼─────────────────┤
│ C00401224 │ P               │ IND       │ OCOEE        │ FL     │ 05182017       │              10 │
│ C00401224 │ P               │ IND       │ OCOEE        │ FL     │ 05192017       │              15 │
│ C00401224 │ P               │ IND       │ CAPE NEDDICK │ ME     │ 05132017       │              35 │
│ C00401224 │ P               │ IND       │ CAPE NEDDICK │ ME     │ 05152017       │              35 │
│ C00401224 │ P               │ IND       │ CAPE NEDDICK │ ME     │ 05192017       │               5 │
│ C00401224 │ P               │ IND       │ CAPE NEDDICK │ ME     │ 05242017       │              15 │
│ C00401224 │ P               │ IND       │ CAPE NEDDICK │ ME     │ 05292017       │             100 │
│ C00401224 │ P               │ IND       │ CAPE NEDDICK │ ME     │ 05302017       │              35 │
│ C00401224 │ P               │ IND       │ ANNA MSRIA   │ FL     │ 05162017       │              25 │
│ C00401224 │ P               │ IND       │ ANNA MSRIA   │ FL     │ 05232017       │              25 │
│ …         │ …               │ …         │ …            │ …      │ …              │               … │
└───────────┴─────────────────┴───────────┴──────────────┴────────┴────────────────┴─────────────────┘

Look, now accessing it only takes a fraction of a second!

kept.count()

┌──────────┐
│ 21730730 │
└──────────┘

Committees Data

The contributions only list an opaque CMTE_ID column. We want to know which actual committee this is. Let’s load the committees table so we can look up committee names by committee ID.

def read_committees():
    committees_url = "https://cg-519a459a-0ea3-42c2-b7bc-fa1143481f74.s3-us-gov-west-1.amazonaws.com/bulk-downloads/2018/committee_summary_2018.csv"
    # This just creates a view, it doesn't actually fetch the data yet
    tmp = ibis.read_csv(committees_url)
    tmp = tmp["CMTE_ID", "CMTE_NM"]
    # The raw table contains multiple rows for each committee id, so let's pick
    # an arbitrary row for each committee id as the representative name.
    deduped = tmp.group_by("CMTE_ID").agg(CMTE_NM=_.CMTE_NM.arbitrary())
    return deduped


comms = read_committees().cache()
comms
┏━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ CMTE_ID   ┃ CMTE_NM                                                         ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ string    │ string                                                          │
├───────────┼─────────────────────────────────────────────────────────────────┤
│ C00659441 │ JASON ORTITAY FOR CONGRESS                                      │
│ C00297911 │ TEXAS FORESTRY ASSOCIATION FORESTRY POLITICAL ACTION COMMITTEE  │
│ C00340745 │ WADDELL & REED FINANCIAL, INC. POLITICAL ACTION COMMITTEE       │
│ C00679217 │ CANTWELL-WARREN VICTORY FUND                                    │
│ C00101204 │ NATIONAL FISHERIES INSTITUTE (FISHPAC)                          │
│ C00010520 │ MEREDITH CORPORATION EMPLOYEES FUND FOR BETTER GOVERNMENT       │
│ C00532788 │ LAFAYETTE COUNTY DEMOCRATIC PARTY                               │
│ C00128561 │ TOLL BROS. INC. PAC                                             │
│ C00510958 │ WENDYROGERS.ORG                                                 │
│ C00665604 │ COMMITTEE TO ELECT BILL EBBEN                                   │
│ …         │ …                                                               │
└───────────┴─────────────────────────────────────────────────────────────────┘

Now add the committee name to the contributions table:

together = kept.left_join(comms, "CMTE_ID").drop("CMTE_ID", "CMTE_ID_right")
together
┏━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ TRANSACTION_PGI ┃ ENTITY_TP ┃ CITY             ┃ STATE  ┃ TRANSACTION_DT ┃ TRANSACTION_AMT ┃ CMTE_NM                                         ┃
┡━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ string          │ string    │ string           │ string │ string         │ int64           │ string                                          │
├─────────────────┼───────────┼──────────────────┼────────┼────────────────┼─────────────────┼─────────────────────────────────────────────────┤
│ P               │ IND       │ COHASSET         │ MA     │ 01312017       │             230 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │
│ P               │ IND       │ KEY LARGO        │ FL     │ 01042017       │            5000 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │
│ P               │ IND       │ LOOKOUT MOUNTAIN │ GA     │ 01312017       │             230 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │
│ P               │ IND       │ NORTH YARMOUTH   │ ME     │ 01312017       │             384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │
│ P               │ IND       │ ALPHARETTA       │ GA     │ 01312017       │             384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │
│ P               │ IND       │ FALMOUTH         │ ME     │ 01312017       │             384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │
│ P               │ IND       │ FALMOUTH         │ ME     │ 01312017       │             384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │
│ P               │ IND       │ HOLLIS CENTER    │ ME     │ 01312017       │             384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │
│ P               │ IND       │ FALMOUTH         │ ME     │ 01312017       │             384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │
│ P               │ IND       │ ALEXANDRIA       │ VA     │ 01312017       │             384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │
│ …               │ …         │ …                │ …      │ …              │               … │ …                                               │
└─────────────────┴───────────┴──────────────────┴────────┴────────────────┴─────────────────┴─────────────────────────────────────────────────┘

Cleaning

First, let’s drop any contributions that don’t have a committee name. There are only 6 of them.

# We can do this fearlessly, no .copy() needed, because
# everything in Ibis is immutable. If we did this in pandas,
# we might start modifying the original DataFrame accidentally!
cleaned = together

has_name = cleaned.CMTE_NM.notnull()
cleaned = cleaned[has_name]
has_name.value_counts()
┏━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ NotNull(CMTE_NM)  NotNull(CMTE_NM)_count ┃
┡━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━┩
│ booleanint64                  │
├──────────────────┼────────────────────────┤
│ True             │               21730724 │
│ False            │                      6 │
└──────────────────┴────────────────────────┘

Let’s look at the ENTITY_TP column. This represents the type of entity that made the contribution:

together.ENTITY_TP.value_counts()
┏━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓
┃ ENTITY_TP ┃ ENTITY_TP_count ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩
│ string    │ int64           │
├───────────┼─────────────────┤
│ NULL      │            5289 │
│ CAN       │           13659 │
│ COM       │             867 │
│ IND       │        21687992 │
│ ORG       │           18555 │
│ PAC       │            3621 │
│ PTY       │              49 │
│ CCM       │             698 │
└───────────┴─────────────────┘

We only care about contributions from individuals.

Once we filter on this column, the contents of it are irrelevant, so let’s drop it.

cleaned = together[_.ENTITY_TP == "IND"].drop("ENTITY_TP")

It looks like the TRANSACTION_DT column was a raw string like “MMDDYYYY”, so let’s convert that to a proper date type.

from ibis.expr.types import StringValue, DateValue


def mmddyyyy_to_date(val: StringValue) -> DateValue:
    return val.cast(str).lpad(8, "0").to_timestamp("%m%d%Y").date()


cleaned = cleaned.mutate(date=mmddyyyy_to_date(_.TRANSACTION_DT)).drop("TRANSACTION_DT")
cleaned
┏━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ TRANSACTION_PGI ┃ CITY             ┃ STATE  ┃ TRANSACTION_AMT ┃ CMTE_NM                                         ┃ date       ┃
┡━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ string          │ string           │ string │ int64           │ string                                          │ date       │
├─────────────────┼──────────────────┼────────┼─────────────────┼─────────────────────────────────────────────────┼────────────┤
│ P               │ COHASSET         │ MA     │             230 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │
│ P               │ KEY LARGO        │ FL     │            5000 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-04 │
│ P               │ LOOKOUT MOUNTAIN │ GA     │             230 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │
│ P               │ NORTH YARMOUTH   │ ME     │             384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │
│ P               │ ALPHARETTA       │ GA     │             384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │
│ P               │ FALMOUTH         │ ME     │             384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │
│ P               │ FALMOUTH         │ ME     │             384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │
│ P               │ HOLLIS CENTER    │ ME     │             384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │
│ P               │ FALMOUTH         │ ME     │             384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │
│ P               │ ALEXANDRIA       │ VA     │             384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │
│ …               │ …                │ …      │               … │ …                                               │ …          │
└─────────────────┴──────────────────┴────────┴─────────────────┴─────────────────────────────────────────────────┴────────────┘
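As a quick sanity check, we can run the helper on a single hand-written value (a hypothetical input, not a row from the data):

# "5172017" is 7 characters, so .lpad() first restores the leading zero
mmddyyyy_to_date(ibis.literal("5172017")).execute()
# should give datetime.date(2017, 5, 17)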

The TRANSACTION_PGI column represents the type (primary, general, etc.) of election, and the year. But it’s not used very consistently:

cleaned.TRANSACTION_PGI.topk(10)
┏━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┓
┃ TRANSACTION_PGI ┃ CountStar() ┃
┡━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━┩
│ string          │ int64       │
├─────────────────┼─────────────┤
│ P               │    17013596 │
│ G2018           │     2095123 │
│ P2018           │     1677183 │
│ P2020           │      208501 │
│ O2018           │      161874 │
│ S2017           │      124336 │
│ G2017           │       98401 │
│ P2022           │       91136 │
│ P2017           │       61153 │
│ R2017           │       54281 │
└─────────────────┴─────────────┘
def get_election_type(pgi: StringValue) -> StringValue:
    """Use the first letter of the TRANSACTION_PGI column to determine the election type

    If the first letter is not one of the known election stages, return null.
    """
    election_types = {
        "P": "primary",
        "G": "general",
        "O": "other",
        "C": "convention",
        "R": "runoff",
        "S": "special",
        "E": "recount",
    }
    first_letter = pgi[0]
    return first_letter.substitute(election_types, else_=ibis.null())


cleaned = cleaned.mutate(election_type=get_election_type(_.TRANSACTION_PGI)).drop(
    "TRANSACTION_PGI"
)
cleaned
┏━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ CITY       ┃ STATE  ┃ TRANSACTION_AMT ┃ CMTE_NM                   ┃ date       ┃ election_type ┃
┡━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ string     │ string │ int64           │ string                    │ date       │ string        │
├────────────┼────────┼─────────────────┼───────────────────────────┼────────────┼───────────────┤
│ ATLANTA    │ GA     │              15 │ NANCY PELOSI FOR CONGRESS │ 2017-06-20 │ primary       │
│ AUSTIN     │ TX     │              15 │ NANCY PELOSI FOR CONGRESS │ 2017-06-04 │ primary       │
│ WASHINGTON │ DC     │              25 │ NANCY PELOSI FOR CONGRESS │ 2017-06-23 │ primary       │
│ HONOLULU   │ HI     │              10 │ NANCY PELOSI FOR CONGRESS │ 2017-04-20 │ primary       │
│ MAMARONECK │ NY     │             110 │ NANCY PELOSI FOR CONGRESS │ 2017-06-02 │ primary       │
│ REHOBOTH   │ MA     │              10 │ NANCY PELOSI FOR CONGRESS │ 2017-06-01 │ primary       │
│ BERKELEY   │ CA     │              25 │ NANCY PELOSI FOR CONGRESS │ 2017-06-05 │ primary       │
│ BEAUMONT   │ TX     │              25 │ NANCY PELOSI FOR CONGRESS │ 2017-04-12 │ primary       │
│ CONCORD    │ MA     │             200 │ NANCY PELOSI FOR CONGRESS │ 2017-05-05 │ primary       │
│ OXNARD     │ CA     │              15 │ NANCY PELOSI FOR CONGRESS │ 2017-03-31 │ primary       │
│ …          │ …      │               … │ …                         │ …          │ …             │
└────────────┴────────┴─────────────────┴───────────────────────────┴────────────┴───────────────┘

That worked well! Only 664 of the ~21 million rows ended up with a null election type, so we were almost always able to determine it.

cleaned.election_type.topk(10)
┏━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┓
┃ election_type ┃ CountStar() ┃
┡━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━┩
│ string        │ int64       │
├───────────────┼─────────────┤
│ primary       │    19061953 │
│ general       │     2216685 │
│ other         │      161965 │
│ special       │      149572 │
│ runoff        │       69637 │
│ convention    │       22453 │
│ recount       │        5063 │
│ NULL          │         664 │
└───────────────┴─────────────┘

About 1 in 20 transactions are for zero or negative amounts. These could represent refunds, or they could be data entry errors. Let’s drop them to keep it simple.

above_zero = cleaned.TRANSACTION_AMT > 0
cleaned = cleaned[above_zero]
above_zero.value_counts()
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Greater(TRANSACTION_AMT, 0)  Greater(TRANSACTION_AMT, 0)_count ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ booleanint64                             │
├─────────────────────────────┼───────────────────────────────────┤
│ True                        │                          20669809 │
│ False                       │                           1018183 │
└─────────────────────────────┴───────────────────────────────────┘

Adding Features

Now that the data is cleaned up to a usable format, let’s add some features.

First, it’s useful to categorize donations by size, placing them into buckets of small, medium, large, etc.

edges = [
    10,
    50,
    100,
    500,
    1000,
    5000,
]
labels = [
    "<10",
    "10-50",
    "50-100",
    "100-500",
    "500-1000",
    "1000-5000",
    "5000+",
]


def bucketize(vals, edges, str_labels):
    # Uses Ibis's .bucket() method to create a categorical column
    int_labels = vals.bucket(edges, include_under=True, include_over=True)
    # Map the integer labels to the string labels
    int_to_str = {str(i): s for i, s in enumerate(str_labels)}
    return int_labels.cast(str).substitute(int_to_str)


featured = cleaned.mutate(amount_bucket=bucketize(_.TRANSACTION_AMT, edges, labels))
featured
┏━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ CITY         ┃ STATE  ┃ TRANSACTION_AMT ┃ CMTE_NM               ┃ date       ┃ election_type ┃ amount_bucket ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ string       │ string │ int64           │ string                │ date       │ string        │ string        │
├──────────────┼────────┼─────────────────┼───────────────────────┼────────────┼───────────────┼───────────────┤
│ REMINGTON    │ IN     │              50 │ AMERICA'S LIBERTY PAC │ 2017-05-30 │ primary       │ 50-100        │
│ REMINGTON    │ IN     │              50 │ AMERICA'S LIBERTY PAC │ 2017-06-05 │ primary       │ 50-100        │
│ VANCOUVER    │ WA     │             100 │ AMERICA'S LIBERTY PAC │ 2017-06-07 │ primary       │ 100-500       │
│ SOLANA BEACH │ CA     │             500 │ AMERICA'S LIBERTY PAC │ 2017-06-26 │ primary       │ 500-1000      │
│ HILLSDALE    │ MI     │             250 │ AMERICA'S LIBERTY PAC │ 2017-05-15 │ primary       │ 100-500       │
│ MIDDLEBURY   │ VT     │             500 │ NBT PAC FEDERAL FUND  │ 2017-06-05 │ primary       │ 500-1000      │
│ WILLISTON    │ VT     │             500 │ NBT PAC FEDERAL FUND  │ 2017-05-30 │ primary       │ 500-1000      │
│ GLENMONT     │ NY     │             350 │ NBT PAC FEDERAL FUND  │ 2017-06-01 │ primary       │ 100-500       │
│ NORWICH      │ NY     │             250 │ NBT PAC FEDERAL FUND  │ 2017-05-31 │ primary       │ 100-500       │
│ CLIFTON PARK │ NY     │             250 │ NBT PAC FEDERAL FUND  │ 2017-06-26 │ primary       │ 100-500       │
│ …            │ …      │               … │ …                     │ …          │ …             │ …             │
└──────────────┴────────┴─────────────────┴───────────────────────┴────────────┴───────────────┴───────────────┘
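To convince ourselves the bucket-to-label mapping is right, we can bucketize a few hand-picked amounts (a hypothetical check using an in-memory table, not data from the post):

test_amounts = ibis.memtable({"amt": [5, 10, 99, 5000]})
bucketize(test_amounts.amt, edges, labels).execute()
# should come back as the matching labels: <10, 10-50, 50-100, 5000+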

Analysis

By donation size

One thing we can look at is the donation breakdown by size:

  • Are most donations small or large?
  • Where do politicians/committees get most of their money? From large or small donations?

We will also compare the performance of Ibis and pandas on this groupby.

def summary_by(table, by):
    return table.group_by(by).agg(
        n_donations=_.count(),
        total_amount=_.TRANSACTION_AMT.sum(),
        mean_amount=_.TRANSACTION_AMT.mean(),
        median_amount=_.TRANSACTION_AMT.approx_median(),
    )


def summary_by_pandas(df, by):
    return df.groupby(by, as_index=False).agg(
        n_donations=("election_type", "count"),
        total_amount=("TRANSACTION_AMT", "sum"),
        mean_amount=("TRANSACTION_AMT", "mean"),
        median_amount=("TRANSACTION_AMT", "median"),
    )


# persist the input data so the following timings of the group_by are accurate.
subset = featured["election_type", "amount_bucket", "TRANSACTION_AMT"]
subset = subset.cache()
pandas_subset = subset.execute()

Let’s take a look at what we are actually computing:

by_type_and_bucket = summary_by(subset, ["election_type", "amount_bucket"])
by_type_and_bucket
┏━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ election_type ┃ amount_bucket ┃ n_donations ┃ total_amount ┃ mean_amount  ┃ median_amount ┃
┡━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ string        │ string        │ int64       │ int64        │ float64      │ int64         │
├───────────────┼───────────────┼─────────────┼──────────────┼──────────────┼───────────────┤
│ primary       │ 500-1000      │      634677 │    334630687 │   527.245649 │           500 │
│ general       │ 5000+         │        3125 │     44496373 │ 14238.839360 │          7537 │
│ special       │ 500-1000      │        7811 │      4003293 │   512.519908 │           500 │
│ runoff        │ 100-500       │       18193 │      3088289 │   169.751498 │           100 │
│ convention    │ 500-1000      │        1824 │       945321 │   518.268092 │           500 │
│ general       │ <10           │      115873 │       536742 │     4.632158 │             5 │
│ general       │ 50-100        │      304363 │     16184312 │    53.174374 │            50 │
│ general       │ 1000-5000     │      246101 │    460025242 │  1869.253851 │          1978 │
│ general       │ 10-50         │      660787 │     14411588 │    21.809733 │            25 │
│ other         │ 500-1000      │         119 │        62535 │   525.504202 │           500 │
│ …             │ …             │           … │            … │            … │             … │
└───────────────┴───────────────┴─────────────┴──────────────┴──────────────┴───────────────┘

OK, now let’s do our timings.

One interesting thing to pay attention to here is the execution time for the following groupby. Before, we could get away with lazy execution: because we only wanted to preview the first few rows, we only had to compute the first few rows, so all our previews were very fast.

But now, as soon as we do a groupby, we have to actually go through the whole dataset to compute the aggregate for each group, so this will be slower. BUT, DuckDB is still quite fast: it takes only a couple hundred milliseconds to groupby-agg all ~20 million rows!
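As an aside, since Ibis is just compiling our expression to SQL and handing it to DuckDB, we can also peek at what’s being sent to the backend (a sketch; output elided, and ibis.to_sql availability depends on your Ibis version):

# Inspect the SQL that Ibis generates for this aggregation
print(ibis.to_sql(summary_by(subset, ["election_type", "amount_bucket"])))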

%timeit summary_by(subset, ["election_type", "amount_bucket"]).execute()  # .execute() so we actually fetch the data
161 ms ± 4.75 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

Now let’s try the same thing in pandas:

%timeit summary_by_pandas(pandas_subset, ["election_type", "amount_bucket"])
2.19 s ± 6.54 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

It takes a bit over 2 seconds, more than 10 times slower than DuckDB. At this scale it still doesn’t matter much, but you can imagine that with a much larger dataset, it would.

Let’s also think about memory usage:

pandas_subset.memory_usage(deep=True).sum() / 1e9  # GB
2.782586667

The source dataframe is a couple of gigabytes, so peak memory usage during the groupby is probably a bit higher than that. You could use a memory profiler such as Fil if you wanted an exact number; I was too lazy to do that here.

Again, this works on my laptop at this dataset size, but with much more data I’d start having problems. DuckDB, on the other hand, is designed to work out of core, so it should scale to datasets of hundreds of gigabytes, far larger than your computer’s RAM.

Back to analysis

OK, let’s plot the result of that groupby.

Surprise! (Or maybe not…) Most donations are small. But most of the money comes from donations larger than $1000.

Well if that’s the case, why do politicians spend so much time soliciting small donations? One explanation is that they can use the number of donations as a marketing pitch, to show how popular they are, and thus how viable of a candidate they are.

This also might explain whose interests are being served by our politicians.

import altair as alt

# Do some bookkeeping so the buckets are displayed smallest to largest on the charts
bucket_col = alt.Column("amount_bucket:N", sort=labels)

n_by_bucket = (
    alt.Chart(by_type_and_bucket.execute())
    .mark_bar()
    .encode(
        x=bucket_col,
        y="n_donations:Q",
        color="election_type:N",
    )
)
total_by_bucket = (
    alt.Chart(by_type_and_bucket.execute())
    .mark_bar()
    .encode(
        x=bucket_col,
        y="total_amount:Q",
        color="election_type:N",
    )
)
n_by_bucket | total_by_bucket

By election stage

Let’s look at how donations break down by election stage. Do people donate differently for primary elections vs general elections?

Let’s ignore everything but primary and general elections, since they are the most common, and arguably the most important.

# Restrict to the two most common (and most important) election types
gb2 = by_type_and_bucket[_.election_type.isin(("primary", "general"))]
# Window function: the total number of donations within each election type
n_donations_per_election_type = _.n_donations.sum().over(group_by="election_type")
# The fraction of an election type's donations that fall in each bucket
frac = _.n_donations / n_donations_per_election_type
gb2 = gb2.mutate(frac_n_donations_per_election_type=frac)
gb2
┏━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ election_type ┃ amount_bucket ┃ n_donations ┃ total_amount ┃ mean_amount  ┃ median_amount ┃ frac_n_donations_per_election_type ┃
┡━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ string        │ string        │ int64       │ int64        │ float64      │ int64         │ float64                            │
├───────────────┼───────────────┼─────────────┼──────────────┼──────────────┼───────────────┼────────────────────────────────────┤
│ general       │ <10           │      115873 │       536742 │     4.632158 │             5 │                           0.052544 │
│ general       │ 50-100        │      304363 │     16184312 │    53.174374 │            50 │                           0.138017 │
│ general       │ 1000-5000     │      246101 │    460025242 │  1869.253851 │          1961 │                           0.111598 │
│ general       │ 10-50         │      660787 │     14411588 │    21.809733 │            25 │                           0.299642 │
│ general       │ 100-500       │      700821 │    123174568 │   175.757530 │           150 │                           0.317796 │
│ general       │ 500-1000      │      174182 │     91015697 │   522.532162 │           500 │                           0.078985 │
│ general       │ 5000+         │        3125 │     44496373 │ 14238.839360 │          7601 │                           0.001417 │
│ primary       │ 5000+         │       44085 │   1558371116 │ 35349.237065 │         10000 │                           0.002422 │
│ primary       │ 100-500       │     3636287 │    637353634 │   175.275943 │           150 │                           0.199765 │
│ primary       │ 500-1000      │      634677 │    334630687 │   527.245649 │           500 │                           0.034867 │
│ …             │ …             │           … │            … │            … │             … │                                  … │
└───────────────┴───────────────┴─────────────┴──────────────┴──────────────┴───────────────┴────────────────────────────────────┘

It looks like primary elections get a larger proportion of small donations.

alt.Chart(gb2.execute()).mark_bar().encode(
    x="election_type:O",
    y="frac_n_donations_per_election_type:Q",
    color=bucket_col,
)

By recipient

Let’s look at the top players. Who gets the most donations?

Far and away it is ActBlue, which acts as a conduit for donations to Democratic interests.

Beto O’Rourke is the top individual politician, hats off to him!

by_recip = summary_by(featured, "CMTE_NM")
by_recip
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ CMTE_NM                                                           ┃ n_donations ┃ total_amount ┃ mean_amount ┃ median_amount ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ string                                                            │ int64       │ int64        │ float64     │ int64         │
├───────────────────────────────────────────────────────────────────┼─────────────┼──────────────┼─────────────┼───────────────┤
│ INDIANA DENTAL PAC                                                │         111 │        62236 │  560.684685 │           410 │
│ BEAM SUNTORY INC POLITICAL ACTION COMMITTEE                       │         407 │        64806 │  159.228501 │            65 │
│ AMEDISYS, INC. POLITICAL ACTION COMMITTEE                         │         132 │        25000 │  189.393939 │            75 │
│ PIEDMONT TRIAD ANESTHESIA P A FEDERAL PAC                         │         132 │        90375 │  684.659091 │           600 │
│ AHOLD DELHAIZE USA, INC POLITICAL ACTION COMMITTEE                │         369 │        48062 │  130.249322 │           100 │
│ DIMITRI FOR CONGRESS                                              │          87 │        34719 │  399.068966 │           250 │
│ RELX INC. POLITICAL ACTION COMMITTEE                              │        5491 │       306908 │   55.892916 │            34 │
│ MAKING INVESTMENTS MAJORITY INSURED PAC                           │          14 │        30600 │ 2185.714286 │          1000 │
│ AMERICAN ACADEMY OF OTOLARYNGOLOGY-HEAD AND NECK SURGERY ENT PAC  │         765 │       285756 │  373.537255 │           365 │
│ MIMI WALTERS VICTORY FUND                                         │         840 │      2514824 │ 2993.838095 │          2506 │
│ …                                                                 │           … │            … │           … │             … │
└───────────────────────────────────────────────────────────────────┴─────────────┴──────────────┴─────────────┴───────────────┘
top_recip = by_recip.order_by(ibis.desc("n_donations")).head(10)
alt.Chart(top_recip.execute()).mark_bar().encode(
    x=alt.X("CMTE_NM:O", sort="-y"),
    y="n_donations:Q",
)

By Location

Where are the largest donations coming from?

f2 = featured.mutate(loc=_.CITY + ", " + _.STATE).drop("CITY", "STATE")
by_loc = summary_by(f2, "loc")
# Drop the places with a small number of donations so we're
# resistant to outliers for the mean
by_loc = by_loc[_.n_donations > 1000]
by_loc
┏━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ loc             ┃ n_donations ┃ total_amount ┃ mean_amount ┃ median_amount ┃
┡━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ string          │ int64       │ int64        │ float64     │ int64         │
├─────────────────┼─────────────┼──────────────┼─────────────┼───────────────┤
│ NAZARETH, PA    │        1460 │       138710 │   95.006849 │            38 │
│ FULSHEAR, TX    │        1504 │       346778 │  230.570479 │            50 │
│ GLOUCESTER, MA  │        4956 │       563331 │  113.666465 │            25 │
│ NORMAN, OK      │        6195 │       945333 │  152.596126 │            35 │
│ OAK PARK, IL    │       12017 │      3413138 │  284.025797 │            39 │
│ AUSTIN, TX      │      189865 │     33315922 │  175.471635 │            38 │
│ MIAMI BEACH, FL │       12825 │     10598453 │  826.390097 │           100 │
│ SAN ANTONIO, TX │      140529 │     18925978 │  134.676672 │            35 │
│ HAMBURG, NY     │        2322 │       170254 │   73.322136 │             8 │
│ PITTSBURGH, PA  │       74208 │     14358578 │  193.490971 │            42 │
│ …               │           … │            … │           … │             … │
└─────────────────┴─────────────┴──────────────┴─────────────┴───────────────┘
def top_by(col):
    top = by_loc.order_by(ibis.desc(col)).head(10)
    return (
        alt.Chart(top.execute())
        .mark_bar()
        .encode(
            x=alt.X('loc:O', sort="-y"),
            y=col,
        )
    )


top_by("n_donations") | top_by("total_amount") | top_by("mean_amount") | top_by(
    "median_amount"
)

By month

When do the donations come in?

by_month = summary_by(featured, _.date.month().name("month_int"))
# Sorta hacky: .substitute() doesn't work to change dtypes (yet?),
# so we cast to string and then do our mapping
month_map = {
    "1": "Jan",
    "2": "Feb",
    "3": "Mar",
    "4": "Apr",
    "5": "May",
    "6": "Jun",
    "7": "Jul",
    "8": "Aug",
    "9": "Sep",
    "10": "Oct",
    "11": "Nov",
    "12": "Dec",
}
by_month = by_month.mutate(month_str=_.month_int.cast(str).substitute(month_map))
by_month
┏━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━┓
┃ month_int ┃ n_donations ┃ total_amount ┃ mean_amount ┃ median_amount ┃ month_str ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━┩
│ int32     │ int64       │ int64        │ float64     │ int64         │ string    │
├───────────┼─────────────┼──────────────┼─────────────┼───────────────┼───────────┤
│      NULL │        1514 │       250297 │  165.321664 │           100 │ NULL      │
│         1 │      348979 │    174837854 │  500.998209 │           124 │ Jan       │
│         2 │      581646 │    255997655 │  440.126219 │           100 │ Feb       │
│         3 │     1042577 │    430906797 │  413.309326 │            81 │ Mar       │
│         4 │     1088244 │    299252692 │  274.986760 │            50 │ Apr       │
│         5 │     1374247 │    387317192 │  281.839576 │            48 │ May       │
│         6 │     1667285 │    465305247 │  279.079610 │            44 │ Jun       │
│         7 │     1607053 │    320528605 │  199.451172 │            35 │ Jul       │
│         8 │     2023466 │    473544182 │  234.026261 │            35 │ Aug       │
│         9 │     2583847 │    697888624 │  270.096729 │            38 │ Sep       │
│         … │           … │            … │           … │             … │ …         │
└───────────┴─────────────┴──────────────┴─────────────┴───────────────┴───────────┘
months_in_order = list(month_map.values())
alt.Chart(by_month.execute()).mark_bar().encode(
    x=alt.X("month_str:O", sort=months_in_order),
    y="n_donations:Q",
)

Conclusion

Thanks for following along! I hope you’ve learned something about Ibis, and maybe even about campaign finance.

Ibis is a great tool for exploring data. I now find myself reaching for it when in the past I would have reached for pandas.

Some of the highlights for me:

  • Fast, lazy execution, a great display format, and good type hinting/editor support for a great REPL experience.
  • Very well thought-out API and semantics (e.g. isinstance(val, NumericValue)?? That’s beautiful!)
  • Fast and fairly complete string support, since I work with a lot of text data.
  • Extremely responsive maintainers. Sometimes I’ve submitted multiple feature requests and bug reports in a single day, and a PR has been merged by the next day.
  • Escape hatch to SQL. I didn’t have to use that here, but if something isn’t supported, you can always fall back to SQL (see the sketch below).
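For instance, the escape hatch looks roughly like this (a sketch assuming the DuckDB backend; the exact method availability varies a bit across Ibis versions):

import ibis

con = ibis.duckdb.connect()  # in-memory DuckDB database
con.read_csv("indiv18.csv", table_name="contribs")
# Hand-written SQL comes back as an ordinary Ibis table expression
expr = con.sql("SELECT count(*) AS n FROM contribs")
expr.execute()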

Check out The Ibis Website for more information.
