The Unix backend for Ibis
We’re happy to announce a new Ibis backend built on the world’s best known web scale technology: Unix pipes.
Why?
Why not? Pipes rock and they automatically stream data between operators and scale to your hard drive.
What’s not to love?
Demo
All production-ready backends ship with amazing demos.
The Unix backend is no different. Let’s see it in action.
First we’ll install the Unix backend.
pip install ibish
Like all production-ready libraries, ibish depends on the latest commit of ibis-framework.
Next we’ll download some data.

!curl -LsS 'https://storage.googleapis.com/ibis-examples/penguins/20240322T125036Z-9aae2/penguins.csv.gz' | zcat > penguins.csv

import ibis
import ibish

ibis.options.interactive = True

unix = ibish.connect({"p": "penguins.csv"})

t = unix.table("p")
t
┏━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━┓
┃ species ┃ island    ┃ bill_length_mm ┃ bill_depth_mm ┃ flipper_length_mm ┃ body_mass_g ┃ sex    ┃ year  ┃
┡━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━┩
│ string  │ string    │ float64        │ float64       │ float64           │ float64     │ string │ int64 │
├─────────┼───────────┼────────────────┼───────────────┼───────────────────┼─────────────┼────────┼───────┤
│ Adelie  │ Torgersen │ 39.1           │ 18.7          │ 181.0             │ 3750.0      │ male   │ 2007  │
│ Adelie  │ Torgersen │ 39.5           │ 17.4          │ 186.0             │ 3800.0      │ female │ 2007  │
│ Adelie  │ Torgersen │ 40.3           │ 18.0          │ 195.0             │ 3250.0      │ female │ 2007  │
│ Adelie  │ Torgersen │ NULL           │ NULL          │ NULL              │ NULL        │ NULL   │ 2007  │
│ Adelie  │ Torgersen │ 36.7           │ 19.3          │ 193.0             │ 3450.0      │ female │ 2007  │
│ Adelie  │ Torgersen │ 39.3           │ 20.6          │ 190.0             │ 3650.0      │ male   │ 2007  │
│ Adelie  │ Torgersen │ 38.9           │ 17.8          │ 181.0             │ 3625.0      │ female │ 2007  │
│ Adelie  │ Torgersen │ 39.2           │ 19.6          │ 195.0             │ 4675.0      │ male   │ 2007  │
│ Adelie  │ Torgersen │ 34.1           │ 18.1          │ 193.0             │ 3475.0      │ NULL   │ 2007  │
│ Adelie  │ Torgersen │ 42.0           │ 20.2          │ 190.0             │ 4250.0      │ NULL   │ 2007  │
│ …       │ …         │ …              │ …             │ …                 │ …           │ …      │ …     │
└─────────┴───────────┴────────────────┴───────────────┴───────────────────┴─────────────┴────────┴───────┘
Sweet, huh?
Let’s filter the data and look at only the year 2009.
expr = t.filter(t.year == 2009)
expr
┏━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━┓
┃ species ┃ island ┃ bill_length_mm ┃ bill_depth_mm ┃ flipper_length_mm ┃ body_mass_g ┃ sex    ┃ year  ┃
┡━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━┩
│ string  │ string │ float64        │ float64       │ float64           │ float64     │ string │ int64 │
├─────────┼────────┼────────────────┼───────────────┼───────────────────┼─────────────┼────────┼───────┤
│ Adelie  │ Biscoe │ 35.0           │ 17.9          │ 192.0             │ 3725.0      │ female │ 2009  │
│ Adelie  │ Biscoe │ 41.0           │ 20.0          │ 203.0             │ 4725.0      │ male   │ 2009  │
│ Adelie  │ Biscoe │ 37.7           │ 16.0          │ 183.0             │ 3075.0      │ female │ 2009  │
│ Adelie  │ Biscoe │ 37.8           │ 20.0          │ 190.0             │ 4250.0      │ male   │ 2009  │
│ Adelie  │ Biscoe │ 37.9           │ 18.6          │ 193.0             │ 2925.0      │ female │ 2009  │
│ Adelie  │ Biscoe │ 39.7           │ 18.9          │ 184.0             │ 3550.0      │ male   │ 2009  │
│ Adelie  │ Biscoe │ 38.6           │ 17.2          │ 199.0             │ 3750.0      │ female │ 2009  │
│ Adelie  │ Biscoe │ 38.2           │ 20.0          │ 190.0             │ 3900.0      │ male   │ 2009  │
│ Adelie  │ Biscoe │ 38.1           │ 17.0          │ 181.0             │ 3175.0      │ female │ 2009  │
│ Adelie  │ Biscoe │ 43.2           │ 19.0          │ 197.0             │ 4775.0      │ male   │ 2009  │
│ …       │ …      │ …              │ …             │ …                 │ …           │ …      │ …     │
└─────────┴────────┴────────────────┴───────────────┴───────────────────┴─────────────┴────────┴───────┘
We can sort the result of that too, and filter again.
expr = (
    expr.order_by("species", ibis.desc("bill_length_mm"))
    .filter(lambda t: t.island == "Biscoe")
)
expr
┏━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━┓
┃ species ┃ island ┃ bill_length_mm ┃ bill_depth_mm ┃ flipper_length_mm ┃ body_mass_g ┃ sex    ┃ year  ┃
┡━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━┩
│ string  │ string │ float64        │ float64       │ float64           │ float64     │ string │ int64 │
├─────────┼────────┼────────────────┼───────────────┼───────────────────┼─────────────┼────────┼───────┤
│ Adelie  │ Biscoe │ 45.6           │ 20.3          │ 191.0             │ 4600.0      │ male   │ 2009  │
│ Adelie  │ Biscoe │ 43.2           │ 19.0          │ 197.0             │ 4775.0      │ male   │ 2009  │
│ Adelie  │ Biscoe │ 42.7           │ 18.3          │ 196.0             │ 4075.0      │ male   │ 2009  │
│ Adelie  │ Biscoe │ 42.2           │ 19.5          │ 197.0             │ 4275.0      │ male   │ 2009  │
│ Adelie  │ Biscoe │ 41.0           │ 20.0          │ 203.0             │ 4725.0      │ male   │ 2009  │
│ Adelie  │ Biscoe │ 39.7           │ 17.7          │ 193.0             │ 3200.0      │ female │ 2009  │
│ Adelie  │ Biscoe │ 39.7           │ 18.9          │ 184.0             │ 3550.0      │ male   │ 2009  │
│ Adelie  │ Biscoe │ 39.6           │ 20.7          │ 191.0             │ 3900.0      │ female │ 2009  │
│ Adelie  │ Biscoe │ 38.6           │ 17.2          │ 199.0             │ 3750.0      │ female │ 2009  │
│ Adelie  │ Biscoe │ 38.2           │ 20.0          │ 190.0             │ 3900.0      │ male   │ 2009  │
│ …       │ …      │ …              │ …             │ …                 │ …           │ …      │ …     │
└─────────┴────────┴────────────────┴───────────────┴───────────────────┴─────────────┴────────┴───────┘
There’s even support for joins and aggregations!
Let’s count the number of island, species pairs and sort descending by the count.
expr = (
    t.group_by("island", "species")
    .agg(n=lambda t: t.count())
    .order_by(ibis.desc("n"))
)
expr
┏━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━┓
┃ island    ┃ species   ┃ n     ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━┩
│ string    │ string    │ int64 │
├───────────┼───────────┼───────┤
│ Biscoe    │ Gentoo    │ 124   │
│ Dream     │ Chinstrap │ 68    │
│ Dream     │ Adelie    │ 56    │
│ Torgersen │ Adelie    │ 52    │
│ Biscoe    │ Adelie    │ 44    │
└───────────┴───────────┴───────┘
For kicks, let’s compare that to the DuckDB backend to make sure we’re able to count stuff.
To be extra awesome, we’ll reuse the same expression to do the computation.
ddb = ibis.duckdb.connect()
ddb.read_csv("penguins.csv", table_name="p")
ibis.memtable(ddb.to_pyarrow(expr.unbind()))

The read_csv call is necessary so that the expression’s table name, p, matches a table inside the DuckDB database.
┏━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━┓
┃ island    ┃ species   ┃ n     ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━┩
│ string    │ string    │ int64 │
├───────────┼───────────┼───────┤
│ Biscoe    │ Gentoo    │ 124   │
│ Dream     │ Chinstrap │ 68    │
│ Dream     │ Adelie    │ 56    │
│ Torgersen │ Adelie    │ 52    │
│ Biscoe    │ Adelie    │ 44    │
└───────────┴───────────┴───────┘
How does it work?
Glad you asked!
The Unix backend for Ibis was built over the course of a few hours, which is about the time it takes to make a production-ready Ibis backend.
Broadly speaking, the Unix backend:
- Produces a shell command for each Ibis table operation.
- Produces a nominal output location for the output of that command, in the form of a named pipe opened in write mode.
- Reads output from the named pipe output location of the root of the expression tree.
- Calls pandas.read_csv on that output.
Shell commands only allow a single input from stdin. However, joins accept more than one input, so we need a way to stream multiple inputs into a join operation.
Named pipes support the semantics of “unnamed” pipes (FIFO queue behavior) but can be used in pipelines with nodes that have more than a single input, since they exist as paths on the file system.
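Here’s a tiny illustration of the trick in plain Python plus coreutils. This isn’t ibish code, and the left.csv/right.csv inputs are made up for the example:

import os
import subprocess
import tempfile

workdir = tempfile.mkdtemp()
left = os.path.join(workdir, "left")
right = os.path.join(workdir, "right")
for path in (left, right):
    os.mkfifo(path)  # FIFO semantics, but addressable by a path

# Each writer's shell blocks on opening its FIFO until `join` opens it for
# reading, so both inputs stream concurrently without hitting disk.
# (`join` wants its inputs sorted on the join key; sorting whole lines is
# close enough for this illustration.)
subprocess.Popen(f"sort left.csv > {left}", shell=True)
subprocess.Popen(f"sort right.csv > {right}", shell=True)

# `join` takes two *file* operands, which is how it sidesteps the
# one-input-per-stdin limitation.
subprocess.run(["join", "-t", ",", left, right], check=True)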
Expressions
Ibis expressions are an abstract representation of an analytics computation over tabular data.
Ibis ships a public API, whose instances we call expressions.
Expressions have an associated type, accessible via their type() method, that determines what methods are available on them.
Expressions are ignorant of their underlying implementation: their composability is determined solely by their type.
This type is determined by the expression’s underlying operation.
The two-layer model makes it easy to describe operations in terms of the data types produced by an expression, rather than as instances of a specific class in a hierarchy.
This allows Ibis maintainers to alter expression API implementations without changing those APIs, making the library easier to maintain and easier to keep stable than if we had a complex (but not necessarily deep!) class hierarchy.
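As a quick illustration, using nothing but the public API on an unbound table:

import ibis

t = ibis.table({"species": "string", "body_mass_g": "float64"}, name="p")

expr = t.body_mass_g.mean()
print(expr.type())  # float64: a numeric scalar, so numeric methods are available
print(type(expr))   # the concrete expression class is an implementation detail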
Operations, though, are really where the nitty gritty implementation details start.
Operations
Ibis operations are lightweight classes that model the tree structure of a computation.
They have zero or more inputs, whose types and values are constrained by Ibis’s type system.
Notably, operations are not part of Ibis’s public API.
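If you want to peek under the hood anyway, the (non-public) op() method hands you the operation behind an expression; roughly:

import ibis

t = ibis.table({"year": "int64"}, name="p")
expr = t.filter(t.year == 2009)

node = expr.op()            # the operation node backing the expression
print(type(node).__name__)  # the filter operation's class name
print(node.args)            # its inputs: the parent table and the predicates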
When we talk about “compilation” in Ibis, we’re talking about the process of converting an operation into something that the backend knows how to execute.
In the case of this 1̵0̸0̵%̵ p̶̺̑r̴̛ͅo̵̒ͅḍ̴̌u̷͇͒c̵̠̈t̷͍̿i̶̪͐o̸̳̾n̷͓̄-r̵̡̫̞͓͆̂̏ẽ̸̪̱̽ͅā̸̤̹̘̅̓͝d̵͇̞̏̂̔̽y̴̝͎̫̬͋̇̒̅ Unix backend, each operation is compiled into a list of strings that represent the shell command to run to execute the operation.
In other backends, like DuckDB, these compilation rules produce a sqlglot object.
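You can see the SQL rendering of that compiled form with ibis.to_sql, for example:

import ibis

t = ibis.table({"island": "string", "species": "string"}, name="p")
expr = t.group_by("island", "species").agg(n=lambda t: t.count())

# Roughly speaking, this renders the compiled sqlglot object as DuckDB SQL.
print(ibis.to_sql(expr, dialect="duckdb"))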
The compile method is also the place where the backend has a chance to invoke custom rewrite rules over operations.
Rewrites are a very useful tool for the Unix backend. For example, the join command (yep, it’s in coreutils!) that we use to execute inner joins with this backend requires that the inputs be sorted; otherwise the results won’t be correct. So, I added a rewrite rule that replaces the left and right relations in a join operation with equivalent relations sorted on the join keys.
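Conceptually, that rewrite turns the first expression below into the second. The real rule operates on operation nodes inside the backend, so this is only a public-API illustration:

import ibis

left = ibis.table({"island": "string", "species": "string"}, name="l")
right = ibis.table({"island": "string", "n": "int64"}, name="r")

# What the user writes:
naive = left.join(right, "island")

# What effectively gets executed, because `join` needs sorted inputs:
rewritten = left.order_by("island").join(right.order_by("island"), "island")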
Once you obtain the output of compile, it’s up to the backend what to do next.
Backend implementation
At this point we’ve got our shell commands and some output locations created as named pipes.
What next?
Well, we need to execute the commands and write their output to the corresponding named pipe.
You might think: “I’ll just loop over the operations, open the pipe in write mode, and call subprocess.Popen(cmd, stdout=named_pipe).”
Not a bad thought, but the semantics of named pipes do not abide such thoughts :)
Named pipes, when opened in write mode, will block until a corresponding handle is opened in read mode.
Futures using a scoped thread pool are a decent way to handle this.
The idea is to launch every node concurrently and then read from the last node’s output. This initial read of the root node’s output pipe kicks off the cascade of other reads necessary to move data through the pipeline.
The Unix backend thus constructs a scoped ThreadPoolExecutor() using a context manager and submits a task for each operation to the executor. Importantly, opening the named pipe in write mode happens inside the task, to avoid blocking the main thread while waiting for a reader to be opened.
The final output task’s path is then passed directly to read_csv, and we’ve now got the result of our computation.
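Here’s a stripped-down sketch of that scheme. It’s an illustration of the approach rather than the ibish source, with a hard-coded two-stage pipeline over penguins.csv:

import os
import subprocess
import tempfile
from concurrent.futures import ThreadPoolExecutor

import pandas as pd

workdir = tempfile.mkdtemp()
t0, t1 = (os.path.join(workdir, name) for name in ("t0", "t1"))
for path in (t0, t1):
    os.mkfifo(path)

def run(cmd, out_path):
    # Opening a FIFO for writing blocks until a reader shows up, so this
    # open() has to happen inside the worker thread, not on the main thread.
    with open(out_path, "w") as out:
        subprocess.run(cmd, stdout=out, check=True)

with ThreadPoolExecutor(max_workers=2) as pool:  # one worker per stage
    # stage 0: drop the header row; stage 1: keep only rows from 2009
    pool.submit(run, ["tail", "--lines", "+2", "penguins.csv"], t0)
    pool.submit(run, ["awk", "-F", ",", "{ if ($8 == 2009) print }", t0], t1)
    # Reading the final pipe is what unblocks the writers upstream and
    # pulls data through the whole pipeline.
    df = pd.read_csv(t1, header=None)

print(df.head())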
Show me the commands already!
Roger that.
expr = (
    t.filter([t.year == 2009])
    .select(
        "year", "species", "flipper_length_mm", island=lambda t: t.island.lower()
    )
    .group_by("island", "species")
    .agg(n=lambda t: t.count(), avg=lambda t: t.island.upper().length().mean())
    .order_by("n")
    .mutate(ilength=lambda t: t.island.length())
    .limit(5)
)
print(unix.explain(expr))
Note that explain isn’t a public method and isn’t likely to become one any time soon.
tail --lines +2 /home/cloud/src/ibis/docs/posts/unix-backend/penguins.csv > t0
awk -F , '{ if (($8 == 2009)) { print }}' t0 > t1
awk -F , '{ print $8 "," $1 "," $5 "," tolower($2) }' t1 > t2
awk -F , '{
agg0[$4","$2]++
agg1[$4","$2] += length(toupper($4))
}
END { for (key in agg0) print key "," agg0[key] "," agg1[key]/NR }' t2 > t3
sort -t , -k 3,3n t3 > t4
awk -F , '{ print $1 "," $2 "," $3 "," $4 "," length($1) }' t4 > t5
head --lines 5 t5 > t6
Conclusion
If you’ve gotten this far, hopefully you’ve had a good laugh.
Let’s wrap up with some final thoughts.
Things to do
- Join our Zulip!
- Open a GitHub issue or discussion!
Things to avoid doing
- Putting this into production