# Napkin Folding - Data Origami's Blog

## Highlights from lifelines v0.25.0

Posted by **Cameron Davidson-Pilon**

Today, lifelines 0.25.0 was released. I'm very excited about some changes in this version, and want to highlight a few of them. Be sure to upgrade with: `pip install lifelines==0.25.0`. Formulas everywhere! Formulas, which should really be called Wilkinson-style notation but which everyone just calls formulas, are a lightweight grammar for describing additive relationships. If you have used R, you'll likely be familiar with formulas. They are less common in Python, so here's an example: Writing age +...
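As a toy illustration of the idea (my own sketch, not lifelines' implementation): a formula like `age + salary` names the terms of an additive model, which expand into the columns of a design matrix with an intercept prepended.

```python
# Toy illustration (not the lifelines implementation): a formula string
# like "age + salary" names the columns of an additive design matrix,
# to which an intercept column is prepended.
def design_matrix(formula, rows):
    terms = [t.strip() for t in formula.split("+")]
    # each row becomes [1.0, x_term1, x_term2, ...]
    return [[1.0] + [float(r[t]) for t in terms] for r in rows]

rows = [{"age": 30, "salary": 50}, {"age": 41, "salary": 62}]
X = design_matrix("age + salary", rows)
# X == [[1.0, 30.0, 50.0], [1.0, 41.0, 62.0]]
```

Real formula implementations also handle interactions, transformations, and categorical encoding, but the additive core is just this column expansion.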

## An L½ penalty in Cox Regression

Posted by **Cameron Davidson-Pilon**

Following up on a previous blog post where we explored how to implement an \(L_1\) and elastic net penalty to induce sparsity, a paper by Xu Z B, Zhang H, Wang Y, et al. explores what an \(L_{1/2}\) penalty is and how to implement it. But first: we are familiar with an \(L_1\) penalty, but what is an \(L_0\) penalty? If you work out the math, it is a penalty that counts the number of non-zero coefficients, independent of their magnitudes: $$ll^*(\theta, x) =...
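For concreteness (my notation, not the paper's), the three penalties differ only in how each coefficient's magnitude enters the sum:

$$||\theta||_0 = \sum_j \mathbf{1}\{\theta_j \neq 0\}, \qquad ||\theta||_1 = \sum_j |\theta_j|, \qquad ||\theta||_{1/2}^{1/2} = \sum_j |\theta_j|^{1/2}$$

The \(L_{1/2}\) penalty sits between the two: it is non-convex like \(L_0\), but, like \(L_1\), still depends on the coefficients' magnitudes rather than merely counting them.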

## An accelerated lifetime spline model

Posted by **Cameron Davidson-Pilon**

A paper came out recently with a novel accelerated failure time (AFT) model built on cubic splines. This should pique your interest for a few reasons: 1. It helps dethrone the Proportional Hazard (PH) model as the default survival model. People like the PH model because it doesn't make any distributional assumptions. However, like a Trojan horse, it carries very strong implicit assumptions, often ones that are too restrictive. Suffice it to say, I am not a big proponent of the PH model. ...

## New sibling blog on food science, fermentation, and statistics: ControlledMold

Posted by **Cameron Davidson-Pilon**

I've started a new blog at ControlledMold that is more about the food, fermentation and cell-ag side of my research. There are still data science and statistics articles, too, like Using a Secchi Stick to Measure Cell Density. Enjoy!

## Controlling bacterial growth in fermentation with hurdle technology and survival analysis

Posted by **Cameron Davidson-Pilon**

This article is a nice intersection of some of the topics I've been thinking about lately: bacteria, food, and survival analysis, and it is part of a larger project I've been working on (stay tuned). The bacterium C. botulinum is responsible for creating one of the most dangerous chemicals known to man: botulinum toxin. If ingested, incredibly small amounts of this toxin can kill even a healthy person. Thankfully, food scientists and microbiologists have developed ways to control C. botulinum. Any of...

## L₁ Penalty in Cox Regression

Posted by **Cameron Davidson-Pilon**

In the 2000s, L1 penalties were all the rage in statistics and machine learning. Since they induce sparsity in the fitted parameters, they were used as a variable selection method. Today, with some advanced models having tens of billions of parameters, sparsity isn't as useful, and the L1 penalty has dropped out of fashion. However, most teams aren't using billion-parameter models, and smart data scientists start with simple models. Below is how we implemented an L1 penalty in the...

## Non-parametric survival function prediction

Posted by **Cameron Davidson-Pilon**

As I was developing lifelines, I kept having a feeling that I was gradually moving the library towards prediction tasks. lifelines is great for regression models and fitting survival distributions, but as I was adding more and more flexible parametric models, I realized that I really wanted a model that would predict the survival function — and I didn't care how. This led me to the idea to use a neural net with \(n\) outputs, one output for each parameter...

## Bayesian cell counting Pt. 2 - Growth over time

Posted by **Cameron Davidson-Pilon**

I’ve started growing yeast in my closet-turned-laboratory. There’s a reason why I am growing yeast, but that’ll be for another post. For this experiment, I wanted to use my new hemocytometer to do cell counts periodically over the next few days to gather data.

A nutrient-rich bioreactor (an Erlenmeyer flask with wort) was left at room temperature with plenty of aeration (a magnetic stirrer) for about 2.5 days. My collected data is below.

## Bayesian cell counting

Posted by **Cameron Davidson-Pilon**

Let’s say you are interested in counting the concentration of cells in some sample. This is a pretty common task: sperm counts, blood cell counts, plankton counts. Microbiologists are always counting. Let’s use the example of yeast counting, which is traditional in beer and wine making. The brewery has a sample of yeast slurry, a highly concentrated amount of yeast, and they would like to know how concentrated it is, so they can add the correct amount to a batch....

## SaaS churn and piecewise regression survival models

Posted by **Cameron Davidson-Pilon**

A software-as-a-service (SaaS) company has a typical customer churn pattern. During periods of no billing, churn is relatively low compared to periods of billing (typically every 30 or 365 days). This results in a distinct survival function for customers. See below:

```python
kmf = KaplanMeierFitter().fit(df['T'], df['E'])
kmf.plot(figsize=(11, 6));
```

To borrow a term from finance, we clearly have different regimes that a customer goes through: periods of low churn and periods of high churn, both of which are predictable. This predictability and...
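For intuition about what `KaplanMeierFitter` is estimating, a minimal Kaplan-Meier product-limit estimate can be computed by hand (a sketch, not lifelines' actual implementation; the durations below are made up):

```python
# Minimal Kaplan-Meier estimate computed by hand (a sketch of what
# lifelines' KaplanMeierFitter estimates; not its actual code).
def kaplan_meier(durations, observed):
    pairs = sorted(zip(durations, observed))
    n_at_risk = len(pairs)
    curve, s = [], 1.0
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = removed = 0
        while i < len(pairs) and pairs[i][0] == t:
            deaths += pairs[i][1]   # 1 = churn observed, 0 = censored
            removed += 1
            i += 1
        if deaths:
            s *= 1 - deaths / n_at_risk   # product-limit update
        n_at_risk -= removed
        curve.append((t, s))
    return curve

curve = kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0])
# curve is approximately [(1, 0.8), (2, 0.6), (3, 0.3), (4, 0.3)]
```

Plotting such a curve for SaaS customers is what reveals the step-like "regimes" around billing dates.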

## Counting and interval censoring analysis

Posted by **Cameron Davidson-Pilon**

Let’s say you have an initial population of (micro-)organisms, and you are curious about their survival rates. A common summary statistic of their survival is the half-life. How might you collect data to measure their survival? Since we are dealing with micro-organisms, we can’t track individual lifetimes. What we might do is periodically count the number of organisms still alive. Suppose our dataset looks like: T = [0, 2, 4, 7 ] # in hours N = [1000, 914, 568,...
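Under an exponential-decay assumption, \(N(t) = N_0 e^{-\lambda t}\), the half-life is \(\ln(2)/\lambda\), and \(\lambda\) can be sketched from count data by least squares on the log-counts. The post's dataset is truncated, so the last count below is made up:

```python
import math

# Sketch under an exponential-decay assumption, N(t) = N0 * exp(-lam * t):
# fit lam by least squares on log-counts, then half-life = ln(2) / lam.
# The post's dataset is truncated, so the final count here is hypothetical.
T = [0, 2, 4, 7]            # hours
N = [1000, 914, 568, 311]   # last value is made up, not from the post

logs = [math.log(n) for n in N]
tbar = sum(T) / len(T)
lbar = sum(logs) / len(logs)
slope = sum((t - tbar) * (l - lbar) for t, l in zip(T, logs)) / \
        sum((t - tbar) ** 2 for t in T)
lam = -slope                      # decay rate per hour
half_life = math.log(2) / lam     # in hours
```

This ignores the interval censoring the post is actually about; it only shows where the half-life summary comes from.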

## The Delta-Method and Autograd

Posted by **Cameron Davidson-Pilon**

One of the reasons I’m really excited about autograd is that it enables me to transform my abstract parameters into business logic. Let me explain with an example. Suppose I am modeling customer churn, and I have fitted a Weibull survival model using maximum likelihood estimation. I have two parameter estimates: \(\hat{\lambda}\) and \(\hat{\rho}\). I also have their covariance matrix, which tells me how much uncertainty is present in the estimates (in lifelines, this is under the variance_matrix_...
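The delta method says \(\text{Var}(g(\hat\theta)) \approx \nabla g^T \Sigma \nabla g\). A sketch with numerical gradients (the post uses autograd's exact gradients; the parameter values and covariance below are made up):

```python
import math

# Delta-method sketch with numerical gradients (the post uses autograd's
# exact gradients). The estimates and covariance below are hypothetical.
def grad(f, x, eps=1e-6):
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        g.append((f(xp) - f(xm)) / (2 * eps))  # central difference
    return g

def delta_method_var(f, params, cov):
    g = grad(f, params)
    # var(f(theta_hat)) ~ g^T Cov g
    return sum(g[i] * cov[i][j] * g[j]
               for i in range(len(g)) for j in range(len(g)))

# business-logic quantity: Weibull survival at t = 10, as a function of (lam, rho)
t = 10.0
surv = lambda p: math.exp(-((t / p[0]) ** p[1]))
params = [12.0, 1.5]                 # hypothetical MLEs (lambda-hat, rho-hat)
cov = [[0.4, 0.01], [0.01, 0.02]]    # hypothetical covariance matrix
var = delta_method_var(surv, params, cov)
se = math.sqrt(var)                  # standard error of the survival estimate
```

With autograd, `grad` is exact rather than a finite-difference approximation, which is precisely why it makes this pipeline painless.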

## Evolution of lifelines over the past few months

Posted by **Cameron Davidson-Pilon**

TLDR: upgrade lifelines for lots of improvements pip install -U lifelines During my time off, I’ve spent a lot of time improving my side projects so I’m at least kinda proud of them. I think lifelines, my survival analysis library, is in that spot. I’m actually kinda proud of it now. A lot has changed in lifelines in the past few months, and in this post I want to mention some of the biggest additions and the stories behind them....

## Using Statistics to Make Statistics Faster

Posted by **Cameron Davidson-Pilon**

While working on my side project lifelines, I noticed a surprising behaviour. In lifelines, there are two classes that implement the Cox proportional hazard model. The first class, CoxTimeVaryingFitter, is used for time-varying datasets. Time-varying datasets require a more complicated algorithm, one that works by iterating over all unique times and "pulling out" the relevant rows associated with that time. It's a slow algorithm, as it requires lots of Python/Numpy indexing, which gets worse as the dataset size grows. Call this...

## Causal Inference & Heroes of the Storm Win Rates

Posted by **Cameron Davidson-Pilon**

I've been playing the video game Heroes of the Storm by Blizzard for about 3 years now. I even attended the game's professional scene's championship last year! One part of the game that has attracted me is the constantly accumulating character roster (I'll provide a short summary of the game later in this post). This means that new characters are being added, and existing characters are being tweaked if they are too powerful or not powerful enough. It's this latter...

## Everything I need to know about causality, I learned smoking cigarettes

Posted by **Cameron Davidson-Pilon**

I've gone through a causal inference revolution, and I can't go back. Causal inference is all I think about now, but unfortunately its importance is often overlooked. Part of my revolution came after reading Miguel Hernán et al.'s paper on infant mortality and smoking, The Birth Weight "Paradox" Uncovered?. It took me a full evening to really understand its points, and it was rewarding when I finally did. In this blog post, I'm going to examine the paper from my point of view, and add or...

## Three Pillars of Data Science

Posted by **Cameron Davidson-Pilon**

## How Shopify Merchants Can Measure Retention [x-post from Shopify Engineering Blog]

Posted by **Cameron Davidson-Pilon**


## A self-describing sequence problem

Posted by **Cameron Davidson-Pilon**

Each week FiveThirtyEight posts a mathematical riddle to solve over the weekend. This latest week's problem was interesting, and I wanted to post my solution. The original problem is: Take a look at this string of numbers: 333 2 333 2 333 2 33 2 333 2 333 2 333 2 33 2 333 2 333 2 … At first it looks like someone fell asleep on a keyboard. But there’s an inner logic to the sequence: This...

## Solving 200 Project Euler Problems

Posted by **Cameron Davidson-Pilon**

## Searching through distributed datasets: The Mod-Binary Search

Posted by **Cameron Davidson-Pilon**

On a not-too-unusual day, one of my Spark jobs failed in production. Typically this means a row of bad data entered the job. As I’m one to write regression tests, this “type” of bad data had likely never been seen before, so I needed to inspect the individual offending row (or rows). Typical debug steps include: manually inspecting all the recent data, either by hand or on a local machine. The failed job might print the offending...

## A real-life mistake I made about penalizer terms

Posted by **Cameron Davidson-Pilon**

I made a very interesting mistake, and I wanted to share it with you because it's quite enlightening to statistical learning in general. It concerns a penalizer term in maximum-likelihood estimation. Normally, one deals only with the penalizer coefficient, that is, one plays around with \(\lambda\) in an MLE optimization like: $$ \min_{\theta} -\ell(\theta) + \lambda ||\theta||_p^p $$ where \(\ell\) is the log-likelihood and \(||\cdot||\) is the \(p\) norm. This family of problems is typically solved by calculus because both...

## Distribution of the last value in a sum of Uniforms that exceeds 1

Posted by **Cameron Davidson-Pilon**

While working on a problem, I derived an interesting result about sums of uniform random variables. I wanted to record it here so I don't forget it (I haven't solved the more general problem yet!). Here's a summary of the result: Let \(S_n = \sum_{i=1}^n U_i\) be the sum of \(n\) Uniform random variables. Let \(N\) be the index of the first time the sum exceeds 1 (so \(S_{N-1} < 1\) and \(S_{N} \ge 1\)). The distribution of \(U_N\)...
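This setup is easy to explore by simulation (my own sketch; a classic corollary of the same setup is that \(E[N] = e\)):

```python
import random

# Monte Carlo exploration of the setup: draw uniforms until the running
# sum exceeds 1, recording N (the stopping index) and U_N (the last draw).
# A classic corollary is E[N] = e ~ 2.718.
random.seed(0)

def stopping_draw():
    s, n = 0.0, 0
    while s < 1.0:
        u = random.random()
        s += u
        n += 1
    return n, u  # u is U_N, the draw that pushed the sum past 1

samples = [stopping_draw() for _ in range(200_000)]
mean_N = sum(n for n, _ in samples) / len(samples)
# mean_N is close to e; the empirical distribution of the u's
# approximates the distribution of U_N discussed in the post
```

Histogramming the `u` values is a quick sanity check against whatever closed form you derive for \(U_N\).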

## Poissonization of Multinomials

Posted by **Cameron Davidson-Pilon**

Introduction I've seen some really interesting probability & numerical solutions using a strategy called Poissonization, but Googling for it revealed very few resources (just some references in some textbooks that I don't have quick access to). Below are my notes and repository for Poissonization. After we introduce the theory, we'll do some examples. The technique relies on the following theorem: Theorem: Let \(N \sim \text{Poi}(\lambda)\) and suppose that, given \(N = n\), \((X_1, X_2, \ldots, X_k) \sim \text{Multi}(n, p_1, p_2, \ldots, p_k)\). Then, marginally, \(X_1, X_2, \ldots, X_k\)...
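The theorem's conclusion (the \(X_i\) are independent \(\text{Poi}(\lambda p_i)\)) can be checked by simulation. A sketch with made-up \(\lambda\) and \(p_i\), using Knuth's Poisson sampler since the standard library lacks one:

```python
import random, math

# Simulation sketch of Poissonization: draw N ~ Poi(lam), split the N
# trials multinomially with probabilities p, and check each bucket count
# has mean lam * p_i, as the theorem (X_i ~ Poi(lam * p_i)) predicts.
random.seed(42)

def poisson(lam):
    # Knuth's algorithm: multiply uniforms until the product drops below e^-lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def poissonized_multinomial(lam, probs):
    n = poisson(lam)
    counts = [0] * len(probs)
    for _ in range(n):                 # assign each trial to a bucket
        r, acc = random.random(), 0.0
        for i, pi in enumerate(probs):
            acc += pi
            if r < acc:
                counts[i] += 1
                break
    return counts

lam, probs = 10.0, [0.5, 0.3, 0.2]     # made-up example values
trials = [poissonized_multinomial(lam, probs) for _ in range(50_000)]
means = [sum(t[i] for t in trials) / len(trials) for i in range(3)]
# means should be close to [5.0, 3.0, 2.0] = lam * p_i
```

The independence claim (not just the means) is the surprising part, and is what makes Poissonization useful for decoupling multinomial counts.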

## "Reversing the Python Data Analysis Lens" Video

Posted by **Cameron Davidson-Pilon**

Last November, I was lucky enough to give the keynote at PyCon Canada 2015. Below is the abstract and video for it: Python developers are commonly using Python as a tool to explore datasets - but what if we reverse that analysis lens back on to the developer? In this talk, Cam will use Python as a data analysis tool to explore Python developers and code. With millions of data points, mostly scraped from Github and Stackoverflow, we'll reexamine who...

## Bayesian Methods for Hackers release!

Posted by **Cameron Davidson-Pilon**

Finally, after a few years of writing and debugging, I'm proud to announce that the print copy of Bayesian Methods for Hackers has been released! It has updated content, including a brand new chapter on A/B testing, compared to the online version. You can purchase it on Amazon today!

## [Video] Mistakes I've Made talk at PyData 2015

Posted by **Cameron Davidson-Pilon**

A presentation from PyData Seattle 2015 about all the mistakes I've made in data analysis and data science

## How can I use non-constructive proofs in data analysis?

Posted by **Cameron Davidson-Pilon**

In mathematics, there are two classes of proof techniques: constructive and non-constructive. Constructive proofs demonstrate how to build the object required; the construction proves its existence, hence you are done. An example of this is Euclid's argument that the primes are infinite: multiply together all the primes seen thus far and add 1; the result is divisible by none of them, so a new prime must exist. On the other hand, a non-constructive proof does not detail how to build the object, it just states that it must...

## Bayesian M&M Problem in PyMC 2

Posted by **Cameron Davidson-Pilon**

This Bayesian problem is from Allen Downey's Think Bayes book. I'll quote the problem here: M&M’s are small candy-coated chocolates that come in a variety of colors. Mars, Inc., which makes M&M’s, changes the mixture of colors from time to time. In 1995, they introduced blue M&M’s. Before then, the color mix in a bag of plain M&M’s was 30% Brown, 20% Yellow, 20% Red, 10% Green, 10% Orange, 10% Tan. Afterward it was 24% Blue, 20% Green, 16%...

## Evolutionary Group Theory - or what happens when algebraic groups have sex.

Posted by **Cameron Davidson-Pilon**

And now for something totally different. This is not data related. It's a paper I wrote about an intersection between group theory and evolutionary dynamics. Basically, what happens when groups have sex. Interested? Read on! TLDR: You can find analogous group theory axioms in dynamical systems. Population Dynamics of Algebraic Groups We construct a dynamical population whose individuals are assigned elements from an algebraic group \(G\) and subject them to sexual reproduction. We investigate the relationship between the dynamical...

## Percentile and Quantile Estimation of Big Data: The t-Digest

Posted by **Cameron Davidson-Pilon**

Suppose you are interested in the sample average of an array. No problem you think, as you create a small function to sum the elements and divide by the total count. Next, suppose you are interested in the sample average of a dataset that exists on many computers. No problem you think, as you create a function that returns the sum of the elements and the count of the elements, and send this function to each computer, and divide the sum of...
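The (sum, count) trick in the paragraph above can be sketched in a few lines; the whole point of the t-digest is that quantiles admit no such exact decomposition:

```python
# Sketch of the distributed-mean idea: each machine returns a (sum, count)
# pair; the pairs combine associatively, so the global mean is exact no
# matter how the data is partitioned. Quantiles have no analogous trick,
# which is what motivates approximate structures like the t-digest.
def partial(chunk):
    return (sum(chunk), len(chunk))

def combine(a, b):
    return (a[0] + b[0], a[1] + b[1])

chunks = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0]]   # data split across "machines"
total = (0.0, 0)
for c in chunks:
    total = combine(total, partial(c))
mean = total[0] / total[1]   # 3.5, identical to the single-machine answer
```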

## Lifetimes: Measuring Customer Lifetime Value in Python

Posted by **Cameron Davidson-Pilon**

Lifetimes is my latest Python project. Below is a summary, but you can also check out the source code on Github. Introduction As emphasized by P. Fader and B. Hardie, understanding and acting on customer lifetime value (CLV) is the most important part of your business's sales efforts. And (apparently) everyone is doing it wrong. Lifetimes is a Python library to calculate CLV for you. More generally, Lifetimes can be used to understand and predict future usage based on a...

## What The Name?!

Posted by **Cameron Davidson-Pilon**

Over the holidays, Kylea Parker and I put together our first ever infographic! With her design background and my stats background, we set out to do infographics right: correct statistics and beautiful communication through design. I believe we achieved that. The data analysis was done using demographica.

## IPython Startup Scripts

Posted by **Cameron Davidson-Pilon**

I've been playing around with my IPython workflow for the past few weeks, and have found one I really like. It uses IPython's startup files, which are run before the prompt opens up. This way I can load my favourite libraries, functions, etc., into my console. It also allows me to add my own %magic functions. Today, I've opened up my startup scripts in a GitHub repo, StartupFiles. The repo comes with some helper scripts too, to get you started: ./bin/build_symlink: for...

## Dawkins on Saying "statistically, ... "

Posted by **Cameron Davidson-Pilon**

Richard Dawkins, in his early book The Extended Phenotype, describes what he means when he says "statistically, X occurs". His original motivation was addressing a comment about gender, but it applies more generally: If, then, it were true that the possession of a Y chromosome had a causal influence on, say, musical ability or fondness for knitting, what would this mean? It would mean that, in some specified population and in some specified environment, an observer in possession of information...

## Joins in MapReduce Pt. 2 - Generalizing Joins in PySpark

Posted by **Cameron Davidson-Pilon**

## [Video] Presentation on Lifelines - Survival Analysis in Python, Sept. 23, 2014

Posted by **Cameron Davidson-Pilon**

I gave this talk on Lifelines, my project on survival analysis in Python, to the Montreal Python Meetup. It's a pretty good introduction to survival analysis, and how to use Lifelines. Enjoy!

## Joins in MapReduce Pt. 1 - Implementations in PySpark

Posted by **Cameron Davidson-Pilon**

## Why Your Distribution Might be Long-Tailed

Posted by **Cameron Davidson-Pilon**

I really like the video below explaining how a long-tailed distribution (also called a power-law or fat-tailed distribution) can form naturally. In fact, I keep thinking about it and applying it to some statistical thinking. Long-tailed distributions are incredibly common in the social sciences: for example, we encounter them in the wealth distribution: few people control most of the wealth; social networks: celebrities have thousands of times more followers than the median user; revenue generated by businesses: Amazon is larger than...

## The Class Imbalance Problem in A/B Testing

Posted by **Cameron Davidson-Pilon**

Introduction If you have been following this blog, you'll know that I employ Bayesian A/B testing for conversion tests (and see this screencast to see how it works). One of the strongest reasons for this is the interpretability of the analogous "p-value", which I call Confidence, defined as the probability the conversion rate of A is greater than B, $$\text{confidence} = P( C_A > C_B ) $$ Really, this is what the experimenter wants - an answer to: "what are...
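The Confidence quantity \(P(C_A > C_B)\) is easy to estimate by Monte Carlo. A sketch assuming independent Beta posteriors with uniform priors (the conversion counts below are made up, not from the post):

```python
import random

# Monte Carlo estimate of Confidence = P(C_A > C_B), assuming independent
# Beta posteriors with uniform Beta(1, 1) priors. Counts are made up.
random.seed(1)

def confidence(conv_a, n_a, conv_b, n_b, draws=100_000):
    wins = 0
    for _ in range(draws):
        ca = random.betavariate(1 + conv_a, 1 + n_a - conv_a)
        cb = random.betavariate(1 + conv_b, 1 + n_b - conv_b)
        wins += ca > cb
    return wins / draws

# hypothetical test: A converted 120/1000, B converted 100/1000
p = confidence(120, 1000, 100, 1000)
# p is the probability A's true conversion rate exceeds B's
```

Unlike a p-value, this number answers the experimenter's actual question directly.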

## Exploring Human Psychology with Mechanical Turk Data

Posted by **Cameron Davidson-Pilon**

This blog post is a little different: it's a whole data collection and data analysis story. I became interested in some theories from behavioural economics, and wanted to verify them. So I used Mechanical Turkers to gather data, and then did some exploratory data analysis in Python and Pandas (bonus: I recorded my data analysis and visualization, see below). Prospect Theory and Expected Values It's clear that humans are irrational, but how irrational are they? After some research into behavioural...

## Using Census Data to Find Hot First Names

Posted by **Cameron Davidson-Pilon**

We explore some cool data on first names and introduce a library for making this data available. We then use k-means to find the most trending names right now, and introduce some ideas on age inference of users. Freakonomics, the original Data Science book One of the first data science books, though it wasn't labelled that at the time, was the excellent book "Freakonomics" (2005). The authors were the first to publicise using data to solve large problems, or to...

## 8 great data blogs to follow

Posted by **Cameron Davidson-Pilon**

Below I've listed my favourite data analysis, data science, or otherwise technical blogs that I've learned a great deal from. Big +1's to the blogs' authors for providing all these ideas and intellectual property for public access. The list is in no particular order - and it's only blogs I remember, so if your blog isn't here, I may have just forgotten it ;) 1. Andrew Gelman's Statistical Modeling, Causal Inference, and Social Science Gelman is probably the leader in...

## Replicating 538's plot styles in Matplotlib

Posted by **Cameron Davidson-Pilon**

Nate Silver's FiveThirtyEight site has some aesthetically pleasing figures, ignoring the content of the plots for a moment: After pulling a few graphs locally, sampling colors, and crowd-sourcing the fonts used, I was able to come pretty close to replicating the style in Matplotlib styles. Here's an example (my figure dropped into an article on FiveThirtyEight.com) Another example using the replicated styles: So how to do it? [Edit: these steps are old, you can still use them, but there is...

## The Binary Problem and The Continuous Problem in A/B testing

Posted by **Cameron Davidson-Pilon**

Introduction I feel like there is a misconception in performing A/B tests. I've seen blogs, articles, etc. that show off the result of an A/B test, something like "converted X% better". But this is not what the A/B test was actually measuring: an A/B test is measuring "which group is better" (the binary problem), not "how much better" (the continuous problem). In practice, here's what happens: the tester waits until the A/B test is over (hence solving the binary problem),...

## Data's Use in the 21st Century

Posted by **Cameron Davidson-Pilon**

The technological challenges, and achievements, of the 20th century handed society powerful tools: nuclear power, airplanes and automobiles, the digital computer, radio, the internet, and imaging technologies, to name only a handful. Each of these technologies disrupted the system, and each can be argued to be a Black Swan (à la Nassim Taleb). In fact, for each technology, one could find a company killed by it, and a company that made its billions from it. What these technologies have...

## Feature Space in Machine Learning

Posted by **Cameron Davidson-Pilon**

Feature space refers to the \(n\)-dimensions where your variables live (not including a target variable, if it is present). The term is used often in ML literature because a task in ML is feature extraction, hence we view all variables as features. For example, consider the data set with: Target \(Y \equiv\) Thickness of car tires after some testing period Variables \(X_1 \equiv\) distance travelled in test \(X_2 \equiv\) time duration of test \(X_3 \equiv\) amount of chemical \(C\) in...

## Generating exponential survival data

Posted by **Cameron Davidson-Pilon**

Suppose we are interested in generating exponential survival times with scale parameter \(\lambda\) and an \(\alpha\) probability of censorship, \(0 \le \alpha < 1\). This is, at least from what I tried, a non-trivial problem. I've derived a few algorithms: Algorithm 1: Generate \(T \sim \text{Exp}(\lambda)\). If \(\alpha = 0\), return \((T, 1)\). Solve \(\frac{\lambda h}{\exp(\lambda h) - 1} = \alpha\) for \(h\). Generate \(E \sim \text{TruncExp}(\lambda, h)\), where \(\text{TruncExp}\) is...
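The post's algorithm is truncated above, so here instead is a common simpler alternative (not the post's Algorithm 1): censor with an independent \(\text{Exp}(\mu)\) time, choosing \(\mu\) so that \(P(C < T) = \mu/(\lambda + \mu) = \alpha\), i.e. \(\mu = \alpha\lambda/(1-\alpha)\). Note this treats \(\lambda\) as a rate:

```python
import random

# NOT the post's algorithm (its excerpt is truncated) -- a common simpler
# alternative: censor with an independent Exp(mu) time, where mu is chosen
# so that P(censored) = P(C < T) = mu / (lam + mu) = alpha, i.e.
# mu = alpha * lam / (1 - alpha). Here lam is treated as a rate.
random.seed(7)

def censored_exponential(lam, alpha):
    T = random.expovariate(lam)
    if alpha == 0:
        return T, 1                      # (duration, event observed)
    C = random.expovariate(alpha * lam / (1 - alpha))
    return (T, 1) if T <= C else (C, 0)  # 0 marks a censored observation

samples = [censored_exponential(2.0, 0.3) for _ in range(100_000)]
censor_frac = sum(1 - e for _, e in samples) / len(samples)
# censor_frac is close to alpha = 0.3
```

The trade-off versus the post's approach: here the observed durations are no longer marginally \(\text{Exp}(\lambda)\) once censored values are included, which is presumably what the post's more careful construction addresses.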

## Multi-Armed Bandits

Posted by **Cameron Davidson-Pilon**

Preface: This example is a (greatly modified) excerpt from the open-source book Bayesian Methods for Hackers, currently being developed on Github Adapted from an example by Ted Dunning of MapR Technologies The Multi-Armed Bandit Problem Suppose you are faced with \(N\) slot machines (colourfully called multi-armed bandits). Each bandit has an unknown probability of distributing a prize (assume for now the prizes are the same for each bandit, only the probabilities differ). Some bandits are very generous, others not so...
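One Bayesian strategy for this problem is Thompson sampling, in the spirit of the solution the chapter develops (a sketch; the payout probabilities below are made up):

```python
import random

# Thompson-sampling sketch for Bernoulli bandits: keep a Beta posterior
# per arm, sample a plausible payout rate from each, and pull the arm
# with the highest sample. The true payout probabilities are made up.
random.seed(3)
true_probs = [0.2, 0.5, 0.35]   # hidden payout probability per bandit
wins = [0, 0, 0]
losses = [0, 0, 0]

for _ in range(5_000):
    # one posterior draw per arm: Beta(1 + wins, 1 + losses)
    draws = [random.betavariate(1 + wins[i], 1 + losses[i]) for i in range(3)]
    i = draws.index(max(draws))        # pull the arm that currently looks best
    if random.random() < true_probs[i]:
        wins[i] += 1
    else:
        losses[i] += 1

pulls = [wins[i] + losses[i] for i in range(3)]
# the best arm (index 1, payout 0.5) should receive most of the pulls
```

The appeal is that exploration falls out of the posterior uncertainty automatically: arms are tried less and less as evidence accumulates against them.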

## An algorithm to sort "Top" comments

Posted by **Cameron Davidson-Pilon**

(originally posted on camdp.com) Preface: This example is a (greatly modified) excerpt from the open-source book Bayesian Methods for Hackers, currently being developed on Github ;) Why is sorting from "best" to "worst" so difficult? Consider ratings on online products: how often do you trust an average 5-star rating if there is only 1 reviewer? 2 reviewers? 3 reviewers? We implicitly understand that with so few reviewers, the average rating is not a good reflection of the true value...
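One common fix (a sketch of the general idea, not necessarily the chapter's exact algorithm): rank by a pessimistic lower bound on the posterior upvote ratio rather than by the raw mean. Below, a normal approximation to the 5% lower bound of a \(\text{Beta}(1 + \text{ups}, 1 + \text{downs})\) posterior:

```python
import math

# Rank by a pessimistic lower bound on the true upvote ratio instead of
# the raw mean: a normal approximation to the 5% lower quantile of a
# Beta(1 + ups, 1 + downs) posterior. Example items are made up.
def lower_bound(ups, downs, z=1.65):
    a, b = 1 + ups, 1 + downs
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))   # Beta posterior variance
    return mean - z * math.sqrt(var)

comments = {"one 5-star review": (1, 0), "90 of 100 upvotes": (90, 10)}
ranked = sorted(comments, key=lambda c: lower_bound(*comments[c]), reverse=True)
# "90 of 100 upvotes" ranks above the single perfect review, matching intuition
```

A single perfect vote has a high mean but huge uncertainty, so its lower bound is poor; that is exactly the intuition about 1 vs. 100 reviewers above.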