How to solve the Price is Right's Showdown

Preface: This example is a (greatly modified) excerpt from the open-source book Bayesian Methods for Hackers, currently being developed on Github ;)

How to solve* the Showdown on the Price is Right

*I use the term loosely and irresponsibly. Thanks to Allen Downey, of Think Bayes for pointing out some original errors.

It is incredibly surprising how wild some bids can be on The Price is Right's final game, The Showcase. If you are unfamiliar with how it is played (really?), here's a quick synopsis:

  1. Two contestants compete in The Showcase.
  2. Each contestant is shown a unique suite of prizes (we'll assume two prizes per suite for brevity, but this can be extended to any number).
  3. After the viewing, the contestants are asked to bid on the price for their unique suite of prizes.
  4. If a bid is over the actual price, the bid's owner is disqualified from winning.
  5. If both bids are over, then the owner of the closer bid wins.
  6. Otherwise, the winner is the owner of the bid closer to the true price.

The difficulty of the game is balancing your uncertainty about the prices against keeping your bid low enough not to go over. This makes it a perfect problem for probabilistic programming and Bayesian methods.

Bayesian Philosophy

Bayesian inference differs from more traditional statistical analysis by preserving uncertainty about our beliefs. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving certainty from randomness? I'll explain.

The Bayesian method holds that probability is better seen as a measure of believability in an event. More formally, Bayesians interpret a probability as a measure of *belief* that an event will occur. A belief of 0 means you have no confidence that the event will occur; conversely, a belief of 1 means you are absolutely certain it will. Beliefs between 0 and 1 allow for weightings of other outcomes.

This philosophy of treating beliefs as probability is natural to humans. We employ it constantly as we interact with the world and only see partial evidence but have to make intelligent decisions. To align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A) \in [0,1]$.

John Maynard Keynes, a great economist and thinker, said

"When the facts change, I change my mind. What do you do, sir?"
This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even, and especially, if the evidence is counter to what was initially believed, it cannot be ignored. We denote our updated belief as $P(A | X)$, interpreted as the probability of $A$ given the evidence $X$. We call the updated probability the posterior probability, to contrast it with the pre-evidence prior probability.

By introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes less wrong. This is the opposite side of the inference coin, where typically we try to be more right.

Modeling the Showcase

There are two reasons why Bayesian inference is the correct methodology for the Showcase problem. The first was explained above: we do not have complete information about the prices in the prize suite, but we do have beliefs about what the prices might be. Also, we have a prior belief about what the final price might be: we can look at historical final prices and derive a suitable prior distribution.

Suppose, then, that historical final prices have been approximately Normally distributed: $$ \text{final_price} \sim N( 35000, 7500 ) $$ with mean 35,000 and standard deviation 7,500. This is our prior probability distribution of the final price.
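As a quick sanity check (a sketch not in the original post), we can draw samples from this prior with NumPy and confirm they have the stated center and spread:

```python
import numpy as np

# Draw samples from the assumed prior: final price ~ Normal(35000, 7500),
# where 35000 is the mean and 7500 the standard deviation.
rng = np.random.default_rng(0)
prior_samples = rng.normal(loc=35000, scale=7500, size=100_000)

print(prior_samples.mean())  # close to 35000
print(prior_samples.std())   # close to 7500
```

A histogram of prior_samples shows the bell curve we are assuming over historical final prices.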

Similarly, as the Showcase is revealed, we form beliefs about what the prices of the items might be (remember, we are only considering two prizes initially). We can model the individual prizes by Normal distributions as well, with parameters $\mu_i$ and $\sigma^2_i$. This is very realistic and flexible: we have a likely guess about the price ($\mu_i$), and we can express our uncertainty in that guess by changing $\sigma^2_i$.

Let's take a step back. This is pretty cool. In other statistical models we can't specify individual beliefs about things like how uncertain we are, or what our prior opinions might be.

Suppose the prize suite contains a snowblower and a trip to Toronto, Ontario (I guess they hope they will use the snowblower on their vacation). Personally, I would assign the following distributions to the prizes: \begin{align*} &\text{snowblower} = \text{Normal}(3000, 500 ) \\\\ &\text{trip} = \text{Normal}( 12000, 3000 ) \end{align*}

and we assume the following relationship holds:

$$ \text{final_price} = \text{snowblower} + \text{trip} + \epsilon $$

where $\epsilon$ is an error term capturing the remaining uncertainty in the sum.

Our next step is to find the posterior distribution of the final price, given that we have seen the prizes. Normally, this step would involve terrible, cramp-inducing mathematical integrals, but in Probabilistic Programming and Bayesian Methods for Hackers we demonstrate that this is not necessary. In fact, the book presents Bayesian inference from a computation-first, mathematics-second point of view. But enough advertising. Let's show how it is done without integrals and hand-cramps.

We will be using the PyMC library to find the posterior distribution. The code is fairly self-explanatory in light of the above discussion:

import pymc as mc

# prior on the final price: Normal with mean 35000 and std 7500
mu_prior = 35000.0
std_prior = 7500.0
# PyMC parameterizes the Normal by its precision, 1/sigma**2
final_price = mc.Normal("final_price", mu_prior, 1.0 / std_prior**2)

snowblower = mc.Normal("snowblower", 3000.0, 1.0 / 500.0**2)
toronto = mc.Normal("toronto", 12000.0, 1.0 / 3000.0**2)
price_estimate = snowblower + toronto

@mc.potential
def error(final_price=final_price, price_estimate=price_estimate):
    # tie the final price to the sum of the prizes, allowing error with std 3000
    return mc.normal_like(final_price, price_estimate, 1.0 / 3e3**2)

# sample from the posterior: 220000 steps, discarding the first 180000 as burn-in
model = mc.Model([final_price, toronto, snowblower, price_estimate, error])
mcmc = mc.MCMC(model)
mcmc.sample(220000, 180000)

price_trace = mcmc.trace("final_price")[:]

The important result is contained in price_trace at the end: an array of samples from the posterior distribution. We do not get back an analytical formula for the posterior probability distribution (one often does not exist analytically); instead we get samples, which can be plotted as a histogram to observe the shape of the distribution.
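Once we have samples, summarizing the posterior is just array arithmetic. Here is a small sketch, using stand-in Normal draws in place of the actual MCMC trace (which we cannot rerun here):

```python
import numpy as np

# Stand-in for mcmc.trace("final_price")[:] -- fake a posterior trace
# centered near 28000, purely for illustration.
rng = np.random.default_rng(1)
price_trace = rng.normal(28000, 2500, size=40_000)

posterior_mean = price_trace.mean()
lower, upper = np.percentile(price_trace, [2.5, 97.5])  # a 95% credible interval
print(posterior_mean, lower, upper)
```

With a real trace, np.histogram (or matplotlib's hist) of price_trace reveals the shape of the posterior.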

Notice how our final price estimate, which used to be centered at 35 thousand, is now closer to 28 thousand. This reflects what our data (i.e. the observed prizes) are suggesting: the final price is lower. But we have not completely discarded our prior, as it still carries information: it reflects that, for whatever reason, prizes have historically cost more than our own current beliefs suggest. Hence we have a nice balance between objectivity and subjectivity.

We still need to make a bid, and we need a way to choose a *good* bid using this posterior distribution. A naive Bayesian would choose the mean of the posterior, but we can do so much better. What follows is the second reason why Bayesian inference is the right way to approach this problem.

Making a better bid: Not going over

Instead of using the mean of the posterior as our bid, we should choose a bid more intelligently. We understand that the Showcase has a unique payoff: if a bid is over the true price, the keys to winning are essentially handed over to the other contestant; if the bid is not close to the true price (without going over), too much room might be left for the other contestant to guess closer. We can (approximately) quantify this.

A loss function, $L$, is a function that accepts the truth and an estimate of the truth, and returns a number that reflects the outcome of that estimate: the larger the loss, the worse the outcome. For example, $$L( \theta, \hat{\theta} ) = (\theta - \hat{\theta})^2$$ is a loss function that grows with the distance between $\theta$ and $\hat{\theta}$, and is minimized only when $\hat{\theta} = \theta$. In Bayesian inference, we are free to choose our own loss function to reflect the outcomes of our estimates:


import numpy as np

def showcase_loss(bid, true_price, pain=80000):
    if true_price < bid:
        return pain
    else:
        return np.abs(true_price - bid)

The interpretation of this loss function is that if we bid over the true price, we are penalized heavily via the pain parameter. A lower pain means you are more comfortable with the risk of going over (remember, going over does not guarantee you lose; your competitor might go over as well). If we bid under the true price, we want to be as close as possible, hence the else branch is an increasing function of the distance between the bid and the true price.

But we don't know what the true price is...

Right. We don't know the true price; call it $p$. In fact, we have a whole distribution over what the true price might be: the posterior distribution. Hence we look at the expected loss as a function of the bid, $\hat{p}$: $$\ell(\hat{p}) = E_{p}[ L( p, \hat{p} ) ]$$ We can approximate the expected loss using the samples from the posterior: $$\frac{1}{N} \sum_{i=1}^N L( p_i, \hat{p} ) \approx E_{p}[ L( p, \hat{p} ) ]$$ We can then vary our bid and compute the expected loss, for different values of the pain parameter as well.
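Here is a minimal sketch of that Monte Carlo approximation, reusing showcase_loss from above (the posterior samples below are stand-ins, since the actual trace isn't reproduced here):

```python
import numpy as np

def showcase_loss(bid, true_price, pain=80000):
    # heavy penalty for overbidding; otherwise, distance to the true price
    if true_price < bid:
        return pain
    return abs(true_price - bid)

def expected_loss(bid, price_samples, pain=80000):
    # Monte Carlo approximation of E_p[ L(p, bid) ] over posterior samples
    return np.mean([showcase_loss(bid, p, pain) for p in price_samples])

rng = np.random.default_rng(2)
price_samples = rng.normal(28000, 2500, size=5000)  # stand-in posterior trace

print(expected_loss(20000, price_samples))  # rarely over, but far from the price
print(expected_loss(35000, price_samples))  # almost surely over: loss near `pain`
```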

Of course, for a given pain parameter, we want the bid that minimizes the expected loss. We can use SciPy's optimization routines to find the minimum of each curve. These points provide the best (with respect to our chosen loss) bid for the Showcase.
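The minimization itself can be sketched with scipy.optimize.fmin (Nelder-Mead); again, the posterior samples are stand-ins, and expected_loss is as defined above:

```python
import numpy as np
from scipy.optimize import fmin

def showcase_loss(bid, true_price, pain=80000):
    if true_price < bid:
        return pain
    return abs(true_price - bid)

def expected_loss(bid, price_samples, pain=80000):
    return np.mean([showcase_loss(bid, p, pain) for p in price_samples])

rng = np.random.default_rng(3)
price_samples = rng.normal(28000, 2500, size=5000)  # stand-in posterior trace

# fmin passes a length-1 array to the objective, so unpack it with b[0]
best_bid = fmin(lambda b: expected_loss(b[0], price_samples), 20000, disp=False)[0]
print(best_bid)  # noticeably below the posterior mean, hedging against overbidding
```

Rerunning this for several values of pain traces out how the optimal bid shrinks as overbidding becomes more costly.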

Note how far below the posterior mean of 28 thousand our optimized bids are. This is because, for any bid we make, there is still a chance of exceeding the true price, something we decided we really do not want to happen (we encoded this in our loss function above).

Conclusion

This example really shows off the power of Bayesian methods. We started with a highly uncertain estimation problem, but with a flexible framework that allows us to express uncertainty, we were able to update our beliefs (the posterior) and become less wrong. We then optimized our bid by minimizing the expected loss. We could probably derive a more rational loss function, or at least be more intelligent in choosing a good pain parameter.

I encourage you to check out Probabilistic Programming and Bayesian Methods for Hackers in Python for more examples and tools to become less wrong.

The hyper-intelligent Allen Downey, of Think Bayes, has also written a blog post on this subject, implementing it in his own Python framework.

References



Other articles to enjoy:

Follow me on Twitter at cmrn_dp


All Blog Articles

DataOrigami Launch

June 24th, 2014

I'm proud to announce my latest project, dataorigami.net! Why are you still here, go check it out!

continue reading...


Feature Space

May 22th, 2014

Feature space refers to the $n$-dimensions where your variables live (not including a target variable, if it is present). The term is used often in ML literature because a task in ML is *feature extraction*, hence we view all variables as features. For example, consider the data set with:

continue reading...


Generating exponential survival data

March 02th, 2014

TLDR: Suppose we interested in generating exponential survival times with scale parameter $\lambda$, and having $\alpha$ probability of censorship ( $0 < \alpha < 1$. This is actually, at least from what I tried, a non-trivial problem. Here's the algorithm, and below I'll go through what doesn't work to:

continue reading...


Deriving formulas for the expected sample size needed in A/B tests

December 27th, 2013

Often an estimate of the number of samples need in an A/B test is asked. Now I've sat down and tried to work out a formula (being disatisfied with other formulas' missing derivations). The below derivation starts off with Bayesian A/B, but uses frequentist methods to derive a single estimate (God help an individual interested in a posterior sample size distribution!)

continue reading...


lifelines: survival analysis in Python

December 19th, 2013

The lifelines library provides a powerful tool to data analysts and statisticians looking for methods to solve a common problem:

How do I predict durations?

This question seems very vague and abstract, but thats only because we can be so general in this space. Some more specific questions lifelines will help you solve are:

continue reading...


Evolutionary Group Theory

October 03th, 2013

We construct a dynamical population whose individuals are assigned elements from an algebraic group \(G\) and subject them to sexual reproduction. We investigate the relationship between the dynamical system and the underlying group and present three dynamical properties equivalent to the standard group properties.

continue reading...


Videos about the Bayesian Methods for Hackers project

August 25th, 2013

  1. New York Tech Meetup, July 2013: This one is about 2/3 the way through, under the header "Hack of the month"

    Available via MLB Media player
  2. PyData Boston, July 2013: Slides available here

    Video available here.
continue reading...


Warrior Dash 2013

August 03th, 2013

Warrior dash data, just like last year: continue reading...


The Next Steps

June 16th, 2013

June has been an exciting month. The opensource book Bayesian Methods for Hackers I am working on blew up earlier this month, propelling it into Github's stratosphere. This is both a good and bad thing: good as it exposes more people to the project, hence more collaborators; bad because it is showing off an incomplete project -- a large fear is that advanced data specialists disparage in favour of more mature works the work to beginner dataists.

continue reading...


NSA Honeypot

June 08th, 2013

Let's perform an experiment.

continue reading...


21st Century Problems

May 16th, 2013

The technological challenges, and achievements, of the 20th century brought society enormous progress. Technologies like nuclear power, airplanes & automobiles, the digital computer, radio, internet and imaging technologies to name only a handful. Each of these technologies had disrupted the system, and each can be argued to be Black Swans (à la Nassim Taleb). In fact, for each technology, one could find a company killed by it, and a company that made its billions from it.

continue reading...


ML Counterexamples Pt.2 - Regression Post-PCA

April 26th, 2013

Principle Component Analysis (PCA), also known as Singular Value Decomposition, is one of the most popular tools in the data scientist's toolbox, and it deserves to be there. The following are just a handful of the uses of PCA:

  • data visualization
  • remove noise
  • find noise (useful in finance)
  • clustering
  • reduce dataset dimension before regression/classification, with minimal negative effect
continue reading...


Machine Learning Counterexamples Pt.1

April 24th, 2013

This will the first of a series of articles on some useful counterexamples in machine learning. What is a machine learning counterexample? I am perhaps using the term counterexample loosely, but in this context a counterexample is a hidden gotcha or otherwise a deviation from intuition.

Suppose you have a data matrix $X$, which has been normalized and demeaned (as appropriate for linear models). A response vector $Y$, also standardized, is regressed on $X$ using your favourite library and the following coefficients, $\beta$, are returned:

continue reading...


Multi-Armed Bandits

April 06th, 2013

Suppose you are faced with $N$ slot machines (colourfully called multi-armed bandits). Each bandit has an unknown probability of distributing a prize (assume for now the prizes are the same for each bandit, only the probabilities differ). Some bandits are very generous, others not so much. Of course, you don't know what these probabilities are. By only choosing one bandit per round, our task is devise a strategy to maximize our winnings.

continue reading...


Cover for Bayesian Methods for Hackers

March 25th, 2013

The very kind Stef Gibson created an amazing cover for my open source book Bayesian Methods for Hackers. View it below:

continue reading...


An algorithm to sort "Top" Comments

March 10th, 2013

Consider ratings on online products: how often do you trust an average 5-star rating if there is only 1 reviewer? 2 reviewers? 3 reviewers? We implicitly understand that with such few reviewers that the average rating is not a good reflection of the true value of the product.

This has created flaws in how we sort items. Many people have realized that sorting online search results by their rating, whether the objects be books, videos, or online comments, return poor results.

continue reading...


How to solve the Price is Right's Showdown

February 05th, 2013

Preface: This example is a (greatly modified) excerpt from the book Probabilistic Programming and Bayesian Methods for Hackers in Python, currently being developed on Github ;)

How to solve* the Showdown on the Price is Right

*I use the term loosely and irresponsibly.

It is incredibly surprising how wild some bids can be on The Price is Right's final game, The Showcase. If you are unfamiliar with how it is played (really?), here's a quick synopsis:

continue reading...


My favourite part of The Extended Phenotype

February 02th, 2013

To quote directly from the book, by Richard Dawkins:

continue reading...


The awesome power of Bayesian Methods - Part II - Optimizing Loss Functions

January 10th, 2013

Hi again, this article will really show off the flexibility of Bayesian analysis. Recall, Bayesian inference is basically being interested in the new random variables, $\Theta$, distributed by $$ P( \Theta | X ) \propto L( X | \Theta )P(\Theta )$$ where $X$ is observed data, $L(X | \Theta )$ is the likelihood function and P(\Theta) is the prior distribution for $\Theta$. Normally, computing the closed-form formula for the left-hand side of the above equation is difficult, so I say screw closed-forms. If we can sample from $P( \Theta | X )$ accurately, then we can do as much, possibly more, than if we just had the closed-form. For example, by drawing samples from $P( \Theta | X )$, we can estimate the distribution to arbitrary accuracy. Or find expected values for easily using Monte Carlo. Or maximize functions. Or...well I'll get into it.

continue reading...


Interior Design with Machine Learning

January 04th, 2013

While designing my new apartment, I found a very cool use of machine learning. Yes, that's right, you can use machine learning in interior design. As crazy as it sounds, it is completely legitimate.

continue reading...


The awesome power of Bayesian Methods - What they didn't teach you in grad school. Part I

December 27th, 2012

For all the things we learned in grad school, Bayesian methods was something that was skimmed over. Strange too, as we learned all the computationally machinery necessary, but we were never actually shown the power of these methods. Let's start our explanation with an example where the Bayesian analysis clearly simply is more correct (in the sense of getting the right answer).

continue reading...


How to bootstrap your way out of biased estimates

December 06th, 2012

Bootstrapping is like getting a free lunch, low variance and low bias, by exploiting the Law of Large numbers. Here's how to do it:

continue reading...


High-dimensional outlier detection using statistics

November 27th, 2012

I stumbled upon a really cool idea of detecting outliers. Classically, one can plot the data and visually find outliers. but this is not possible in higher-dimensions. A better approach to finding outliers is to consider the distance of each point to some central location. Data points that are unreasonably far away are considered outliers and are dealt with.

continue reading...


A more sensible omnivore.

November 17th, 2012

My girlfriend, who is a vegetarian, and I often discuss the merits and dismerits of being a vegetarian. Though I am not a vegetarian (though I did experiment with veganism and holistic diets during some One Week Ofs), very much agree that eating as much meat as we do is not optimal.

Producing an ounce of meat requires a surprising amount of energy, whereas it's return energy is very small. We really only eat meat for its taste. It is strange how often we, the human omnivores, require meat in a meal, less it's not a real meal (and we do this three times a day). And unfortunately, a whole culture eating this way is not sustainable.

I have often thought about a life without meat,

continue reading...


Kaggle Data Science Solution: Predicting US Census Return Rates

November 01th, 2012

The past month two classmates and I have been attacking a new Kaggle contest, Predicting US Census mail return rates. Basically, we were given large amounts of data about block groups, the second smallest unit of division in the US census, and asked to predict what fraction of individuals from the block group would mail back their 2010 census form.

continue reading...


Visualizing clusters of stocks

October 14th, 2012

One troubling aspect of an estimated covariance matrix is that it always overestimates the true covariance. For example, if two random variables are independent the covariance estimate for the two variables is always non-zero. It will converge to 0, yes, but it may take a really long time.

What's worse is that the covariance matrix does not understand causality. Consider the certainly common situation below:

continue reading...


Twitter Pyschopathy

October 08th, 2012

Released earlier was the research paper about predicting psychopathy using Twitter behaviour. Read the paper here.

continue reading...


UWaterloo Subway Map

September 22th, 2012

I think my thing with subway maps is getting weird. I just created a fictional University of Waterloo subway map using my subway.js library.

continue reading...


Sampling from a Conditional Markov Chain

September 15th, 2012

My last project involving the artificial creation of "human-generated" passwords required me to sample from a Markov Chain. This is not very difficult, and I'll outline the sampling algorithm below. For the setup, suppose you have a transition probability matrix $M$ and an initial probability vector $\mathbf{v}$. The element $(i,j)$ of $M$ is the probability of the next state being $j$ given that the current state is $i$. The initial probability vector element $i$ is the probability that the first state is $i$. If you have these quantities, then to sample from a realized Markov process is simple:

continue reading...


Modeling password creation

September 14th, 2012

Creating a password is an embarrassingly difficult task. A password needs to be both memorable and unique enough not to be guessed. The former criterion prevents using randomly generated passwords (try remembering 9st6Uqfe4Z for Gmail, rAEOZmePfT for Facebook, etc.), and the latter is the reason why passwords exist in the first place. So the task falls on humans to create their own passwords and carefully balance these two criteria. This has been, and still is, a bad idea.

continue reading...


Eurotrip & Python

August 13th, 2012

Later this month, my lovely girlfriend and I are travelling to Amsterdam, Berlin and Kiel. The first half of the trip we will be exploring the tourist and nontourist areas of Amsterdam and Berlin. I'm very excited as I get to spend time drinking and relaxing with my girlfriend. But then...

continue reading...


Turn your Android phone into a SMS-based Command Line

August 11th, 2012

One of my biggest pet peeves is not having my phone with me. This often occurs if the phone is charging and I need to leave, or I have forgotten it somewhere, or it is lost, or etc. I've created a partial solution.

continue reading...


Least Squares Regression with L1 Penalty

July 31th, 2012

I want to discuss, and exhibit, a really cool idea in machine learning, optimization and statistics. It's a simple idea: adding a constraint to an optimization problem, specifically a constraint on the sum, can have huge impacts on your interpretation, robustness and sanity. I must first introduce the family of functions we will be discussing.

The family of L-norm penalty functions, $L_p:R^d \rightarrow R$, is defined: $$ L_p( x ) = || x ||_p = \left( \sum_{i=1}^d |x_i|^p \right) ^{1/p} \;\: p>0 $$ For $p=2$, this is the familar Euclidean distance. The most often used in machine learning literature are the

continue reading...


Warrior Dash Data

July 25th, 2012

Last Sunday I competed in a pretty epic competition: The Warrior Dash. It's 5k of, well honestly, it's 5k of mostly hills and trail running. Plus spread throughout are some pretty fun obstacles. With only five training workouts un...

continue reading...


Subway.js

July 17th, 2012

The javascript code that creates and controls the subway map above is available on GitHub. You can build your own using the pretty self-explanatory code + README document. Imagine using the code in a school project or advertising...

continue reading...


Kernelized and Supervised Principle Component Analysis

July 13th, 2012

Sorry the title is a bit of a mouthful. Everyone in statistics has heard of Principle Components Analysis ( PCA ). The idea is so simple, and a personal favourite of mine, so I'll detail it here.

continue reading...


Python Android Scripts

July 05th, 2012

I am having a blast messing around with my new Android phone. It has Python! Currently I am playing with the sensors on the phone. Built-in is a light sensor, accelerometer, and an

continue reading...


Predicting Psychopathy using Twitter Data

July 03th, 2012

The goal of this Kaggle contest was to predict an individuals psychopathic rating using information from their Twitter profile. I was given the already processed data and psychopathic scores. This was the first Kaggle competition I entered, and certainly not the last! If you'll excuse me, I must begin my technical remarks on my solution:

continue reading...


CamDP++

July 03th, 2012

Camdp.com is my latest attempt to digitize myself. I tried to map the subway lines to mimic my life and work, with each subway line representing a train of thought. I hope you enjoy the continue reading...


Data Science FAQ

July 02th, 2012

What is data science? What is an example of a data set? What are some of the goals of data science? What are some examples of data science in action? continue reading...


(All Blog Articles).filter( Science )

Feature Space

May 22th, 2014

Feature space refers to the $n$-dimensions where your variables live (not including a target variable, if it is present). The term is used often in ML literature because a task in ML is *feature extraction*, hence we view all variables as features. For example, consider the data set with:

continue...


Generating exponential survival data

March 02th, 2014

TLDR: Suppose we interested in generating exponential survival times with scale parameter $\lambda$, and having $\alpha$ probability of censorship ( $0 < \alpha < 1$. This is actually, at least from what I tried, a non-trivial problem. Here's the algorithm, and below I'll go through what doesn't work to:

continue...


Deriving formulas for the expected sample size needed in A/B tests

December 27th, 2013

Often an estimate of the number of samples need in an A/B test is asked. Now I've sat down and tried to work out a formula (being disatisfied with other formulas' missing derivations). The below derivation starts off with Bayesian A/B, but uses frequentist methods to derive a single estimate (God help an individual interested in a posterior sample size distribution!)

continue...


lifelines: survival analysis in Python

December 19th, 2013

The lifelines library provides a powerful tool to data analysts and statisticians looking for methods to solve a common problem:

How do I predict durations?

This question seems very vague and abstract, but thats only because we can be so general in this space. Some more specific questions lifelines will help you solve are:

continue...


Evolutionary Group Theory

October 03th, 2013

We construct a dynamical population whose individuals are assigned elements from an algebraic group \(G\) and subject them to sexual reproduction. We investigate the relationship between the dynamical system and the underlying group and present three dynamical properties equivalent to the standard group properties.

continue...


21st Century Problems

May 16th, 2013

The technological challenges, and achievements, of the 20th century brought society enormous progress. Technologies like nuclear power, airplanes & automobiles, the digital computer, radio, internet and imaging technologies to name only a handful. Each of these technologies had disrupted the system, and each can be argued to be Black Swans (à la Nassim Taleb). In fact, for each technology, one could find a company killed by it, and a company that made its billions from it.

continue...


ML Counterexamples Pt.2 - Regression Post-PCA

April 26th, 2013

Principle Component Analysis (PCA), also known as Singular Value Decomposition, is one of the most popular tools in the data scientist's toolbox, and it deserves to be there. The following are just a handful of the uses of PCA:

  • data visualization
  • remove noise
  • find noise (useful in finance)
  • clustering
  • reduce dataset dimension before regression/classification, with minimal negative effect
continue...


Machine Learning Counterexamples Pt.1

April 24th, 2013

This will the first of a series of articles on some useful counterexamples in machine learning. What is a machine learning counterexample? I am perhaps using the term counterexample loosely, but in this context a counterexample is a hidden gotcha or otherwise a deviation from intuition.

Suppose you have a data matrix $X$, which has been normalized and demeaned (as appropriate for linear models). A response vector $Y$, also standardized, is regressed on $X$ using your favourite library and the following coefficients, $\beta$, are returned:

continue...


Multi-Armed Bandits

April 06th, 2013

Suppose you are faced with $N$ slot machines (colourfully called multi-armed bandits). Each bandit has an unknown probability of distributing a prize (assume for now the prizes are the same for each bandit, only the probabilities differ). Some bandits are very generous, others not so much. Of course, you don't know what these probabilities are. By only choosing one bandit per round, our task is devise a strategy to maximize our winnings.

continue...


An algorithm to sort "Top" Comments

March 10th, 2013

Consider ratings on online products: how often do you trust an average 5-star rating if there is only 1 reviewer? 2 reviewers? 3 reviewers? We implicitly understand that with such few reviewers that the average rating is not a good reflection of the true value of the product.

This has created flaws in how we sort items. Many people have realized that sorting online search results by their rating, whether the objects be books, videos, or online comments, return poor results.

continue...


My favourite part of The Extended Phenotype

February 02th, 2013

To quote directly from the book, by Richard Dawkins:

continue...


N is never large.

January 15th, 2013

continue...


The awesome power of Bayesian Methods - Part II - Optimizing Loss Functions

January 10th, 2013

Hi again, this article will really show off the flexibility of Bayesian analysis. Recall, Bayesian inference is basically being interested in the new random variables, $\Theta$, distributed by $$ P( \Theta | X ) \propto L( X | \Theta )P(\Theta )$$ where $X$ is observed data, $L(X | \Theta )$ is the likelihood function and P(\Theta) is the prior distribution for $\Theta$. Normally, computing the closed-form formula for the left-hand side of the above equation is difficult, so I say screw closed-forms. If we can sample from $P( \Theta | X )$ accurately, then we can do as much, possibly more, than if we just had the closed-form. For example, by drawing samples from $P( \Theta | X )$, we can estimate the distribution to arbitrary accuracy. Or find expected values for easily using Monte Carlo. Or maximize functions. Or...well I'll get into it.

continue...


The awesome power of Bayesian Methods - What they didn't teach you in grad school. Part I

December 27th, 2012

For all the things we learned in grad school, Bayesian methods were something that was skimmed over. Strange, too, as we learned all the computational machinery necessary, but we were never actually shown the power of these methods. Let's start our explanation with an example where the Bayesian analysis is simply more correct (in the sense of getting the right answer).

continue...


How to bootstrap your way out of biased estimates

December 06th, 2012

Bootstrapping is like getting a free lunch: low variance and low bias, by exploiting the Law of Large Numbers. Here's how to do it:
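The cut hides the author's procedure, but a generic nonparametric bootstrap is a short function (the dataset and the statistic below are invented for illustration):

```python
import random

def bootstrap_estimates(data, statistic, n_boot=2000, seed=0):
    """Resample `data` with replacement n_boot times and apply
    `statistic` to each resample, giving an empirical sampling
    distribution of the statistic."""
    rng = random.Random(seed)
    n = len(data)
    return [statistic([data[rng.randrange(n)] for _ in range(n)])
            for _ in range(n_boot)]

data = [2.1, 3.4, 1.8, 5.0, 2.7, 4.2, 3.3, 2.9]
# Bootstrap the (upper) median -- a statistic with no easy
# closed-form standard error:
medians = bootstrap_estimates(data, lambda xs: sorted(xs)[len(xs) // 2])
spread = (min(medians), max(medians))
```

The spread of `medians` is a direct, assumption-light estimate of the uncertainty in the median.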

continue...


High-dimensional outlier detection using statistics

November 27th, 2012

I stumbled upon a really cool idea for detecting outliers. Classically, one can plot the data and visually find outliers. But this is not possible in higher dimensions. A better approach to finding outliers is to consider the distance of each point to some central location. Data points that are unreasonably far away are considered outliers and are dealt with.
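A naive version of the distance idea looks like this (the article's method is more sophisticated; the data here are invented, and a real implementation would use a robust centre and the Mahalanobis distance rather than plain Euclidean distance to the mean):

```python
def distances_to_centre(points):
    """Euclidean distance of each point to the component-wise
    mean of the data -- a simple 'central location'."""
    d = len(points[0])
    n = len(points)
    centre = [sum(p[i] for p in points) / n for i in range(d)]
    return [sum((p[i] - centre[i]) ** 2 for i in range(d)) ** 0.5
            for p in points]

points = [(0, 0), (1, 1), (0, 1), (1, 0), (10, 10)]  # last one is suspect
dists = distances_to_centre(points)
outlier_index = dists.index(max(dists))  # the farthest point
```

Note the weakness that motivates fancier methods: the outlier itself drags the mean toward it, which is why robust central estimates matter.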

continue...


Visualizing clusters of stocks

October 14th, 2012

One troubling aspect of an estimated covariance matrix is that it always overestimates the true covariance. For example, if two random variables are independent, the covariance estimate for the two variables is almost never exactly zero. It will converge to 0, yes, but it may take a really long time.
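A quick simulation confirms the point (the sample size and seed are arbitrary): two streams generated independently still produce a non-zero sample covariance.

```python
import random

rng = random.Random(1)
n = 500
x = [rng.gauss(0, 1) for _ in range(n)]
y = [rng.gauss(0, 1) for _ in range(n)]  # generated independently of x

mx, my = sum(x) / n, sum(y) / n
sample_cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
# sample_cov is close to, but essentially never exactly, zero
```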

What's worse is that the covariance matrix does not understand causality. Consider the common situation below:

continue...


Twitter Psychopathy

October 08th, 2012

Released earlier was the research paper about predicting psychopathy using Twitter behaviour. Read the paper here.

continue...


Sampling from a Conditional Markov Chain

September 15th, 2012

My last project involving the artificial creation of "human-generated" passwords required me to sample from a Markov Chain. This is not very difficult, and I'll outline the sampling algorithm below. For the setup, suppose you have a transition probability matrix $M$ and an initial probability vector $\mathbf{v}$. The element $(i,j)$ of $M$ is the probability of the next state being $j$ given that the current state is $i$. The initial probability vector element $i$ is the probability that the first state is $i$. If you have these quantities, then sampling a realization of the Markov process is simple:
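The unconditional sampler described above might look like this sketch (the two-state chain below is invented; the conditional version the article builds toward is more involved):

```python
import random

def sample_chain(M, v, length, seed=None):
    """Sample a path from a Markov chain with transition matrix M
    (row i is the distribution of the next state given state i)
    and initial distribution v."""
    rng = random.Random(seed)

    def draw(weights):
        # inverse-CDF draw of an index from a discrete distribution
        r, cum = rng.random(), 0.0
        for i, w in enumerate(weights):
            cum += w
            if r < cum:
                return i
        return len(weights) - 1  # guard against floating-point slack

    state = draw(v)
    path = [state]
    for _ in range(length - 1):
        state = draw(M[state])
        path.append(state)
    return path

M = [[0.9, 0.1],
     [0.2, 0.8]]
path = sample_chain(M, [0.5, 0.5], 1000, seed=3)
```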

continue...


Least Squares Regression with L1 Penalty

July 31st, 2012

I want to discuss, and exhibit, a really cool idea in machine learning, optimization and statistics. It's a simple idea: adding a constraint to an optimization problem, specifically a constraint on the sum, can have huge impacts on your interpretation, robustness and sanity. I must first introduce the family of functions we will be discussing.

The family of L-norm penalty functions, $L_p:R^d \rightarrow R$, is defined: $$ L_p( x ) = || x ||_p = \left( \sum_{i=1}^d |x_i|^p \right) ^{1/p} \;\: p>0 $$ For $p=2$, this is the familiar Euclidean distance. The most often used in machine learning literature are the
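As a quick sanity check, the definition translates directly to code:

```python
def lp_norm(x, p):
    """The L_p norm of a vector x, for p > 0:
    (sum_i |x_i|^p)^(1/p)."""
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

# p=2 recovers Euclidean distance; p=1 sums absolute values.
assert lp_norm([3, 4], 2) == 5.0
assert lp_norm([3, -4], 1) == 7.0
```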

continue...


Kernelized and Supervised Principal Component Analysis

July 13th, 2012

Sorry the title is a bit of a mouthful. Everyone in statistics has heard of Principal Component Analysis (PCA). The idea is so simple, and a personal favourite of mine, so I'll detail it here.
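A bare-bones sketch of the idea: the first principal component is the leading eigenvector of the sample covariance matrix, found here by power iteration (a toy dataset; real use would reach for numpy or scikit-learn):

```python
def first_principal_component(points, iters=200):
    """Power iteration on the sample covariance matrix to find
    the direction of maximum variance (the first PC)."""
    n, d = len(points), len(points[0])
    means = [sum(p[j] for p in points) / n for j in range(d)]
    X = [[p[j] - means[j] for j in range(d)] for p in points]  # centre
    # sample covariance matrix
    C = [[sum(X[k][i] * X[k][j] for k in range(n)) / (n - 1)
          for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(wi * wi for wi in w) ** 0.5
        v = [wi / norm for wi in w]  # renormalize each step
    return v

# Data stretched along the line y = x, plus small wobble:
pts = [(t, t + e) for t, e in zip(range(10), [0.1, -0.1] * 5)]
pc1 = first_principal_component(pts)  # roughly (0.707, 0.707)
```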

continue...


Predicting Psychopathy using Twitter Data

July 03rd, 2012

The goal of this Kaggle contest was to predict an individual's psychopathic rating using information from their Twitter profile. I was given the already processed data and psychopathic scores. This was the first Kaggle competition I entered, and certainly not the last! If you'll excuse me, I must begin my technical remarks on my solution:

continue...


Data Science FAQ

July 02nd, 2012

What is data science? What is an example of a data set? What are some of the goals of data science? What are some examples of data science in action? continue...


(All Blog Articles).filter( Coding )

lifelines: survival analysis in Python

December 19th, 2013

The lifelines library provides a powerful tool to data analysts and statisticians looking for methods to solve a common problem:

How do I predict durations?

This question seems very vague and abstract, but that's only because we can be so general in this space. Some more specific questions lifelines will help you solve are:

continue...


An algorithm to sort "Top" Comments

March 10th, 2013

Consider ratings on online products: how often do you trust an average 5-star rating if there is only 1 reviewer? 2 reviewers? 3 reviewers? We implicitly understand that with so few reviewers the average rating is not a good reflection of the true value of the product.

This has created flaws in how we sort items. Many people have realized that sorting online search results by their ratings, whether the objects be books, videos, or online comments, returns poor results.

continue...


How to solve the Price is Right's Showdown

February 05th, 2013

Preface: This example is a (greatly modified) excerpt from the book Probabilistic Programming and Bayesian Methods for Hackers in Python, currently being developed on Github ;)

How to solve* the Showdown on the Price is Right

*I use the term loosely and irresponsibly.

It is incredibly surprising how wild some bids can be on The Price is Right's final game, The Showcase. If you are unfamiliar with how it is played (really?), here's a quick synopsis:

continue...


Kaggle Data Science Solution: Predicting US Census Return Rates

November 01st, 2012

Over the past month, two classmates and I have been attacking a new Kaggle contest, Predicting US Census mail return rates. Basically, we were given large amounts of data about block groups, the second smallest unit of division in the US census, and asked to predict what fraction of individuals from the block group would mail back their 2010 census form.

continue...


Visualizing clusters of stocks

October 14th, 2012

One troubling aspect of an estimated covariance matrix is that it always overestimates the true covariance. For example, if two random variables are independent, the covariance estimate for the two variables is almost never exactly zero. It will converge to 0, yes, but it may take a really long time.

What's worse is that the covariance matrix does not understand causality. Consider the common situation below:

continue...


UWaterloo Subway Map

September 22nd, 2012

I think my thing with subway maps is getting weird. I just created a fictional University of Waterloo subway map using my subway.js library.

continue...


Modeling password creation

September 14th, 2012

Creating a password is an embarrassingly difficult task. A password needs to be both memorable and unique enough not to be guessed. The former criterion prevents using randomly generated passwords (try remembering 9st6Uqfe4Z for Gmail, rAEOZmePfT for Facebook, etc.), and the latter is the reason why passwords exist in the first place. So the task falls on humans to create their own passwords and carefully balance these two criteria. This has been, and still is, a bad idea.

continue...


Eurotrip & Python

August 13th, 2012

Later this month, my lovely girlfriend and I are travelling to Amsterdam, Berlin and Kiel. The first half of the trip we will be exploring the tourist and nontourist areas of Amsterdam and Berlin. I'm very excited as I get to spend time drinking and relaxing with my girlfriend. But then...

continue...


Turn your Android phone into a SMS-based Command Line

August 11th, 2012

One of my biggest pet peeves is not having my phone with me. This often occurs if the phone is charging and I need to leave, or I have forgotten it somewhere, or it is lost, etc. I've created a partial solution.

continue...


Subway.js

July 17th, 2012

The JavaScript code that creates and controls the subway map above is available on GitHub. You can build your own using the pretty self-explanatory code + README document. Imagine using the code in a school project or advertising...

continue...


Python Android Scripts

July 05th, 2012

I am having a blast messing around with my new Android phone. It has Python! Currently I am playing with the sensors on the phone. Built-in is a light sensor, accelerometer, and an

continue...


Predicting Psychopathy using Twitter Data

July 03rd, 2012

The goal of this Kaggle contest was to predict an individual's psychopathic rating using information from their Twitter profile. I was given the already processed data and psychopathic scores. This was the first Kaggle competition I entered, and certainly not the last! If you'll excuse me, I must begin my technical remarks on my solution:

continue...


(All Blog Articles).filter( Awesome Stuff )

DataOrigami Launch

June 24th, 2014

I'm proud to announce my latest project, dataorigami.net! Why are you still here, go check it out!

continue...


Videos about the Bayesian Methods for Hackers project

August 25th, 2013

  1. New York Tech Meetup, July 2013: This one is about 2/3 of the way through, under the header "Hack of the month"

    Available via MLB Media player
  2. PyData Boston, July 2013: Slides available here

    Video available here.
continue...


Warrior Dash 2013

August 03rd, 2013

Warrior dash data, just like last year: continue...


The Next Steps

June 16th, 2013

June has been an exciting month. The open-source book Bayesian Methods for Hackers I am working on blew up earlier this month, propelling it into Github's stratosphere. This is both a good and a bad thing: good, as it exposes more people to the project and hence more collaborators; bad, because it shows off an incomplete project. A large fear is that advanced data specialists will disparage the work to beginner dataists in favour of more mature works.

continue...


NSA Honeypot

June 08th, 2013

Let's perform an experiment.

continue...


21st Century Problems

May 16th, 2013

The technological challenges, and achievements, of the 20th century brought society enormous progress: technologies like nuclear power, airplanes & automobiles, the digital computer, radio, the internet, and imaging, to name only a handful. Each of these technologies has disrupted the system, and each can be argued to be a Black Swan (à la Nassim Taleb). In fact, for each technology, one could find a company killed by it, and a company that made its billions from it.

continue...


Cover for Bayesian Methods for Hackers

March 25th, 2013

The very kind Stef Gibson created an amazing cover for my open source book Bayesian Methods for Hackers. View it below:

continue...


My favourite part of The Extended Phenotype

February 02nd, 2013

To quote directly from the book, by Richard Dawkins:

continue...


Interior Design with Machine Learning

January 04th, 2013

While designing my new apartment, I found a very cool use of machine learning. Yes, that's right, you can use machine learning in interior design. As crazy as it sounds, it is completely legitimate.

continue...


A more sensible omnivore.

November 17th, 2012

My girlfriend, who is a vegetarian, and I often discuss the merits and demerits of being a vegetarian. Though I am not a vegetarian (I did experiment with veganism and holistic diets during some One Week Ofs), I very much agree that eating as much meat as we do is not optimal.

Producing an ounce of meat requires a surprising amount of energy, whereas its return energy is very small. We really only eat meat for its taste. It is strange how often we, the human omnivores, require meat in a meal, lest it not be a real meal (and we do this three times a day). And unfortunately, a whole culture eating this way is not sustainable.

I have often thought about a life without meat,

continue...


UWaterloo Subway Map

September 22nd, 2012

I think my thing with subway maps is getting weird. I just created a fictional University of Waterloo subway map using my subway.js library.

continue...


Modeling password creation

September 14th, 2012

Creating a password is an embarrassingly difficult task. A password needs to be both memorable and unique enough not to be guessed. The former criterion prevents using randomly generated passwords (try remembering 9st6Uqfe4Z for Gmail, rAEOZmePfT for Facebook, etc.), and the latter is the reason why passwords exist in the first place. So the task falls on humans to create their own passwords and carefully balance these two criteria. This has been, and still is, a bad idea.

continue...


Eurotrip & Python

August 13th, 2012

Later this month, my lovely girlfriend and I are travelling to Amsterdam, Berlin and Kiel. The first half of the trip we will be exploring the tourist and nontourist areas of Amsterdam and Berlin. I'm very excited as I get to spend time drinking and relaxing with my girlfriend. But then...

continue...


Warrior Dash Data

July 25th, 2012

Last Sunday I competed in a pretty epic competition: The Warrior Dash. It's 5k of, well honestly, it's 5k of mostly hills and trail running. Plus spread throughout are some pretty fun obstacles. With only five training workouts un...

continue...


Subway.js

July 17th, 2012

The JavaScript code that creates and controls the subway map above is available on GitHub. You can build your own using the pretty self-explanatory code + README document. Imagine using the code in a school project or advertising...

continue...


CamDP++

July 03rd, 2012

Camdp.com is my latest attempt to digitize myself. I tried to map the subway lines to mimic my life and work, with each subway line representing a train of thought. I hope you enjoy the continue...