# Fast classifier calibration for 10^8 unbalanced examples

Sam Steingold introduced MIMIC at the NYC Machine Learning meetup. MIMIC is a method for calibrating the score of a binary classifier into a probability. The method is different from the usual Platt scaling and isotonic calibration.

Its advantage is that it is much faster than isotonic calibration, and is capable of ...
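For contrast, here is a minimal sketch of the two standard methods mentioned above, as implemented in scikit-learn. This is not MIMIC itself, and the unbalanced toy dataset is purely illustrative:

    # Platt scaling ('sigmoid') and isotonic calibration in scikit-learn,
    # shown only as the baselines the post contrasts MIMIC with.
    from sklearn.calibration import CalibratedClassifierCV
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # A small unbalanced toy set standing in for the real data.
    X, y = make_classification(n_samples=10000, weights=[0.95, 0.05],
                               random_state=0)

    platt = CalibratedClassifierCV(LogisticRegression(max_iter=1000),
                                   method='sigmoid', cv=3).fit(X, y)
    isotonic = CalibratedClassifierCV(LogisticRegression(max_iter=1000),
                                      method='isotonic', cv=3).fit(X, y)

    print(platt.predict_proba(X[:5])[:, 1])     # calibrated P(y=1)
    print(isotonic.predict_proba(X[:5])[:, 1])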

# Using an external GTX 980 with a MacBook Pro and El Capitan

My external GPU was running just fine with Yosemite, but installing El Capitan broke it. Luckily, I found automate-eGPU, and after some experimenting I found that version 0.9.6 works:

    git clone https://github.com/goalque/automate-eGPU.git ...

# Intro

I have a working setup made of a MacBook Pro (Retina, Mid 2012) connected to an external GPU, a GTX 980. The card sits in a PCIe box that is connected to the laptop with a Thunderbolt 2 cable, which gives a throughput of 10 Gb/s (later MBPs support 16 Gb ...

# Tips on working with Theano

Debugging Theano code is notoriously hard. Perhaps the main reason is that the Python code you just wrote is not what actually executes. Instead, your Python expressions are used to build a graph of what you want to compute. The graph is compiled into CPU or GPU code when you use something ...
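A tiny sketch of what that means in practice, together with one standard debugging aid (Theano's test values, which make shape and type errors surface while the graph is being built rather than at run time):

    import numpy as np
    import theano
    import theano.tensor as T

    # Ask Theano to propagate concrete "test values" through the graph
    # as it is built, so errors point at the offending Python line.
    theano.config.compute_test_value = 'warn'

    x = T.vector('x')
    x.tag.test_value = np.ones(3, dtype=theano.config.floatX)

    y = (x ** 2).sum()           # only builds graph nodes; with test values
                                 # set, shape/type problems show up here

    f = theano.function([x], y)  # the graph is compiled here, not above
    print(f([1.0, 2.0, 3.0]))    # 14.0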

# Command line for cleaning the cells of an ipython notebook

Once in a while I have an ipython notebook with so much stuff written into its output cells that it takes forever to open in the browser. In some cases the browser crashes, blocking me from reaching the menu option to clear all the output cells in the notebook ...
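The post's actual command is cut off above; as a sketch, the same thing can be done programmatically with the nbformat package (the argument handling here is just illustrative):

    # Strip all code-cell outputs from a notebook without opening a browser.
    import sys
    import nbformat

    path = sys.argv[1]                     # e.g. mynotebook.ipynb
    nb = nbformat.read(path, as_version=4)
    for cell in nb.cells:
        if cell.cell_type == 'code':
            cell.outputs = []
            cell.execution_count = None
    nbformat.write(nb, path)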

# Getting deeper into deep learning

What do you prefer? Coffee? Tea? Or should I ask CAFFE or ThEAno?

These days there are two main tracks for doing deep learning: either use Pylearn2/Theano or use Caffe. Pylearn2 is very confusing to use, but I've found a very nice video lecture showing how to bypass ...

# VW contextual bandit

The contextual bandit task is to find a policy $\pi$ that decides which action $a$ to take given a context $x$, i.e. $a = \pi(x)$.

The goal is to find a policy that maximizes the expected reward $V^\pi = E(r_{\pi(x)})$.

One problem is how to measure a policy's performance using offline data which was not ...
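The teaser cuts off here, but the usual answer in the contextual-bandit literature (and, as far as I know, part of what VW implements) is inverse propensity scoring. If the logging policy recorded the probability $p_i$ with which it chose the logged action $a_i$, the new policy's value can be estimated from the log as

$$\hat{V}^{\pi}_{\mathrm{IPS}} = \frac{1}{n} \sum_{i=1}^{n} \frac{r_i \, \mathbf{1}\{\pi(x_i) = a_i\}}{p_i}$$

This is a sketch of the standard estimator, not necessarily the one the truncated post goes on to describe.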