Our goal is to better understand adversarial examples by 1) bounding the minimum perturbation that must be added to a regular input example to cause a given neural network to misclassify it, and 2) generating an adversarial input example that attains this minimum perturbation.
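Concretely, these two goals can be stated as a single optimization problem. The formalization below is a standard one we sketch for illustration; the symbols $f$, $x$, $\delta$, $\rho$, and the choice of norm are our notation and are not fixed by the text:

\[
  \rho(x) \;=\; \min_{\delta} \;\|\delta\| \quad \text{subject to} \quad f(x + \delta) \neq f(x),
\]

where $f$ is the network's classification function and $x$ is a correctly classified input. Goal 1) then corresponds to computing lower and upper bounds on $\rho(x)$, while goal 2) corresponds to exhibiting a concrete perturbation $\delta$ with $\|\delta\| = \rho(x)$, so that $x + \delta$ is a minimally perturbed adversarial example.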