Vermeille's blog


DeepBan: How to get banned from Facebook with deep neural networks

ban img

I'm currently playing with GANs. GANs are awesome, and the shortcomings GANs have shown previously are slowly getting fixed thanks to amazing researchers. GANs, if you don't know already, are a way to get a neural net to (hopefully) create realistic content.

It's no secret now that I'm working for an adult company, and that gives a quite funny color to the research I'm doing. As such, well, I thought "dude, let's use GANs to try and generate adult content. Latent space interpolation will be funny as heck.". So that was it. I reused a part of the dataset I'm using for action / position recognition, and threw a DCGAN at it with the newly proposed R1 regularizer.
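For reference, R1 is just a gradient penalty on the discriminator at real samples. Here is a minimal PyTorch sketch of it (assuming a `discriminator` that maps an image batch to scalar scores; `gamma` is the usual penalty weight):

```python
import torch

def r1_penalty(discriminator, real_images, gamma=10.0):
    # R1 (Mescheder et al. 2018): penalize the squared norm of the gradient
    # of the discriminator's output with respect to *real* inputs only.
    real_images = real_images.detach().requires_grad_(True)
    scores = discriminator(real_images)
    grad, = torch.autograd.grad(scores.sum(), real_images, create_graph=True)
    return (gamma / 2) * grad.pow(2).flatten(1).sum(dim=1).mean()
```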

A day later, I thought the images were pretty unsettling and weird, while being faaaaaar away from any recognizable adult content. Still very abstract to the human eye. So, well, let's post it on Facebook, right? That'll amuse my friends while still being SFW.

And the second I posted it, Facebook's anti-porn filter banned me for three days. The key part of the previous paragraph was "to the human eye". What I naively didn't expect was that the pictures were actually really close to porn in convnet feature space, and I got banned for seemingly abstract pictures.

So there you have it folks: a good ol' black box adversarial attack to Facebook's anti-porn filter. Well, let's rather call it an adversarial suicide.

Next step is to actually attack the system and be able to post porn without being flagged. Just for science. Just to make a point about how unsafe neural nets currently are.

Enhancing Videos with Deep Product Placement

Motivation

In my previous company (hi to y'all guys, btw), they had this amazing idea to lock me up in a room with four other 10x-er madmen to come up with an idea for something new and innovative and all that.

We thought that ads were too invasive and unbearable everywhere in the world but were totally absent from movies. That means we could cram those with ads too and hope to make money. Obviously, we're tech people, so LET'S AUTOMATE MOST OF THAT in a few days. Proof of concept. I really do love impossible challenges.

Main concept

So, the goal was something like a UI with a split screen where ad makers could select a 3D model of a product in the left pane, and then click on the movie in the right pane to insert it. Then Neural Networks would do the magic, and

  1. Make region proposals to the ad makers for zones where it makes sense to insert ads, like adding a poster on a wall, or adding a Coca-Cola can on a table.
  2. Scale the 3D model according to the perspective (covered in this blog post)
  3. Handle occlusions, that is, if someone walks between the camera and the inserted object, the occluded part will have to disappear (covered in this post as a side effect of 1)
  4. Track camera motion (uncovered, and never will be)
  5. Re-render / harmonize to accommodate the model's luminosity and global light color to the original image (uncovered, and never will be)
  6. Cast shadows (uncovered, and never will be)

We just had enough time and funding for 1 (barely), 2, and 3. And then I quit the job.

Region proposals

Is it weird if the solution has almost the same name as the problem? I used a Faster-RCNN (which includes a Region Proposal Network) trained on ImageNet to tell me where tables and TV monitors were.
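For illustration, here is roughly what that step looks like with torchvision's off-the-shelf detector. This is a sketch under assumptions: I'm using the COCO-pretrained model and COCO's "dining table" and "tv" label ids, which is not necessarily the exact model we had back then.

```python
import torch
import torchvision

# COCO-pretrained Faster R-CNN (an assumption, see above)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

def insertion_zones(image, wanted=(67, 72), threshold=0.7):
    # image: float tensor (C, H, W) in [0, 1]
    # 67 and 72 are COCO's "dining table" and "tv" category ids
    with torch.no_grad():
        (pred,) = model([image])
    return [box for box, label, score
            in zip(pred["boxes"], pred["labels"], pred["scores"])
            if label.item() in wanted and score.item() >= threshold]
```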

Done.

Depth

Points 2 and 3 are about depth estimation. Choose a region (i.e., a pixel) where the model will be inserted, estimate its depth, scale the model according to the perspective, and if that depth suddenly decreases, that (theoretically) means something has come between the object and the camera, and we have to hide the occluded pixels of the rendered model. Easy, right? So let's do it.

The general depth estimation was made using Deeper Depth Prediction with Fully Convolutional Residual Networks. They provide the trained models, and it does a fairly decent job. However, for our needs, this was not good enough. Look at the predictions: the edges are not sharp at all (which makes handling occlusion a no-go), and tbh, they do look good as pictures, but Oh Lord forgive me if you attempt to visualize them in 3D. Edges are not sharp, and flat surfaces are anything but flat. That's probably the state of the art of what Deep Learning could do at that time, but we had to fix it.

Original Image and Depth Estimation

And incompetent, clueless me happened to stumble upon the great Colorization Using Optimization. Take a B&W image, just give it some colored strokes as hints, and let the algorithmic magic (no Deep Learning™! a regular optimization algorithm!) propagate the color and shit. It'll get the shades right and will do a great job with the edges. And le me said to himself: "This looks interesting, how about I use the same technique to propagate depth hints instead of color hints and get sharp edges?". Turns out that 10 times out of 10, when I have a good idea, thankfully someone else had it before and did the hard work of turning a hand-wavy concept into concrete maths and code.

So someone did just that in High resolution depth image recovery algorithm using grayscale image cough cough, look it up on Sci-Hub, cough cough. Their problem is a bit different from mine: they have a reliable but low-res depth image and they want to scale it up; I have an unreliable but high-res depth image. But still, I thought that could work. Also, they have a bit more going on to make the leap from colorization to depth prediction: it's a fair assumption that colors vary according to the variations in the greyscale image, but it's much less obvious for depth. So they kind of iterate between fixing the depth image and fixing the greyscale image. Unsurprisingly, their results look much better than mine.

I decided to randomly sample my depth image to get some depth hints to propagate through the image, and used what's in the paper. It wasn't super easy to implement for someone like me without any prior knowledge of optimization (besides Gradient Descent) or Image Processing (besides, well, convolutions), but by assembling several of that guy's papers, I could reconstruct the knowledge I was missing and build the thing. Also, the paper is fairly easy to read and generally understand, if you're not trying to implement something without knowing shit about what you're doing.
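Here is the gist of the propagation as a minimal scipy sketch. It is only the basic colorization-style step (not the paper's full scheme, which alternates between fixing the depth and the greyscale images), and `lam` and `sigma` are illustrative knobs, not values from the paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def propagate_depth(gray, hints, mask, lam=100.0, sigma=0.05):
    # gray:  (H, W) grayscale image in [0, 1]
    # hints: (H, W) depth values, only meaningful where mask is True
    # mask:  (H, W) boolean array marking the sampled depth hints
    # Solves min_d  d^T L d + lam * sum_hints (d - hint)^2, where L is a
    # graph Laplacian whose edge weights follow grayscale similarity.
    h, w = gray.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)

    rows, cols, vals = [], [], []
    def add_edges(i0, i1, g0, g1):
        wgt = np.exp(-((g0 - g1) ** 2) / (2 * sigma ** 2)).ravel()
        a, b = i0.ravel(), i1.ravel()
        rows += [a, b, a, b]; cols += [b, a, a, b]
        vals += [-wgt, -wgt, wgt, wgt]

    add_edges(idx[:, :-1], idx[:, 1:], gray[:, :-1], gray[:, 1:])  # horizontal
    add_edges(idx[:-1, :], idx[1:, :], gray[:-1, :], gray[1:, :])  # vertical

    L = sp.csr_matrix((np.concatenate(vals),
                       (np.concatenate(rows), np.concatenate(cols))),
                      shape=(n, n))
    D = sp.diags(lam * mask.ravel().astype(float))
    depth = spsolve((L + D).tocsc(), lam * (hints * mask).ravel())
    return depth.reshape(h, w)
```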

recovered depth

Look it up in detail in the Jupyter Notebook.

Before you see the final result, there's one last thing to fix: frame-by-frame depth estimation wasn't consistent, and the depth image flickered a lot. The easy but disappointing solution I went with was to insert a lowpass filter

\[\text{depth} := 0.95 \cdot \text{depth} + 0.05 \cdot \text{this\_frame\_depth}\]

Perspective

Once we get the depth right-ish, we need to properly scale the object according to its location. This can be easily solved using the pinhole camera model and simple triangle maths: first, tell the program how big the object is, then how big the object should look in the picture at some depth (like, click on the table and re-click a few pixels above to tell where the object should be). Then you're able to use basic proportionality to scale the object at any depth, as long as the camera remains the same.
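In code, the whole perspective story boils down to one proportionality rule (a sketch; `ref_height_px` and `ref_depth` would come from the two calibration clicks described above):

```python
def apparent_height(ref_height_px, ref_depth, depth):
    # Pinhole model: on-screen size is inversely proportional to depth,
    # so an object calibrated at ref_depth scales by ref_depth / depth.
    return ref_height_px * ref_depth / depth
```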

Final Result

In the end, for something done in 3 days without prior knowledge, it was good enough for me to feel happy.

insert video here but I can't find it anymore

A Differentiable Graph for Neural Networks


Following the work of Grefenstette et al. 2015 in the paper "Learning to transduce with unbounded memory", where they invented a fully differentiable Stack, Queue, and DeQue, I'm trying to create a differentiable graph model. Although the work is not finished yet, since it's missing proper experiments, I'm writing this article so that someone may help me with these ideas.

Still, I didn't study maths these past 5 years in an academic context, so don't expect perfect notation and terminology; I'm self-taught in ML and maths. However, I think the ideas are reasonably introduced, and the concept is not totally wrong.

Basic graph model

The simple model stores the edges in an adjacency matrix of size \(|V|\times |V|\), where row \(a\), column \(b\) contains \(P(\text{edge}_{ab})\), that is, the strength of a link from vertex \(a\) to \(b\). The vertices are stored in a vector of size \(|V|\) where each cell contains \(P(a)\), i.e., the strength of existence of \(a\). Let's call the adjacency matrix \(\mathbf{C}\), where \(C_{ab}\) is the value at the \(a\)th row and \(b\)th column of \(\mathbf{C}\), and \(\mathbf{s}\) the strength vector. Both the vector and the matrix contain values only in \([0;1]\).

For two vertices \(a\) and \(b\) (I will discuss later how to address them), some operations are:

Are two vertices directly connected?

\[\text{connected?(a, b)} = s_a s_b C_{ab}\]

Returns the strength of the edge between \(a\) and \(b\). We can also propose to read \(\mathbf{C}^n\) to access the graph's transitive closure of nth order.

Successors of \(a\)

\[\text{succ}(a) = (\mathbf{s} \circ \mathbf{C}_{a,*}) s_a\]

Where \(\mathbf{C}_{a,*}\), following MATLAB's notation, denotes the \(a\)th row of \(\mathbf{C}\).

Returns a vector of strengths: the \(i\)th value indicates the strength of \(i\) as a successor of \(a\) (since \(C_{ai}\) holds the edge from \(a\) to \(i\)).

The predecessor function can be implemented trivially by using columns instead of rows of the matrix.
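To make this concrete, here is a tiny PyTorch sketch of the state and the two hard-indexed operations (toy sizes; the names are mine):

```python
import torch

V = 4                  # toy number of vertices
s = torch.rand(V)      # strength of existence of each vertex, in [0, 1]
C = torch.rand(V, V)   # C[a, b] = strength of the edge from a to b

def connected(a, b):
    # connected?(a, b) = s_a * s_b * C_ab, for hard indices a and b
    return s[a] * s[b] * C[a, b]

def succ(a):
    # i-th entry: strength of i as a successor of a (row a of C)
    return (s * C[a, :]) * s[a]
```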

Change the strength of a vertex

Changing the strength of a vertex \(a\) towards a target strength \(t\) by an amount \(p\) is:

\[s_a := (1-p)s_a + p t\]

So that nothing changes if \(p = 0\).

The same update applies to all edges.

Addressing

Similarly to what was proposed for the Neural Turing Machine (Graves et al. 2014), I propose two addressing modes: by location and by content. The manipulation of the associated graph functions works almost the way the Neural Random Access Machine (Kurach et al. 2015) uses its modules.

By location

Let's take the example of the \(\text{connected?(a, b)}\) function.

To be able to call this function in a fully differentiable way, we can't make a hard choice of \(a\) and \(b\). We instead have to make \(a\) and \(b\) distributions over vertices.

Let \(A\) and \(B\) be distributions over vertices. The \(\text{connected?(a, b)}\) function can then be used in a differentiable way like this:

\[\text{out} = \sum_a \sum_b P(A = a)P(B = b)\text{connected?}(a, b)\] \[\text{out} = \sum_a \sum_b P(A = a)P(B = b) s_a s_b C_{a,b}\]

Same goes for the successor function:

\[\text{out} = \sum_a P(A = a)\text{succ}(a)\]

And so on.
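Concretely, addressing by location just wraps the hard operations in expectations over \(A\) and \(B\). Reusing the toy state from the sketch above:

```python
def soft_connected(A, B):
    # E_{a~A, b~B}[connected?(a, b)], with A and B distributions over vertices
    M = s[:, None] * s[None, :] * C   # M[a, b] = s_a * s_b * C[a, b]
    return A @ M @ B

def soft_succ(A):
    # E_{a~A}[succ(a)], a vector of successor strengths
    M = s[:, None] * s[None, :] * C
    return A @ M
```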

However, addressing by locations has severe cons:

  • You can't grow the number of vertices: the neural net emitting the addressing distributions would need to grow accordingly.

  • As you grow the number of available operations or "views" of the graph (such as adding the ability to read the nth-order transitive closure to study connectivity), you need to emit more and more distributions over \(V\).

  • You need to emit \(V^2\) values of \(p, t\) at each time step to gain the ability to modify each edge. Which is a lot. Way too much. An RNN may be able to do it sequentially, or might not.

Hence, the graph must stay at a fixed size, and the dimensionality of the controls is already huge.

I will now discuss a way to reduce dimensionality and allow the graph to have an unfixed size, with content-based addressing.

By content

So far, our vertices and edges were unlabeled. I have no idea how an unlabeled graph would be useful: the neural net controlling it would have to find a way on its own to know which node is what, and it might not achieve that.

Here, I propose to extend our model with embeddings. With \(d\) being the size of the embeddings, we need an additional matrix \(\mathbf{E} \in \mathbb{R}^{|V| \times d}\) to embed the vertices.

Addressing the nodes is now done by generating a distribution over the nodes, defined as the softmax of the similarity (dot product) between the embedding output by the neural net and the actual vertices' embeddings.

For instance, let \(\mathbf{x_a}, \mathbf{x_b} \in \mathbb{R}^{d}\) be two embeddings given by the controller. We can get the strength of the connection between those embeddings by:

\[\text{out} = \sum_a \sum_b \text{softmax}(\mathbf{E} \mathbf{x_a})_a\text{softmax}(\mathbf{E} \mathbf{x_b})_b \text{connected?}(a, b)\] \[\text{out} = \sum_a \sum_b \text{softmax}(\mathbf{E} \mathbf{x_a})_a\text{softmax}(\mathbf{E} \mathbf{x_b})_b s_a s_b C_{a,b}\]

Same goes for the successor function:

\[\text{out} = \sum_a \text{softmax}(\mathbf{E} \mathbf{x_a})_a \text{succ}(a)\]

We can extend the model again by adding a tensor \(\mathbf{F} \in \mathbb{R}^{|V| \times |V| \times d}\) to embed the edges, but I'm not sure yet about the use cases or the benefits. Maybe one could find it useful to know which (fuzzy) pair of vertices is linked by a given embedded edge \(\mathbf{x}\), like

\[\text{vertex1} = \sum_a \text{softmax}(\sum_b s_b F_{a,b,*} \cdot \mathbf{x})_a s_a E_a\] \[\text{vertex2} = \sum_a \text{softmax}(\sum_b s_b F_{b,a,*} \cdot \mathbf{x})_a s_a E_a\]

We can easily derive other operations in a similar fashion.

To grow the graph, let the controller output, at each timestep, a pair of an embedding and a strength, and add a node with that embedding and strength to the graph. For the edges, it's possible either to initialize their strength and embedding to 0, or to initialize the embedding of the edge from a new vertex \(a\) to \(b\) as \[F_{a,b,*} = \frac{E_{a,*} + E_{b,*}}{2}\] and their strength as a clipped cosine similarity of the embeddings \[C_{a,b} = \max\left(0, \frac{\mathbf{E_{a,*}} \cdot \mathbf{E_{b,*}}}{\Vert\mathbf{E_{a,*}}\Vert \Vert\mathbf{E_{b,*}}\Vert}\right)\]

With this addressing mode, the underlying graph structure given by \(\mathbf{C}\) and \(\mathbf{s}\) is never accessed by the controller, which manipulates only embeddings, allowing fuzzy operations. And the controller never needs to know the number of vertices, which allows using a growing graph without growing the controller, solving the issue we had when addressing by location.
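Content-based addressing then reduces to producing the distributions from embeddings, and growth to appending rows. A sketch continuing the toy code above, with the zero-init option for new edges:

```python
import torch.nn.functional as F

d = 8
E = torch.randn(V, d)   # one embedding per vertex

def address(x):
    # soft pointer over vertices: softmax of dot-product similarity
    return F.softmax(E @ x, dim=0)

def content_connected(x_a, x_b):
    # connected? where a and b are given as controller embeddings
    return soft_connected(address(x_a), address(x_b))

def grow(x_new, s_new):
    # append a vertex with the emitted embedding and strength;
    # edges to and from it are zero-initialized
    global s, E, C
    s = torch.cat([s, s_new.view(1)])
    E = torch.cat([E, x_new.view(1, -1)])
    C = F.pad(C, (0, 1, 0, 1))
```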

Experiments

They are to come as soon as I find an idea to test this concept and decide on a clear way to architect the inputs / outputs of the controller. I'm thinking about question answering about relationships between things, but we'll see. I don't really know how to design such an experiment yet.

Maybe the neural net won't be able to learn how to use this. Maybe it won't use it the expected way. That's the kind of things you never really know.

An expectation maximization Yahtzee AI

Yahtzee

I will describe a simple AI I wrote for the Yahtzee game. The solution is not optimal because of one small point. There are probably smarter ways to write this program, but as I needed to write it quickly to play with my friends on New Year's Eve (I had less than 3 days, actually), my priority was to have a solution that almost guaranteed I would win, not a beautiful and optimal one. If you know how to make it better, let me know in the comments.

As I am self-taught in probability and statistics, my notations and terminology might not be accurate. You're more than welcome to help me improve this in the comments.

Description

The game of Yahtzee is a mix of poker and dice rolls: you have 5 dice to roll, the ability to reroll any of them twice, and, depending on the combinations you get, you score some points. Each combination can be scored only once, and if no combination was made, the player must sacrifice one of them, so that the number of turns is fixed.

The combinations are:

| Name | Score | Description |
| --- | --- | --- |
| One | Sum of 1s | Number of 1s obtained |
| Two | Sum of 2s | Number of 2s obtained * 2 |
| Three | Sum of 3s | Number of 3s obtained * 3 |
| Four | Sum of 4s | Number of 4s obtained * 4 |
| Five | Sum of 5s | Number of 5s obtained * 5 |
| Six | Sum of 6s | Number of 6s obtained * 6 |
| Set | Sum of 3 dice | Three identical dice. Score is the sum of those 3. |
| Full House | 25 | Three identical dice + two identical dice. |
| Quad | Sum of 4 dice | Four identical dice. Score is the sum of those 4. |
| Straight | 30 | Four dice in sequence (1234 / 2345 / 3456) |
| Full straight | 40 | Five dice in sequence (12345 / 23456) |
| Yahtzee | 50 | Five identical dice |
| Luck | Sum of dice | Any combination. Usually, when nothing else works. |

Each player, in turn, does this:

  1. Roll all dice. The player can select a combination and end their turn, or...
  2. Select some dice to roll again. Then, the player can select a combination and end their turn, or...
  3. Select some dice to roll again. Then, the player MUST select a combination to score or sacrifice.

The AI

The numbers

The game has a fairly low dimensionality. Any of the 5 dice can take values from 1 to 6. Hence, the (naive) number of possible games is \(6^5 = 7776\). But this is actually an upper bound: the dice are not ordered, and a lot of the combinations are equivalent (11234 is equivalent to 12431, etc.). The real number of possible games is given by the formula for unordered combinations with repetition. With \(n = 6\) and \(k = 5\):

\[C'_k(n) = {n+k-1 \choose k}\] \[C'_5(6) = {10 \choose 5} = \frac{10!}{5!(10-5)!} = 252\]

Which is, fortunately, far from intractable, and we can bruteforce all of them.

It will also be useful later to know how many outcomes are possible for any number of dice.

| # of dice | # of outcomes |
| --- | --- |
| 0 | 1 |
| 1 | 6 |
| 2 | 21 |
| 3 | 56 |
| 4 | 126 |
| 5 | 252 |
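These counts are just \({6+k-1 \choose k}\) again; the table can be double-checked in two lines:

```python
from math import comb

for k in range(6):
    print(k, comb(6 + k - 1, k))   # 1, 6, 21, 56, 126, 252
```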

The number of possible actions (sets of dice to reroll) is the number of subsets of the dice, i.e., \(2^k=2^5=32\).

The program

The program is fairly simple to use: given a dice roll, it will tell you which dice to reroll (if any), and the associated statistical expected score.

First, we need to precompute the score that each roll gets for each of the combinations. I first enumerate each of the (ordered) possible games, compute their score for each combination, and store them in a table of \(7776 \times 13\).

The user is then prompted to enter the hand they got. The objective is the following:

\[\text{action}^* = \underset{\text{action}}{\operatorname{argmax}} \mathbb{E}[\text{best score} \mid \text{action}]\]

i.e., find the subset of dice to reroll that leads to the best expected score (i.e., the best-scored combination for each possible outcome given this reroll), where \(\text{action}\) is successively one of the 32 possible subsets of dice to reroll, and \(\text{action}^*\) is the best choice (with a greedy strategy).

This expectation can be computed as follows:

\[\text{action}^* = \underset{\text{action}}{\operatorname{argmax}} \frac{1}{\#\{\text{equivalent outcomes} \mid \text{action}\}} \sum_{\text{possible games } g \,\mid\, \text{action}} \underset{\text{combination } c}{\operatorname{max}} (\text{score for } c \mid g)\]

This is a greedy policy that maximizes the score for each turn. As such, this algorithm does not take into account the points you waste by choosing a combination, which matters when maximizing your score over the whole game. As I was unable to think of an optimal solution for this (and I would really like to know if there is one), I chose to apply a (quite arbitrary) penalty to each combination's maximum score, following:

\[\text{penalty}(\text{combination}, \text{current score}) = \exp\left(-\frac{\text{best possible score for combination} - \text{current score}}{100}\right)\]

In code terms, this would lead to something like:

  1. Read input hand \(r\)
  2. Initialize \(e\), the expectation for each possible reroll, to 0
  3. For each possible game \(g_i\):
    1. \(d = \text{dice to reroll to go from } r \text{ to } g_i\)
    2. \(e[d] \mathrel{+}= \frac{\text{maximum score for } g_i}{\text{number of possible outcomes for } d}\)
  4. Return \(\underset{d}{\operatorname{argmax}}\ e[d]\)
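As a sketch, the same expectation can be computed the other way around, by enumerating ordered reroll outcomes (which are all equally likely, sidestepping the outcome-counting subtlety). Here, `best_score` stands for a lookup into the precomputed \(7776 \times 13\) table and is assumed given:

```python
from itertools import combinations, product

def expected_scores(hand, best_score):
    # hand: sorted tuple of 5 die values, e.g. (1, 3, 3, 5, 6)
    # best_score: callable mapping a 5-dice hand to its best combination score
    expectations = {}
    for k in range(6):                           # number of dice to reroll
        for keep in set(combinations(hand, 5 - k)):
            # ordered outcomes are equally likely, so a plain average works
            total = 0
            for reroll in product(range(1, 7), repeat=k):
                total += best_score(tuple(sorted(keep + reroll)))
            expectations[keep] = total / 6 ** k
    return expectations

# the action to play is then the keep-set with the highest expectation:
# best_keep = max(expectations, key=expectations.get)
```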

And that's it.

Conclusion

I won, statistically. Which is good. Bad point: my friends were angry because taking some time to make an often "obvious" choice was not worth it according to them :D. Make sure your friends enjoy maths and / or CS before doing something like this!

The code is available on my GitHub page. As I said, don't expect magnificent code.

Writing a french POS tagger (2)

How to choose those confidence levels? That's where we start to apply machine learning.

Okay, we're into maths now, so everything is a number. Let's assign a unique natural integer to every word in our vocabulary, so that \(w_i\) is the word with id \(i\), and a unique integer to every POS class, so that \(c_i\) is the class with id \(i\).

Maximum Likelihood Estimate

Since we're doing ML, let's use some vocabulary. Everything we extract from the \(i\)th word (noted \(w_i\)) is called "features", denoted \(\theta\); the POS tags we're trying to predict are named "classes" and hence will be noted \(c\); and our confidence is a probability. If we forget a little about suffixes, prefixes, and capitalization, we have only the word as a source of info. Predicting is just the act of doing

\[\underset{c}{\operatorname{argmax}} P(c | w_i)\]

which is like asking "which class am I the most confident about, considering this word?". How do we find such a probability? We can use a simple algorithm called Maximum Likelihood Estimation, which works like this:

\[P(c | w_i) = \frac{\operatorname{Count}(w_i\text{ is a }c)}{\operatorname{Count}(w_i)}\]

"How many times, in a big text corpus, is \(w_i\) of class \(c\), relative to its total number of appearances?". And since the denominator is the same for all classes (it depends solely on \(w_i\)), we can leave it off.

\[\underset{c}{\operatorname{argmax}} P(c | w_i) = \underset{c}{\operatorname{argmax}} \operatorname{Count}(w_i\text{ is a }c)\]
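In code, this MLE tagger is literally a pair of nested counters (a sketch; `tagged_corpus` is an assumed list of `(word, tag)` pairs):

```python
from collections import Counter, defaultdict

counts = defaultdict(Counter)   # counts[word][tag]
for word, tag in tagged_corpus:
    counts[word][tag] += 1

def predict(word):
    # argmax over c of Count(word is a c); unknown words fail, as noted below
    return counts[word].most_common(1)[0][0]
```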

Super simple, but we can't do anything for unknown words despite having all those fancy morphological features. We need something to incorporate them.

Maximum Entropy model

Prediction

Let's say we have what we call a feature vector \(\theta\), for which \(\theta_i\) is the \(i\)th feature, being 1 if the feature is present and 0 otherwise. When we try to predict the class \(c\), the \(i\)th feature will be more or less discriminative. Let's represent that by weighting it with \(\lambda_{c,i}\), "how indicative is \(\theta_i\) of class \(c\)?", where:

  • a high lambda means "wow, super discriminative, this is super indicative of this class!"
  • a low (negative) lambda means "wow, super discriminative, this is super indicative of NOT this class!"
  • and a lambda around 0 means "this is not helping me for class \(c\)".

Emphasis: \(\lambda_{c,i}\theta_i\) is NOT a probability, it's just a score. Continuing this way, the score of class \(c\) for the whole feature vector is the dot product \(\lambda_c\cdot\theta\). Weighting, then summing.

Note: here, \(\lambda\) is essentially a matrix from which I pick column \(c\), which is contrary to everything you'll read in the literature. I'm doing this because, contrary to what the literature says, I assume that a feature can be discriminative for every class, whereas most papers want you to condition the features on the class being predicted, allowing a different set of features per class. With my approach, we have more parameters, and some of them will be weighted to 0, but it prevents premature optimization. Later on, after having analyzed the lambdas, one can purge the unnecessary features and weights from the actual implementation, without having missed a discriminative indication.

Once we have this score, easy stuff, find the class with the best score.

\[\underset{c}{\operatorname{argmax}} P(c | \theta, \lambda) = \underset{c}{\operatorname{argmax}} \lambda_c\cdot\theta\]

How cool is that? Wait, how do I choose all those lambdas?

Learning

Well, that's a different story. To make a long story short, we have a very handy algorithm, called Maximum Entropy. It relies on the principle that the probability distribution which best represents the current state of knowledge is the one with the largest entropy.

I said earlier that scores weren't probabilities, but Maximum Entropy works on probabilities, so we need to turn those scores into probabilities. First, we need the class scores to be on the same scale. Fairly easy: just divide the score for class \(c\) by the sum of the absolute scores of all classes. We're mapping all of our values to \([-1;1]\).

\[\operatorname{not-really-P}(c | \theta, \lambda) = \frac{\lambda_c\cdot\theta}{\sum_{c'}|\lambda_{c'}\cdot\theta|}\]

Still not good, we need \([0;1]\). Well, we could just add 1, but actually, we don't like computers to work in \([0;1]\), because computation in such a range tends to quickly turn into NaNs and absurdly small numbers. And divisions and multiplications are not cheap. That's why we prefer to deal with log-probabilities instead of probabilities. Log is monotonic, has nice additive properties, and is a cute-looking function, despite its undefined \(\log(0)\) value. It turns out that the best way to make probabilities from those scores is the exp function.

\[P(c | \theta,\lambda) = \frac{\exp{\lambda_c\cdot\theta}}{\sum_{c'}\exp{\lambda_{c'}\cdot\theta}}\]

It maps to the correct range of values, and if we take the log-probability, the division turns into a subtraction and the exp cancels out. How nice.

And the super good thing is that, by whichever mathemagics I'm not fully getting yet (please someone explain?), Maximum Entropy and Maximum Likelihood are linked, which brings us to an optimization objective of:

\[\underset{\lambda}{\operatorname{argmin}} -\log \mathcal{L} = -\sum_x\log\frac{\exp(\lambda_{c^{(x)}}\cdot\theta^{(x)})}{\sum_{c'}\exp(\lambda_{c'}\cdot\theta^{(x)})}\]

Where \(\theta^{(x)}\) is the feature vector of the \(x\)th example in the dataset, and \(c^{(x)}\) its true class.

Cool. We have to take the gradient with respect to lambda, which gives us

\[\frac{\partial (-\log \mathcal{L})}{\partial\lambda_c} = \sum_x \left(P(c \mid \theta^{(x)}, \lambda) - 1\{c^{(x)} = c\}\right)\theta^{(x)}\]

With this derivative, we can take a simple iterative approach to update lambda.

\[\lambda_c := \lambda_c - \alpha\frac{\partial (-\log \mathcal{L})}{\partial \lambda_c}\]

This is quite slow, but it works.
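Putting the objective, the gradient, and the update rule together gives just a few lines of numpy (a sketch with plain batch gradient descent; `theta` is the assumed \(N \times D\) feature matrix and `y` the true class ids):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # stabilize the exp
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_maxent(theta, y, n_classes, alpha=0.1, epochs=100):
    N, D = theta.shape
    lam = np.zeros((n_classes, D))
    onehot = np.eye(n_classes)[y]
    for _ in range(epochs):
        P = softmax(theta @ lam.T)          # P[x, c] = P(c | theta_x, lambda)
        grad = (P - onehot).T @ theta / N   # d(-log L) / d lambda
        lam -= alpha * grad
    return lam
```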

And in the end, you have your model.

Wait, what about the context? We used only features from the word itself, so we can't disambiguate our very first example, "je commande" ("I order") vs. "une commande" ("an order"). Well, we'll have to use something a little smarter than a Maximum Entropy model: a MEMM, a Maximum Entropy Markov Model.
