Vermeille's blog


Face Recognition at scale (2/N)

Hey, how about some more problems?

In order to show off the alpha of the face_recognition-over-dlib face recognition system, I needed to get a server. I ended up renting a p2.xlarge EC2 instance. We'll get our own servers later. No worries, Dory, it'll be done quickly, swiftly, promptly.

Build an nginx-flask-face_recognition CUDA Docker image, push it, done. Write a docker-compose.yml, put Traefik in front as a load balancer / reverse proxy / vhost router, and you're good to go, Bilbo.

I demo it, superiors are happy (how couldn't they be, AMARITE?), they try it with random pictures on the internet, it works seemingly fine (I actually did some statistics beforehand, and my accuracy on my testing set was good, so, le me is not surprised). So, then, superiors be like, let's try that on some real video frames! Me like, sure, that's the goal RIGHT?

So we try it on a couple of frames and eerrrrhhhh, not that great. The cynical reader would say "You said you were interested in annotating videos, yet you chose photos in your training and testing set, you stupid modafaka". And he wouldn't be so wrong, but I knew what I was doing (let's pretend).

  1. A face in a photo and a face in a frame shouldn't be much different.
  2. I don't exactly care about being exact per image; I just need to be statistically right over the video sequence. No probs if I miss some frames as long as I'm correct in the majority.
  3. Collecting roughly annotated photos was actually much easier than frames.

Why did it perform not-so-greatly?

  1. First, because there are no males in the database yet, but there are some in the videos, and they're totally polluting the results.

  2. And then, because the embeddings dlib produces when the face is not facing the camera straight on are not great. True, dlib embeds a face aligner, but when the face is not facing the camera, there's only so much you can do to deform it to make it seemingly face the camera without ruining the picture. And dlib's face realigner totally fails when the face is rotated, say, because the person is lying on the ground. Which, uh, happens a lot in our videos, for some reason.


The male problem is super easy to solve. Download the CelebA dataset, embed the faces, and train an SVM to recognize male and female faces right from the embeddings. More than 98% testing accuracy; me happy.
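For the curious, that classifier really is just an SVM on top of the 128-D embeddings. A minimal sklearn sketch of the idea, where random blobs stand in for actual CelebA embeddings (the data and numbers here are mine, not the real pipeline's):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Fake "embeddings": two loosely separated 128-D clusters, one per class.
X = np.concatenate([rng.normal(0.0, 1.0, (200, 128)),
                    rng.normal(0.5, 1.0, (200, 128))])
y = np.array([0] * 200 + [1] * 200)   # 0 = female, 1 = male

clf = SVC(kernel="linear").fit(X, y)
acc = clf.score(X, y)                 # near-perfect on separable blobs
```

With real embeddings the features are already discriminative, which is why such a shallow model is enough.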

Rotations, etc

The first problem, people not facing the camera, can't be solved without re-training dlib's embedding neural net on a less constrained set of photos (thus including non-facing ones), which would take way too long (in computing time, data gathering, and preparation), and it's a wayyy too sensitive piece of the puzzle, so let's try not to touch it.

The second however, rotated faces, has a simple solution: after dlib detected a face, but before sending it to the embedding neural net, we can detect and fix the rotation ourselves.

To find the orientation of a face, a high-quality RGB image is not needed. So we'll restrict ourselves to a greyscale 32x32 image of the face.

So I collected a fair amount of straight-up faces, generated a bunch of rotations to build a dataset, and that was it for the data. Easy peasy, Winny.

Using sklearn, I did not have much success learning the rotation with any of its algorithms. I guess the God of Hype didn't want it to work, because he wanted me to use some more Deep Learning Magic. So, I did.

Brace yourselves, rotated faces, Tensorflow is coming. I ended up building a dead small, five-layer convolutional neural network that took a few minutes to train and had awesome results. Note that, for simplicity, I did not try to regress to the exact angle, but instead to classify it into buckets of 20 degrees each, which is good enough for me, since dlib's face aligner can properly do its job as long as the angle is between -20° and 20°.

Integrating it into dlib's Face Embedding pipeline really made things better. How much? I don't really know, tbh. It's so uuuurrrrrrhhhh to build a test set. Like all researchers / industry folks, I'll make it later, and if I ever publish somethin' 'bout that, I'll obviously pretend I made it right before doing anything.

Rotated face


The net's architecture is dead simple and fast, and looks like this:

InputLayer 32x32x1
Conv2D 64 filters, 3x3 kernel, same
MaxPool2D 2x2 valid
Conv2D 32 filters, 3x3 kernel, same
MaxPool2D 2x2 valid
Conv2D 16 filters, 3x3 kernel, same
MaxPool2D 2x2 valid
FullyConnected 128
FullyConnected 12
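Transcribed to Keras (my transcription; this is not the author's original code), the net would look something like:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(32, 32, 1)),            # greyscale 32x32 face crop
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPool2D(2),                       # 'valid' pooling halves H and W
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPool2D(2),
    layers.Conv2D(16, 3, padding="same", activation="relu"),
    layers.MaxPool2D(2),                       # down to 4x4x16
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(12, activation="softmax"),    # one bucket per angle range
])
model.compile("adam", "sparse_categorical_crossentropy")
```

Small enough to train in minutes on a modest GPU, which is the whole point.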

Some more stuff

Playing more with UTKFaces and CelebA, I also built an SVM that classifies an embedding into ethnicities, and another that regresses to an estimation of the age. The second one does not work that great, mainly, I guess, because embeddings are actually supposed to be age-invariant, so that ideally the same person young and old shares the same embedding.


Is it good enough now? Haha, you bet. Life is a bitch, Machine Learning is a lie. The world is an illusion, governments are a conspiracy, vaccines prevent people from having their Jedi powers. So, yeah. Nope. Not yet. 'Twas bettah, but still not enuff'.

Enhancing Videos with Deep Product Placement


In my previous company (hi to y'all, btw), they had this amazing idea to lock me up in a room with four other 10x-er madmen to come up with an idea for something new and innovative and all that.

We thought that ads were invasive and unbearable everywhere in the world but totally absent from movies. That means we can crap those up with ads too and hope to make money. Obviously, we're tech people, so LET'S AUTOMATE MOST OF THAT in a few days. Proof of concept. I really do love impossible challenges.

Main concept

So, the goal was something like a UI with a split screen where ad makers could select a 3D model of a product on the left pane, and then click on the movie on the right pane to insert it. Then Neural Networks would do the magic, and

  1. Make region proposals to the ad makers for zones where it makes sense to insert ads, like adding a poster on a wall, or adding a Coca-Cola can on a table.
  2. Scale the 3D model according to the perspective (covered in this blog post)
  3. Handle occlusions, that is, if someone walks between the camera and the inserted object, the occluded part will have to disappear (covered in this post as a side effect of 1)
  4. Track camera motion (uncovered, and never will be)
  5. Re-render / harmonize to accommodate the model's luminosity and global light color to the original image (uncovered, and never will be)
  6. Cast shadows (uncovered, and never will be)

We just had enough time and funding for 1 (barely), 2, and 3. And then I quit the job.

Region proposals

Is it weird if the solution has almost the same name as the problem? I used a Faster-RCNN (which includes a Region Proposal Network), trained on ImageNet, to tell me where tables and TV monitors were.



Points 2 and 3 are about depth estimation. Choose a region (i.e., a pixel) where the model will be inserted, estimate its depth, scale the model according to the perspective, and if that depth suddenly decreases, that (theoretically) means something has come between the object and the camera, and we have to hide the occluded pixels of the rendered model. Easy, right? So let's do it.
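The occlusion part boils down to a per-pixel depth comparison. A toy NumPy version of that test (my own sketch, with a made-up margin value):

```python
import numpy as np

def occlusion_mask(scene_depth, object_depth, margin=0.05):
    """True where the scene is nearer than the inserted object:
    those pixels of the rendered model must be hidden."""
    return scene_depth < object_depth - margin

scene = np.full((4, 4), 3.0)   # a wall 3 m away
scene[:2, :2] = 1.0            # someone walks in front, 1 m away
mask = occlusion_mask(scene, object_depth=2.5)
```

Of course, this only works if the depth map is sharp and stable, which is exactly what the rest of this section fights for.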

The general depth estimation was made using Deeper Depth Prediction with Fully Convolutional Residual Networks. They provide the trained models, and it does a fairly decent job. However, for our needs, this was not good enough. Look at the predictions: the edges are not sharp at all (which makes handling occlusion a no-go), and tbh, the results do look good as pictures, but Oh Lord forgive me if you attempt to visualize them in 3D. Edges are not sharp, and flat surfaces are anything but flat. That was probably the state of the art of what Deep Learning could do at the time, but we had to fix it.

Original Image and Depth Estimation


And incompetent, clueless me happened to stumble upon the great Colorization Using Optimization. Take a B&W image, just give it some colored strokes as a hint, and let the algorithmic magic (no Deep Learning™! a regular optimization algorithm!) propagate the color and shit. It'll get the shades right and will do a great job with the edges. And le me said to himself "This looks interesting, how about I use the same technique to propagate depth hints instead of color hints and get sharp edges?". Turns out that 10 times out of 10, when I have a good idea, thankfully someone else has had it before and done the hard work of transposing a hand-wavy concept into concrete maths and code.

So someone just did that in High resolution depth image recovery algorithm using grayscale image cough cough, look it up on SciHub, cough cough. Their problem is a bit different from mine: they have a reliable but low-res depth image and want to scale it up; I have an unreliable but high-res depth image. But still, I thought it could work. Also, they have a bit more going on to make the leap from colorization to depth prediction: it's fair to assume that colors vary along with variations in the greyscale image, but it's much less obvious for depth. So they kind of iterate between fixing the depth image and fixing the greyscale image. Unsurprisingly, their results look much better than mine.

I decided to randomly sample my depth image and take some random depth hints to propagate through the image. And used what's in the paper. Wasn't super easy to implement for someone like me without any prior knowledge of optimization (besides Gradient Descent) or Image Processing (besides, well, convolutions), but by assembling several of that author's papers, I could reconstruct the knowledge I was missing and build the thing. Also, the paper is fairly easy to read and generally understand, as long as you're not trying to implement something without knowing shit about what you're doing.
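To give a flavour of the idea (this is a miniature of the principle, not the paper's actual algorithm), here is a dense least-squares version: every non-hint pixel's depth must equal a weighted average of its neighbours, with weights taken from greyscale similarity, while hint pixels are clamped to their value:

```python
import numpy as np

def propagate(gray, hints, sigma=0.1):
    """Spread sparse depth hints (NaN = no hint) guided by the greyscale image."""
    h, w = gray.shape
    n = h * w
    A, b = np.zeros((n, n)), np.zeros(n)

    def idx(y, x):
        return y * w + x

    for y in range(h):
        for x in range(w):
            i = idx(y, x)
            if not np.isnan(hints[y, x]):          # hint pixel: depth is fixed
                A[i, i], b[i] = 1.0, hints[y, x]
                continue
            nbrs = [(y + dy, x + dx)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            ws = np.array([np.exp(-(gray[y, x] - gray[ny, nx]) ** 2 / sigma ** 2)
                           for ny, nx in nbrs])
            ws /= ws.sum()
            A[i, i] = 1.0                          # d_i - sum_j w_ij d_j = 0
            for (ny, nx), wgt in zip(nbrs, ws):
                A[i, idx(ny, nx)] = -wgt
    return np.linalg.solve(A, b).reshape(h, w)

gray = np.array([[0., 0., 1., 1.]] * 2)   # a sharp greyscale edge in the middle
hints = np.full((2, 4), np.nan)
hints[0, 0], hints[0, 3] = 1.0, 5.0       # one depth hint on each side
depth = propagate(gray, hints)            # left half ≈ 1, right half ≈ 5
```

The weight falls off sharply across the greyscale edge, so the recovered depth stays sharp there too, which is exactly what the raw CNN prediction was missing. A real implementation would use a sparse solver instead of a dense \(n \times n\) matrix.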

recovered depth


Look it up in details on the Jupyter Notebook.

Before you see the final result, there's one last thing to fix: frame-by-frame depth estimation wasn't consistent, and the depth image flickered a lot. The easy but disappointing solution I went with was a lowpass filter:

\[\text{depth} := 0.95 * \text{depth} + 0.05 * \text{this_frame_depth}\]
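In code, that's a one-line exponential moving average applied every frame:

```python
def smooth(depth, frame_depth, alpha=0.95):
    # The lowpass above: mostly keep the running depth,
    # nudge it slightly toward the new frame's estimate.
    return alpha * depth + (1 - alpha) * frame_depth

d = 0.0
for _ in range(100):      # feed a constant measured depth of 10 m
    d = smooth(d, 10.0)   # d creeps toward 10, flicker-free
```

The price of the smoothing is lag: a real depth change takes dozens of frames to show up, which is why it's the disappointing solution.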


Once we get the depth right-ish, we need to properly scale the object according to its location. This is easily solved with the pinhole camera model and simple triangle maths: first, tell the program how big the object is, then how big the object should look in the picture at some depth (like, click on the table and re-click a few pixels above to tell where the top of the object should be). Then you're able to use basic proportionality to scale the object at any depth, as long as the camera remains the same.
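Under the pinhole model, apparent size is proportional to real size over depth, so a single calibration click fixes the ratio. A sketch (the function and the numbers are mine):

```python
def apparent_size(ref_pixels, ref_depth, depth):
    """Pixel size of the object at `depth`, given its on-screen size at `ref_depth`.
    Pinhole model: pixel_size = f * real_size / depth, so the ratio is enough."""
    return ref_pixels * ref_depth / depth

# Calibration: the user clicked a 100 px tall object on a table 2 m away.
size_far = apparent_size(100, 2.0, 4.0)   # twice as far -> half as tall
```

Note that the focal length never needs to be known explicitly; it's baked into the calibration click, which is why this only holds while the camera stays the same.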

Final Result

In the end, for something done in 3 days without prior knowledge, it was good enough for me to feel happy.

insert video here but I can't find it anymore

Face Recognition at scale (1/N)

Face recognition at scale pt.1 (featuring a lot more buzzwords in the text!)


Because I'm such a soulless sheep, I've recently started a PhD and a new job. Deep Learning, computer vision, hype, artificial intelligence, more buzzwords, face recognition, butterflies (I can't not think of flying butter when I hear this word), neural networks, etc.

So yeah. I've been doing that for exactly 4 months, at the moment of writing (19/6/18). It appears that the company I'm working for needs some face recognition to identify people in videos. Like, actually, a fuckton of people, in a fuckload of videos, with a big shitload of uploads per day (I'm trying to curse proportionally to the numbers I'm describing). That means that whatever I'm doing, I have to do it...

  • quickly: people there are a bit skeptical about DL, and I have to prove my worth
  • fast: because whatever algorithms I'm running, I have to process a day's uploads in less than a day, otherwise I'll never catch up
  • accurately: we already have reviewers, they're already busy, and the goal is to help them, not to give them more work because of false positives and the like
  • with the least effort possible: because I'm completely alone on this herculean task

Oh, and btw, I have no formal education in ML/DL, I've been basically self taught for a while, and that's more or less my introductory task in ML. Speaking of suicidal tendencies much?

Proof of concept and initial system

The Data

The first thing I had to do was download some pictures of each person I wished to recognize. I made a list of 100 persons and downloaded around 4k pictures for each of them. That many because I didn't exactly know how many photos are actually needed (and still don't, tbh), because they're basically super easy to find on the internet so why not, the more the better, and because I thought I might need to train my own face recognition engine, since the domain is much less constrained than traditional face recognition systems.

Analyzing what the script downloaded showed an awful lot of noise. Photos not showing the face, photos of another person sharing part of the name, photos where there were other faces, etc. And I just didn't want to review 400k pictures by myself, one by one, or I would still be doing it today.


Cleaning the data

I jumped immediately on dlib, wrapped by the fairly good face_recognition (btw, if you haven't read Adam's post on face recognition, it's awesome), detected the faces, and generated the embeddings (a 128D vector representing one's facial features, built so that vectors of the same person are close in Euclidean distance, and far apart for different persons).
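The matching rule on top of those embeddings is then nothing more than a Euclidean distance test. A toy sketch (the vectors below are 128-D stand-ins, and 0.6 is the threshold commonly used with dlib's embeddings):

```python
import numpy as np

def same_person(e1, e2, threshold=0.6):
    """Two embeddings are declared the same person if they are close enough."""
    return np.linalg.norm(e1 - e2) < threshold

anchor = np.zeros(128)
close = anchor + 0.03    # distance ~0.34 -> same person
far = anchor + 0.10      # distance ~1.13 -> different person
```

Everything that follows (clustering, kNN search, the gender SVM) works on these vectors, never on the raw pixels.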

I don't know jack about Javascript except:

  • It works in a browser so that's good because not only is it remote but it doesn't need anything to be installed to be used;
  • I hate stupid languages like php and javascript that are so terribly error-prone and inconsistent (I learned yesterday that php doesn't have a proper way to concatenate arrays, that's amazing);
  • React is the only frontend technology that hasn't made me want to shoot myself with a cannon so far. I guess that makes it my favorite (or should I say, least hated) UI technology at that point;
  • Andrej Karpathy wrote the awesome t-SNE.js that I definitely wanted to use to explore the data I just downloaded (or that's what I said to my thesis supervisor, but for realsies it's because PICTURES MOVE! WOOOO!)

And I went coding and nerding and sleep-depriving myself for something like a month, wading through JS and web nonsense, avoiding depression, and finally built what I call the Face Inspector. Tadah. You can explore all the pictures downloaded under someone's name, filter them by the number of faces in the photo, visualize them with t-SNE, cluster them with K-means (with automatic choice of K), X-means, and Chinese Whispers; and I've been allowed to release some of that stuff, for better or worse.

Face Inspector Showcase


So, selecting pictures by clusters of similar pictures is a much less unreasonable task, and I had my photos properly tagged in a little over a week.

I quickly built an HTTP API you can send pictures to, and get back some JSON describing the locations of the faces that have been found, and the filename+distance of the k most similar-looking pictures in the dataset. As I knew the dataset was really meant to grow (and I was missing C++), I sat down for a few hours and wrote a fast, multithreaded, vectorized, 128D kNN C++ library that I exposed to Python. I benchmarked various implementations and compilers and took the best one. Takeaway: compilers are amazing. I ended up with a clean implementation that my compiler could easily optimize.
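The C++ library itself isn't shown here, but what it computes condenses into a few lines of NumPy (a brute-force sketch of the same search, not the optimized implementation):

```python
import numpy as np

def knn(db, query, k):
    """Indices and distances of the k nearest 128-D embeddings to `query`."""
    dists = np.linalg.norm(db - query, axis=1)   # all N distances, vectorized
    nearest = np.argsort(dists)[:k]
    return nearest, dists[nearest]

rng = np.random.default_rng(0)
db = rng.normal(size=(10_000, 128))              # 10k fake embeddings
query = db[42] + 0.01                            # a near-duplicate of entry 42
idx, dist = knn(db, query, k=3)                  # idx[0] should be 42
```

Brute force is fine at this scale: one query is a single pass over the database, and it vectorizes beautifully, which is exactly what the C++ version exploits.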

Oh, and for the lulz, I quickly built a UI to exploit that HTTP endpoint, but didn't want to go through the hassle of using npm, creating a React project, compiling shit, etc., so I wrote some funny React-ish code that is actually pure JS-backtick-string-nested-in-JS-backtick-string-nested-in-JS-backtick-string-as-a-python-string-on-the-server. See a part of it for yourself in its full glory. Because who needs JSX, right?

def whoami():
    return """
    .... some JS.....
    let form = document.getElementById('form');
    form.addEventListener('submit', e => {
        res.innerText = 'loading...';
        fetch('/whoami?photo_url=' + encodeURIComponent(photo_url.value))
            .then(r => r.json())
            .then(data => {
                res.innerHTML = ` ${ =>
                    `<h1>${}, conf: ${d.confidence}</h1>
                    <div style="display:flex">
                    <div style="display:flex">
                        ${ =>
                            `<img src="${p[0]}" width="128" />
                             (${Math.floor(p[1] * 1000) / 1000})`
    .... some more JS...

So I had this dlib + fast kNN pipeline, ready to tackle some videos! Oh wait, not so fast. But we'll see that later.

A Differentiable Graph for Neural Networks


Following the work by Grefenstette et al. (2015) in the paper "Learning to transduce with unbounded memory", where they invented a fully differentiable Stack, Queue, and DeQueue, I'm trying to create a differentiable graph model. The work is not finished yet since it misses proper experiments, but I'm writing this article in the hope that someone may help me with these ideas.

Still, I haven't studied maths in an academic context for the past 5 years, so don't expect perfect notation and terminology; I'm self-taught in ML and maths. However, I think the ideas are reasonably introduced, and the concept is not totally wrong.

Basic graph model

The simple model stores the edges in an adjacency matrix of size \(|V|\times |V|\), where row \(a\), column \(b\) contains \(P(\text{edge}_{ab})\), that is, the strength of a link from vertex \(a\) to \(b\). The vertices are stored in a vector of size \(|V|\) where each cell contains \(P(a)\), i.e., the strength of existence of \(a\). Let's call the adjacency matrix \(\mathbf{C}\), where \(C_{ab}\) is the value at the \(a\)th row and \(b\)th column of \(\mathbf{C}\), and \(\mathbf{s}\) the strength vector. Both the vector and the matrix contain values only in \([0;1]\).

For two vertices \(a\) and \(b\) (I will discuss later how to address them), some operations are:

Are two vertices directly connected?

\[\text{connected?(a, b)} = s_a s_b C_{ab}\]

Returns the strength of the edge between \(a\) and \(b\). We can also propose to read \(\mathbf{C}^n\) to access the graph's transitive closure of nth order.

Successors of \(a\)

\[\text{succ}(a) = (\mathbf{s} \circ \mathbf{C}_{a,*}) s_a\]

Where \(\mathbf{C}_{a,*}\), following MATLAB's notation, denotes the \(a\)th row of \(\mathbf{C}\). (Since \(C_{ab}\) holds the strength of the edge from \(a\) to \(b\), successors are read along rows.)

Returns a vector of strengths. The \(i\)th value indicates the strength of \(i\) as a successor of \(a\).

The predecessor function can be implemented trivially by using columns instead of rows in the matrix.

Change the strength of a vertex

Changing strength of a vertex \(a\) to target strength \(t\) with amount \(p\) is:

\[s_a := (1-p)s_a + p t\]

So that nothing changes if \(p = 0\)

and similarly for all edges: \(C_{ab} := (1-p)C_{ab} + p t\).
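The whole basic model fits in a few lines of NumPy (my transcription, reading successors along rows of \(\mathbf{C}\) since \(C_{ab}\) is the edge from \(a\) to \(b\)):

```python
import numpy as np

s = np.array([1.0, 1.0, 0.5])          # vertex strengths
C = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])        # edges 0->1 and 1->2

def connected(a, b):
    return s[a] * s[b] * C[a, b]

def succ(a):
    return s * C[a, :] * s[a]          # strength of each vertex as a successor of a

def set_strength(a, t, p):
    s[a] = (1 - p) * s[a] + p * t      # p = 0 leaves everything unchanged
```

Every operation is a product of things in \([0;1]\), so the outputs stay interpretable as strengths and everything is differentiable with respect to \(s\) and \(C\).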


Addressing

Similarly to what has been proposed for the Neural Turing Machine (Graves et al. 2014), I propose two addressing modes: by location and by content. The manipulation of the associated graph functions almost works the way the Neural Random Access Machine (Kurach et al. 2015) uses its modules.

By location

Let's take the example of the \(\text{connected?(a, b)}\) function.

To call this function in a fully differentiable way, we can't make a hard choice of \(a\) and \(b\). We instead have to make \(a\) and \(b\) distributions over vertices.

Let \(A\) and \(B\) be distributions over vertices. The \(\text{connected?(a, b)}\) can be used in a differentiable way like this:

\[\text{out} = \sum_a \sum_b P(A = a)P(B = b)\text{connected?}(a, b)\] \[\text{out} = \sum_a \sum_b P(A = a)P(B = b) s_a s_b C_{a,b}\]

Same goes for the successor function:

\[\text{out} = \sum_a P(A = a)\text{succ}(a)\]

And so on.
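These double sums collapse nicely into matrix form: with \(A\) and \(B\) as probability vectors, the connected? expectation is just \((A \circ \mathbf{s})^\top \mathbf{C} (B \circ \mathbf{s})\). In NumPy:

```python
import numpy as np

s = np.array([1.0, 1.0, 1.0])
C = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])

A = np.array([1.0, 0.0, 0.0])   # "definitely vertex 0"
B = np.array([0.0, 0.5, 0.5])   # "vertex 1 or 2, equally likely"

out = (A * s) @ C @ (B * s)     # expectation of connected?(a, b)
# out = 0.5: half of B's mass sits on the existing 0->1 edge
```

The same collapse works for the successor expectation, which becomes a single matrix-vector product.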

However, addressing by locations has severe cons:

  • You can't grow the number of vertices: the neural net emitting the addressing distributions would need to grow accordingly.

  • As you grow the number of available operations or "views" of the graph (such as adding the ability to read the nth-order transitive closure to study connectivity), you need to emit more and more distributions over \(V\).

  • You need to emit \(|V|^2\) values of \(p, t\) at each time step to be able to modify every edge. Which is a lot. Way too much. An RNN may be able to do it sequentially, or it might not.

Hence, the graph must keep a fixed size, and the dimensionality of the controls is already huge.

I will now discuss a way to reduce the dimensionality and allow the graph to have an unfixed size, using content-based addressing.

By content

So far, our vertices and edges were unlabeled. I have no idea how an unlabeled graph would be useful: the neural net controlling it would have to find a way on its own to know which node is what, and it might not achieve that.

Here, I propose to extend our model with embeddings. With \(d\) being the size of the embeddings, we need an additional matrix \(\mathbf{E} \in \mathbb{R}^{|V| \times d}\) to embed the vertices.

Addressing the nodes is now done by generating a distribution over the nodes, defined as the softmax of the similarity (dot product) between the embedding output by the neural net and the actual vertices' embeddings.

For instance, let \(\mathbf{x_a}, \mathbf{x_b} \in \mathbb{R}^{d}\) be two embeddings given by the controller. We can get the strength of connection between those embeddings with:

\[\text{out} = \sum_a \sum_b \text{softmax}(\mathbf{E} \mathbf{x_a})_a\text{softmax}(\mathbf{E} \mathbf{x_b})_b \text{connected?}(a, b)\] \[\text{out} = \sum_a \sum_b \text{softmax}(\mathbf{E} \mathbf{x_a})_a\text{softmax}(\mathbf{E} \mathbf{x_b})_b s_a s_b C_{a,b}\]

Same goes for the successor function:

\[\text{out} = \sum_a \text{softmax}(\mathbf{E} \mathbf{x_a})_a \, \text{succ}(a)\]
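Content addressing itself is tiny: the controller emits an embedding \(\mathbf{x}\), and the address distribution is \(\text{softmax}(\mathbf{E}\mathbf{x})\). A sketch with made-up numbers:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())      # stabilized softmax
    return e / e.sum()

E = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.7, 0.7]])       # |V| = 3 vertices, embedding size d = 2

x = np.array([10.0, 0.0])        # the controller "asks for" vertex 0
addr = softmax(E @ x)            # sharply peaked on vertex 0
```

The resulting `addr` vector plugs directly into the location-based formulas above in place of \(P(A = a)\), so every operation stays differentiable end to end.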

We can extend the model again by adding a tensor \(\mathbf{F} \in \mathbb{R}^{|V| \times |V| \times d}\) to embed the edges, but I'm not sure yet about the use case or the benefits. Maybe one could find it useful to know which (fuzzy) pair of vertices is linked by a given embedded edge \(\mathbf{x}\), like

\[\text{vertex1} = \sum_a \text{softmax}(\sum_b s_b F_{a,b,*} \cdot \mathbf{x})_a s_a E_a\] \[\text{vertex2} = \sum_a \text{softmax}(\sum_b s_b F_{b,a,*} \cdot \mathbf{x})_a s_a E_a\]

We can derive other operations in a similar fashion easily.

To grow the graph, let the controller output, at each timestep, a pair of an embedding and a strength, and add a node with that embedding and strength to the graph. For the edges, it's possible either to initialize their strength and embedding to 0, or to initialize the embedding of the edge from the new vertex \(a\) to \(b\) as \[F_{a,b,*} = \frac{E_{a,*} + E_{b,*}}{2}\] and their strength as a clamped cosine similarity of their embeddings \[C_{a,b} = \text{max}(0, \frac{\mathbf{E_{a,*} \cdot E_{b,*}}}{\Vert\mathbf{E_{a,*}}\Vert \Vert\mathbf{E_{b,*}}\Vert})\]

With this addressing mode, the underlying graph structure given by \(\mathbf{C}\) and \(\mathbf{s}\) is never accessed directly by the controller, which manipulates only embeddings, allowing fuzzy operations. And the controller never needs the number of vertices, which allows a growing graph without a growing controller, solving the issue we had when addressing by location.


Experiments

They are to come as soon as I find an idea to test this concept and decide on a clear way to architect the controller's inputs / outputs. I'm thinking about question answering about relationships between things, but we'll see. I don't really know how to design such an experiment yet.

Maybe the neural net won't be able to learn how to use this. Maybe it won't use it the expected way. That's the kind of things you never really know.

An expectation maximization Yahtzee AI


I will describe a simple AI I wrote for the Yahtzee game. The solution is not optimal because of one small point. There are probably smarter ways to write this program, but as I needed it quickly to play with my friends on New Year's Eve (I had less than 3 days, actually), my priority was a solution that almost guaranteed I would win, not a beautiful and optimal one. If you know how to make it better, let me know in the comments.

As I am self-taught in probability and statistics, my notations and terminology might not be accurate. You're more than welcome to help me improve this in the comments.


The game of Yahtzee is a mix of poker and dice rolls: you have 5 dice to roll, the ability to reroll any of them twice, and, depending on the combinations you get, you score points. Each combination can be scored only once, and if no combination is made, the player must sacrifice one of them, so that the number of turns is fixed.

The combinations are:

| Name | Score | Description |
|---|---|---|
| One | Sum of 1s | Number of 1s obtained |
| Two | Sum of 2s | Number of 2s obtained × 2 |
| Three | Sum of 3s | Number of 3s obtained × 3 |
| Four | Sum of 4s | Number of 4s obtained × 4 |
| Five | Sum of 5s | Number of 5s obtained × 5 |
| Six | Sum of 6s | Number of 6s obtained × 6 |
| Set | Sum of the 3 dice | Three identical dice. Score is the sum of those 3. |
| Full House | 25 | Three identical dice + two identical dice. |
| Quad | Sum of the 4 dice | Four identical dice. Score is the sum of those 4. |
| Straight | 30 | Four dice in sequence (1234 / 2345 / 3456) |
| Full straight | 40 | Five dice in sequence (12345 / 23456) |
| Yahtzee | 50 | Five identical dice |
| Luck | Sum of all dice | Any combination. Usually chosen when nothing else works. |

Each player, in turn, does the following:

  1. Roll all dice. The player can select a combination and end their turn, or...
  2. Select some dice to reroll. Then, the player can select a combination and end their turn, or...
  3. Select some dice to reroll. Then, the player MUST select a combination to score or sacrifice.

The AI

The numbers

The game has a fairly low dimensionality. Any of the 5 dice can take values from 1 to 6. Hence, the (naive) number of possible games is \(6^5 = 7776\). But this is actually an upper bound: the dice are not ordered, and many of the combinations are equivalent (11234 is equivalent to 12431, etc.). The real number of possible games is given by the formula for unordered combinations with repetition. With \(n = 6\) and \(k = 5\):

\[C'_k(n) = {n+k-1 \choose k}\] \[C'_{5}(6) = {6+5-1 \choose 5} = {10 \choose 5} = \frac{10!}{5!\,(10-5)!} = 252\]

Which is, fortunately, far from intractable, and we can bruteforce all of them.

We will also find it useful later to know how many outcomes are possible for any number of dice.

| # of dice | # of outcomes |
|---|---|
| 0 | 1 |
| 1 | 6 |
| 2 | 21 |
| 3 | 56 |
| 4 | 126 |
| 5 | 252 |
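Both counts are easy to double-check with stdlib combinatorics:

```python
from math import comb
import itertools

# Unordered combinations with repetition, n = 6 faces, k dice:
counts = [comb(6 + k - 1, k) for k in range(6)]
# counts == [1, 6, 21, 56, 126, 252]

# And by brute force over all 6^5 ordered rolls:
distinct = {tuple(sorted(g)) for g in itertools.product(range(1, 7), repeat=5)}
# len(distinct) == 252
```

Sorting each roll collapses the orderings, which is exactly what "unordered" means here.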

The number of possible actions (sets of dice to reroll) is the number of subsets of the dice, i.e. \(2^k=2^5=32\).

The program

The program is fairly simple to use: given a dice roll, it tells you which dice to reroll (if any), and the associated expected score.

First, we need to precompute the score each roll gets for every combination. I enumerate each of the (ordered) possible games, compute their score for each combination, and store them in a \(7776 \times 13\) table.

The user is then prompted to enter the hand they got. The objective is the following:

\[\text{action*} = \underset{\text{action}}{\operatorname{argmax}} \mathbb{E}[\text{best score | action}]\]

i.e.: find the subset of dice to reroll that leads to the best expected score (the best-scored combination over each possible outcome given this reroll), where \(\text{action}\) ranges over the 32 possible subsets of dice to reroll, and \(\text{action*}\) is the best choice (following an eager strategy).

This expectation can be computed as follows:

\[\text{action*} = \underset{\text{action}}{\operatorname{argmax}} \frac{1}{\#\{\text{equivalent outcomes} \mid \text{action}\}} \sum_{\text{possible games } g \,\mid\, \text{action}} \ \underset{\text{combination } c}{\operatorname{max}}\ \text{score}(c \mid g)\]

This is an eager policy that maximizes the score for each turn. As such, the algorithm does not take into account the points you may waste by choosing a combination, which matters when maximizing your score over the whole game. As I was unable to think of an optimal solution for this (and I would really love to know if there is one), I chose to apply a (quite arbitrary) penalty to each combination's maximum score, following:

\[\text{penalty(combination, current_score)} = \exp\left(-\frac{\text{best possible score for combination} - \text{current_score}}{100}\right)\]

In code terms, this would lead to something like:

  1. Read the input hand \(r\)
  2. Initialize \(e\), the expectation for each possible reroll, to 0
  3. For each possible game \(g_i\):
    1. \(d = \text{dice to reroll to go from } r \text{ to } g_i\)
    2. \[e[d] \text{+=} \frac{\text{maximum score for } g_i}{\text{number of possible outcomes for } d}\]
  4. Return \(\underset{d}{\operatorname{argmax}}\ e[d]\)
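The steps above can be condensed into a short sketch (my own toy version, not the original program: the scorer below only knows five-of-a-kind and the plain sum, not the full 13-combination table):

```python
import itertools
from statistics import mean

def score(hand):
    # Toy scorer: 50 for five identical dice, otherwise the plain sum.
    return 50 if len(set(hand)) == 1 else sum(hand)

def best_reroll(hand):
    """The subset of dice indices to reroll maximizing E[best score | action]."""
    best_action, best_exp = None, -1.0
    for r in range(len(hand) + 1):
        for action in itertools.combinations(range(len(hand)), r):
            kept = tuple(d for i, d in enumerate(hand) if i not in action)
            # Enumerate every outcome of the rerolled dice and average the score:
            outcomes = itertools.product(range(1, 7), repeat=r)
            expected = mean(score(kept + roll) for roll in outcomes)
            if expected > best_exp:
                best_action, best_exp = action, expected
    return best_action, best_exp

action, expected = best_reroll((6, 6, 6, 6, 6))   # a Yahtzee: reroll nothing
```

Averaging over ordered outcomes sidesteps the 1/"number of equivalent outcomes" bookkeeping from the formula: every ordered roll is equiprobable, so a plain mean gives the same expectation.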

And that's it.


I won, statistically. Which is good. Bad point: my friends were angry because taking some time to make an often "obvious" choice was not worth it according to them :D. Make sure your friends enjoy maths and/or CS before doing something like this!

The code is available on my GitHub page. As I said, don't expect magnificent code.
