Vermeille's blog: A Differentiable Graph for Neural Networks (2016-03-06, by Vermeille)
<h2 id="a-differentiable-graph-for-neural-networks">A differentiable graph for neural networks</h2>
<p>Following the work of Grefenstette et al. (2015), who in "Learning to Transduce with Unbounded Memory" invented a fully differentiable Stack, Queue, and DeQue, I'm trying to create a differentiable graph model. The work is not finished yet, since it still lacks proper experiments, but I'm writing this article in the hope that someone may help me with these ideas.</p>
<p>Still, I didn't study maths in an academic context these past 5 years, so don't expect perfect notation and terminology: I'm self-taught in ML and maths. However, I think the ideas are reasonably introduced, and that the concept is not totally wrong.</p>
<h2 id="basic-graph-model">Basic graph model</h2>
<p>The simple model stores the edges in an adjacency matrix of size <span class="math">\(|V|\times |V|\)</span>, where row <span class="math">\(a\)</span>, column <span class="math">\(b\)</span> contains <span class="math">\(P(\text{edge}_{ab})\)</span>, that is, the strength of a link from vertex <span class="math">\(a\)</span> to <span class="math">\(b\)</span>. The vertices are stored in a vector of size <span class="math">\(|V|\)</span> where each cell contains <span class="math">\(P(a)\)</span>, i.e., the strength of existence of <span class="math">\(a\)</span>. Let's call the adjacency matrix <span class="math">\(\mathbf{C}\)</span>, where <span class="math">\(C_{ab}\)</span> is the entry at the <span class="math">\(a\)</span>th row and <span class="math">\(b\)</span>th column of <span class="math">\(\mathbf{C}\)</span>, and <span class="math">\(\mathbf{s}\)</span> the strength vector. Both the vector and the matrix contain values only in <span class="math">\([0;1]\)</span>.</p>
<p>For two vertices <span class="math">\(a\)</span> and <span class="math">\(b\)</span> (I will discuss later how to address them), some operations are:</p>
<h3 id="are-two-vertices-directly-connected">Are two vertices directly connected?</h3>
<p><span class="math">\[\text{connected?(a, b)} = s_a s_b C_{ab}\]</span></p>
<p>Returns the strength of the edge from <span class="math">\(a\)</span> to <span class="math">\(b\)</span>. We can also propose to read <span class="math">\(\mathbf{C}^n\)</span> to access the <span class="math">\(n\)</span>th-order transitive closure of the graph.</p>
<h3 id="successors-of-a">Successors of <span class="math">\(a\)</span></h3>
<p><span class="math">\[\text{succ}(a) = (\mathbf{s} \circ \mathbf{C}_{*,a}) s_a\]</span></p>
<p>Where <span class="math">\(\mathbf{C}_{*,a}\)</span>, following MATLAB's notation, denotes the <span class="math">\(a\)</span>th column of <span class="math">\(\mathbf{C}\)</span>.</p>
<p>Returns a vector of strength. The <span class="math">\(i\)</span>th value indicates the strength of <span class="math">\(i\)</span> as a successor of <span class="math">\(b\)</span>.</p>
<p>The predecessor function can be implemented trivially by using rows intead of columns in the matrix.</p>
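As a sanity check, here is a minimal numeric sketch of these two reads. All values and names are mine, purely for illustration; I read row <code>a</code> of <code>C</code> for the successors, consistent with <code>C[a][b]</code> being the strength of the edge from <code>a</code> to <code>b</code>:

```python
# Toy fuzzy graph with |V| = 3 (all values invented):
# s[a] is the existence strength of vertex a, C[a][b] the edge strength a -> b.
s = [1.0, 0.8, 0.5]
C = [[0.0, 0.9, 0.0],
     [0.0, 0.0, 0.7],
     [0.2, 0.0, 0.0]]

def connected(a, b):
    """Strength of the directed link a -> b, gated by both vertex strengths."""
    return s[a] * s[b] * C[a][b]

def succ(a):
    """i-th entry: strength of i as a successor of a (row a of C)."""
    return [s[i] * C[a][i] * s[a] for i in range(len(s))]
```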
<h3 id="change-the-strength-of-a-vertex">Change the strength of a vertex</h3>
<p>Changing the strength of a vertex <span class="math">\(a\)</span> to a target strength <span class="math">\(t\)</span> by an amount <span class="math">\(p\)</span> is:</p>
<p><span class="math">\[s_a := (1-p)s_a + p t\]</span></p>
<p>So that nothing changes if <span class="math">\(p = 0\)</span>. The same update applies to each edge strength <span class="math">\(C_{ab}\)</span>.</p>
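In code, this update is a plain linear interpolation; a minimal sketch (function name is mine):

```python
# Soft update: interpolate the current strength toward target t with gate p
# in [0, 1]; p = 0 leaves the value untouched, p = 1 overwrites it with t.
def update_strength(s_a, t, p):
    return (1 - p) * s_a + p * t
```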
<h2 id="adressing">Adressing</h2>
<p>Similarly to what has been proposed for the Neural Turing Machine (Graves et al. 2014), I propose two addressing modes: by location and by content. The manipulation of the associated graph functions works almost the way the Neural Random Access Machine (Kurach et al. 2015) uses its modules.</p>
<h3 id="by-location">By location</h3>
<p>Let's take the example of the <span class="math">\(\text{connected?(a, b)}\)</span> function.</p>
<p>To be able to call this function in a fully differentiable way, we can't make a hard choice of <span class="math">\(a\)</span> and <span class="math">\(b\)</span>. We instead have to make <span class="math">\(a\)</span> and <span class="math">\(b\)</span> <em>distributions over vertices</em>.</p>
<p>Let <span class="math">\(A\)</span> and <span class="math">\(B\)</span> be distributions over vertices. The <span class="math">\(\text{connected?}(a, b)\)</span> function can then be used in a differentiable way like this:</p>
<p><span class="math">\[\text{out} = \sum_a \sum_b P(A = a)P(B = b)\text{connected?}(a, b)\]</span> <span class="math">\[\text{out} = \sum_a \sum_b P(A = a)P(B = b) s_a s_b C_{a,b}\]</span></p>
<p>Same goes for the successor function:</p>
<p><span class="math">\[\text{out} = \sum_a P(A = a)\text{succ}(a)\]</span></p>
<p>And so on.</p>
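A quick sketch of the location-addressed call (all values invented): with one-hot distributions the double sum reduces to the hard-indexed <span class="math">\(\text{connected?}\)</span>:

```python
# Fuzzy graph from before (values invented).
s = [1.0, 0.8, 0.5]
C = [[0.0, 0.9, 0.0],
     [0.0, 0.0, 0.7],
     [0.2, 0.0, 0.0]]

# One-hot distributions over vertices: A picks vertex 0, B picks vertex 1,
# so the soft call should equal the hard connected?(0, 1) = s[0]*s[1]*C[0][1].
A = [1.0, 0.0, 0.0]
B = [0.0, 1.0, 0.0]

out = sum(A[a] * B[b] * s[a] * s[b] * C[a][b]
          for a in range(3) for b in range(3))
```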
<p>However, addressing by location has severe drawbacks:</p>
<ul>
<li><p>You can't grow the number of vertices. It would need the neural net emitting addressing distributions to grow accordingly.</p></li>
<li><p>As you grow the number of available operations or "views" of the graph (such as adding the ability to read the <span class="math">\(n\)</span>th-order transitive closure to study connectivity), you need to emit more and more distributions over <span class="math">\(V\)</span>.</p></li>
<li><p>You need to emit <span class="math">\(V^2\)</span> values of <span class="math">\(p, t\)</span> at each time step to be able to modify every edge. That is a lot. Way too much. An RNN may be able to do it sequentially, or it might not.</p></li>
</ul>
<p>Hence, the graph must keep a fixed size, and the dimensionality of the controls is already huge.</p>
<p>I will now discuss a way to reduce the dimensionality and let the graph have an unfixed size, using content-based addressing.</p>
<h3 id="by-content">By content</h3>
<p>So far, our vertices and edges were unlabeled. I have no idea how an unlabeled graph would be useful: the neural net controlling it would have to find a way, on its own, to know which node is what, and it might never achieve that.</p>
<p>Here, I propose to extend our model with embeddings. With <span class="math">\(d\)</span> being the size of the embeddings, we need an additional matrix <span class="math">\(\mathbf{E} \in \mathbb{R}^{|V| \times d}\)</span> to embed the vertices.</p>
<p>Addressing the nodes is now done by generating a distribution over the nodes, defined as the softmax of the similarity (dot product) between the embedding output by the neural net and the vertices' actual embeddings.</p>
<p>For instance, let <span class="math">\(\mathbf{x_a}, \mathbf{x_b} \in \mathbb{R}^{d}\)</span> be two embeddings given by the controller. We can get the strength of connection between those embeddings by:</p>
<p><span class="math">\[\text{out} = \sum_a \sum_b \text{softmax}(\mathbf{E} \mathbf{x_a})_a\text{softmax}(\mathbf{E} \mathbf{x_b})_b \text{connected?}(a, b)\]</span> <span class="math">\[\text{out} = \sum_a \sum_b \text{softmax}(\mathbf{E} \mathbf{x_a})_a\text{softmax}(\mathbf{E} \mathbf{x_b})_b s_a s_b C_{a,b}\]</span></p>
<p>Same goes for the successor function:</p>
<p><span class="math">\[\text{out} = \text{softmax}(\mathbf{E} \mathbf{x_a}) \circ \sum_a \text{succ}(a)\]</span></p>
<p>We can extend the model again by adding a tensor <span class="math">\(\mathbf{F} \in \mathbb{R}^{|V| \times |V| \times d}\)</span> to embed the edges, but I'm not sure yet about the use cases or the benefits. Maybe one could find it useful to know which (fuzzy) pair of vertices is linked by a given embedded edge <span class="math">\(\mathbf{x}\)</span>, like:</p>
<p><span class="math">\[\text{vertex1} = \sum_a \text{softmax}(\sum_b s_b F_{a,b,*} \cdot \mathbf{x})_a s_a E_a\]</span> <span class="math">\[\text{vertex2} = \sum_a \text{softmax}(\sum_b s_b F_{b,a,*} \cdot \mathbf{x})_a s_a E_a\]</span></p>
<p>We can easily derive other operations in a similar fashion.</p>
<p>To grow the graph, let the controller output, at each timestep, a pair of an embedding and a strength, and add a node with that embedding and strength to the graph. For the edges, it's possible either to initialize their strength and embedding to 0, or to initialize the embedding of the edge from the new vertex <span class="math">\(a\)</span> to <span class="math">\(b\)</span> as <span class="math">\[F_{a,b,*} = \frac{E_{a,*} + E_{b,*}}{2}\]</span> and their strength as the clipped cosine similarity of their embeddings <span class="math">\[C_{a,b} = \text{max}\left(0, \frac{\mathbf{E_{a,*}} \cdot \mathbf{E_{b,*}}}{\Vert\mathbf{E_{a,*}}\Vert \Vert\mathbf{E_{b,*}}\Vert}\right)\]</span></p>
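A sketch of that growth-step initialization (function names are mine, and the clipping at 0 keeps the strength in the model's <span class="math">\([0;1]\)</span> range):

```python
import math

# When a new vertex with embedding e_a arrives, initialize its edge to an
# existing vertex with embedding e_b: strength = clipped cosine similarity,
# edge embedding = average of the two vertex embeddings.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

def init_edge(e_a, e_b):
    strength = max(0.0, cosine(e_a, e_b))
    embedding = [(a + b) / 2 for a, b in zip(e_a, e_b)]
    return strength, embedding
```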
<p>With this addressing mode, the underlying graph structure given by <span class="math">\(\mathbf{C}\)</span> and <span class="math">\(\mathbf{s}\)</span> is never accessed directly by the controller, which manipulates only embeddings, allowing fuzzy operations. The controller never needs to know the number of vertices either, so the graph can grow without growing the controller, solving the issue we had with addressing by location.</p>
<h2 id="experiments">Experiments</h2>
<p>They are to come as soon as I find an idea to test this concept and decide on a clear way to architect the inputs/outputs of the controller. I'm thinking about question answering about relationships between things, but we'll see. I don't really know how to design such an experiment yet.</p>
<p>Maybe the neural net won't be able to learn how to use this. Maybe it won't use it the expected way. That's the kind of thing you never really know.</p>
An expectation maximization Yahtzee AI (2016-03-05, by Vermeille)
<h2 id="yahtzee">Yahtzee</h2>
<p>I will describe a simple AI I wrote for the Yahtzee game. The solution is not optimal because of one small point. There are probably smarter ways to write this program but, as I needed it quickly to play with my friends on New Year's Eve (I had less than 3 days, actually), my priority was a solution that almost guaranteed a win, not a beautiful and optimal one. If you know how to make it better, let me know in the comments.</p>
<p>As I am self-taught in probability and statistics, my notation and terminology might not be accurate. You're more than welcome to help me improve this in the comments.</p>
<h3 id="description">Description</h3>
<p>The game of Yahtzee is a mix of poker and dice rolls: you have 5 dice to roll, the ability to reroll any of them twice and, depending on the combinations you make, you score points. Each combination can be scored only once, and if no combination is made, the player must sacrifice one of them, so that the number of turns is fixed.</p>
<p>The combinations are:</p>
<table>
<thead>
<tr class="header">
<th align="left">Name</th>
<th align="left">Score</th>
<th align="left">Description</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td align="left">One</td>
<td align="left">Sum of 1s</td>
<td align="left">Number of 1s obtained</td>
</tr>
<tr class="even">
<td align="left">Two</td>
<td align="left">Sum of 2s</td>
<td align="left">Number of 2s obtained * 2</td>
</tr>
<tr class="odd">
<td align="left">Three</td>
<td align="left">Sum of 3s</td>
<td align="left">Number of 3s obtained * 3</td>
</tr>
<tr class="even">
<td align="left">Four</td>
<td align="left">Sum of 4s</td>
<td align="left">Number of 4s obtained * 4</td>
</tr>
<tr class="odd">
<td align="left">Five</td>
<td align="left">Sum of 5s</td>
<td align="left">Number of 5s obtained * 5</td>
</tr>
<tr class="even">
<td align="left">Six</td>
<td align="left">Sum of 6s</td>
<td align="left">Number of 6s obtained * 6</td>
</tr>
<tr class="odd">
<td align="left">Set</td>
<td align="left">Sum of 3 dices</td>
<td align="left">Three same dices. Score is the sum of those 3.</td>
</tr>
<tr class="even">
<td align="left">Full House</td>
<td align="left">25</td>
<td align="left">Three same dices + two same dices.</td>
</tr>
<tr class="odd">
<td align="left">Quad</td>
<td align="left">Sum of 4 dices</td>
<td align="left">Four same dices. Score is the sum of those 4.</td>
</tr>
<tr class="even">
<td align="left">Straight</td>
<td align="left">30</td>
<td align="left">Four dices in sequence (1234 / 2345 / 3456)</td>
</tr>
<tr class="odd">
<td align="left">Full straight</td>
<td align="left">40</td>
<td align="left">Five dices in sequence (12345 / 23456)</td>
</tr>
<tr class="even">
<td align="left">Yahtzee</td>
<td align="left">50</td>
<td align="left">Five same dices</td>
</tr>
<tr class="odd">
<td align="left">Luck</td>
<td align="left">Sum of dices</td>
<td align="left">Any combination. Usually, when nothing else works.</td>
</tr>
</tbody>
</table>
<p>Each player, in turn, does the following:</p>
<ol style="list-style-type: decimal">
<li>Roll all 5 dice. The player can select a combination and end their turn, or...</li>
<li>Select some dice to roll again. Then, the player can select a combination and end their turn, or...</li>
<li>Select some dice to roll again. Then, the player MUST select a combination to score or sacrifice.</li>
</ol>
<h2 id="the-ai">The AI</h2>
<h3 id="the-numbers">The numbers</h3>
<p>The game has a fairly low dimensionality. Each of the 5 dice can take values from 1 to 6. Hence, the (naive) number of possible games is <span class="math">\(6^5 = 7776\)</span>. But this is actually an upper bound: the dice are not ordered, so a lot of the combinations are equivalent (11234 is equivalent to 12431, etc.). The real number of possible games is given by the formula for unordered combinations with repetitions. With <span class="math">\(n = 6\)</span> and <span class="math">\(k = 5\)</span>:</p>
<p><span class="math">\[C'_k(n) = {n+k-1 \choose k}\]</span> <span class="math">\[C'_{ 5 }( 6 ) = C_{{ 5}}(10) = {{ 10} \choose 5} = \frac{ 10! }{ 5!(10-5)!} = 252\]</span></p>
<p>Which is, fortunately, far from intractable, and we can bruteforce all of them.</p>
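The 252 figure is small enough to verify by brute force; a quick sketch:

```python
from itertools import combinations_with_replacement

# Enumerate every unordered roll of five six-sided dice; there should be
# C(10, 5) = 252 of them.
games = list(combinations_with_replacement(range(1, 7), 5))
n_games = len(games)
```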
<p>It will also be useful later to know how many unordered outcomes are possible for any number of dice.</p>
<table>
<thead>
<tr class="header">
<th align="left"># of dices</th>
<th align="left"># of outcomes</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td align="left">0</td>
<td align="left">1</td>
</tr>
<tr class="even">
<td align="left">1</td>
<td align="left">6</td>
</tr>
<tr class="odd">
<td align="left">2</td>
<td align="left">21</td>
</tr>
<tr class="even">
<td align="left">3</td>
<td align="left">56</td>
</tr>
<tr class="odd">
<td align="left">4</td>
<td align="left">126</td>
</tr>
<tr class="even">
<td align="left">5</td>
<td align="left">252</td>
</tr>
</tbody>
</table>
<p>The number of possible <em>actions</em> (sets of dice to reroll) is the number of subsets of the dice, i.e., <span class="math">\(2^k=2^5=32\)</span>.</p>
<h3 id="the-program">The program</h3>
<p>The program is fairly simple to use: given a roll, it tells you which dice to reroll (if any) and the associated statistical expected score.</p>
<p>First, we need to precompute the score that each roll gets for each of the combinations. I enumerate each of the (ordered) possible games, compute their score for each combination, and store them in a table of <span class="math">\(7776 \times 13\)</span>.</p>
<p>The user is then prompted to enter the hand they got. The objective is the following:</p>
<p><span class="math">\[\text{action*} = \underset{\text{action}}{\operatorname{argmax}} \mathbb{E}[\text{best score | action}]\]</span></p>
<p>i.e.: find the subset of dice to reroll that leads to the best expected score (that is, the best-scored combination for each possible outcome given this reroll), where <span class="math">\(\text{action}\)</span> is successively one of the 32 possible subsets of dice to reroll, and <span class="math">\(\text{action*}\)</span> the best choice (under an eager strategy).</p>
<p>This expectation can be computed as follows:</p>
<p><span class="math">\[\text{action*} = \underset{\text{action}}{\operatorname{argmax}}
\frac{1}{\text{# of equivalent outcomes | action}} \sum_{\text{possible games} g \text{| action}} \underset{\text{combination} c}{\operatorname{max}}(\text{score for} c | g)\]</span></p>
<p>This is an eager policy that maximizes the score of each <em>turn</em>. As such, the algorithm does not take into account the <em>waste of points</em> made by choosing a combination too early, which matters when maximizing the score of the <em>whole game</em>. As I was unable to think of an optimal solution for this (and I would really like to know if there is one), I chose to apply a (quite arbitrary) penalty to each combination's maximum score:</p>
<p><span class="math">\[\text{penalty(combination, current_score)} = \exp{-\frac{\text{best possible score for combination} - \text{current_score})}{100}}\]</span></p>
<p>In code terms, this would lead to something like:</p>
<ol style="list-style-type: decimal">
<li>Read input hand <span class="math">\(r\)</span></li>
<li>Initialize <span class="math">\(e\)</span>, the expectation for each possible reroll, to 0</li>
<li>For each possible game <span class="math">\(g_i\)</span>:
<ol style="list-style-type: decimal">
<li><span class="math">\(d = \text{dices to reroll to go from } r \text{ to } g_i\)</span></li>
<li><span class="math">\[e[d] \text{+=} \frac{1}{\text{number of possible outcomes for } d}
\text{maximum score for } g_i\]</span></li>
</ol></li>
<li>return <span class="math">\(\underset{\text{d}}{\operatorname{argmax}} e[d]\)</span></li>
</ol>
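The loop above can be sketched in Python as follows. This is my own sketch, not the original program: for brevity I substitute a toy scoring function (the plain sum of the dice) for the real 13-combination score table, so the expected values are illustrative only.

```python
from itertools import combinations, product

def score(hand):
    # Toy stand-in for "best combination score for this game": sum of the dice.
    return sum(hand)

def best_reroll(hand):
    """Return (indices of dice to reroll, expected score), eagerly."""
    best_action, best_expected = None, -1.0
    for r in range(len(hand) + 1):
        for idx in combinations(range(len(hand)), r):
            kept = [d for i, d in enumerate(hand) if i not in idx]
            total, count = 0.0, 0
            # Ordered outcomes of the rerolled dice are equiprobable, so the
            # expectation is a plain average over all 6**r of them.
            for outcome in product(range(1, 7), repeat=r):
                total += score(kept + list(outcome))
                count += 1
            if total / count > best_expected:
                best_action, best_expected = idx, total / count
    return best_action, best_expected
```

With the toy score, rerolling a die is worth it exactly when its value is below the 3.5 expectation of a fresh roll, which the enumeration recovers on its own.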
<p>And that's it.</p>
<h2 id="conclusion">Conclusion</h2>
<p>I won, statistically. Which is good. Bad point: my friends were annoyed, because taking some time to make an often "obvious" choice was not worth it according to them :D. Make sure your friends enjoy maths and/or CS before doing something like this!</p>
<p>The code is available on <a href="https://github.com/Vermeille/Yahtzee">my GitHub page</a>. As I said, don't expect magnificent code.</p>
Writing a french POS tagger (2) (2015-10-13, by Vermeille)
<p>How to choose those confidence levels? That's where we start to apply machine learning.</p>
<p>Okay, we're into maths now, so everything is a number. Let's assign a unique natural integer to every word in our vocabulary, so that <span class="math">\(w_i\)</span> is the word with id <span class="math">\(i\)</span>, and a unique integer to every POS class, so that <span class="math">\(c_i\)</span> is the class with id <span class="math">\(i\)</span>.</p>
<h2 id="maximum-likelihood-estimate">Maximum Likelihood Estimate</h2>
<p>Since we're doing ML, let's use its vocabulary. Everything we extract from the <span class="math">\(i\)</span>th word (noted <span class="math">\(w_i\)</span>) is called "features", denoted <span class="math">\(\theta\)</span>; the POS we're trying to tag are named "classes" and hence noted <span class="math">\(c\)</span>; and our confidence is a probability. If we forget for a moment about suffixes, prefixes, and capitalization, we have only the word itself as a source of information. Predicting is just the act of computing</p>
<p><span class="math">\[\underset{c}{\operatorname{argmax}} P(c | w_i)\]</span></p>
<p>which is like asking "which class am I the most confident about, given this word?". How do we find such a probability? We can use a simple method called Maximum Likelihood Estimation, which works like this:</p>
<p><span class="math">\[P(c | w_i) = \frac{\operatorname{Count}(w_i\text{ is a }c)}{\operatorname{Count}(w_i)}\]</span></p>
<p>"How many times, in a big text corpus, is <span class="math">\(w_i\)</span> of class c relatively to is global appearance?". And since the denominator is the same for all classes (it varies solely on w_i), we can leave it off.</p>
<p><span class="math">\[\underset{c}{\operatorname{argmax}} P(c | w_i) = \underset{c}{\operatorname{argmax}} \operatorname{Count}(w_i\text{ is a }c)\]</span></p>
<p>Super simple, but we can't do anything for unknown words despite having all those fancy morphological features. We need something to incorporate them.</p>
<h2 id="maximum-entropy-model">Maximum Entropy model</h2>
<h3 id="prediction">Prediction</h3>
<p>Let's say we have what we call a feature vector <span class="math">\(\theta\)</span>, in which <span class="math">\(\theta_i\)</span>, the <span class="math">\(i\)</span>th feature, is 1 if the feature is present and 0 otherwise. When we try to predict the class <span class="math">\(c\)</span>, the <span class="math">\(i\)</span>th feature will be more or less discriminative. Let's represent that by weighting it with <span class="math">\(\lambda_{c,i}\)</span>, "how indicative is <span class="math">\(\theta_i\)</span> of class <span class="math">\(c\)</span>?", where:</p>
<ul>
<li>a high lambda means "wow, super discriminative, this is super indicative of this class!"</li>
<li>a low (negative) lambda means "wow, super discriminative, this is super indicative of NOT this class!"</li>
<li>and a lambda around 0 means "this is not helping me for class <span class="math">\(c\)</span>". Emphasis: <span class="math">\(\lambda_{c,i}\theta_i\)</span> is NOT a probability, it's just a score. Continuing this way, evaluating the score of class <span class="math">\(c\)</span> for the whole feature vector is the dot product <span class="math">\(\lambda_c\cdot\theta\)</span>: weighting, then summing.</li>
</ul>
<p>Note: here, <span class="math">\(\lambda\)</span> is essentially a matrix from which I pick the column <span class="math">\(c\)</span>, which is contrary to everything you'll read in the literature. I'm doing this because, contrary to what the literature says, I assume that a feature can be discriminative for every class, whereas most papers want you to condition the features on the class being predicted, allowing a different set of features per class. With my approach we have more parameters, and some of them will be weighted to 0, but it prevents premature optimization. Later on, after having analyzed the lambdas, one can purge the unnecessary features and weights from the actual implementation, without having missed a discriminative indication.</p>
<p>Once we have this score, easy stuff, find the class with the best score.</p>
<p><span class="math">\[\underset{c}{\operatorname{argmax}} P(c | \theta, \lambda)
= \underset{c}{\operatorname{argmax}} \lambda_c\cdot\theta\]</span></p>
<p>How cool is that? But wait, how do I choose all those lambdas?</p>
<h3 id="learning">Learning</h3>
<p>Well, that's a different story. To make a long story short, we have a very handy algorithm called Maximum Entropy. It relies on the principle that the probability distribution which best represents the current state of knowledge is the one with the largest entropy.</p>
<p>I said earlier that scores aren't probabilities, and ME works on probabilities, so we need to turn those scores into probabilities. First, we need the class scores to be on the same scale. Fairly easy: just divide the score for class <span class="math">\(c\)</span> by the sum of the absolute scores of all classes. We're mapping all of our values to <span class="math">\([-1;1]\)</span>.</p>
<p><span class="math">\[\operatorname{not-really-P}(c | \theta, \lambda) = \frac{\lambda_c\cdot\theta}{\sum_{c'}|\lambda_{c'}\cdot\theta|}\]</span></p>
<p>Still not good: we need <span class="math">\([0;1]\)</span>. Well, we could just add 1, but actually we don't like computers to work in <span class="math">\([0;1]\)</span>, because computation in such a range tends to quickly produce NaNs and absurdly small numbers, and divisions and multiplications are not cheap. That's why we prefer to deal with log-probabilities instead of probabilities: log is monotonic, has nice additive properties, and is a cute-looking function, despite its undefined <span class="math">\(\log(0)\)</span> value. It turns out that the best way to make probabilities from those scores is the exp function.</p>
<p><span class="math">\[P(c | \theta,\lambda) = \frac{\exp{\lambda_c\cdot\theta}}{\sum_{c'}\exp{\lambda_{c'}\cdot\theta}}\]</span></p>
<p>It maps to the correct range of values, and if we take the log-probability, the division turns into a subtraction and the exps cancel out. How nice.</p>
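A sketch of that score-to-probability mapping. The max-subtraction below is a standard numerical-stability trick, not something required by the maths; it changes nothing mathematically but avoids overflow in exp:

```python
import math

def softmax(scores):
    """Turn scores lambda_c . theta into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def log_softmax(scores):
    """Log of the softmax: the division becomes a subtraction."""
    m = max(scores)
    lse = m + math.log(sum(math.exp(s - m) for s in scores))
    return [s - lse for s in scores]
```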
<p>And the super good thing is that, by some mathemagics I'm not fully getting yet (please, someone explain?), Maximum Entropy and Maximum Likelihood are linked, which brings us to the optimization objective:</p>
<p><span class="math">\[\underset{\lambda}{\operatorname{argmin}} -\log \mathcal{L}
= -\sum_x\log\frac{\exp{\lambda_c\cdot\theta^{(x)}}}{\sum_{c'}\exp{\lambda_{c'}\cdot\theta^{(x)}}}\]</span></p>
<p>Where <span class="math">\(\theta^{(x)}\)</span> is the feature vector of the <span class="math">\(x\)</span>th example in the dataset.</p>
<p>Cool. We have to take the gradient of this loss with respect to lambda, which gives us, for each class <span class="math">\(c\)</span>:</p>
<p><span class="math">\[\frac{\partial \mathcal{L}}{\partial\lambda_c} =
-\sum_x \left(1\{x\text{ is a }c\} - P(c | \theta^{(x)}, \lambda)\right)\theta^{(x)}\]</span></p>
<p>With this derivative, we can take a simple iterative approach to update lambda:</p>
<p><span class="math">\[\lambda := \lambda - \alpha\frac{\partial \mathcal{L}}{\partial \lambda}\]</span></p>
<p>This is quite slow, but it works.</p>
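Putting the objective, gradient, and update together, here is a tiny end-to-end sketch on made-up data (two classes, three binary features; plain per-example gradient descent, nothing fancier):

```python
import math

# Made-up training set: feature vectors theta and their true classes.
X = [[1, 0, 1], [1, 0, 0], [0, 1, 1], [0, 1, 0]]
Y = [0, 0, 1, 1]
lam = [[0.0] * 3 for _ in range(2)]   # lam[c][i] is lambda_{c,i}

def probs(theta):
    """P(c | theta, lambda) via the softmax of the scores lambda_c . theta."""
    scores = [sum(l * t for l, t in zip(lam[c], theta)) for c in range(2)]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

alpha = 0.5
for _ in range(200):
    for theta, y in zip(X, Y):
        p = probs(theta)
        for c in range(2):
            for i in range(3):
                # step against the gradient of the negative log-likelihood:
                # lambda_{c,i} += alpha * (1{y == c} - P(c)) * theta_i
                lam[c][i] += alpha * ((1.0 if c == y else 0.0) - p[c]) * theta[i]
```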
<p>And in the end, you have your model.</p>
<p>Wait, what about the context? We used only features from the word itself, and we can't disambiguate our very first example: "je commande" versus "une commande". Well, we'll have to use something a little smarter than a Maximum Entropy model: a MEMM, a Maximum Entropy Markov Model.</p>
Writing a french POS tagger (1) (2015-10-12, by Vermeille)
<h2 id="context">Context</h2>
<p>French lacks a lot of NLP tools in the FOSS community, and everyone focuses on English AI, so French technology sucks. How sad for us. Thing is, I want an AI in my home for various automation stuff, and I want it to run locally, for privacy reasons (I don't want to share my personal audio and video with companies at every single second). So someone has to start something. Let's say this someone is me, hahaha, sounds so much like a noble duty, hahaha.</p>
<p>The very first step in NLP is to have a POS tagger, that is, something tagging every word in a sentence with its part of speech (verb, noun, pronoun, adjective, adverb, etc.). Having this provides extra features that allow higher-level models to generalize more easily (like "oh, this word is a noun; even if I never saw it before, I know that this sentence is syntactically correct") and with less data.</p>
<p>One might at first think that this is not even an issue: just look at the dictionary entry for the word. Sadly, language is highly ambiguous. Consider "Je commande" (I order) and "une commande" (an order); depending on the context, "commande" is either a noun or a verbal form. Thankfully, French is much more morphologically rich than English, which makes it less ambiguous and provides more patterns in the words to guess their POS (the way a word ending in "ly" in English is most likely an adverb).</p>
<h2 id="word-features">Word features</h2>
<p>Alright, let's brainstorm for a little while and see what kind of clues we can use to guess a word's POS. In machine learning terminology, these are called "features".</p>
<ul>
<li>Hopefully, we already know the word, and this word will have a clear frequential dominance of one of its many possible POS. For instance, the determiner "la" is ambiguous with "LA" (Los Angeles), but since the determiner occurs much more often than the city, always tagging it as the determiner will fail in only very few cases. The same applies to "est" (a form of "être"/"be") and "est", the Latin locution.</li>
<li>Case would also help disambiguate the LA case, so keeping the word's shape is a good feature. We're unlikely to know every city, every possible name, every organization, etc., so something starting with a capital letter is actually a good indication to tag it as a proper noun.</li>
<li>If we don't know the word, we can use some morphological features of french. Suffixes like "er", "é", "ées", "ée", "eindre", "oindre", "ir", "issons", "issez", "ent" are very strong features.</li>
<li>Same applies for prefixes.</li>
</ul>
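The brainstormed clues can be sketched as a feature-extraction function (the suffix list is abridged and the feature names are mine):

```python
# Abridged suffix list from the brainstorm above.
SUFFIXES = ["er", "é", "ées", "ée", "ir", "issons", "issez", "ent", "ment"]

def features(word):
    """Binary features: word identity, capitalization, and matched suffixes."""
    f = {"word=" + word.lower(): 1,
         "capitalized": int(word[:1].isupper())}
    for s in SUFFIXES:
        if word.lower().endswith(s):
            f["suffix=" + s] = 1
    return f
```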
<p>Great. But there's another question. More than one, actually. How much can I trust each of these features? What are their relative levels of confidence? I mean, I believe they help; I don't know if I'm right, and if I am, I don't know by how much. An "ement" suffix is always an adverb (tbh, I just don't have any counterexample on top of my head right now), while "er" is very often indicative of an infinitive verb, but a counterexample is the adjective "léger" (light, not heavy), so we're slightly less confident in this one.</p>
<p>How to choose those confidence levels? That's where we start to apply machine learning.</p>
No, I can't hack your ex-girlfriend's Facebook (2015-08-12, by Vermeille)
<p>As a computer geek who eventually turned into a software engineer, the question I've been asked the most (if not <em>the most</em>) is:</p>
<blockquote>
<p>Can you hack someone's service account?</p>
</blockquote>
<p>And I'm sure it's super common among SWEs, and I'm sure I'm not the only one raging about that.</p>
<h2 id="quick-and-simple-answer">Quick and simple answer</h2>
<p>No, I fuckin' can't.</p>
<h2 id="longer-and-detailed-answer">Longer and detailed answer</h2>
<p>To understand why, let me explain what it takes to hack something.</p>
<h3 id="an-internet-service">An internet service</h3>
<p>Any internet service (GMail, Facebook, MSN, any blogging service, etc.) is a (complex) system built by engineers. Compare it to an engine, in a mechanical sense, or an electronic circuit, if you like, enclosed in a black box. This service is split into several different parts:</p>
<ul>
<li><p>The web view is what you actually see and quite often the only way you have to use the website. On a minimal or poorly designed website, you directly receive the view filled with the data (the content of the page).</p></li>
<li><p>Sometimes, this webview calls another service on the web. For example, when you go on Facebook, you want to see the webview. When it's showing, the webview queries another service on the web to know what it must show; this service is known as an API. The API sends data back to the webview in an easy-to-process but not very pretty way.</p></li>
<li><p>Both of them are generated on a webserver where the company's code is running. You have absolutely no control over that. Actually, that's what hacking is all about: being able to access this server and do bad stuff.</p></li>
<li><p>Most of the time, there's also a database service where all of the user data is stored. This is the most precious part of such services; breaking into it is actually even better than breaking into the webserver. Engineers know that. That's why it most often has no interaction with the external world: it's directly accessible only from inside the company, and very few people have the credentials. In big companies, most engineers never see actual data and work only on generated, fictitious data, for privacy reasons, some out of company ethics, others enforced by law. So that data can actually be rendered to the user, the webserver queries the database for what's needed, filters the result, and renders it to the webview or the API.</p></li>
</ul>
<h3 id="how-do-we-get-in">How do we get in?</h3>
<p>You have to find a weakness of the system. You have to <em>misuse</em> it. If we continue with the analogies proposed before, the question of hacking is:</p>
<blockquote>
<p>Given what we could see and how we could interact with the engine, how can we misuse it so that it does what we want?</p>
</blockquote>
<p>If you have a car with an artificial speed limitation, you have to understand how the engine works, why it has this limitation, and how you can remove it. Not that hard, huh? Anybody at least a little versed in mechanics can do that! Surely SWEs know how software works and can do the same!</p>
<p>Nope. In the case of an internet service, you can't even touch the engine. You can only see what it produces, and use it in the way the engineers allowed you to. You don't know how it works, because most of the time the company keeps it secret. It's like trying to bypass the car's speed limitation using only the car's controls. Now it sounds way harder, doesn't it?</p>
<h3 id="what-is-hacking-about-in-the-end">What is hacking about, in the end?</h3>
<p>Hacking is misusing your service, trying to guess what mistakes the engineers could have made, so that you can do more than what you should. It's about forgetting to handle the server's state properly <a href="http://sakurity.com/blog/2015/05/21/starbucks.html">when doing two opposite actions</a>; not securing some text fields, allowing you to <a href="http://www.w3schools.com/sql/sql_injection.asp">inject code</a> (super old, everybody's aware of this one); not checking filetypes properly when uploading files, etc.</p>
<p>So the hacker has to guess, try, fail, and imagine what the engineers have done wrong. Try everything that comes to mind and, hopefully, something will work. And even if you find a weakness, you have no guarantee you can exploit it to do what you'd like. Maybe it will just produce garbage results.</p>
<h3 id="i-know-a-weakness-and-how-to-exploit-it">I know a weakness and how to exploit it</h3>
<p>Good, you have two choices now:</p>
<ol style="list-style-type: decimal">
<li><p>Contact the service and tell them about it. They'll thank you, you'll save people's data, maybe you'll write an article that will get famous for a few weeks, and you may be given some money by the said company.</p></li>
<li><p>Keep it for yourself. Obviously, if it gets known, they will eventually fix it, and it will become unusable.</p></li>
</ol>
<p>So, exploits are well-kept secrets. No, I can't hack your ex-girlfriend's account.</p>
<h2 id="but-i-had-my-account-hacked">But I had my account hacked!</h2>
<p>Well, odds are it's not Facebook's fault. As long as a service has users, there is one major vulnerability: the users' stupidity. Users are dumb beyond imagination. They write their passwords in an unsecured file on their computers; they use public data like their birthday; they use publicly available answers to their secret question (seriously guys, just enter garbage when asked, and if you want to "hack" someone you know, just ask them for the answer, they'll never realize this question can leak their accounts); they use the same password on every service; they leave their computers unlocked; they fall for phishing; etc.</p>
<p>If you've been hacked, chances are you've been stupid at some point. If you want to hack someone, just exploit their stupidity and send them a phishing page.</p>
<h2 id="lets-say-im-stupid-what-could-i-do-about-that">Let's say I'm stupid, what could I do about that?</h2>
<p>Make sure your computer is clean: don't install garbage (like toolbars) or software from untrusted sources. Use a password manager like LastPass to generate random passwords for every service and keep them safe. Read the links you click on, and check the URL whenever you're asked for your credentials.</p>