<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[A dev diary]]></title><description><![CDATA[A dev diary]]></description><link>https://blog.wwalmnes.xyz/</link><image><url>https://blog.wwalmnes.xyz/favicon.png</url><title>A dev diary</title><link>https://blog.wwalmnes.xyz/</link></image><generator>Ghost 4.34</generator><lastBuildDate>Thu, 30 Apr 2026 16:03:25 GMT</lastBuildDate><atom:link href="https://blog.wwalmnes.xyz/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Getting started with machine learning]]></title><description><![CDATA[<p><em>Note: this post was originally posted on Systek&apos;s website (where I work) in Norwegian. This post is slightly more detailed but essentially the same.</em></p><p>For many people machine learning might seem like something magical used by wizards in their ivory tower. Perhaps it is, but getting started with</p>]]></description><link>https://blog.wwalmnes.xyz/getting-started-with-machine-learning-1-x/</link><guid isPermaLink="false">5f46b318c199f6000196f3a5</guid><dc:creator><![CDATA[William Almnes]]></dc:creator><pubDate>Thu, 10 Dec 2020 11:37:27 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1591453089816-0fbb971b454c?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1591453089816-0fbb971b454c?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Getting started with machine learning"><p><em>Note: this post was originally posted on Systek&apos;s website (where I work) in Norwegian. 
This post is slightly more detailed but essentially the same.</em></p><p>For many people, machine learning might seem like something magical used by wizards in their ivory tower. Perhaps it is, but getting started with ML is actually not that difficult, and hopefully this post will teach you your first cantrips and pave the way for you to become an archmage of machine learning in the future. <em>Translation: We&apos;ll set up a &quot;hello world&quot; example of ML.</em></p><p>So the goal of this post is to create a simple example of <em><a href="https://en.wikipedia.org/wiki/Computer_vision">computer-vision</a> </em>where we train a model that&apos;s able to predict handwritten digits.</p><h3 id="setting-up-the-environment">Setting up the environment</h3><p>To achieve this goal we will use <a href="https://www.tensorflow.org/">Tensorflow</a>, which is an open source library for training ML models, and <a href="https://keras.io/">Keras</a>, which is an API that simplifies this process. We will be using Python to work with Tensorflow. I will assume you already have Python installed; if not, there are plenty of tutorials out there that can help you with that.</p><p>I recommend using Anaconda (and Conda) to manage your virtual environment and dependencies, but you could use other alternatives if you want to. If so, skip this part.</p><p>I installed Anaconda Individual Edition which you can find <a href="https://www.anaconda.com/products/individual">here</a>. If you follow the installation instructions you will get both Anaconda and Conda.</p><p>To create an environment you can use the following command:</p><!--kg-card-begin: markdown--><p><code>conda create --name &lt;your-env-name&gt; &lt;...packages&gt;</code></p>
<!--kg-card-end: markdown--><p>For future reference, if you want to remove an environment simply write:</p><!--kg-card-begin: markdown--><p><code>conda env remove --name &lt;your-env-name&gt;</code></p>
<!--kg-card-end: markdown--><p>To be more explicit for this tutorial you can type:</p><!--kg-card-begin: markdown--><p><code>conda create --name computer-vision tensorflow keras matplotlib</code></p>
<!--kg-card-end: markdown--><p>This will download and install the necessary packages for Tensorflow, Keras and matplotlib (we&apos;ll need it later). You will be prompted to proceed with the download and install, so simply type <em>y </em>to continue.</p><p>In our <em>computer-vision</em> environment we&apos;ve installed tensorflow, keras and matplotlib, but we still need to activate the environment, which you can do like this:</p><!--kg-card-begin: markdown--><p><code>conda activate computer-vision</code></p>
<!--kg-card-end: markdown--><p>Create a basic Python file called <em>index.py</em> and open it in your favorite editor or IDE. Personally, I use PyCharm (Anaconda) for Python and ML. Add the following lines:</p><!--kg-card-begin: markdown--><pre><code>import tensorflow as tf

print(tf.__version__)
</code></pre>
<!--kg-card-end: markdown--><p>Execute the program and, depending on when you&apos;re reading this, you should get <em>2.2.0 </em>or something similar. If so, we&apos;ve set up our environment correctly. Huzzah!</p><h3 id="using-mnist">Using MNIST</h3><p>The <a href="https://en.wikipedia.org/wiki/MNIST_database">MNIST dataset</a> is a collection of handwritten digits from 0 to 9. It contains 60 000 training images and 10 000 testing images. It&apos;s commonly used for training and testing in ML. </p><p>Luckily, it is quite easy to get started with since Keras provides a method for us to load the dataset. Let us modify our <em>index.py</em> to the following:</p><!--kg-card-begin: markdown--><pre><code>import tensorflow as tf

mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Verify that we have 60 000 training images and 10 000 testing images
print(len(x_train)) # should be 60 000
print(len(x_test)) # should be 10 000
</code></pre>
<!--kg-card-end: markdown--><p>Before we continue, remove our print statements as we do not need them any longer. A common practice is to normalize the values so they fall between 0 and 1. From what I understand, the activation functions in our neural network work better with values in that range. Add this line to normalize our values. </p><!--kg-card-begin: markdown--><pre><code>x_train, x_test = x_train / 255.0, x_test / 255.0
</code></pre>
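<p>To see what the division by 255 does, here is a small NumPy sketch (using a hypothetical stand-in array, not the actual MNIST data):</p>

```python
import numpy as np

# Hypothetical stand-in for MNIST pixel data: grayscale values from 0 to 255.
pixels = np.array([[0, 64, 128, 255]], dtype=np.uint8)

# Dividing by 255.0 maps the values into the range [0, 1].
normalized = pixels / 255.0
print(normalized.min(), normalized.max())  # 0.0 1.0
```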
<!--kg-card-end: markdown--><p>Next we&apos;ll want to create our model. Our model will use one input layer, one hidden layer and one output layer. The images we get from the MNIST dataset are 28x28 pixels. The images are grayscale, meaning we only have a single channel. So we can tell our input layer that we expect the input to be 28 by 28 with a single channel by using <em>input_shape=(28, 28, 1)</em>. The input is flattened into a 1D vector. This is necessary for our output layer to do the classification. However, we lose spatial information (e.g. which pixels are next to each other). The hidden layer consists of 128 neurons and uses the <em><a href="https://machinelearningmastery.com/rectified-linear-activation-function-for-deep-learning-neural-networks/">rectified linear activation function</a></em> (ReLU/relu). It&apos;s easier to think of the neurons as parameters to a function. The objective of our neural network is to find the parameters needed to map the 784 values we get from the input layer to one of our digits. The output layer contains as many neurons as we have classes (one for each digit).</p><!--kg-card-begin: markdown--><pre><code>model = tf.keras.Sequential([
    # input layer
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    # hidden layer
    tf.keras.layers.Dense(128, activation=&apos;relu&apos;),
    # output layer
    tf.keras.layers.Dense(10, activation=&apos;softmax&apos;)
])
# To examine our model:
model.summary()
</code></pre>
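<p>To make the flattening step concrete, here is a small NumPy sketch showing how a 28x28 image becomes 784 values (this mimics what the Flatten layer does; it is not the Keras implementation):</p>

```python
import numpy as np

# A dummy 28x28 grayscale image standing in for one MNIST sample.
image = np.zeros((28, 28))

# Flattening unrolls the 2D grid into a 1D vector of 28 * 28 = 784 values.
flattened = image.reshape(-1)
print(flattened.shape)  # (784,)
```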
<!--kg-card-end: markdown--><p>The result of the <em>model.summary()</em> will show us the output shape of the input layer is 784 values (from 28*28 pixels).</p><!--kg-card-begin: markdown--><pre><code>Layer (type)                 Output Shape              Param #   
=================================================================
flatten_1 (Flatten)          (None, 784)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 128)               100480    
_________________________________________________________________
dense_3 (Dense)              (None, 10)                1290      
=================================================================

</code></pre>
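<p>The parameter counts in the summary can be verified by hand: a Dense layer has one weight per input per neuron, plus one bias per neuron.</p>

```python
# Hidden layer: 784 inputs feeding 128 neurons, plus 128 biases.
hidden_params = 784 * 128 + 128
print(hidden_params)  # 100480, matches dense_2

# Output layer: 128 inputs feeding 10 neurons, plus 10 biases.
output_params = 128 * 10 + 10
print(output_params)  # 1290, matches dense_3
```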
<!--kg-card-end: markdown--><p>Next we&apos;ll want to compile our model. We need to specify an optimizer function, a loss function and metrics. I&apos;ll briefly explain what these functions are, but I will not do a deep dive into any specific optimizer or loss function, both because that would make this blog post far too long and because I have limited knowledge of the topic.</p><p>The neural network does not know the relationship between the image and the categories (digits). When the NN makes a guess on the function that describes the relationship, the loss function measures how good the guess was. Then the optimizer figures out what the next guess should be based on the data from the loss function.</p><!--kg-card-begin: markdown--><pre><code>model.compile(
    optimizer=&apos;adam&apos;,
    loss=&apos;sparse_categorical_crossentropy&apos;,
    metrics=[&apos;accuracy&apos;])
</code></pre>
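<p>As a rough intuition for the loss function: sparse categorical crossentropy looks at the probability the model assigned to the correct class and takes the negative logarithm of it, so a confident correct guess gives a loss close to zero. A small sketch with made-up numbers:</p>

```python
import math

# Hypothetical softmax output for one image: one probability per digit 0-9.
probabilities = [0.01, 0.02, 0.90, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01]

# Suppose the true label is 2; the loss only looks at that entry.
true_label = 2
loss = -math.log(probabilities[true_label])
print(round(loss, 4))  # 0.1054
```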
<!--kg-card-end: markdown--><p>After we&apos;ve compiled our model we can start training it. This is done by using <em>model.fit</em>. We will use the training data (images and categories) that we got from the MNIST dataset and train for five epochs. How long you should train your model depends on the training set and how much the accuracy improves with each epoch.</p><!--kg-card-begin: markdown--><pre><code>model.fit(x_train, y_train, epochs=5)
</code></pre>
<!--kg-card-end: markdown--><p>I got an accuracy of 98.61%. Your results may vary slightly.</p><p>Now we need to evaluate how the model performs on data it has not seen before. The result I got on the evaluation was 97.86% accuracy and a loss of 7.11%. Not too bad. Could we do better with e.g. more epochs? </p><!--kg-card-begin: markdown--><pre><code>model.evaluate(x_test, y_test, verbose=2)
</code></pre>
<!--kg-card-end: markdown--><p>We can see the progression of the loss and accuracy rate in each epoch, but consider cases where we have many more epochs. It would be nice to plot it in a graph. So let&apos;s try:</p><!--kg-card-begin: markdown--><pre><code># We installed matplotlib when we created the environment.
# (If you skipped that, run: conda install matplotlib)

# With your other imports add this:
import matplotlib.pyplot as plt

# Alter the previous model.fit method to this:
history = model.fit(x_train, y_train, epochs=5)

model.evaluate(x_test, y_test, verbose=2)

# Plot the accuracy and loss
plt.plot(history.history[&apos;accuracy&apos;])
plt.plot(history.history[&apos;loss&apos;])
plt.title(&apos;training accuracy and loss&apos;)
plt.ylabel(&apos;value&apos;)
plt.xlabel(&apos;epoch&apos;)
plt.legend([&apos;accuracy&apos;, &apos;loss&apos;], loc=&apos;upper left&apos;)
plt.show()

</code></pre>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://blog.wwalmnes.xyz/content/images/2020/09/Screenshot-from-2020-09-13-22-23-45.png" class="kg-image" alt="Getting started with machine learning" loading="lazy"></figure><p>We see the improvement in accuracy and the decrease in loss for each epoch.</p><p>Now that we&apos;re satisfied with our accuracy and loss, let&apos;s save our model. Create a directory in the root of the project called <em>model</em> and make the following modification to the code:</p><!--kg-card-begin: markdown--><pre><code># Remove model.evaluate(x_test, y_test, verbose=2)
# and replace it with:
model.save(&apos;model/cv_model&apos;)
</code></pre>
<!--kg-card-end: markdown--><p>Execute the code again and you&apos;ll see our model in <em>model/cv_model</em>.</p><h3 id="testing-our-model">Testing our model</h3><p>I wanted to test how well my model would predict my own handwritten digits, so I used GIMP to draw some 28x28 images. I used a black background with white digits:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.wwalmnes.xyz/content/images/2020/09/2.jpg" class="kg-image" alt="Getting started with machine learning" loading="lazy"><figcaption>Handwritten digit of number 2</figcaption></figure><p>Let&apos;s create a new file that uses our stored model and reads our own images to see how well the model predicts. I called it <em>cv.py</em> (short for computer-vision). Start the file by importing tensorflow and loading in the model we saved to disk:</p><!--kg-card-begin: markdown--><pre><code>import tensorflow as tf
# We need this one later
import numpy as np 

saved_model = tf.keras.models.load_model(&apos;model/cv_model&apos;)

# This should give the same result as the summary above
saved_model.summary()
</code></pre>
<!--kg-card-end: markdown--><p>Now we need to load in the images we want to test. I created a folder called <em>test</em> and placed the numbers I painted in GIMP (with a mouse, not a tablet!) there. Feel free to draw your own images, but create them in 28x28 pixels. Keras provides an easy way to load in our images and convert them to our desired size (28x28 pixels, in case they&apos;re not already the correct size) and color mode.</p><!--kg-card-begin: markdown--><pre><code># I&apos;m just numbering it based on what I drew to make it easier for
# me to see which image this is.
image_2 = tf.keras.preprocessing.image.load_img(&apos;test/2.jpg&apos;, color_mode=&apos;grayscale&apos;, target_size=(28, 28))
# Convert the image to an array and normalize the values to the range [0, 1].
image_2 = tf.keras.preprocessing.image.img_to_array(image_2) / 255.0
# Add a batch dimension: the model expects input of shape (1, 28, 28, 1)
image_2 = np.expand_dims(image_2, axis=0)
</code></pre>
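<p>The expand_dims step is there because the model works on batches: it expects shape (batch_size, 28, 28, 1), so a single image needs an extra leading dimension. A quick sketch with a dummy array:</p>

```python
import numpy as np

# A dummy single image in the shape produced by img_to_array.
image = np.zeros((28, 28, 1))

# Adding a leading axis turns it into a batch containing one image.
batch = np.expand_dims(image, axis=0)
print(batch.shape)  # (1, 28, 28, 1)
```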
<!--kg-card-end: markdown--><p>Now that we&apos;ve loaded our image and processed our data we can attempt to predict the number.</p><!--kg-card-begin: markdown--><pre><code>predictions = saved_model.predict(image_2)
</code></pre>
<!--kg-card-end: markdown--><p>The prediction we get is a list of how certain the model is that the image belongs to each of the categories (digits). So let&apos;s create a helper function that finds the category the model is most confident about.</p><!--kg-card-begin: markdown--><pre><code>def find_predicted_value(values):
    # Track the highest value seen so far and its index (the predicted digit).
    max_value = 0
    index = 0
    for i, value in enumerate(values):
        if value &gt; max_value:
            max_value = value
            index = i
    return index


print(find_predicted_value(predictions[0]))
</code></pre>
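<p>For reference, NumPy already provides this operation: np.argmax returns the index of the largest value, which here corresponds to the most likely digit. A sketch with a made-up prediction vector:</p>

```python
import numpy as np

# Hypothetical prediction vector: the model is most confident about digit 2.
prediction = np.array([0.01, 0.02, 0.90, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01])
print(int(np.argmax(prediction)))  # 2
```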
<!--kg-card-end: markdown--><h3 id="testing-our-model-in-the-browser">Testing our model in the browser</h3><p>We have created a model that is able to predict digits and we&apos;ve also tried to use our saved model and test it on some images we created in GIMP. We can also use this model in a web application so that we can let users draw digits and get predictions on the drawings. I&apos;m not going to go into the details of that process now (I&apos;ll maybe save it for later), but you can test it <a href="https://computer-vision-digits.wwalmnes.xyz/">here</a>. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.wwalmnes.xyz/content/images/2020/12/Screenshot-from-2020-10-27-21-05-29.png" class="kg-image" alt="Getting started with machine learning" loading="lazy"><figcaption>A screenshot where I try to draw the digit 4 in the web application</figcaption></figure><p>You will probably see that the predictions do not necessarily work so well with &quot;real life&quot; examples, but keep in mind this is just a simple neural network and there are still loads of improvements that can be done, e.g. rotating and skewing images, flipping them and such.</p><p>In this example, we created a super simple neural network that produces moderately good test results and performs decently on our &quot;real life&quot; web test, but the images are hardly complex. A more common way to model your neural network with regards to computer vision is using a convolutional neural network. I&apos;m planning on making a new tutorial about CNNs in the future. Stay tuned!</p>]]></content:encoded></item><item><title><![CDATA[The first post]]></title><description><![CDATA[<p>My main goal of having this blog is simply to reflect on various tech projects I work on. I use some of my spare time to learn new stuff, but rarely do I take a moment and <em>think</em> about what I just learned. 
Writing down the process, methods or concepts</p>]]></description><link>https://blog.wwalmnes.xyz/o/</link><guid isPermaLink="false">5edcf35a8d3c81000151988a</guid><dc:creator><![CDATA[William Almnes]]></dc:creator><pubDate>Sat, 20 Jun 2020 15:15:34 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1489533119213-66a5cd877091?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1489533119213-66a5cd877091?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="The first post"><p>My main goal of having this blog is simply to reflect on various tech projects I work on. I use some of my spare time to learn new stuff, but rarely do I take a moment and <em>think</em> about what I just learned. By writing down the processes, methods and concepts I use, I hope to achieve a better understanding.</p><p>I actually don&apos;t expect anybody to read this, but in the off-chance somebody does in fact read this, I hope they deem at least some of the content useful.</p><h3 id="this-blog">This blog</h3><p>It may be appropriate to write a few words about the technicalities behind this blog, but I&apos;ll be brief. I intend to go into more detail when I write about self-hosting this blog and other stuff (Gitlab, Nextcloud, web apps).</p><p>First of all I wanted a very simple blog. Nothing fancy. I tested <a href="https://ghost.org/">Ghost</a> locally and found it to be clean and straight to the point. Perhaps I could&apos;ve used Wordpress, but I&apos;m more familiar with Node than I am with PHP, so if I ever want to do some customization it will be easier for me to do so in a more familiar environment. 
Since I don&apos;t want to spend a lot of time working on the design or code of this blog, I just used the <a href="https://github.com/zutrinken/attila">Attila</a> theme which I found on the Ghost marketplace. So there you have it. Absolute minimal effort in order for me to focus on more fun stuff.</p><p>So I guess that&apos;s it. Finally done with the first (worst) blog post.</p>]]></content:encoded></item></channel></rss>