/var/

Various programming stuff


Saving in Dark Souls

Introduction

The Dark Souls Trilogy (1-2-3) from FromSoftware is one of the modern gaming classics. The games should be experienced by everybody because of their excellent gameplay, combat mechanics, atmosphere and character development. The defining characteristic of the Dark Souls Trilogy, and what scares most gamers away, is their over-the-top difficulty.

This great difficulty is increased even more by the saving mechanism of these games: there’s a single save game, and if you die you’ll return to a previous checkpoint (called a bonfire). These checkpoints are sparsely located within the game world and they are not always near boss fights, so if you die in a boss fight you may need to kill enemies for some time before you reach the boss again to retry. Also, everything is permanent, so if you screw up somehow (e.g. you kill an important NPC) there’s no way to “restore” your game; you’ll lose him (and his items if he’s a merchant) for the rest of your current playthrough!

If the above seems too difficult for you to even try, fear not! There is a particular way to have “real” saves in all three Dark Souls games, even if it is a little cumbersome. It will be much less cumbersome than having to restart the game because you killed an important NPC.

Disclaimer

Before describing the technique I’d like to provide some disclaimer points:

  • The Dark Souls Trilogy should be experienced as-is. You shouldn’t use this method because it makes the games easier and less difficult than intended by their publisher. Use it only as a last resort when you would otherwise abandon the game.
  • Most other Dark Souls players will mock you and hate you for using these techniques.
  • You may break your save if you do something wrong so I won’t be held responsible for losing your progress.

How Dark Souls saves your game

All three Dark Souls games keep their save game in a particular directory on your hard disk. There’s a single file with your save game that has an extension ending in .sl2.

From my PC, the folders and names of each of these games are the following:

  • Dark Souls Remastered: Folder C:\Users\username\Documents\NBGI\DARK SOULS REMASTERED\1638, filename: DRAKS0005.sl2
  • Dark Souls 2: Scholar of the First Sin: C:\Users\username\AppData\Roaming\DarkSoulsII\0110000100000666\, filename: DS2SOFS0000.sl2
  • Dark Souls 3: C:\Users\username\AppData\Roaming\DarkSoulsIII\0110000100000666\, filename: DS3000.sl2

Notice that username will be your own user name, while the numbers you see will probably be different.

Now, when some particular action occurs (e.g. when you kill an enemy) the game will overwrite the save file with a new one containing the changes. You will see a flame at the top right of your screen when this happens. Notice that this only happens at particular moments; for example, if you are just moving around without encountering enemies your game won’t be saved (so if you make a difficult jump the game won’t be saved right after the jump). Also, Dark Souls saves your game when you quit (so if you do a difficult jump, quit the game and restart, you will be after the jump).

The above description enables you to have proper saves: quit the game (not completely, just go back to the title screen), back up the save file to a different location, and start the game again. If you die, quit the game (again, just to the title screen), copy the backup over to the save location and start the game again. Notice that you should always quit the game before restoring from a save file, because Dark Souls only reads the save at that point. If you copy over a backup save while playing, the backup will just be overwritten with the new save data.

However, you can back up your game without actually quitting: when you’ve reached a point you feel needs saving, just alt+tab out of the game, copy the save to a backup location (you can even give it a proper name) and continue playing. When you want to load that save you’ll need to quit, restore the backup and start the game again. Notice that when you do this the game will show a warning that you “did not properly quit the game”. From what I understand, when you quit the game Dark Souls writes a flag to your save game. If you shut down your PC while playing (or copy over the save game) then Dark Souls won’t have written that flag. However, from my experience in all three Dark Souls games this warning doesn’t mean anything; the game will continue normally without any problems.

Making it simpler

Copying the save game to a different location by hand is cumbersome and makes it easy to make mistakes (e.g. backing up when you meant to restore, or overwriting your backup with the current save). To make this process easier, here is a simple AutoHotkey script that does it for you, using F7 to back up your save and F8 to restore it (don’t forget that you can only restore after you have quit the game and see the title screen).

To use this script you need the excellent AutoHotkey utility. Download and install it and then execute the script by double clicking it (it needs to have an .ahk extension):

#SingleInstance Force
#MaxHotkeysPerInterval 99999
SendMode Input  ; Recommended for new scripts due to its superior speed and reliability.
SetWorkingDir %A_ScriptDir%  ; Ensures a consistent starting directory.


SAVE_FOLDER_DS := "C:\Users\serafeim\AppData\Roaming\DarkSoulsII\0110000100000666\"
SAVE_FILENAME_DS := "DS2SOFS0000.sl2"
BACKUP_FOLDER_DS := "C:\Users\serafeim\Documents\ds2\"


GetFolderMax(f)
{
  MAX := 0
  Loop, Files, %f%\*.*
  {
    NUM_EXT := 1 * A_LoopFileExt

    if (NUM_EXT > MAX)
    {
      MAX := NUM_EXT
    }
  }

  return MAX
}

F7::
{
  ;MsgBox % "F7"
  ;MsgBox % "Will copy " . SAVE_FILENAME_DS . " to " . BACKUP_FOLDER_DS

  MAX_P1 := GetFolderMax(BACKUP_FOLDER_DS) + 1
  ;MsgBox % "Max + 1 is " . MAX_P1

  SOURCE := SAVE_FOLDER_DS . SAVE_FILENAME_DS
  DEST := BACKUP_FOLDER_DS . SAVE_FILENAME_DS . "." . MAX_P1

  ;MsgBox % "Will copy " . SOURCE . " to " . DEST
  FileCopy, %SOURCE%, %DEST%
  return
}

F8::
{
  ;MsgBox % "F8"
  MAX := GetFolderMax(BACKUP_FOLDER_DS)
  MAX_FILE := BACKUP_FOLDER_DS . SAVE_FILENAME_DS . "." . MAX
  ;MsgBox % "Maxfile is " . MAX_FILE

  SOURCE := MAX_FILE
  DEST := SAVE_FOLDER_DS . SAVE_FILENAME_DS

  ;MsgBox % "Will copy " . SOURCE . " to " . DEST
  FileCopy, %SOURCE%, %DEST%, 1
  return
}

The script is easy to understand but I’ll explain it a bit here: first of all you need to define the SAVE_FOLDER_DS, SAVE_FILENAME_DS and BACKUP_FOLDER_DS variables. The first two are the folder and filename of your game’s save (in my example I’m using DS2). BACKUP_FOLDER_DS is where you want your backups to be placed. The script backs up your save file to that folder when you press F7. To keep a history of backups it appends an increasing number to the end of the filename, so each time you press F7 you will see that it creates a file named DS2SOFS0000.sl2.1, then DS2SOFS0000.sl2.2 etc. in BACKUP_FOLDER_DS. When you press F8 it takes the file with the biggest number at the end, strips that number and copies it over your Dark Souls save file.

As you can see there’s a GetFolderMax function that retrieves the max number from your backup folder. Then, F7 and F8 will use that function to either copy over your Dark Souls save file in the backup with an increased number or retrieve the latest one and restore it in your save folder.

The script works independently of the game, so if you configure it and press F7 you should see the backup file being created. Also, if you delete (or rename) your Dark Souls save file and press F8 you should see it restored from the backup.
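For the curious, the same numbered-backup bookkeeping can be sketched in a few lines of Python, in case you prefer a cross-platform script; the function names and paths here are my own placeholders, not part of any game tooling:

```python
import re
import shutil
from pathlib import Path

def folder_max(backup_dir: Path) -> int:
    """Return the highest numeric extension among backups like DS2SOFS0000.sl2.7."""
    nums = [int(p.suffix[1:]) for p in backup_dir.iterdir()
            if re.fullmatch(r"\.\d+", p.suffix)]
    return max(nums, default=0)

def backup_save(save_file: Path, backup_dir: Path) -> Path:
    """Copy the save into the backup folder with the next number appended (the F7 action)."""
    dest = backup_dir / f"{save_file.name}.{folder_max(backup_dir) + 1}"
    shutil.copy2(save_file, dest)
    return dest

def restore_save(save_file: Path, backup_dir: Path) -> None:
    """Overwrite the save with the highest-numbered backup (the F8 action)."""
    src = backup_dir / f"{save_file.name}.{folder_max(backup_dir)}"
    shutil.copy2(src, save_file)
```

This mirrors the logic described above: find the biggest numeric suffix in the backup folder, copy with that number plus one, and restore from the biggest one.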

So using the above script, my play workflow is like this: Start Dark Souls, kill an enemy, press F7, kill another enemy, press F7 (depending on how difficult the enemies are of course). Die from an enemy, quit the game, press F8, continue my game.

One thing to notice is that on Windows 10 the hotkeys don’t seem to be captured by AutoHotkey when the game runs in full screen. When I run the games in a window it works fine. Some people say that running AutoHotkey as administrator makes it capture the key presses, but that didn’t work for me.

Using matplotlib to generate graphs in Django

Nowadays the most common way to generate graphs in your Django apps (or web apps in general) is to pass the data as json to the page and use a javascript lib. The big advantage these javascript libs offer is interactivity: You can hover over points to see their values making studying the graph much easier.

Yet, there are times when you need some simple (or not so simple) graphs and don’t care about offering interactivity through javascript, nor do you want to mess with javascript at all. For these cases you can generate the graphs server-side using Django and the matplotlib plotting library.

matplotlib is a very popular library in scientific circles. It can be used to create more or less any kind of graph and has almost unlimited capabilities! I won’t go into much detail about matplotlib here because the subject is huge, but I recommend taking a look at the comprehensive tutorials on its homepage.

To install matplotlib on unix you just do a pip install matplotlib, while for windows you can download the proper ready-made binaries from the Unofficial Windows Binaries for Python Extension Packages site that offers pre-compiled versions of almost all python packages! Just make sure to download the correct version for your python version and architecture (32bit or 64bit). After you’ve downloaded the file you can install it for your project using something like pip install matplotlib-3.3.4-cp38-cp38-win32.whl from inside your virtual environment.

Before actually creating a graph I recommend playing a bit with matplotlib to understand the basic concepts. Start a django shell and do the following:

>>> import matplotlib.pyplot as plt
>>> fig, ax = plt.subplots()
>>> ax.plot([1, 2, 3, 4], [1, 4, 2, 3])
[<matplotlib.lines.Line2D object at 0x0FBF5F58>]
>>> fig.show()

The above should open a window displaying the graph. This works fine on Windows 10 with python 3.8 and matplotlib 3.3.4, but I can’t guarantee other versions. If fig.show() throws an error or does not display the graph, you can just do something like:

>>> fig.savefig('test')

which will output the figure to a file named test.png that you can then view. Please notice that the above uses the default options; there are various ways matplotlib can be configured.

In any case, after you’ve played a bit with the shell and generated a nice figure (take a look at the matplotlib examples for inspiration) you are ready to integrate matplotlib with Django!

I can think of two ways which you can integrate matplotlib with Django:

  • Use a special view that would render the graph and just return a PNG object. Use a normal <img> element pointing to that view in your template.
  • Put the graph in the context of a normal django view encoded as a base64 object and use a special <img> with an src attribute of data:image/png;base64,{{ graph }} to actually embed the image in the template!

I prefer the second approach because it’s much more flexible since you don’t need to create a different Django view for each graph you want to generate. For this reason I will explain this approach right now and give you some hints if you need to follow the dedicated graph view approach.

Our view should:

  • Generate the graph
  • Save it in a BytesIO object
  • Convert that BytesIO to base64
  • Put the string value of the base64 encoded graph to the template

Then the template will just output that base64 value using the special img we mentioned above.
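Steps 2–4 don’t involve matplotlib at all; here’s a minimal sketch of the BytesIO-to-base64 roundtrip, with stub bytes standing in for the real png output of savefig:

```python
import base64
import io

# Pretend fig.savefig(flike) filled this buffer; here we just write
# a stub PNG signature instead of rendering a real figure.
flike = io.BytesIO()
flike.write(b"\x89PNG\r\n\x1a\n...fake image data...")

# Encode the raw bytes as base64 and turn them into an ASCII str,
# which is safe to drop into a template context.
b64 = base64.b64encode(flike.getvalue()).decode()

# The browser does the reverse when it reads the data: URI.
assert base64.b64decode(b64) == flike.getvalue()
```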

Here’s a snippet of a view that does exactly this:

import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import io, base64
from django.db.models import Count
from django.db.models.functions import TruncDay
from django.views.generic import ListView
from matplotlib.ticker import LinearLocator

class SampleListView(ListView):
  model = Sample

  def get_context_data(self, **kwargs):
    context = super().get_context_data(**kwargs)

    by_days = self.get_queryset().annotate(day=TruncDay('created_on')).values('day').annotate(c=Count('id')).order_by('day')
    days = [x['day'] for x in by_days]
    counts = [x['c'] for x in by_days]

    fig, ax = plt.subplots(figsize=(10,4))
    ax.plot(days, counts, '--bo')

    fig.autofmt_xdate()
    ax.fmt_xdata = mdates.DateFormatter('%Y-%m-%d')
    ax.set_title('By date')
    ax.set_ylabel("Count")
    ax.set_xlabel("Date")
    ax.grid(linestyle="--", linewidth=0.5, color='.25', zorder=-10)
    ax.yaxis.set_minor_locator(LinearLocator(25))

    flike = io.BytesIO()
    fig.savefig(flike)
    b64 = base64.b64encode(flike.getvalue()).decode()
    context['chart'] = b64
    return context

Please notice that right after importing matplotlib I’m calling matplotlib.use('Agg') to select the Agg backend. You can learn more about backends here; for now it should be sufficient to know that with Agg you’ll be able to save your graphs as png files.

The above code uses some Django ORM trickery to group values by the day of their created_on field and then assigns the days and counts to two lists (days, counts). It then creates a new empty graph with a specific size using fig, ax = plt.subplots(figsize=(10,4)) and plots the data with some fancy styles with ax.plot(days, counts, '--bo'). After that it sets various options of the graph like the labels, grid etc.

The save-and-convert-to-base64 part follows: a new file-like object is created using io.BytesIO() and the figure is saved there (fig.savefig(flike)). Then it is converted to a base64 string using b64 = base64.b64encode(flike.getvalue()).decode(). Finally it is passed to the template context as chart.

Now, inside the template I’ve got the following line:

<img src='data:image/png;base64,{{ chart }}'>

This will include the data of the chart inline and display it as a png image. If you’ve followed along you should be able to see the graph when you load that view!

If instead of including the graphs in your normal django template views you want to use a dedicated graph-generating view, you can follow my Django non-HTML responses tutorial. You could then modify the render_to_response method of your view like this:

def render_to_response(self, generator, **response_kwargs):
    response = HttpResponse(content_type='image/png')

    fig, ax = plt.subplots(figsize=(10,4))
    # fill the report here

    fig.savefig(response)
    return response

Since response is a file-like object you can save your graph directly there!

Using hashids to hide ids of objects in Django

A common pattern in Django urls is the following setup for the CRUD operations of your objects. Let’s suppose we have a Ship object. Its CRUD urls would be something like:

  • /ships/create/ To add a new object
  • /ships/list/ To display a list of your objects
  • /ships/detail/id/ To display the particular object with that id (primary key)
  • /ships/update/id/ To update/edit the particular object with that id (primary key)
  • /ships/delete/id/ To delete the particular object with that id (primary key)

This is very easy to implement using class based views. For example for the detail view add the following to your views.py:

class ShipDetailView(DetailView):
    model = models.Ship

and then in your urls.py add the line:

urlpatterns = [
  # ...
  path(
      "detail/<int:pk>/",
      login_required(views.ShipDetailView.as_view()),
      name="ship_detail",
  ),

This path means that it expects an integer (int) which will be used as the primary key of the ship (pk).

Now, a common requirement when using integers as primary keys is to not display them to the public. So you shouldn’t allow users to request something like /ships/detail/43 to see the details of ship 43. Even if you have added proper authorization (each user only sees the ids he has access to) you are opening a window for abuse. Also, you don’t want users to be able to estimate how many objects there are in your database (if a user creates a new ship he’ll get the latest id and know approximately how many ships are in your database).

One simple solution is to use some encryption mechanism to encode the ids into strings and display those strings in the public urls. When you receive such a string you’ll decode it to get back the id.
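Just to illustrate the encode/decode roundtrip idea, here’s a toy standard-library sketch; note this is not hashids and offers no real secrecy (base64 is trivially reversible by anyone who spots it), and SECRET is a made-up constant playing the role of a salt:

```python
import base64

SECRET = 0x5DEECE66D  # hypothetical app-wide secret, like a hashids salt

def encode_id(pk: int) -> str:
    """Map an integer pk to a url-safe string (toy scheme, not hashids)."""
    data = (pk ^ SECRET).to_bytes(8, "big")
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def decode_id(s: str) -> int:
    """Reverse encode_id: re-pad, decode and undo the XOR."""
    padded = s + "=" * (-len(s) % 4)
    return int.from_bytes(base64.urlsafe_b64decode(padded), "big") ^ SECRET
```

hashids does this job properly, with a configurable salt, alphabet and minimum length, which is why we’ll use it below.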

Thankfully, not only is there a particular library that makes this whole encode/decode procedure very easy, but Django also has functionality that makes it trivial to integrate into an existing project with only minimal changes!

The library I propose for this is called hashids-python. This is the python port of the hashids library, which is available for many languages. If you take a look at the documentation you’ll see that it can be used like this:

from hashids import Hashids
hashids = Hashids()
hashid = hashids.encode(123) # 'Mj3'
ints = hashids.decode('xoz') # (456,)

This library offers two useful features: you can define a random salt so that the generated hashids will be unique to your app, and you can set a minimum hash length so that the real length of the id is obfuscated. I’ve found that a length of 8 characters is more than enough to encode all possible ids up to 99 billion:

hashids = Hashids(min_length=8)
len(hashids.encode(99_999_999_999)) # 8

This is more than enough, since by default Django uses a 32-bit integer to store primary keys, which maxes out at about 2.1 billion (you can actually use 7 characters to encode up to 5 billion, but I prefer even numbers).

Finally, you can use a different alphabet, for example to use all greek characters:

hashids = Hashids(alphabet='ΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩ')
hashids.encode(123) # 'ΣΝΦ'

This isn’t recommended for our case though, because not all of these characters are url-safe.

To integrate hashids with Django we are going to use a custom path converter. A custom path converter is similar to the int part of "detail/<int:pk>/" in the url, i.e. it will capture part of the url and convert it to a python value. To implement your custom path converter just add a file named utils.py in one of your applications with the following contents:

from hashids import Hashids
from django.conf import settings

hashids = Hashids(settings.HASHIDS_SALT, min_length=8)


def h_encode(id):
    return hashids.encode(id)


def h_decode(h):
    z = hashids.decode(h)
    if z:
        return z[0]


class HashIdConverter:
    regex = '[a-zA-Z0-9]{8,}'

    def to_python(self, value):
        return h_decode(value)

    def to_url(self, value):
        return h_encode(value)

The above will generate a global hashids object with a min length of 8 as discussed above, retrieving a custom salt from your settings (just add HASHIDS_SALT = 'some_random_string' to your project settings). The HashIdConverter defines a regex that will match the default alphabet that hashids uses, and two methods to convert from url to python and vice versa. Notice that hashids.decode returns a tuple, so we retrieve the first number only.

To use that custom path converter you will need to add the following lines to your urls.py to register your HashIdConverter as hashid:

from django.urls import register_converter

from core.utils import HashIdConverter

register_converter(HashIdConverter, "hashid")

and then use it in your urls.py like this:

urlpatterns = [
  # ...
  path(
      "detail/<hashid:pk>/",
      login_required(views.ShipDetailView.as_view()),
      name="ship_detail",
  ),

That’s it! Your CBVs do not need any other changes! The hashid converter will match the hashid in the url and convert it to the model’s pk using the to_python method we defined above!

Of course you should also add the opposite direction (i.e convert from the primary key to the hashid). To do that we’ll add a get_absolute_url method to our Ship model, like this:

from django.urls import reverse

from core.utils import h_encode


class Ship(models.Model):
  def get_hashid(self):
      return h_encode(self.id)

  def get_absolute_url(self):
      return reverse("ship_detail", args=[self.id])

Notice that you just call the reverse function passing self.id; everything else is done automatically by the to_url method of the hashid custom path converter. I’ve also added a get_hashid method to the model to have quick access to an object’s hashid in case I need it.

Now you don’t have any excuses to not hide your database ids from the public!

Adding a timeline of your wagtail Posts

Intro

In this small post I’ll present a tutorial on how to add a timeline of your Wagtail posts using the Horizontal Timeline jquery plugin.

This will be a step by step tutorial to help you understand the concepts. As a base we’ll use the bakerydemo wagtail demo. After you’ve properly followed its instructions you’ll see that this demo site has a “Blog” that contains articles about breads. In what follows we’ll add a timeline of these articles, grouped by their publish month.

Decisions, decisions

For this demo we’ll include all the “blog” pages in the timeline. However, we might want to select which pages to include in the timeline. This could be done either by adding an extra field to our blog pages (class blog.models.BlogPage) like include_in_timeline, or by using the Wagtail ModelAdmin functionality. For the ModelAdmin approach we’d create an extra Django model (e.g. BlogTimeLineEntry) that would contain a link to the original page. We could enhance this model with extra fields we may want to display in the timeline, for example a smaller description.

The other decision is where to actually output the timeline. For the demo we’ll just put it in the BlogIndexPage page. If we wanted to add the timeline to a number of different page types then we’d need to add a template tag that includes it. But since it will be available only on a single page type, we just need to override the get_context method and the template of that particular type.

Overriding the get_context

As we described above, we want to group the timeline entries based on their publish month. For this, we’ll use the following code in the BlogIndexPage.get_context method:

def get_context(self, request):
    context = super(BlogIndexPage, self).get_context(request)
    context['posts'] = BlogPage.objects.descendant_of(
        self).live().order_by(
        '-date_published')

    entries = context['posts']
    dentries = {}
    for e in entries:
        month = e.date_published.strftime("%m/%Y")
        month_entries = dentries.get(month, [])
        month_entries.append(e)
        dentries[month] = month_entries

    lentries = sorted(
        [
            {
                "date_small": k,
                "date_large": v[0].date_published.strftime("%B %Y"),
                "entries": v,
            }
            for (k, v) in dentries.items()
        ],
        key=lambda z: z["entries"][0].date_published,
    )

    context.update(timeline=lentries)
    return context

So what’s the purpose of the above? First of all we use super to retrieve the context that any parent classes may have set up. After that we add a posts key to the context containing a queryset of all the published children of the current page (which is the BlogIndexPage), sorted by their publish date.

In the for loop that follows, we do some dict trickery to “gather” all entries for a particular month/year into a list under that month’s key in the dentries dict.

Finally, we create the lentries list which will be a list of the form:

[{
        "date_small": "09/2020",
        "date_large": "September 2020",
        "entries": [BlogPage, BlogPage, BlogPage...]
}, {...}, ...]

This struct will help us in the next step when we implement the timeline template.
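Outside Wagtail, the same grouping can be sketched with plain objects, for instance using itertools.groupby on the sorted entries; Entry here is just a stand-in for BlogPage:

```python
from dataclasses import dataclass
from datetime import date
from itertools import groupby

@dataclass
class Entry:  # stand-in for a BlogPage
    title: str
    date_published: date

def build_timeline(entries):
    """Group entries by month and return the [{date_small, date_large, entries}] list."""
    entries = sorted(entries, key=lambda e: e.date_published)
    return [
        {
            "date_small": month.strftime("%m/%Y"),
            "date_large": month.strftime("%B %Y"),
            "entries": list(group),
        }
        # groupby only merges consecutive equal keys, hence the sort above;
        # the key collapses each date to the first day of its month
        for month, group in groupby(
            entries, key=lambda e: e.date_published.replace(day=1)
        )
    ]
```

Whether you prefer this or the explicit dict-building loop is a matter of taste; the output structure is the same.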

Fixing the template

To use the horizontal timeline we need to add a couple of css/js dependencies to our template. For this, we’ll first go to the bakerydemo\templates\base.html file and add the following snippet near the end of the file, just before </body>:

{% block extra_script %}
{% endblock %}

The above is required to give us a placeholder for adding some needed js dependencies and code.

After that we’ll go to the bakerydemo\templates\blog\blog_index_page.html file and add the following just before {% block content %}:

{% block head-extra %}
        <link rel="stylesheet" type="text/css" href="//cdn.jsdelivr.net/gh/ycodetech/horizontal-timeline-2.0@2/css/horizontal_timeline.2.0.min.css">
        <style>

                .timeline .selected {
                        font-size: 24px;
                        font-weight: bold;
                }

                #timeline ol {
                        list-style: none;
                }

                .horizontal-timeline .events-content li {
                        background: #f2f2f2;
                        font-size: .8em;
                }

                #timeline img {
                        width: 200px;
                }
        </style>

{% endblock head-extra %}

And the following at the end of the file

{% block extra_script %}

        <script src="//cdn.jsdelivr.net/gh/ycodetech/horizontal-timeline-2.0@2/JavaScript/horizontal_timeline.2.0.min.js"></script>

        <script>

        $(function() {
                $('#timeline').horizontalTimeline({
                dateIntervals: {
                        "desktop": 200,
                        "tablet": 150,
                        "mobile": 120,
                        "minimal": true
                }
                });
        })

        </script>
{% endblock %}

Notice that the head-extra block is already defined in the base.html file so we don’t need to add it there ourselves. The block above just has some styling changes so the timeline is displayed nicely. Also, the <script> tags we added just include the needed dependency and initialize the timeline component.

Of course we haven’t yet added the actual timeline! To do that, we’ll need to add a file named timeline_partial.html under the blog/templates/blog folder (same folder that blog_index_page.html is) with the following:

{% load wagtailcore_tags wagtailimages_tags %}
<div class="horizontal-timeline" id="timeline">
  <div class="events-content">
        <ol>

          {% for month in timeline %}
                <li class="{% if forloop.last %}selected{% endif %}" data-horizontal-timeline='{"date": "{{ month.date_small }}"}'>
                  <h3>{{ month.date_large }}</h3>

                  {% for te in month.entries %}
                        <div class='row'>

                                <div class='col-md-6'>
                                  <h4><a href='{% pageurl te %}'>{{ te.title }}</a></h4>
                                  <span>{{ te.introduction }}</span>
                                </div>
                                <div class='col-md-6'>
                                  {% with img=te.image %}
                                        {% image img width-200 as img_thumb %}
                                        <img class="" src="{{ img_thumb.url }}" alt="{{ img.title }}">
                                  {% endwith %}
                                </div>

                        </div>
                        <div class="clear bottommargin-sm"></div>
                  {% endfor %}
                </li>
          {% endfor %}

        </ol>
  </div>
</div>

The above will generate a <li data-horizontal-timeline='{"date": "01/2020"}'> list element for each month, and inside it an <h3> with the full name of the month plus a bunch of bootstrap rows, one for each entry of that particular month (with its title, description and image at the side). It should be easy enough to follow.

Finally, we need to include the above partial template. So add the line {% include "blog/timeline_partial.html" %} immediately above the <div class="row row-eq-height blog-list"> line in the blog_index_page.html file.

If you’ve followed the instructions you should be able to see something like this:

The timeline

Getting alerts from OS Mon in your Elixir application

When I upgraded my Phoenix template application to Phoenix 1.5.1 I also enabled the new Phoenix LiveDashboard and its “OS Data” tab. To enable that OS Data tab you have to enable the :os_mon erlang application by adding it (along with :logger and :runtime_tools) to your extra_applications setting as described here.

When I enabled the os_mon application I immediately saw a warning in my logs that one of my disks is almost full (which is a fact). I wanted to understand how these warnings are generated and whether I could handle them with some custom code, to send an email for example.

This journey led me down an interesting erlang rabbit hole which I’ll describe in this small post.

The os_mon erlang application

os_mon is an erlang application that, when started, runs four processes for monitoring CPU load, disk, memory and some OS settings. Not all of these work on every operating system, but memory and disk monitoring, which are the most interesting to me, work on both unix and Windows.

The disk and memory monitoring processes are called disksup and memsup and run a periodic, configurable check of whether disk space or memory usage is above a (configurable) threshold. If the usage is over the threshold then an alarm will be reported to the SASL alarm handler (SASL is erlang’s System Architecture Support Libraries).

The alarm handler situation

The SASL alarm handler is a process that implements the gen_event behavior. It must be noted that this behavior is rather controversial and it is generally recommended not to use it for your own event handling (use your own GenServer solution or GenStage instead). A gen_event process is an event manager which keeps a list of event handlers; when an event happens the event manager notifies each of the event handlers. Each event handler is just a module, so when an event occurs all event handlers run in the same process, one after the other (that’s the main reason why gen_event is not much loved).

The SASL alarm handler (the gen_event event manager) is implemented in a module named :alarm_handler. A rather unfortunate decision is that the default simple alarm handler (the gen_event event handler) is also implemented in the same module so in the following you’ll see :alarm_handler twice!

The default simple alarm handler can be exchanged with your own custom implementation or you can even add additional alarm handlers so they’ll be called one after the other.

To add an extra custom event handler for alarms you use the add_handler function of gen_event. To replace the default one with your own you use gen_event’s swap_handler. When the default simple alarm handler is swapped out it returns a list of the existing alarms in the system, which will then be passed to the new alarm handler.

A simple alarm handler implementation

As noted in the docs, an alarm handler implementation must handle the following two events:

{:set_alarm, {alarm_id, alarm_description}} and {:clear_alarm, alarm_id}. The first one will be called by the event manager when a new alarm is raised and the second one when the cause of the alarm no longer exists.

Let’s see a simple implementation of an alarm event handler:

defmodule Phxcrd.AlarmHandler do
  import Bamboo.Email
  require Logger

  def init({_args, {:alarm_handler, alarms}}) do
    Logger.debug  "Custom alarm handler init!"
    for {alarm_id, alarm_description} <- alarms, do: handle_alarm(alarm_id, alarm_description)
    {:ok, []}
  end

  def handle_event({:set_alarm, {alarm_id, alarm_description}}, state) do
    Logger.warn  "Got an alarm " <> Atom.to_string(alarm_id) <> " " <> alarm_description
    handle_alarm(alarm_id, alarm_description)
    {:ok, state}
  end

  def handle_event({:clear_alarm, alarm_id}, state) do
    Logger.debug  "Clearing the alarm  " <>  Atom.to_string(alarm_id)
    state |> IO.inspect
    {:ok, state}
  end

  def handle_alarm(alarm_id, alarm_description) do
    Logger.debug  "Handling alarm " <>  Atom.to_string(alarm_id)

    new_email(
      to: "foo@foo.com",
      from: "bar@bar.gr",
      subject: "New alarm!",
      html_body: "<strong>Alert:"  <>  Atom.to_string(alarm_id) <> " " <> alarm_description <>  "</strong>",
      text_body: "Alert:" <>  Atom.to_string(alarm_id) <> " " <> alarm_description
    )
    |> Phxcrd.Mailer.deliver_later()

    Logger.debug  "End handling alarm " <> Atom.to_string(alarm_id)
  end

end

This implementation also has an init function that is called when the handler is first installed. Notice that it receives a list of the existing alarms; for each one of them I call the handle_alarm function. This is needed to handle any alarms that already exist when the application starts. The :set_alarm handler also calls handle_alarm, passing the alarm_id and alarm_description it received.

The clear_alarm handler doesn’t do anything (it would be more useful if this module kept a list of the current alarms in its state). Finally, handle_alarm just sends an email using bamboo_smtp. Notice that I use deliver_later() to send the mail asynchronously.

As you can see this is a very simple example. You can do more things here but I think that getting the Alarm email should be enough for most situations!

Integrating the alarm handler into your elixir app

To use the above mentioned custom alarm event handler I’ve added the following line to the start of my Application.start function:

:gen_event.swap_handler(:alarm_handler, {:alarm_handler, :swap}, {Phxcrd.AlarmHandler, :ok})

Please notice that the :alarm_handler atom appears twice: the first is the event manager module (for which we want to switch the event handler) while the second is the event handler module (the one we want to replace).

os_mon configuration

There are a number of options you can configure for os_mon. You can find them all in the manual page. For example, just add the following to your config.exs:

config :os_mon,
  disk_space_check_interval: 1,
  memory_check_interval: 5,
  disk_almost_full_threshold: 0.90,
  start_cpu_sup: false

This will set the interval for the disk space check to 1 minute and for the memory check to 5 minutes, set the disk usage threshold to 90% and not start the cpu_sup process used to get CPU info.

Testing with the terminal

If no alerts are active in your system, you can test your custom event handler using something like this from an iex -S mix terminal:

:alarm_handler.set_alarm({:koko, "ZZZZZZZZZ"})
# or
:alarm_handler.clear_alarm(:koko)

Also you can see some of the current data or configuration options:

iex(4)> :disksup.get_disk_data
[{'C:\\', 234195964, 55}, {'E:\\', 822396924, 2}]

# or
iex(7)> :disksup.get_check_interval
60000

Please notice that the check interval is in minutes when you set it, but in ms when you retrieve it.

Conclusion

The above should help you if you want to better understand alarm_handler and os_mon and how to configure them to run your own custom alarm handlers. Of course on a production server you should have proper monitoring tools for the health of your server, but since os_mon is more or less free thanks to erlang, why not add another safety valve?

If you want to take a look at an application that has everything configured, take a look at my Phoenix template application.