We fulfill power fantasies

RSS Feed

Ultima Online 20 year post mortem online from GDC 2018

3.4.2018 21:05:14

This is really just a heads-up for anyone who missed it: this year's GDC featured a new post mortem session for the now 20-year-old Ultima Online, and the video is already available for free here.

UO remains a source of interesting MMO history and design topics to this day. Although the session featured many of the same talking points as the post mortem in 2012, there was a little bit of new stuff there, too. For example, this time Richard Garriott himself took part, and Rich Vogel talked about the invention of game time cards.

I only wish I could find an interview with Rick Delashmit, the original main programmer of the game. It would be interesting to know for example how they stored world state information and account data on disk back in the day!

MUTA devlog 2: cleaning up, async asset loading, GUI memory, creatures

30.3.2018 16:47:24

Alright, here's another post about MUTA, our MMORPG project. I didn't work much on the game for nearly two months after the new year holidays, but I've since been picking up the pace, working the hours I can afford. Unfortunately, that's not much, but evenings are still something, as are weekends. The days are also getting longer and longer; I've noticed in the past that my productivity drops drastically in the middle of the winter, when there's only a couple of hours of daylight. Thank goodness spring seems to be making its way swiftly!

Since Christmas, I believe I have been the only one touching the codebase. This has a clear advantage: there's no way to get merge conflicts if you only merge from one branch. So I have been cleaning up here and there, including in files I hadn't previously touched personally (although I didn't do anything drastic in the case of such files): reformatting, clarifying naming conventions, etc. I also implemented asynchronous asset loading proper and started adding support for creatures, but more on that later.

Getting rid of unneeded code

Something I have been wanting to do is get rid of code that isn't needed. Obviously, the smaller the better when it comes to codebases - I think we can all agree on that. And we did have some code that was practically duplicate.

For example, we originally had a hashtable implementation that preserved memory addresses and another that didn't. In the case of the first container, instead of increasing the number of buckets when more space was needed, a new bucket was created and added to a linked list of buckets at index N. That's not great algorithmically, because search time obviously increases linearly with the number of items in a bucket. But I thought it would be handy when you wanted to store items instead of pointers inside a hashtable, and also wanted pointers to those items to remain valid until the bitter end, even if the table was resized.

Well, it turns out the first hash table was only used in one file, in our asset system, and even there the only reason for its use was that this table was implemented first, back when the other hashtable had not been written yet.

So I got rid of the memory address preserving hashtable.

Fewer macros

Our containers, including the hashtables, have previously been implemented as C++-template-like C preprocessor macros - macros that define a new struct or structs and functions that manipulate said struct. Hence, to be able to use a container for a specific type, you have to first declare it by calling a macro, like this:

DYNAMIC_HASH_TABLE_DEFINITION(str_int_table, int, const char *, uint32, fnv_hash32_from_str, 4);
That's kind of cumbersome and not pretty. It's also difficult for anyone new to the codebase to see where the struct/function definitions actually happen. So I would like to get rid of most of this stuff.

Dynamic arrays

The most common container type, the dynamic array (vector in C++ lingo), was also implemented as a macro of this sort. So I decided to go through all of our dynamic arrays defined in this manner and replace them with a stretchy-buffer-like container, as I recently read Our Machinery was doing.

In case you don't know, a stretchy buffer is a group of C macros that together let you create dynamic arrays on the fly. Instead of declaring specific dynamic array types, you declare a null pointer of the type your array will contain, then call macros like 'array_push(some_pointer)' or 'array_count(some_pointer)'. The macros store the size and capacity of the array at the beginning of the allocation and assign the pointer to point to the first element. For example:

int *array = 0;
array_push(array, 5);
There are a couple of methods to distinguish between a normal pointer and a stretchy buffer. You could do it by naming the variable appropriately or by leaving a comment. I have taken to typedeffing the types like this: 'typedef int int_darr_t'.

Stretchy buffers only require a single function, which in our case looks like 'void *darr_ensure_growth_by(void *darr, uint32 grow_by, uint32 item_size)', and a bunch of one-liner macros. That simplifies things nicely, because our previous dynamic array definition macro was over a hundred lines long (it has now been erased), and different array types no longer need to be defined via a specific macro call.
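
To illustrate, here's a minimal sketch of how such a stretchy buffer can be put together. The names mirror the ones mentioned above, but the details (header layout, growth logic) are my own assumptions, not MUTA's actual implementation:

```c
#include <stdint.h>
#include <stdlib.h>

/* Header stored just before the user-visible pointer. */
typedef struct {uint32_t count; uint32_t capacity;} darr_head_t;

#define darr_head(a)  ((darr_head_t*)((char*)(a) - sizeof(darr_head_t)))
#define darr_count(a) ((a) ? darr_head(a)->count : 0)
#define darr_push(a, item) \
    ((a) = darr_ensure_growth_by((a), 1, sizeof(*(a))), \
     (a)[darr_head(a)->count++] = (item))

/* The single non-macro function: grow the allocation if needed.
 * Returns a pointer to the first element of the (possibly moved) array. */
static void *darr_ensure_growth_by(void *darr, uint32_t grow_by,
    uint32_t item_size)
{
    uint32_t count = darr ? darr_head(darr)->count    : 0;
    uint32_t cap   = darr ? darr_head(darr)->capacity : 0;
    if (count + grow_by <= cap)
        return darr;
    uint32_t new_cap = cap ? cap * 2 : 4; /* growth factor of 2 */
    while (new_cap < count + grow_by)
        new_cap *= 2;
    darr_head_t *head = realloc(darr ? (void*)darr_head(darr) : NULL,
        sizeof(darr_head_t) + (size_t)new_cap * item_size);
    head->count    = count;
    head->capacity = new_cap;
    return head + 1;
}
```

Usage then looks just like the 'int *array = 0; array_push(array, 5);' example above, with no per-type declaration macro needed.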

By the way, I also made most dynamic containers have a growth factor of 2, meaning they double in size when they fill up. I used to think this excessive, but it turns out many standard libraries do the same. Dynamic strings I gave a growth factor of 1.5. In some places we used to have growth factors as small as 5%, which was really bad for small groups of objects, leading to a lot of realloc calls (for example, in our GUI).

Other containers

We have some other container types, too. For example, we have a macro for declaring a dynamic object pool for a type, which allocates multiple objects at a time when it needs to grow and provides a linked-list-like interface. And of course, we have hash tables. I didn't do anything to these macros yet, partly because I wasn't sure if I wanted to, but also because of time. One could build a "typeless" hash table, for example, but without declaring tables via a macro it would be difficult to choose the correct hash function and bucket size.

Thinking about dynamic memory allocation

When it comes to dynamic memory allocation, I think I'm somewhere between the extremes of liberal and conservative. The C++ way of thinking, where everything is allocated dynamically, is definitely not for me, and since I only really care about the PC as a platform, I also don't feel I need to know the exact amount of memory required beforehand (although I've worked that out before out of curiosity). The way I would describe my dynamic memory usage is: if it can be allocated statically, allocate it statically. Do that also when you know the absolute maximum number of bytes an allocation may require. But it's OK to use dynamic arrays and the like when you really don't know.

That being said, I've been pretty lax about freeing memory. I always let dynamic arrays and similar containers grow until the end of the program without freeing them in between, because I know at some point they will reach their absolute maximum. I also never got into the habit of cleaning allocations up at the end of a program, since modern operating systems take care of that for you.

I've taken a step back in this thinking, though, and decided dynamically allocated resources must still be freed at the end of the program, never mind that the operating system is capable of doing it for you. That's because in larger programs, tracking memory leaks becomes a pain if you never clean anything up. Tools that track allocations, like AddressSanitizer and Valgrind, report all leaks, and if you never free anything, the output won't tell you much because there's so much noise.

So far I have added memory cleanup to the MUTA client. Now if there is a leak, at least I will be able to tell where it is. I haven't given the same treatment to the server-side programs yet, however, because the whole server-side infrastructure will change drastically as the project goes on.

The case of GUI memory management

Another memory-related issue I dealt with while cleaning up was the way our GUI's memory was handled. This was due to an experiment I made when I first wrote the UI base.

MUTA's UI is an immediate-mode-style one. In the unlikely case you haven't heard of such a thing, take a look at Casey Muratori's introduction or the popular Dear ImGui library. In short, instead of creating GUI "objects", you call functions like this:

gui_begin_window("my window", 16, 16, 64, 64);
if (gui_button(BUTTON_ID, 32, 32, 32, 32))
    /* the button was clicked */;
So, it becomes a little like writing HTML when you write, say, inner windows and such. The flow is easy to follow. Anyway, immediate or retained mode, a UI needs to store state, and state storage is what I've been cleaning up.
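
To make that concrete, here's a tiny sketch of how an immediate-mode button can track interaction state between frames. The hot/active scheme follows Muratori's talk; all names and details here are illustrative, not MUTA's actual code:

```c
#include <stdint.h>

/* Persistent UI state surviving between frames (hypothetical names). */
typedef struct {
    uint32_t hot;       /* widget under the mouse this frame */
    uint32_t active;    /* widget currently being pressed */
    int      mouse_x, mouse_y;
    int      mouse_down;
} gui_state_t;

static gui_state_t gui;

static int point_in_rect(int px, int py, int x, int y, int w, int h)
    {return px >= x && px < x + w && py >= y && py < y + h;}

/* Returns non-zero on the frame the mouse is released over the button. */
static int gui_button(uint32_t id, int x, int y, int w, int h)
{
    int clicked = 0;
    int hovered = point_in_rect(gui.mouse_x, gui.mouse_y, x, y, w, h);
    if (hovered)
        gui.hot = id;
    if (gui.active == id && !gui.mouse_down) {
        if (hovered)
            clicked = 1;        /* press + release on the same widget */
        gui.active = 0;
    } else if (gui.hot == id && gui.mouse_down && gui.active == 0) {
        gui.active = id;        /* press started on this widget */
    }
    /* ...queue quad and label vertices for drawing here... */
    return clicked;
}
```

The point is that the function is called every frame, and all that persists between calls is the small gui_state_t blob - no button objects anywhere.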

Have you ever thought of what would happen if all your GUI memory was stored in a single, contiguous dynamic block? No? I have. I don't know what was going through my mind when I decided to implement MUTA's UI memory management in this fashion, but I was probably thinking of serialization and, to a lesser extent, vague ideas of "cache coherence".

You know what happens when a dynamic memory block is resized? Pointers into it are invalidated if the block has to be moved. Storing a complex structure with many pointers like this may not be the best idea. But that's what I did.

What this led to was that all pointers had to be stored as relative pointers: integer offsets into the main block of UI memory. This block contained everything, including many dynamic arrays. When a dynamic array had to be resized, we looked in the block for enough free space - if there was none, the whole block had to be resized.

It was a real pain, because you couldn't reliably use pointers even inside functions. If you had a pointer to, say, a window structure, and after fetching that pointer called a function that might move the main memory block of the UI (for example, by drawing some vertices), you had to fetch the pointer again using the aforementioned relative pointer. It was painful to me, and it was more painful to other people who tried touching the UI code earlier.
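
As a sketch of that old scheme (with hypothetical names - the real code was more involved), every "pointer" was an offset, and any allocation could move the entire block:

```c
#include <stdint.h>
#include <stdlib.h>

/* Everything the UI owns lives in one relocatable block, so "pointers"
 * are stored as offsets into it. */
typedef struct {
    char     *memory;   /* the single UI block; may move on resize */
    uint32_t  used;
    uint32_t  capacity;
} ui_mem_t;

typedef uint32_t rel_ptr_t; /* relative pointer: offset into the block */

/* Turn a relative pointer into a real one. Valid only until the next
 * allocation, because... */
static void *ui_resolve(ui_mem_t *m, rel_ptr_t rp)
{
    return m->memory + rp;
}

/* ...any allocation may resize, and thus move, the whole block. */
static rel_ptr_t ui_alloc(ui_mem_t *m, uint32_t size)
{
    if (m->used + size > m->capacity) {
        m->capacity = (m->capacity + size) * 2;
        m->memory   = realloc(m->memory, m->capacity);
    }
    rel_ptr_t rp = m->used;
    m->used += size;
    return rp;
}
```

Every raw pointer obtained through ui_resolve() has to be re-fetched after any call that might allocate - which is exactly the pain described above.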

So I did away with all that. Now the UI has normal, individually allocated dynamic arrays. At the same time, I removed many of the dynamic elements, because they weren't needed. Some elements are now statically allocated, and there's simply a maximum limit to their number. Others, such as vertices and text formatting buffers, are still stored dynamically, because it's hard to know the maximum number of such things beforehand, and because I want the UI to be reusable in other projects.

Immediate mode GUIs are nothing new but I kind of want to write a little about MUTA's specific implementation at some point. Maybe I will, though there are still some features that need adding.

I've uploaded the current GUI API and implementation files here and here in case you're interested.

Adding creatures and NPCs

Cleaning up is the bulk of what I have done lately, but not everything. I added some messages concerning creatures to our protocol, which is rather easy using the MUTA Packet Writer application written by Lommi. The packet writer lets us define packets in a text file, from which it produces serialization functions and struct definitions.

So far I only added spawning and despawning, and made sure those work with interest areas (structures that define to whom information of nearby events is sent). Plus a GM command for spawning creatures to test things. But the system is surprisingly complicated to implement, because we will need a concept of what spawn points are, how creature IDs are handled, etc. We don't just want to spawn creatures into the world without any context - that may lead to bugs like unique NPCs spawning twice, or something worse. We also need a format to store those spawn points in, and lastly we're going to need scripting. The latter will probably be done with Lua.

Asynchronous asset loading

The asset system has seen some grand internal upgrades. Textures and sounds may now both be loaded asynchronously. This was possible before, but only in a half-assed, buggy kind of way. There's still work to be done here, but it kind of works. There's also a sort of garbage collection mechanism in place which gets rid of unused assets after a while.

The asset API remains about the same as before. It looks something like this:

tex_asset_t *
as_claim_tex_by_name(const char *name, bool32 async);

tex_asset_t *
as_claim_tex_by_id(uint32 id, bool32 async);

void
as_unclaim_tex(tex_asset_t *tex);
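
The claim/unclaim pairing implies reference counting, with the garbage collector reclaiming assets whose count has sat at zero for a while. Here's a stripped-down illustration of that idea - field and function names are assumptions, not the actual MUTA internals:

```c
#include <stdint.h>

typedef struct {
    uint32_t refcount;
    uint32_t unused_for; /* frames since refcount hit zero */
    int      loaded;
} tex_asset_t;

static void as_claim(tex_asset_t *t)
    {t->refcount++; t->unused_for = 0;}

static void as_unclaim(tex_asset_t *t)
    {if (t->refcount) t->refcount--;}

/* Called once per frame per asset: free assets nobody has claimed for a
 * while. Returns 1 if the asset got collected this frame. */
#define AS_GC_GRACE_FRAMES 600

static int as_gc_tick(tex_asset_t *t)
{
    if (t->refcount)
        return 0;
    if (++t->unused_for < AS_GC_GRACE_FRAMES)
        return 0;
    t->loaded = 0; /* would free the texture memory here */
    return 1;
}
```

The grace period keeps assets cached across brief moments where nothing happens to hold a claim, e.g. between screens.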

Clean up takes time, perfection is impossible

Everyone in software knows this, but there is no such thing as perfect code. There's always something to improve upon. And that's definitely something I came to think of once again while going through our codebase.

There are so many things I wish I could do better. I want to clean up formatting; I want to touch this or that irrelevant "issue" because I know there's a more optimal solution. For example, I could do away with some slight branching in the updating of moving objects on the client side by separating the objects into two different arrays: idlers and movers. But this kind of thinking is treacherous. It's a 2D game, with at most maybe a thousand mobile objects on screen at once. It just doesn't matter. But the engineer inside most programmers craves to take on the most minimal of issues.

Beautifying code is as treacherous as over-engineering. I could spend years making the code prettier, but if it doesn't significantly reduce the time it takes to maintain said code, it just isn't worth it. So, in the near future, MUTA's codebase will retain its ugly macros and similar beauty flaws.

When the urge strikes to spend time on something that isn't worth it, I try to think of existing games. I'm certain no successful game has ever shipped without its programmers thinking they could've done a lot better if only they'd had a little more time. I especially think of Ultima Online, which was one of the first graphical MMOs and groundbreaking in a lot of ways, and yet most of it, according to Raph Koster, was written by a single programmer over the span of as little as three years. That's a heroic feat even for a larger team of programmers, but it also signals to me that the codebase must have been pretty damn far from perfect.

Not rewriting it in Rust, and on Unity's C# source release

27.3.2018 10:04:43

Ain't rewriting it in Rust

Who hasn't heard the phrase "rewrite it in Rust"? I certainly have, and yesterday finally thought, "you know, maybe I should give the language a try - a safer version of C sounds pretty tempting". So I installed rustc, opened a quick tutorial from the web and dived in at the shallow end.

I got as far as structs in about 5 minutes. So off I went and declared my struct:

struct s {
   x: i32,
   y: i32
}

All well and good, I thought, but the compiler decided to be nasty. It told me:

warning: type `s` should have a camel case name such as `S`

I'm sorry, but I don't think it's the job of the language to tell me how to name my data structures or variables, beyond things like whether names may include spaces or numbers. I can go along with indentation marking scope, like in Python or other languages - I believe that's been shown to improve readability across the board. But I'm not interested in styling my naming conventions after the personal tastes of the creator of the language.

I realize styling warnings can be disabled via compiler options, but if it's in the standard, you probably shouldn't. So I guess that's it for Rust on my part for now.

Unity's C# source code reference-only release

Unity recently decided to release its engine and editor C# source code. The C++ core of the engine remains out of sight, and even the C# release is made under a reference-only license, meaning that modifying the source is not allowed. So, unfortunately, we don't come even close to fulfilling the four software freedoms as traditionally defined. Unity seems to stress this themselves in the blog post. And of course, one may argue this means little, since the C# source code could already be decompiled by interested parties. But I won't deny it's still a very small step in the right direction.

The release, I think, is not that big of a deal though. Unfortunately, Unity still appears to hold some old-fashioned views popular in the proprietary software sphere, as proven by this statement, taken directly from the Unity blog: "We'd open source all of Unity today if we thought we could get away with it and still be in business tomorrow."

I don't think that statement makes much sense. Of course taking a more open approach would require slight adjustments in the way the company does business, but Unity is already in the market of software as a service. It's not just the engine itself, it's the documentation and updates people want. Even with a fairly open license that allowed reading, modifying and distributing source code, Unity could keep raking in the money from the 5% they charge people who make games with their engine. The source being open would, I believe, have no real negative effects from this point of view. Also, knowing Unity's limitations (which are many), I'm fairly sure there's no technology inside the engine so advanced that a competitor would gain much from reading the code, especially considering their main competitor, Unreal Engine 4, already has its source visible.

Well, here's to hoping software vendors slowly come to their senses. Free/open source is not only ethical, but also practical - knowing your underlying technology from the inside out certainly makes it easier to make a good game.

Async MySQL C API usage. Also, bugs.

16.3.2018 10:02:26

I believe I wrote in an earlier post that I had lately been working on a (proprietary) database server with MariaDB. I have little experience with database programming, so I've had to look a thing or two up on the internet and in the documentation. What I'll write about here is, I'm sure, very old news to anyone who's worked in this area before, but to me it was fresh knowledge.

So one thing I noticed about the MySQL C API, which MariaDB also uses - and which I expect anyone who starts working with it notices quickly - is that it isn't exactly asynchronous. mysql_query() is a blocking call that will send out the query and then wait for the reply, all in one function.

I didn't want to use the API like this if I didn't have to, and thanks to the internet, finding a solution wasn't too time-consuming. That's thanks to Jan Kneschke who has written about a fix right here about ten years ago.

As it turns out, the MySQL C API has a bunch of less documented functions available through the headers, namely 'mysql_send_query()' and 'mysql_read_query_result()'. With these functions available one can:

  1. Initialize multiple MYSQL instances (create multiple connections),
  2. Register each MYSQL struct's socket file descriptor with an event interface, such as Linux's epoll,
  3. Send a bunch of queries, each through a different connection, using mysql_send_query(),
  4. Wait on the event interface until an event fires on one of the file descriptors, then call mysql_read_query_result().

The big thing here is opening up multiple MySQL connections and monitoring them all with a select()-like interface. In (more or less pseudo) code on Linux:

#define NUM_MYSQLS 32

MYSQL mysqls[NUM_MYSQLS];

int epoll_fd = epoll_create(1);

for (int i = 0; i < NUM_MYSQLS; ++i) {
    mysql_init(&mysqls[i]);
    mysql_real_connect(&mysqls[i], "localhost", "admin", "password",
        0, 3306, 0, 0);

    /* Make non-blocking */
    int flags = fcntl(mysqls[i].net.fd, F_GETFL, 0);
    fcntl(mysqls[i].net.fd, F_SETFL, flags | O_NONBLOCK);

    struct epoll_event ev;
    ev.events   = EPOLLIN | EPOLLRDHUP | EPOLLHUP;
    ev.data.ptr = &mysqls[i];
    epoll_ctl(epoll_fd, EPOLL_CTL_ADD, mysqls[i].net.fd, &ev);
}

/* Start some queries */

const char *query = "SELECT * FROM my_table";

for (int i = 0; i < NUM_MYSQLS; ++i)
    mysql_send_query(&mysqls[i], query, strlen(query));

/* Wait for replies to arrive */

struct epoll_event evs[NUM_MYSQLS];

int num_evs = epoll_wait(epoll_fd, evs, NUM_MYSQLS, -1);

for (int i = 0; i < num_evs; ++i) {
    MYSQL *mysql = evs[i].data.ptr;

    if (mysql_read_query_result(mysql)) {
        /* Handle error */
        continue;
    }

    MYSQL_RES *res = mysql_store_result(mysql);
    /* Handle result */
    mysql_free_result(res);
}

Now I thought that was pretty handy. On Windows, I expect you could get similar behaviour using select(). This approach has worked well enough in my current project so far, although obviously things aren't quite as simple as in the example.

The edge-triggered listening socket incident

I'm by no means very experienced in asynchronous programming (or any specific area of programming for that matter). So, writing a server program last week, I ran into an interesting bug with a listening socket tracked by an epoll instance, which was really caused by me not thinking properly about the flags I passed to epoll.

The issue was this: a thread was sleeping on an epoll_wait() call until either one of the clients would send something, or the listening socket would receive a new connection that needed to be accept()-ed. When an epoll event on the listening socket fired (a new connection arrived), a notification would be passed to another thread in the program, which would then call the accept() function. An important bit of information is that there was an atomic counter for how many times accept() had to be called. Every time an epoll event was generated by the listening socket, that counter would be incremented, and it was by this counter that the other thread would call accept() when notified - a counter of 10 meant you had to call accept 10 times.

But the accept queue was being starved - after the server had been running for about 15 minutes, it would only accept one client at a time, and clients constantly had to wait for long periods. Obviously there was a problem somewhere.

The solution: don't use the EPOLLET flag when adding the listening socket to the epoll instance. See, when the EPOLLET (ET for edge-triggered) flag is set for a file descriptor as it is registered with epoll, a new event only fires on a change in readiness, not while a previous event remains unhandled. So if accept() wasn't called immediately after an event and another connection arrived, no new event would be generated for the second connection. Thus, our counter of how many accepts had to be called could not work.

Eventually I added another measure, which was to call accept() on the epoll thread immediately after an event and then pass the resulting file descriptor on to the other thread. This way, the listening socket's backlog shouldn't fill up before accept is called, even under high load.
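
If one were to keep EPOLLET on the listening socket, the standard fix is to drain the accept queue completely on every event - accept() until it reports EAGAIN. Here's a sketch; the accept call is injected as a function pointer purely so the loop logic is testable without real sockets, and 'on_new_fd' stands for the hand-off to the other thread:

```c
#include <errno.h>

typedef int (*accept_fn_t)(int listen_fd);

/* Keep accepting until the accept call reports EAGAIN, so no pending
 * connection is left behind when the edge-triggered event is consumed.
 * Returns the number of connections accepted. */
static int drain_accept_queue(int listen_fd, accept_fn_t do_accept,
                              void (*on_new_fd)(int fd))
{
    int accepted = 0;
    for (;;) {
        int fd = do_accept(listen_fd);
        if (fd < 0) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                break;      /* queue fully drained */
            break;          /* real error: handle/log here */
        }
        on_new_fd(fd);      /* e.g. pass to the worker thread */
        accepted++;
    }
    return accepted;
}
```

In real use, do_accept would simply wrap accept() on a non-blocking listening socket.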

Thank goodness for AddressSanitizer

Async bugs aren't the only bugs I've been fighting with this week, although this one was also obscure due to things happening on multiple threads. Just two nights ago I spent multiple hours of work time tracking a heap corruption bug I thought was caused either by OpenAL or by the code calling it (which I didn't write).

But compiling with gcc's -fsanitize=address flag, I realized it was just my own old code, in a completely different place. Oops! Good thing there are tools for easily detecting problems such as this nowadays!

Jeesgen updates and other stuff, week 10, 2018

10.3.2018 19:46:30

I've been lazy in terms of working on projects outside of work for the past month or two. I have not written proper updates for MUTA in probably over four weeks and before the ongoing week 10, I also had not touched any other non-work related programming projects.

That's not to say I haven't been programming. Since the beginning of 2018 I have written a backend server for a game-related project (in C with MariaDB), and I have taken part in a collaborative game engine project (in C++). I just haven't touched any projects I have a proper personal investment in, that's all.

But that's about to change now. I'm looking forward to getting back to MUTA, for which I will likely begin writing entity scripting next. And this week, I've again been working on Jeesgen, the static website generator I wrote for the upkeep of this site.

New features

I knew a couple of features would be needed to make the site more functional, so I implemented them this week.

Individual post pages

It is now possible to generate individual pages for each post. This allows permalinking and easier linking to articles in general. It also helps RSS generation.

Whether or not individual pages will be generated for each post is controlled by a config option in the jees_config.cfg file ('generate individual post pages = true/false').

Unlisted pages

Pages in this context are individual areas of the website which all share the same template (for example, they all have the same menu for site navigation, but different content). Previously, if a new page was added to a Jeesgen project, it would automatically be added to the default navigation menu. After the update, it is possible to specify the option 'create menu entry = true/false' in the original post file to skip the creation of a navigation menu item.

This feature is handy when you want to create a page for something that doesn't need to be accessible from everywhere. For example, I may want to create a page for MUTA, but not have it appear up top there amongst the "News", "About" and "Projects" buttons.

Upcoming features

I didn't write Jeesgen to be used much by anyone but myself, although I want it to be applicable to many different types of websites. So some fairly important features are on hold until I get a hold of myself:

  • Proper date-time formatting,
  • Characters outside the standard ASCII set,
  • Install scripts for GNU/Linux and Windows,
  • A distributable binary,
  • Daylight saving time support.

A feature I'm having trouble figuring out how to properly implement is RSS item descriptions. I like to include HTML in my posts, so in order to copy, say, the first 256 characters of a post into the description, I would have to make sure any open HTML tags are properly terminated so that they don't mess up the RSS XML. I don't see an easy way out of this, though one option could be to make the post format support Markdown, which would mean I wouldn't have to use HTML to format posts.
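
One way to sketch the tag-termination idea: scan the truncated snippet, keep a stack of open tags, and append closers in reverse order. This sketch ignores attributes containing '>', comments and void elements like <br>, so it's a starting point rather than a robust HTML parser:

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>

#define MAX_TAGS    64
#define MAX_TAG_LEN 31

/* Write the closing tags for any elements left open in 'html' into 'out'. */
static void close_open_tags(const char *html, char *out, size_t out_size)
{
    char stack[MAX_TAGS][MAX_TAG_LEN + 1];
    int  top = 0;
    for (const char *p = html; (p = strchr(p, '<')); ++p) {
        if (p[1] == '/') {                  /* closing tag: pop */
            if (top)
                top--;
            continue;
        }
        char name[MAX_TAG_LEN + 1];         /* read the tag name */
        int  n = 0;
        const char *q = p + 1;
        while (*q && *q != '>' && !isspace((unsigned char)*q) &&
               n < MAX_TAG_LEN)
            name[n++] = *q++;
        name[n] = 0;
        const char *end = strchr(q, '>');
        if (!end)                           /* tag itself was cut off */
            break;
        if (end > q && end[-1] == '/')      /* self-closing: <img ... /> */
            continue;
        if (top < MAX_TAGS)
            strcpy(stack[top++], name);
    }
    out[0] = 0;                             /* emit closers in reverse */
    while (top--) {
        size_t len = strlen(out);
        snprintf(out + len, out_size - len, "</%s>", stack[top]);
    }
}
```

Appending the result of close_open_tags() to the truncated snippet would give well-formed markup for the RSS description.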

Other stuff: music I've listened to recently

This isn't really related to programming or video games, but I like music - especially electronic music. So in the name of spreading good music, below are a couple of interesting songs I've had the joy of listening to in the past few months.

Jonas Steur - Silent Waves (Youtube)


I don't know if the subgenre is Balearic or something else, but this one's a wonderfully calming track that I don't remember having heard of until recently, and neither have I paid much attention to the artist in question before.  While the calming melody feels ambient, it seems to keep on playing even after the song is over.

Anetha - Disinhibition (Youtube)


A fresh acid track with a nicely driving bassline, giving the feeling of the beat crawling out of the listener's own head.

Spy 71 & Eva - Take Me ... Mr Love (Youtube)

Italo Disco

An italo classic and one of the songs of that era I constantly return to.

Sixteen Souls - Late Night Jam (Youtube)


Some slightly funky, chilled house music from 1998 and nothing else.

Sophie Ellis-Bextor - Murder On The Dancefloor (Youtube)


Just pop music from the early 2000s. But with a disco/disco-house twist! And the music video is great.

C++ gripes #1: static functions as friends

20.1.2018 19:01:56

While programming as an activity is something I, like many others, derive joy from, there are obstacles that make it less appealing from time to time. One of those obstacles is C++. I know picking on C++ is a bit of a fashionable thing to do at the moment, but it's no wonder why: the language is easy to pick on (and it's widely used, for whatever historical reason). Well, lately I've been involved in a project utilising sepples and its features heavily, and here's my latest gripe with the language: headers polluted with implementation details.

In C, if you want to write a static function inside your source file, unknown to the header file, and the function manipulates some data structure defined in the header, you can. For example:

struct my_struct {int x;};

static void
_modify_struct(struct my_struct *s)
    {s->x = 5;}
Well, in the world of sepples, where private and public are a (rather useless) concept, it's not that simple to keep our _modify_struct() function out of the user's sight. At least for now, I have not found a way to write the equivalent of the C example above for a C++ class when the function needs to modify the class's private members - not without declaring the function in the header first.

So yeah, I'm not a fan of sepples.

MUTA devlog 1: generic data files

13.1.2018 09:30:00

A devlog for what?

MUTA, an isometric 2D MMORPG, is our main project at the moment. Therefore I thought we might go ahead and post about its development from time to time. So here goes: the first official MUTA devlog! (A post to explain what the game is actually about may follow... some time in the future.)

Recent work: chat commands and file formats

Since Christmas I've mostly worked on two features for the game, apart from general stability fixes: chat commands (emotes, console commands and game master commands) and generic data files.

The emote and command stuff was simple, and I added quick versions of those earlier. The system is familiar from certain other games: a slash (/) at the beginning of a chat line indicates a command or an emote that may, depending on its type, be processed either on the client or on the server. For example, /laugh while targeting someone may produce the text "You laugh at [TARGET]". A line beginning with a dot (.) indicates a game master command, in which case the full processing of the line will be done on the server if the player has the correct privilege level - if not, the line will simply be printed as a chat message.
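
The dispatch rule above boils down to checking the first character of the line. A trivial sketch (the enum names are mine, not MUTA's):

```c
/* '/' starts a command or emote, '.' a GM command, anything else is chat. */
enum chat_line_type {
    CHAT_LINE_MESSAGE,
    CHAT_LINE_COMMAND,  /* may be processed on client or server */
    CHAT_LINE_GM        /* fully processed on the server if privileged */
};

static enum chat_line_type classify_chat_line(const char *line)
{
    if (line[0] == '/' && line[1])
        return CHAT_LINE_COMMAND;
    if (line[0] == '.' && line[1])
        return CHAT_LINE_GM;
    return CHAT_LINE_MESSAGE;
}
```

A lone '/' or '.' falls through to a plain chat message here; whether that's the right call is a design choice.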

After adding the chat command systems I decided I should tackle generic data files - a file format generic enough to be used across different game systems for storing data on both the client and the server side. The reason I picked this job next was that it would be helpful in getting many other things done in the near future. Also, it required no creative thinking, and I felt out of that juice at the moment of making the decision.

So what do you need a file format for?

In a video game there's usually plenty of data that must be stored on disk and MUTA is no exception. Some of the data is stored during runtime: for example, a client may cache data it receives from the server to reduce traffic. Such cached data could be, for example, object definitions (what sprite should a monster of id 40 use, etc.). The more traditional form of data on disk is static game data, such as assets or maps; data that only changes between game versions. I wanted to create a file format suitable for both of these use cases.

MUTA has some file types that are better specialised into their own formats: map data is one example. But a lot of files are rather similar in nature: they can, more or less, be represented as tables consisting of rows and columns. Such files in MUTA include at least:

  • The asset file, listing properties of many individual media assets - for example, what parameters a specific texture should be loaded with,
  • Tile, creature and other object definition files that describe what sprites to use, what scripts to run and what names to give specific types of game objects.

It would be a lot of work to write specific file types for each one of these use cases, combined with command line or engine-integrated programs for editing them. So a generic format might be useful.

    The mdb format

    The format I wrote, called mdb, is a simple binary format. Each file has a single table (think of an SQL table), with an unbounded number of columns and rows. I already knew how some games had accomplished similar goals, and the design was certainly influenced by that.

    A row represents a single entry. For example, in a table of creature types it would represent a single creature type.

    A column represents the value of a single field. Each column may have one of the following types (for now): int8, uint8, int16, uint16, int32, uint32 and string. There is always at least a single column: ID, which is a uint32. The ID is important because we want to be able to use it as the main reference to a specific item. For example, if I want to edit the stats of a specific creature, I must be able to find its entry by its ID. IDs are unique and auto-incremented as a running number starting from zero when the file is created.
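As a sketch, the supported column types and their fixed on-disk sizes might look like this in C. The names and values are my illustration, not MUTA's actual definitions.

```c
#include <stdint.h>
#include <assert.h>

/* Illustrative encoding of the supported mdb column types. */
enum mdb_col_type {
    MDB_INT8, MDB_UINT8,
    MDB_INT16, MDB_UINT16,
    MDB_INT32, MDB_UINT32,
    MDB_STRING /* stored inside a row as a uint32 offset */
};

/* Number of bytes a field of the given type occupies in a row.
   A string field takes 4 bytes in the row itself (an offset),
   which is what keeps every row the same physical length. */
static uint32_t mdb_field_size(enum mdb_col_type t)
{
    switch (t) {
    case MDB_INT8:
    case MDB_UINT8:
        return 1;
    case MDB_INT16:
    case MDB_UINT16:
        return 2;
    default: /* 32-bit integers and string offsets */
        return 4;
    }
}
```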

    As I said, the format only supports a single table per file. Why is that? Mostly because of time. A similar format with support for multiple tables would not be difficult to write, but the added complexity would take more time. With just two of us (or at times, a few more) working on a large project, time isn't exactly on my side. Another reason for having a single table per file is that it cuts down the number of steps it takes to modify a single value in a file using the command line editor, which I'll get to.

    File internals

    As I said, the mdb file structure is very simple. The header stores certain fixed-size values: the name of the file, an automatic version number, the number of columns and rows, the running ID (all rows have an automatic uint32 ID field), etc.

    After the header come the columns, each of which stores a name of at most 32 bytes and its type.

    Following the columns are the rows. Each row is simply a sequence of raw values for each of its fields, with the exception of strings. As a field, a string is instead a single uint32 representing an offset into a string block at the end of the file, where all of the file's strings are stored. Inside that block, at the offset specified by a string field, the first thing found is another uint32 holding the length of the string, followed by the actual string as an array of bytes. Each row therefore has the same physical length in bytes inside the file, since variable-length strings are not stored in the middle of them.
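Reading a string field under the layout just described might look like the following. The function and parameter names are mine, not MUTA's.

```c
#include <stdint.h>
#include <string.h>

/* Given the file's string block and a row's string-field offset,
   return a pointer to the string bytes and write out their length.
   Layout as described above: a uint32 length at the offset,
   followed by the raw string bytes (not NUL-terminated). */
static const char *mdb_read_string(const uint8_t *string_block,
                                   uint32_t offset,
                                   uint32_t *len_out)
{
    uint32_t len;
    memcpy(&len, string_block + offset, sizeof(len)); /* avoid unaligned reads */
    *len_out = len;
    return (const char *)(string_block + offset + sizeof(uint32_t));
}
```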

    So it's pretty much plain data inside the files, with no fancy tricks (not that I would know how to pull such tricks off anyway). Additionally, because we know all of the game's data files of this type will be small (megabytes at the largest, and rarely that), the files are always fully loaded into memory at read time - hence there are no complicated hash tables or other structures built into the files themselves (those exist inside the in-memory version, of course). If files ever become very large, well, we'd better write a new format or improve the existing one.


    Of course, a binary file needs a way to edit it. I wrote a small command line program for this purpose. The program is a simple series of menus: the "new" menu asks you to fill in the fields to create a new database (name, file path), whereas the "open" menu lets you manipulate an existing file. The terminal dump below shows an existing file being opened.

    [owner@owner-PC mdb_edit]$ ./mdb_edit
    mdb edit v. 0
    [mdb] open    
    file path: jee.mdb
    Type 'help' for options.
    [edit db] help
    new col:   create new column
    new row:   create new row
    select:    select a row
    dump:      print out a database
    save:      save the opened file
    get field: print a field from selected row
    set field: set value of a field for selected row
    dump row:  print all the fields in the selected row

    Going forward

    There are still some things to polish up, not necessarily in the format itself but the functions used for manipulating it. When that's done, I'll get to converting some old text format files to this format.

    We are in an unfortunate situation in development at the moment where both Lommi and I have 8 hours or more a day dedicated to things other than writing MUTA. But after a while, the time will come when we get some more breathing space. Until then, development should go forward slowly but steadily.

    jeesgen: a terrible static website generator

    30.12.2017 20:29:54

    The first post

    Well, this is it: the first post on the site (excluding the initial 'hello world' post). Being the first post, it seems appropriate for it to explain what this site is all about. I'll briefly get that out of the way before going into the actual topic.

    Lommi and I had been talking about creating a website for our (video game) projects for a while. During this year's Christmas holidays, I finally had time to sit down and get it done. So here we are.

    We are both programmers and hence like writing programs, some related to games, some not. So, why not have a blog to go with the website where we could talk about those programs if we ever have the urge to do so? Something you will likely see here in the future are technical posts about programs we've participated in writing to a lesser or greater extent. This is the first such post.

    The generator

    Alright, let's get down to the topic: jeesgen, a static website generator I recently wrote for the upkeep of this website. As I said, during the Christmas holidays I had a bunch of free time to get down to writing the long-dreamed-of website. So I delved into the problem while I was away visiting relatives.

    The initial problem I had was that I did not know the modern tools. I'd written some static websites in the past, having learned HTML at school in the early 2000s and CSS later, but that was it. However, that was OK for starting out, because I despise most modern websites with their all-over-the-place scripts, infinitely scrolling pages and other fancy stuff. I support what Bryan Lunduke recently said: make the web HTML again.

    So not knowing WordPress or JavaScript or whatever was not really the problem. The problem was that I wanted an easy way to update the website: things like posting news and adding new pages should be easy, preferably doable from the command line. Whatever the tool for the job would be, I also hoped it would have no external dependencies. That was a rather limiting requirement, because web developers, if anyone, seem to have taken quite a liking to external dependencies.

    I quickly checked out a couple of these tools, mostly Jekyll and Hexo. But they seemed rather complicated to me, usually requiring at least node.js to be installed. They also didn't seem to easily create the types of webpages I wanted. So with that in mind, I thought it would probably take me less time to write my own tool than to learn one of the existing ones. Besides, writing it could be a nice holiday pastime on the laptop.

    I decided to write the program in C - this would avoid any external dependencies, and besides, it's the perfect language for every job there is anyway. It took me an evening to get the posting feature working. Later I added pages and other stuff.

    The program is quite limited, but the idea, really, is to only generate the necessary parts of the site (menu buttons, posts and post pages) and embed the user's content, which may contain HTML/CSS or whatever, into that. The way the sites can be structured is not very flexible, but I don't need that: I made the tool for the type of website I wanted to create, and I made it rather quickly.
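The embedding idea can be illustrated with a tiny substitution routine: find a placeholder in a layout string and splice the generated or user-written content in its place. This is only a sketch of the general technique; jeesgen's actual placeholder syntax and code differ.

```c
#include <stdlib.h>
#include <string.h>

/* Replace the first occurrence of 'placeholder' in 'layout' with
   'content'. Returns a newly malloc'd string, or NULL if the
   placeholder is not found. Purely illustrative. */
static char *embed_content(const char *layout, const char *placeholder,
                           const char *content)
{
    const char *at = strstr(layout, placeholder);
    if (!at)
        return NULL;
    size_t before = (size_t)(at - layout);
    size_t total  = strlen(layout) - strlen(placeholder) + strlen(content);
    char *out = malloc(total + 1);
    if (!out)
        return NULL;
    memcpy(out, layout, before);          /* layout up to the placeholder */
    strcpy(out + before, content);        /* the embedded content         */
    strcat(out, at + strlen(placeholder));/* the rest of the layout       */
    return out;
}
```

A generator run would then amount to reading the layout file, substituting each generated part (menu, post list, page body) in turn, and writing the result out as an HTML file.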

    Generating a site with jeesgen

    I wanted a simple command line tool that could process text files, and that's what I made. Here's the flow of creating a project and adding pages or posts to it:

    • Create a project directory with 'jeesgen -new my_project'
    • Modify the layout and config files inside the project. Options are included for pages, posts, page links and some other things.
    • Write a post or page into a text file.
    • Add the page or post with 'jeesgen --page' or 'jeesgen --post'
    • Generate HTML with 'jeesgen --generate'

    And that's about precisely the workflow I wanted. In future updates I hope to add pages for specific posts, Windows compatibility, support for umlaut characters (Finnish) and easier editing of existing content. But for now, the tool should be suitable for getting the site running.

    If you are interested in the code for the tool, it may be found here. It's quite terrible, really, since it was practically just smashed together during holiday celebration off-hours, but what can you do. The program should build on GNU/Linux using gcc.

    Hello, world!

    30.12.2017 15:32:21
    Hello, world!
    Page: 0 1