We fulfill power fantasies


Network message serialization in MUTA

6.10.2019 17:49:30

MMOs typically have a lot of network message types for different actions: move my character, talk to this NPC, a spell was cast by a unit.... A set of message types along with rules for reading them forms a protocol.

MUTA has many internal communication protocols, all of which run over TCP/IP: client - server, master - simulation, master - login, master - world database, master - proxy... at least. All of these are binary protocols: usually, the first one or two bytes denote the type of the message, after which comes a set of primitive values, such as sized integers, arrays, etc.
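As a minimal sketch of what such a binary message might look like on the wire - one type byte followed by primitive fields serialized in little-endian order - here's a made-up "move character" message. The names and layout are illustrative only, not MUTA's actual protocol:

```c
#include <stdint.h>

enum {MSG_MOVE_CHARACTER = 1}; /* hypothetical message type */

static uint8_t *write_u8(uint8_t *p, uint8_t v)
{
    *p++ = v;
    return p;
}

static uint8_t *write_u32(uint8_t *p, uint32_t v)
{
    /* Serialize in little-endian order regardless of host endianness. */
    p[0] = (uint8_t)(v);
    p[1] = (uint8_t)(v >> 8);
    p[2] = (uint8_t)(v >> 16);
    p[3] = (uint8_t)(v >> 24);
    return p + 4;
}

/* Writes the type byte, then the fields. Returns the number of bytes written. */
static int write_move_character(uint8_t *buf, uint32_t id, uint32_t x, uint32_t y)
{
    uint8_t *p = buf;
    p = write_u8(p, MSG_MOVE_CHARACTER);
    p = write_u32(p, id);
    p = write_u32(p, x);
    p = write_u32(p, y);
    return (int)(p - buf);
}
```

The receiving side reads the first byte, switches on the type, and then knows exactly which primitives to read next.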


This is an example from the master - proxy protocol. This message may be sent by master to proxy or proxy to master, though most messages in MUTA are one direction only. It is used in either of the following two scenarios:

  • A client sends a message to the master server through the proxy server.
  • The master server sends a message to a client through the proxy server.
In this case, the proxy socket ID variable denotes the client who sent or is to receive the message.

Packing messages to streams takes code, and writing such code is very error-prone. Thus, MUTA has a tool for this. The original tool, MUTA Packetwriter, was written by Lommi 2 years ago. Recently, I took to rewriting the tool to add a couple of features and modify others, and to make it more maintainable.

The packetwriter is a command line tool that takes in two arguments: an in-file that defines the messages in a protocol, and an out-file into which to write the generated C code. The in-file format uses the same .def format we use all around MUTA.

packet: svmsg_new_character_created
   __group = svmsg
   id      = uint64
   name    = char <uint8> {MIN_CHARACTER_NAME_LEN, MAX_CHARACTER_NAME_LEN}
   race    = uint8
   sex     = uint8
   map_id  = uint32

Above is an example of a message definition in the in-file of the client - server protocol. The message belongs to the group svmsg, which is a group defined in the same file. If this message is the third svmsg-group message definition in the file, its ID will be 3 - this enumeration is what groups are for.

In the above message, the variable name is a string, the curly braces marking it a variable-size array (of type char). Strings in MUTA's protocols are never null-terminated. Instead, a length variable is written for each array in a message. If the max length is numeric, the packetwriter will automatically figure out the required type of the length variable (uint8, uint16...), but if it's a symbolic name (we can't parse #defines from C files), the type of the length variable must be marked explicitly, which is what the <uint8> notation above does. All arrays, strings included, must have a maximum length defined, and can have an optional minimum length.
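The length-type inference for numeric max lengths might look something like this (an illustrative sketch returning the bit width, not the packetwriter's actual code):

```c
#include <stdint.h>

/* Picks the smallest unsigned type that can hold an array's length,
   given a numeric max length: 8 maps to uint8, 16 to uint16, and so on.
   Hypothetical helper for illustration. */
static int length_type_bits(uint64_t max_len)
{
    if (max_len <= UINT8_MAX)
        return 8;
    if (max_len <= UINT16_MAX)
        return 16;
    if (max_len <= UINT32_MAX)
        return 32;
    return 64;
}
```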

The packetwriter produces the following code from the above declaration.

enum svmsg
{
   ... other messages ...
   SVMSG_NEW_CHARACTER_CREATED,
   ... other messages ...
};

struct svmsg_new_character_created_t
{
   uint64 id;
   struct {uint8 len; char data[MAX_CHARACTER_NAME_LEN];} name;
   uint8 race;
   uint8 sex;
   uint32 map_id;
};

static inline int
svmsg_new_character_created_compute_sz(svmsg_new_character_created_t *s)
{
   int ret = 0;
   ret += 8;
   ret += 1 + (int)s->name.len;
   ret += 1;
   ret += 1;
   ret += 4;
   return ret;
}

static inline int
svmsg_new_character_created_write(bbuf_t *bb, svmsg_new_character_created_t *s)
{
   uint8 *m = bbuf_reserve(bb, 1 + svmsg_new_character_created_compute_sz(s));
   muta_assert(m);
   pw2_write_uint8(&m, SVMSG_NEW_CHARACTER_CREATED);
   muta_assert(s->name.len >= MIN_CHARACTER_NAME_LEN);
   muta_assert(s->name.len <= MAX_CHARACTER_NAME_LEN);
   pw2_write_uint8(&m, s->name.len);
   pw2_write_uint64(&m, s->id);
   pw2_write_uint8(&m, s->race);
   pw2_write_uint8(&m, s->sex);
   pw2_write_uint32(&m, s->map_id);
   for (uint8 i = 0; i < s->name.len; ++i)
       pw2_write_char(&m, s->name.data[i]);
   return 0;
}

static inline int
svmsg_new_character_created_read(bbuf_t *bb, svmsg_new_character_created_t *s)
{
   int size = SVMSG_NEW_CHARACTER_CREATED_MIN_SZ;
   int free_space = BBUF_FREE_SPACE(bb);
   if (size > free_space)
       return 1;
   uint8 *m = BBUF_CUR_PTR(bb);
   pw2_read_uint8(&m, &s->name.len);
   if (s->name.len < MIN_CHARACTER_NAME_LEN)
       return -1;
   if (s->name.len > MAX_CHARACTER_NAME_LEN)
       return -2;
   size += s->name.len * 1;
   size -= (MIN_CHARACTER_NAME_LEN) * 1;
   if (size > free_space)
       return 2;
   pw2_read_uint64(&m, &s->id);
   pw2_read_uint8(&m, &s->race);
   pw2_read_uint8(&m, &s->sex);
   pw2_read_uint32(&m, &s->map_id);
   for (uint8 i = 0; i < s->name.len; ++i)
       pw2_read_char(&m, &s->name.data[i]);
   bb->num_bytes += size;
   return 0;
}

To use the generated code above to serialize a message to send it over the network, we just need to fill in a struct and call svmsg_new_character_created_write().

svmsg_new_character_created_t s = {
   .id = 53,
   .race = 1,
   .sex = 0,
   .map_id = 32};
memcpy(s.name.data, "John", 4);
s.name.len = 4;

/* Byte stream to write to; the extra byte is for the message type ID */
uint8 *memory = ...
bbuf_t bb = BBUF_INITIALIZER(memory, 1 + svmsg_new_character_created_compute_sz(&s));

svmsg_new_character_created_write(&bb, &s);

The reading functions' return values follow a convention: a positive return value means the message was incomplete (more bytes are needed), a negative return value means the message was illegal, and zero means the message was read OK.
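That convention can be sketched in isolation with a made-up two-field message (a length byte plus that many name bytes, with the length required to fall in a fixed range). This is a self-contained illustration, not the generated MUTA code:

```c
#include <stdint.h>
#include <string.h>

#define NAME_MIN 2
#define NAME_MAX 16

typedef struct {
    uint8_t len;
    char    data[NAME_MAX];
} name_msg_t;

/* Positive: incomplete, negative: illegal, zero: OK. */
static int read_name_msg(const uint8_t *buf, int num_bytes, name_msg_t *out)
{
    if (num_bytes < 1)
        return 1;               /* incomplete: no length byte yet */
    uint8_t len = buf[0];
    if (len < NAME_MIN)
        return -1;              /* illegal: name too short */
    if (len > NAME_MAX)
        return -2;              /* illegal: name too long */
    if (num_bytes < 1 + (int)len)
        return 2;               /* incomplete: payload not fully arrived */
    out->len = len;
    memcpy(out->data, buf + 1, len);
    return 0;                   /* OK */
}
```

The caller can then keep buffering on a positive result and drop the connection on a negative one.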

The packetwriter also supports structs if they're defined in the in-file. The structs may also be nested, or in arrays, or contain arrays.

Security features are limited to array length checking and numeric variable range checking.

Bitpacking, the act of packing multiple numbers whose legal ranges are known into a single variable as bits, is not supported yet.
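As a sketch of what that means, here's how three values with known ranges could share a single byte. The layout is hypothetical, since this isn't a packetwriter feature yet:

```c
#include <stdint.h>

/* Three values with known ranges - direction 0-7 (3 bits), speed 0-15
   (4 bits), a 1-bit flag - packed into one byte. Illustrative only. */

static uint8_t pack_move(uint8_t dir, uint8_t speed, uint8_t flag)
{
    return (uint8_t)((dir & 0x7) | ((speed & 0xF) << 3) | ((flag & 0x1) << 7));
}

static void unpack_move(uint8_t b, uint8_t *dir, uint8_t *speed, uint8_t *flag)
{
    *dir   = b & 0x7;
    *speed = (b >> 3) & 0xF;
    *flag  = (b >> 7) & 0x1;
}
```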

That's mostly all worth saying about that subject I guess. You can find the Packetwriter 2 (and for now, also Packetwriter 1) code in the MUTA repository under tools/packetwriter2 in case you're curious. I'm sure there's still bugs out there in the code though.

MUTA devlog 6: the backend rewrite

30.9.2019 21:41:29

I'm pretty sure it started with a need to rewrite the client's entity system and rendering. That was about half a year ago - since then, I've only been working inside feature branches instead of the MUTA main branch, development. Somehow, one change followed another, and I got carried away. There was a long time when the main components of the server would not even run, and a much longer period during which the client was not capable of forming a connection with the server. But that's finally behind me now.

In the last devlog I listed most of the features I have been working on. Some newer ones include:

  • Simulation server rewritten from scratch
  • New interest management system on the master server (now with interest lists rather than just a grid)
  • Moved many server components' protocols to use the new MUTA Packetwriter2 for serialization.
  • Precompiled Windows MSVC dependencies and added them to the main repo (wanna do the same for GNU/Linux).
  • Removal of the old database server application.

Simulation server rewrite in preparation for clustering

The simulation server (previously called worldd, renamed to sim) was the most ad-hoc piece of the backend in addition to the old database server. From the features it used to have, only pathfinding remains missing in the new version - I'll probably just copy-paste it.

The original sim server had been built quickly, mostly ignoring the reason it even existed as a separate application from the master server: clustering. Each game world is intended to have many simulation servers, each simulating a separate piece of the land. Now, things have been properly prepared for that.

Initially we wanted to use Lua for the sim server's scripting language, but now I am leaning more towards C. Lua has some advantages, but a separate scripting language also adds another layer of complexity - developers must learn two languages, and an API is required to communicate between the languages. And of course, C is more powerful. So, Lua on the sim server is gone with the rewrite. I plan on making the C scripts a separate module, so that in theory one could easily build two different versions of MUTA with completely different scripts, while still linking everything statically.

Interest lists

Interest management is the act of deciding what objects players receive what updates from, usually based on distance, being in a party, guild, or something else.

MUTA is a tile-based game, so initially I felt a grid-based approach would suit it naturally. It would save memory, too. The world was divided into cells of (IIRC) 16x16x8 tiles, and players only received updates for objects in their own and the 26 surrounding cells. As I was rewriting the master server (which handles interest management), I decided to revert this. My gut says that iterating through 27 cells each time something happens, some of which might be empty, is no good for performance.

Interest lists are data structures that list any players that are interested in a particular object. Currently the only object types we have are players and static objects, the latter of which cannot be updated, so it is only players who get these lists assigned to them. The old-style grid is still there, but it's only used to update interest lists when objects move. So, if the master server's object view distance is 16 tiles, any objects within that distance from a player will be on that player's interest list, unless they're hidden by a spell or something. And if that player, say, casts a spell, we can walk through the player's interest list and only send the casting update to the players on it.
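As a toy illustration of the idea (made-up structures, not MUTA's actual code), building one player's interest list boils down to collecting everyone within view distance:

```c
#include <stdlib.h>

#define VIEW_DISTANCE 16

typedef struct {int x, y;} position_t;

/* Chebyshev distance: number of tiles when diagonal steps count as one. */
static int chebyshev(position_t a, position_t b)
{
    int dx = abs(a.x - b.x);
    int dy = abs(a.y - b.y);
    return dx > dy ? dx : dy;
}

/* Fills 'list' with the indices of all other players within view distance
   of player 'self'. Returns the number of entries written. The real thing
   updates lists incrementally via the grid instead of scanning everyone. */
static int build_interest_list(const position_t *players, int num_players,
    int self, int *list)
{
    int n = 0;
    for (int i = 0; i < num_players; ++i) {
        if (i == self)
            continue;
        if (chebyshev(players[i], players[self]) <= VIEW_DISTANCE)
            list[n++] = i;
    }
    return n;
}
```

When the player casts a spell, the update is then sent only to the indices in `list`.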

Precompiled dependencies

I hate 3rd party dependencies. I mean, I like the fact they make development easier, and I love the people who make them. But they are often annoying to manage.

To combat complexities like compiling/installing big libraries whose version might not even be right, I have precompiled or downloaded and added into our Git repository all of the 3rd party libraries we use on Windows.

I'm hoping to do so on GNU/Linux soon also, though it's a little more complicated there due to the variety of different systems. Unfortunately for now, we still have some dependencies that are in the repo as Git dependencies (from external repositories), some of which even require terrible build systems such as CMake to build.

Content and gameplay, coming?

With so much base-building behind, I'm feeling pretty confident about getting to gameplay programming fairly soon (that's what they all say though, right?). Programming of course isn't enough to make good content. In particular, the game still lacks graphics, so I'm on the lookout for someone capable of creating art true to the spirit of the game's world. We'll see about that.

Northern Game Summit 2019

28.9.2019 21:35:08

The yearly Northern Game Summit was once again held last Thursday here in Kajaani. It was a fun time as always.

This year the event was held at a local night club rather than the traditional Biorex movie theatre. I heard some complaints about this arrangement, but I think it turned out quite alright. Maybe a minor part of the feeling of the grandness of the event was lost, but then again, NGS is intended to provide a chance for developers to connect with other developers, and students to connect with them in turn. The night club setting worked, I think, quite well for that. In terms of practical complaints, the only one I heard after the event was that the speeches were difficult to follow due to the layout of the venue, and the placement of the screens.

The 2019 speaker list was special in that I'm pretty sure all of the speakers had some previous connection to Kajaani, many having worked at the local university of applied sciences or a company located in town, which was cool. As every year so far, I was disappointed in the low number of technical topics, but the other speeches were still useful, some especially to students, who are the most numerous target audience of NGS.

I've heard rumours of NGS struggling a bit financially. From the bottom of my heart, I hope KAMK and the other sponsors keep on supporting the event, of course alongside the many volunteers (whom I am very thankful towards despite not volunteering myself). Kajaani has managed to build a significant game development community considering the city's population and location, not in small part due to the game development programmes offered by the local UAS. Some people have worked hard to achieve this state, with many active companies in the area and new eager developers graduating every year. Northern Game Summit has an important place in keeping Kajaani a friendly town for devs, and it deserves to be kept well and alive.

OK hand gesture considered harmful

28.9.2019 20:56:27

It has come to our attention that, according to some, the OK hand gesture is now associated with hate groups. We use the sign in our current logo and do not associate it with any particular ideology. Consider this a sort of protest against suddenly changing the meaning of a commonly used hand sign if you want to see a deeper meaning behind it.

Anecdote: "multithreading" bug

27.8.2019 17:26:37

Last night while working on MUTA's login server I ran into a strange-seeming bug with the following piece of code.

if (res)
   mysql_free_result(res),
event_push(&com_event_buf, &new_event, 1);

In the snippet, event_push() is intended to push an event to another thread. But the other thread appeared to not be receiving the event. I tried debugging with GDB and, of course, debug printing. I noticed that while the above snippet did not work, the below one, where I have seemingly only added a debug print statement, did.

if (res)
   mysql_free_result(res),
DEBUG_PRINTFF("num_events: %d\n", com_event_buf.num_events);
event_push(&com_event_buf, &new_event, 1);

Thinking too highly of myself as most programmers do, I of course thought: I'm doing nothing wrong here, so GCC must be at fault. I have turned optimizations off, but it's still doing something weird! I made everything from function parameters to the event buffer's members volatile and tried various other tweaks to no avail.

The next morning my own stupidity hit me as I tried to run the same code compiled with MSVC. Same bug, same working solution.

Then I noticed something. That something was a missing semi-colon, accidentally replaced by a comma.
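For the curious, here's a minimal demonstration of why that comma matters: the comma operator fuses both calls into a single expression statement, so the second call silently becomes part of the if-body and never runs when the condition is false. The fake_* functions here just stand in for mysql_free_result() and event_push():

```c
static int freed, pushed;

static void fake_free(void) {++freed;}
static void fake_push(void) {++pushed;}

static void buggy(int res)
{
    if (res)
        fake_free(),  /* comma, not semicolon! */
    fake_push();      /* looks unconditional, but isn't */
}

static void fixed(int res)
{
    if (res)
        fake_free();
    fake_push();      /* now truly unconditional */
}
```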

Generic hashtable in C (2): the final API

14.8.2019 18:00:23

Some time back I wrote a generic hashtable in C. Since that post, the API has seen some iteration as I've been using the code in my projects.

This hashtable heavily relies on macros that call type-generic functions which modify struct members based on their offsets and do other evil stuff of that sort, all in a way that isn't necessarily completely standard-conforming nor the most efficient way to do things. In exchange, it accepts any kind of data as keys or values, and accepts any hash function that returns a size_t. Below's a short description of the final API that differs a little from the one described in the original post.
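For example, a plain FNV-1a over the key's raw bytes has the expected shape (this is an illustrative hash, not necessarily the one the library ships with as hashtable_hash):

```c
#include <stddef.h>
#include <stdint.h>

/* FNV-1a: XOR each byte into the state, then multiply by the FNV prime.
   Any function returning a size_t works as the table's hash. */
static size_t fnv1a_hash(const void *data, size_t size)
{
    const unsigned char *p = data;
    size_t h = (size_t)14695981039346656037ULL; /* FNV-1a offset basis */
    for (size_t i = 0; i < size; ++i) {
        h ^= p[i];
        h *= (size_t)1099511628211ULL;          /* FNV prime */
    }
    return h;
}
```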

Initialization

hashtable(uint64_t, uint32_t) table;
int err;
hashtable_init(table, 16, &err);
if (err)
   /* Handle error */

Insertion

int         err;
uint64_t    key     = 12345;
uint32_t    value   = 54321;
size_t      hash    = hashtable_hash(&key, sizeof(key));
hashtable_insert(table, key, hash, value, &err);
if (err)
   /* Handle error */

Searching

uint64_t    key     = 12345;
size_t      hash    = hashtable_hash(&key, sizeof(key));
uint32_t    *value  = hashtable_find(table, key, hash);
if (value)
   /* Do something with value. */

Erasing

uint64_t    key  = 12345;
size_t      hash = hashtable_hash(&key, sizeof(key));
hashtable_erase(table, key, hash);

Iteration

uint64_t key;
uint32_t value;
hashtable_for_each_pair(table, key, value) {
   /* Do something with key and value */
}

Strings and other complex data types

This hashtable saves a copy of each key in its entirety. For non-fixed-size types, three functions must be defined: compare_keys, copy_key and free_key. Below is an example of insertion into and erasure from a string-int table.

Allocation and copying function examples

int compare_keys(const void *a, const void *b, size_t size)
   {return strcmp(*(const char**)a, *(const char**)b);}

int copy_key(void *dst, const void *src, size_t size)
{
   size_t len = strlen(*(const char**)src);
   *(char**)dst = malloc(len + 1);
   if (!*(char**)dst)
       return 1;
   memcpy(*(char**)dst, *(const char**)src, len + 1);
   return 0;
}

void free_key(void *key)
   {free(*(char**)key);}

Insertion

char    *key    = "one";
size_t  hash    = hashtable_hash(key, strlen(key));
int     value   = 1;
hashtable_insert_ext(table, key, hash, value, compare_keys, copy_key,
   &err);
if (err)
   return -1;

Erasing

char    *key = "one";
size_t  hash = hashtable_hash(key, strlen(key));
hashtable_erase_ext(table, key, hash, compare_keys, free_key);

Typesafe function declarators

Finally, there's a couple of convenience macros for defining type-safe inline functions for tables of a given type. Their purpose is to reduce the need to type in all of the macro parameters every time as there can be quite a lot of them otherwise.

/* Define a struct named struct str_int_table and a set of functions for it */
hashtable_define_ext(str_int_table, const char *, int, compute_hash,
   compare_keys, copy_key, free_key);

...

struct str_int_table table;

if (str_int_table_init(&table, 8))
   /* Handle error */

if (str_int_table_insert(&table, "one", 1))
   /* Handle error */

Might still be bugs in the code, but I'll fix 'em as I come across 'em.

hashtable.h
hashtable.c
git

Crossplatform makefiles (Nmake and GNU Make)

5.8.2019 19:49:37

Here's a handy little cross-platform makefile tip I came across recently. The original tip is from user Bevan Collins on StackOverflow. Thanks to him! This tip is useful if you need to build stuff using Nmake on Windows and GNU Make on whatever other platform(s) you support. Heck, it may work with other make implementations, too.

We're going to need 3 files in the same directory:

  • nmake.mk
  • gnu.mk
  • Makefile

Now start your Makefile with the following lines.

# \
!ifndef 0 # \
include nmake.mk # \
!else
include gnu.mk
# \
!endif

Next, place any Nmake-specific variables in nmake.mk and GNU Make specific variables in gnu.mk. You'll want to at least define a variable for the path separator (slash or backslash) and use it for any file paths (I just name the variable 'S').

  • For *nix: S = /
  • For Windows: S = \\
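As an illustration (the variable names and file contents are hypothetical beyond S itself), each .mk file holds its implementation's variables, and the shared Makefile then uses $(S) wherever a path separator is needed:

```make
# gnu.mk (read only by GNU Make):
#   S = /
# nmake.mk (read only by Nmake):
#   S = \
#
# Shared Makefile, after the include trick at the top:
main: src$(S)main.c
	$(CC) src$(S)main.c
```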

Finally, place all your build targets in the Makefile. Done!

Bevan's include code, as he explains, works because GNU Make recognizes the symbol \ as a line continuation even in comments, whereas Nmake doesn't.

Why can't we just use ifdefs, you ask? That's because conditional statements use different keywords in Nmake and GNU Make: Nmake directives start with an exclamation mark whereas GNU Make ones don't.

The unfortunate thing about this approach, in comparison to two separate, platform-specific makefiles, is that you won't be able to benefit from parallelization on Windows if you simply build the same targets on both platforms. While GNU Make has the -j flag to process multiple targets in parallel, Nmake lacks similar functionality. If you want to create parallel builds using cl (the Microsoft compiler) on the command line, you must do so at the compiler level (/MP), which means the makefiles' targets will inevitably look a little different. If your project isn't too big, though, you're unlikely to benefit much from parallel building on Windows - I'd rather take the maintainability of a single Makefile. And I still get to keep away from awful, overly complicated build systems like CMake.

Trying out a different desktop environment

30.6.2019 14:28:44

I'm a fan of tiling window managers on GNU/Linux. There's something real comfy about using your graphical desktop environment with nothing but your keyboard. I find the paradigm is excellent especially for programming, where you often have multiple terminal windows open at the same time. Now, GNU Screen and tmux exist to split up the terminal into multiple windows, but they're a bit of a hassle and come with their own problems. A desktop environment usually automatically supports multiple monitors and virtual desktops, and has sane default keybindings to work with, with the added benefit that you can also tile any non-terminal programs.

i3 has been my window manager of choice for many years now, and it has worked well. I love the workflow I've developed for it. While programming, I usually have...

  • One virtual desktop for Vim.
  • One virtual desktop for compiling and running (2 terminals).
  • One virtual desktop for Firefox and other graphical programs.
  • Possibly one virtual desktop for running multiple programs after compilation at the same time.
  • One virtual desktop on a secondary monitor with IRC, a music player and possible other chat programs.

Typically I spend most of my time in Vim and compiling/running the program I'm currently developing - two different virtual desktops. i3, like many window managers, has great, intuitive virtual desktop support.

i3 vs spectrwm

Having used the same desktop environment for such a long time, I felt like maybe I should try out something new for a change, you know, to give the old brain a tickle of sorts, and also to see if there's optimizations to be made in my workflow. Sometimes you just get stuck doing the same thing because you've always done it that way, right? So, I tried a few different window managers.

Of the ones I tried, the only tiling WM I ended up liking was spectrwm. It's quite similar to i3, but there were a couple of features about it I really liked:

  • The default config is very sane and good-looking.
  • No mouse required (like i3).
  • It feels light. For some reason, opening windows felt faster than i3 - in i3, you can sometimes kind of see the window resizing after it's been opened. Placebo? I don't know!
  • The bottom bar configuration scheme with a single shell script is awesome.

Especially the bottom bar configuration is a feature I like. In spectrwm, the bottom bar's text is the output of a user-defined shell script. You can print just about anything there - just write a bash script with a timed while loop in it, print out the current time, network status, etc., and you have a nice-looking bottom bar with all your important info on it. What would make this even nicer is if there was a way to call into spectrwm to refresh the script - that way, when certain events happen, the script could be updated immediately rather than in the next iteration of the timed loop. Maybe there already is a way and I'm ignorant, I don't know. Of course, there are more complicated ways of doing such a thing in bash, but I don't think I'm up for that much work.

All in all, I really like spectrwm, but it has one feature that makes it bad for my desktop use: with a dual-monitor setup, selecting a virtual desktop pulls it to your current monitor, so even if I created the desktop on the right monitor, it may end up on the left monitor later by accident. I find this very distracting, and if there's a way to disable the behaviour, I'd appreciate hearing of it.

After all that, I am keeping i3 on my desktop machine, but on my laptop, I've moved to spectrwm, since it only has a single monitor and thus doesn't suffer from the aforementioned desktop-monitor-switching feature. It hasn't really changed my workflow, but feels faster and looks slicker by default. Besides, the configuration scheme with the shell scripts and all makes me childishly feel like a bit of a hacker.

MUTA devlog 5: solo dev, client entity system, master server rewrite, world database server...

25.6.2019 18:05:54

It's been a while since the last one of these, but now as we're celebrating midsummer and nightless nights here in Finland, I think it's time for me to sit down and write a bit of a catch-up post about what's going on with MUTA, the Multi-User Timewasting Activity.

Officially developing an MMORPG solo

To recap, MUTA is a free and open source MMORPG project started in 2017 as a student project. The idea for it came from myself and another Kajaani UAS student, Lommi, as we had both been talking of writing an MMO for a while. In fact, wanting to write an MMO was the primary reason I personally applied to the school's game development programme.

We spent two full project courses, each 2-3 months long, on writing the game. During that time we got much assistance from many different people: game art students created art, programming students helped us with some great tools and certain parts of the engine, and production students helped us get organized.


MUTA after 2 months of development, during a test with around 20 people online. The 64x64 art at this point was created by three Singaporean summer school students.

After (and between) said courses, it was just me and Lommi working on the game. I'm more of an engine guy, he's more of a gameplay guy, though neither of us is one thing exclusively. While it initially only took us two and a half months to get an OK-looking demo game running, writing an MMO engine and toolset properly takes a lot of time. And while such tools are in development, content creation is difficult, if not impossible. The lack of visible progress during a technology development phase such as this, I think, is problematic for people who are primarily driven by gameplay and visuals, and so after a while, I was mostly working on the game alone, always telling my friend I would try to get the engine and tools ready for content-development ASAP.

Last month, Lommi finally told me he felt the project was too big for him to work on it on the side at this point (he's also employed full-time and has been for quite a while). That leaves me as the only official developer of MUTA. But that's not such a big change after all: likely more than 90% of the current codebase was already written by me alone at this point, as Lommi has not really been involved during the last year or more.

In fact, it's a bit of a relief for me. I no longer need to worry about getting the game ready for others to develop content for it. I'm working a full-time job and try to spend the time I can on MUTA, but often time is simply hard to come by.

With this organizational shift in mind, I have some more changes coming up. I'm planning on reworking the theme of the game and possibly making the art myself. I'm a terrible artist, so it will probably come to simple indie pixel crap. But that's alright, the gameplay is the important part. As for the theme, I'm planning on simple high fantasy, due to the fact I'm not good enough of a visual artist to present a universe anything like what we originally planned for - the original idea was a sword-and-sorcery type world inspired by the works of Robert E. Howard. I'm of the opinion that the theme should support the gameplay and not the other way around. However, I don't want to see fireballs flying all over in the style of Warcraft either - it's gonna be something lower-key than that.


Character "art" from my first-ever game project, Isogen.

For now, MUTA remains a hobby project I try to pour much of my free time into. Time will tell what it actually evolves into, but I've got high hopes that one day it will be a real, online MMORPG. If not that, at least the code will be available for anyone to inspect.

Code changes

Phew, it's been many months since I last wrote about MUTA, so a lot of things have changed in the codebase, and some of them I don't even remember anymore. Some of the changes include (in a semi-chronological order):

  • Reworking of the immediate mode GUI into a standalone library.
  • Shared code cleanup (mostly just renaming things and organizing them into files).
  • Moving to MojoAL from OpenAL Soft on at least GNU/Linux. I don't know how great of an idea this is, but MojoAL is easy to embed into the project, having only two source files as opposed to OpenAL Soft's CMake hell.
  • New entity system for the client.
  • Rewriting the client's rendering.
  • Completely rewriting the master server.
  • Writing a world database server.
  • Writing a new async database API.
  • packetwriter2 tool for network message serialization.
  • Shared API and authentication for server-server connections (svchan_server and svchan_client).
  • New generic hashtable written as a separate library.

Client entity system

The entity system on the client needed a rework. It was something of an entity-component system (and I know how pretentious that term is) and remains so. This job had two distinct motivations:

  • Making the code clearer
  • Performance

I feel like both goals were achieved. First of all, the code needed breaking into more source files, as previously the whole world code was in a single file (I feel this sort of isolation is more future-proof for this project), but second, I really wanted fewer weird macros and more flexibility in component implementation. To recap the new system:

  • An entity has a fixed-size array of components.
  • Each component has an enumerator it's referred to with. The enum is an index into an entity's component array.
  • Components in an entity generally point to a handler structure. The component might have an iterable element in a tight array associated with it, but this is not visible to the component's user - they access it through a set of functions.
  • Components communicate mainly through events (callbacks).

Components are defined by creating an instance of a static-duration component_definition_t struct.

struct component_definition_t {
   int (*system_init)(world_t *world, uint32 num_components);
   void (*system_destroy)(world_t *world);
   void (*system_update)(world_t *world, float dt);
   component_handle_t (*component_attach)(entity_t *entity);
   void (*component_detach)(component_handle_t handle);
   entity_event_interest_t     *entity_event_interests;
   uint32                      num_entity_event_interests;
   component_event_interest_t  *component_event_interests;
   uint32                      num_component_event_interests;
   uint32                      index; /* Autofilled, initialize to 0! */
   component_event_callback_t  *component_event_callbacks; /* Autofilled, initialize to 0! */
};

So the component_definition_t structure is really just a set of callbacks. Components are also pooled, but the pools are members of world instances, hence not visible in the above example (the functions just accept a pointer to a world_t, as seen).

Using a component definition, components can be added to an entity and then manipulated.

component_handle_t entity_attach_component(entity_t *entity,
   component_definition_t *component_definition);

The component handle returned by entity_attach_component can be used to access the component. It could be laid out in memory in various ways - the API does not set restrictions on this, except that the handle must be a constant address pointer until destruction.

void mobility_component_set_speed(component_handle_t handle, float speed);

Component event callbacks are attached to the component definitions rather than individual components. This does away with some flexibility, but saves memory and likely performs better in the average case, since in MUTA, certain sets of components in a single entity type are very common (creatures have a certain set of components, players another, etc.) The callbacks get called mostly immediately when a component fires an event. An example use case of events would be animations: when the mobility component fires a "start move" event, the event can trigger the animation component to start playing a different animation.

Rewriting the client's rendering

This one's a pretty simple one. Tile depth sorting was moved to the GPU, and with the new entity system, entity rendering was also changed.

Previously, the world rendering system walked through each rendering component in the world every frame, looked up the entity's position from a separate memory address, then decided whether to cull it or not, and so on. In the new system, positions are cached in more CPU cache-friendly structures. For example, when an entity moves, an event is fired to its rendering component, and the rendering component logic culls the entity and caches its position in an array of render commands. The array of render commands is iterated through every frame to draw all visible entities - render commands contain all the necessary data to place the entity's sprites properly on the screen.
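As a rough sketch of the render command idea (the names and fields below are my own illustration, not the client's actual code), the event handler culls first and only stores a command for visible entities; the per-frame draw loop then walks a tight, contiguous array:

```c
#include <stdint.h>

#define MAX_RENDER_COMMANDS 1024

/* One cached draw: everything needed to place a sprite on screen. */
typedef struct {
    uint32_t sprite_id;
    float    screen_x, screen_y;
    float    depth;
} render_command_t;

static render_command_t render_commands[MAX_RENDER_COMMANDS];
static int              num_render_commands;

/* Called from a rendering component's move-event handler: cull against the
 * view bounds, then cache a command. Returns 1 if a command was stored. */
static int cache_render_command(uint32_t sprite_id, float x, float y,
    float depth, float view_min_x, float view_max_x)
{
    if (x < view_min_x || x > view_max_x)
        return 0; /* Culled: not visible, no command stored */
    if (num_render_commands >= MAX_RENDER_COMMANDS)
        return 0;
    render_commands[num_render_commands++] =
        (render_command_t){sprite_id, x, y, depth};
    return 1;
}
```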

Master server rewrite

The master server is the authoritative part of a single MUTA shard/world. It knows all the entities in the world and generates unique IDs for everything. Multiple simulation servers connect to the master server, each one of them simulating different parts of the world map.

While writing MUTA's proxy server (which I also wrote my bachelor's thesis about), I felt like I finally "got" how I want to do multithreading with servers: handle all state on one thread and have other threads post events to that thread. The event loop works with a wait function akin to poll. Basically, an event-based approach.

Since then, I've been wanting to rewrite the rest of MUTA's server applications to use a similar architecture. To explain a little, the below table displays the programs that make up the server side software.

                                                                                                                                                                                                                                                       
Program                 Directory name in repo              Event-based?
Master                  server                              No
Simulation Server       worldd                              No
Login Server            login-server                        Yes
Proxy Server            proxy                               Yes
Old database server     db-server                           No
World database server   world_db (server_rewrite branch)    Yes

Don't worry about the discrepancies in subproject naming conventions; I do have a plan for them now, believe it or not. It's just that the plan keeps on changing...

For the master server, an architecture change means that the main loop will no longer only run at a fixed rate: it will also be able to respond to events immediately, using blocking event queues. This is achieved with a structure akin to the pseudo-code example below.

int target_delta    = 17; /* Milliseconds */
int last_tick       = time_now();
int time_to_wait    = target_delta;
for (;;) {
   event_t events[64];
   int num_events = event_wait(events, 64, time_to_wait);
   for (int i = 0; i < num_events; ++i)
       _handle_event(&events[i]);
   int delta_time = time_now() - last_tick;
   if (delta_time < target_delta) {
       time_to_wait = target_delta - delta_time;
       continue;
   }
   update(delta_time);
   last_tick    = time_now(); /* Start measuring the next tick */
   time_to_wait = target_delta;
}

Changing the architecture has meant a rather large amount of refactoring, as it affects nearly all systems on the master server. Since this has largely meant a complete rewrite, I have taken to also rewriting some of the systems into a mold I feel is better suited for the future. For example, the world/entity system that's used to control player characters, creatures and other game objects, is being completely written from scratch in the server_rewrite Git branch. The world API contains functions such as the ones below.

uint32 world_spawn_player(uint32 instance_id, client_t *client,
   player_guid_t id, const char *name, int race, int sex, int position[3], int direction);
void world_despawn_player(uint32 player_index);
int world_player_find_path(uint32 player_index, int x, int y, int z);
Calls such as the ones above are asynchronous in nature, as they involve the simulation server instances connected to the master server. Hence, I've been thinking of reworking them in such a way that they would accept a callback, "on_request_finished" (or whatever). That would be alright for code clarity, but it would also involve some memory overhead. The alternative is to handle finished requests inside the world API itself, meaning it will have to call back to some other APIs that called it.

You know, I'm constantly pondering where the line of abstraction should lie: tight coupling isn't pretty, but abstraction often comes at a great programmatic resource cost. In the above case, there's little reason to create a data structure for saving callbacks and their user data (void pointers) if there's really only one logical path the code can take when a response arrives. I try not to fall into the trap of "OOP" and "design patterns" just for the sake of such silly things, but at the same time, sometimes I have an engineer's urge to overengineer. Usually I end up with the more practical, less abstraction-based approach. After all, I know every dark corner of my own program, or so I at the very least believe.
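For what it's worth, the callback variant could be sketched like this - a small table of pending requests, each carrying a callback and its user data. Everything here (names, sizes, signatures) is hypothetical, not the world API's actual code.

```c
#include <stdint.h>

typedef void (*spawn_finished_cb_t)(int error, void *user_data);

/* One pending spawn request, waiting for a simulation server's response. */
typedef struct {
    uint32_t            request_id;
    spawn_finished_cb_t on_finished;
    void               *user_data;
    int                 in_use;
} pending_spawn_t;

#define MAX_PENDING_SPAWNS 64
static pending_spawn_t pending_spawns[MAX_PENDING_SPAWNS];

/* Register a request; returns its slot index, or -1 if the table is full. */
static int spawn_request_begin(uint32_t id, spawn_finished_cb_t cb, void *ud)
{
    for (int i = 0; i < MAX_PENDING_SPAWNS; ++i) {
        if (!pending_spawns[i].in_use) {
            pending_spawns[i] = (pending_spawn_t){id, cb, ud, 1};
            return i;
        }
    }
    return -1;
}

/* Called when the simulation server's response arrives: free the slot and
 * fire the stored callback. */
static void spawn_request_complete(uint32_t id, int error)
{
    for (int i = 0; i < MAX_PENDING_SPAWNS; ++i) {
        if (pending_spawns[i].in_use && pending_spawns[i].request_id == id) {
            pending_spawns[i].in_use = 0;
            pending_spawns[i].on_finished(error, pending_spawns[i].user_data);
            return;
        }
    }
}

/* Example callback: writes the error code into the caller's int. */
static void example_on_spawn_finished(int error, void *user_data)
{
    *(int*)user_data = error;
}
```

The memory overhead mentioned above is exactly this table plus the void pointers; skipping it is viable precisely when there is only one possible handler.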

The rewrite has taken about two months now and I think it will still take some more time, partly because at the same time I must make changes to other programs in the server side stack as well. At the same time, new programs are coming in, such as the world database server. It will be interesting to see how things will work out when the server starts up again for the first time... Well, maybe frustrating is a more appropriate word.

World database server

The world database server is a new introduction to the server side stack. Previously, MUTA had a "db-server" application, but there was no separation between individual world/shard databases and account databases - now, that separation is coming.

The WDB is an intermediate program between the MySQL server and the MUTA master server. Its sole purpose is to serve requests made by master servers through a binary protocol while caching relevant results. The intention is that this becomes the de facto way to access a single shard's database.

There's an asynchronous API associated with the WDB. It consists of a set of functions, each of which performs a specific query. The query functions also take in callbacks as parameters - the callbacks are called when the query completes or fails.

wdbc_query_id_t wdbc_query_player_character_list(
   void (*on_complete)(wdbc_query_id_t query_id, void *user_data, int error,
       wdbc_player_character_t *characters, uint32 num_characters),
   void *user_data, account_guid_t account_id);

Not much else to say about it right now... It's event-based like the rest of the newer applications on the server side. Will keep working on it!

packetwriter2

Back when the MUTA project was started, Lommi wrote an application called MUTA_Packetwriter for network packet serialization. It generates C code (structs and serialization functions) from a simple file format in which the fields of each network packet are defined.

Lommi's tool has saved us countless hours of writing boilerplate code and debugging it. But now that he is no longer working on the project, and many new packets are making their way into the protocol with new applications such as the world database server coming, I've deemed it necessary to write a new version of the program.

packetwriter2 will use a new, simpler file format, the ".def" format used in many of MUTA's data files. It will support features I've wanted for a long time, such as arrays of structs and nested structs. I've started writing the parser, and below is an example of the file format.

include: types.h
include: common_defs.h

group: twdbmsg
   first_opcode = 0

group: fwdbmsg
   first_opcode = 0

struct: wdbmsg_player_character_t
   query_id    = uint32
   id          = uint64
   name        = int8{MIN_CHARACTER_NAME_LEN, MAX_CHARACTER_NAME_LEN}
   race        = int (0 - 255)
   sex         = int (0 - 1)
   instance_id = uint32
   x           = int32
   y           = int32
   z           = int8

packet: fwdbmsg_reply_query_player_character_list
   __group     = fwdbmsg
   query_id    = uint32
   characters  = wdbmsg_player_character_t{MAX_CHARACTERS_PER_ACC}

Along the way, I think the encryption scheme needs a rework, too. Not the basic algorithms behind it (MUTA uses libsodium for those), but the fact that currently, messages are encrypted on a per-message-type basis. Being able to turn encryption on and off in the stream would save bandwidth and improve performance, as multiple messages could be encrypted as a single block.
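One way to toggle encryption mid-stream would be a small frame header carrying an "encrypted" flag and a payload length, so several messages can be packed and encrypted as one block. This is just a sketch of the idea from the paragraph above, not MUTA's current or planned wire format.

```c
#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint8_t  encrypted;   /* 0 = plaintext payload, 1 = encrypted payload */
    uint16_t payload_len; /* Length of the payload that follows the header */
} frame_header_t;

/* Serialize the header into a byte stream (little-endian length).
 * Returns the number of bytes written. */
static size_t frame_header_write(uint8_t *out, const frame_header_t *h)
{
    out[0] = h->encrypted;
    out[1] = (uint8_t)(h->payload_len & 0xff);
    out[2] = (uint8_t)(h->payload_len >> 8);
    return 3;
}

/* Deserialize a header from a byte stream. Returns bytes consumed. */
static size_t frame_header_read(const uint8_t *in, frame_header_t *h)
{
    h->encrypted   = in[0];
    h->payload_len = (uint16_t)(in[1] | ((uint16_t)in[2] << 8));
    return 3;
}
```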

Keepin' busy

Honestly, I've been having a tough time scraping up enough time to work on MUTA after starting my current job. Also, turns out motivating myself is difficult if I don't constantly have something to prove to someone (you know, like progress reports to people you know in real life and stuff). Guess I should go and get one of those productivity self-help books soon.

A generic hashtable in C

2.3.2019 23:58:15

Whenever I've implemented hashtables in C in the past, the way I've made them "generic" has been to create macros that declare new types and sets of functions for the tables wanted. For example, taken from the MUTA source code:

#define DYNAMIC_HASH_TABLE_DEFINITION(table_name, value_type, key_type, \
   hash_type, hash_func, bucket_sz) \
   ... \

...

DYNAMIC_HASH_TABLE_DEFINITION(str_int_table, int, char *, uint32,
   fnv_hash32_from_str, 4);

The above macro would declare a new type, str_int_table_t, and functions such as str_int_table_init(), str_int_table_insert() and str_int_table_erase().

With such macros, it's a bother to have to invoke the macro before using any new type of hashtable. However, the stb library, for example, implements easier-to-use generic dynamic arrays with macro trickery, so why not try the same with hashtables?

Turns out working with a more complex structure like a hashtable takes a little more work than dynamic arrays. We cannot automatically create versions of a single function for different data types in C like we could with C++ templates, so the main problem is passing compile-time data such as struct member types and sizes to functions. And we do want to use functions because we will need complex statements such as loops - such statements do not (very cleanly at least) fit inside plain macros.

The above paragraph might not have made immediate sense, so I'll attempt to clarify my point in practice. Below is the new hashtable declaration macro.

#define hashtable(key_type, value_type) \
   struct { \
       struct { \
           struct { \
               key_type    key; \
               value_type  value; \
               size_t      hash; \
               uint8_t     reserved; \
           } items[HASHTABLE_BUCKET_SIZE]; \
       } *buckets; \
       size_t num_buckets; \
       size_t num_values; \
   }

Below is how it's used.

/* Declare a hashtable called 'table' whose key type is char * and value type
* is int. */
hashtable(char *, int) table;

As may be observed, the type of the resulting struct is anonymous, meaning it cannot even be passed to a function as a void * and then cast back to its original type. So how do we do what we need to do? Below is the hashtable_insert_ext() macro and the signature of the _hashtable_insert() function the macro calls.

#define hashtable_insert_ext(table_, pair_, compare_keys_func_, \
   copy_key_func_, ret_err_) \
   (table_.buckets = _hashtable_insert((ret_err_), \
       (uint8_t*)table_.buckets, &table_.num_buckets, &table_.num_values,        \
       sizeof(*table_.buckets), sizeof(table_.buckets[0].items[0]), \
       _hashtable_ptr_offset(&table_.buckets[0].items[0], \
           &table_.buckets[0]), \
       _hashtable_ptr_offset(&table_.buckets[0].items[0].key, \
           &table_.buckets[0].items[0]), \
       _hashtable_ptr_offset(&table_.buckets[0].items[0].value, \
           &table_.buckets[0].items[0]), \
       _hashtable_ptr_offset(&table_.buckets[0].items[0].hash, \
           &table_.buckets[0].items[0]), \
       _hashtable_ptr_offset(&table_.buckets[0].items[0].reserved, \
           &table_.buckets[0].items[0]), \
       &pair_.key, sizeof(pair_.key), pair_.hash, &pair_.value, \
       sizeof(pair_.value), compare_keys_func_, copy_key_func_))

void *_hashtable_insert(int *ret_err, uint8_t *buckets, size_t *num_buckets,
   size_t *num_values, size_t bucket_size, size_t item_size,
   size_t items_offset, size_t key_off1, size_t value_off1, size_t hash_off1,
   size_t reserved_off1, void *key, size_t key_size, size_t hash, void *value,
   size_t value_size,
   int (*compare_keys)(const void *a, const void *b, size_t size),
   int (*copy_key)(void *dst, const void *src, size_t size));

Yep, instead of passing in a struct to _hashtable_insert(), we pass pointers to each of the struct's relevant members. Sizes of some data types are also passed, and finally, struct offsets using the macro _hashtable_ptr_offset(). That allows us to modify everything at the byte level, which we do inside _hashtable_insert(). As a side note, the function callbacks passed are there to allow for dynamic types such as copyable strings.
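I'd assume _hashtable_ptr_offset boils down to a byte-distance computation between two addresses, something like the definition below (an assumption on my part - the real macro may differ). Shown with a small example struct:

```c
#include <stdint.h>
#include <stddef.h>

/* Byte distance from base_ to member_: recovers a struct member's offset
 * without the function ever knowing the struct's type. */
#define _hashtable_ptr_offset(member_, base_) \
    ((size_t)((const uint8_t*)(member_) - (const uint8_t*)(base_)))

/* Example struct: the offsets of key and value are what a generic insert
 * function would need to address them through a raw byte pointer. */
struct example_item {
    uint32_t key;
    int      value;
};
```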

One may wonder if this is an efficient way to do things. I would say it probably isn't, not in terms of runtime efficiency anyway. But in terms of programmer efficiency, I think it might be the right choice until I come across a better solution. That's because with this implementation, doing things such as the following is super easy compared to what I used to have to do.

hashtable(uint32_t, int) table;
hashtable_init(table, 8, 0);

struct {
   uint32_t    key;
   int         value;
   size_t      hash;
} insert_data = {123, 5, hash_u32(123)};
hashtable_insert(table, insert_data);

struct {
   uint32_t    key;
   size_t      hash;
} find_data = {123, hash_u32(123)};
int *value = hashtable_find(table, find_data);
if (value) {
   ...
}

Okay, maybe it isn't the most intuitive API ever, but I like it far more than having to write type/function declaration macros like I used to. And for my hashtable use-cases, I'm willing to pay for the potential runtime-overhead.

Also, I didn't actually have a hashtable that stored the actual keys in MUTA before. While not storing keys is fine for assets and similar items to save on memory, for runtime-generated IDs a table like this is required.

I've been working on this over the weekend and it likely still has some bugs in it, but in case you're curious, the Git repository can be found here. I've also uploaded the current header and source files here and here, respectively. I'll keep on working on it and start using it in MUTA very soon.

Edit 3.3.2019
After a night of thinking I decided to change the API so that it would not require special structs created by the user. The reason I used them originally was an attempt to enforce a certain type-safety, namely preventing accidental casting of a pointer key to void * (since that can happen implicitly in C). An example of such a mistake would be when you try to pass in the address of a char *, but accidentally pass in the pointer's value rather than its address.

char *key = ...;
hashtable_insert(
   table,
   key, // Should be &key but no error is generated due to implicit cast to void * inside the macro
   &value,
   NULL );

I concluded that similar "type-safety" could be achieved by having the user pass references to variables directly instead of their addresses. The macros instead fetch the addresses themselves, and we should get a compilation error when certain mistakes happen. For example, if the user passes in a key as '&key', the macro formats it to '&&key', which results in an error. The API change should reduce the number of lines required to achieve the same result, and I think it could also prove to be clearer. Below is an example of the new API in use.

hashtable(uint64_t, uint32_t) table;
...
uint64_t    key     = 12345;
uint32_t    value   = 54321;
size_t      hash    = hashtable_hash(&key, sizeof(key));
hashtable_insert(table, key, hash, value, NULL);

I've also uploaded the newer source files here and here.
