Game Update April 16, 2018

My focus over the last week was to start using wfBase to persist game data. I ran into a situation where I needed to key some records off of a string. Without getting into too much detail, I had been working on wfBase’s table class, which is accessed via a numeric id field. wfBase has an index class that can be used for keying off of other data types, like strings, but during the refactoring of wfBase I had been mostly ignoring the index class. It still compiled, but I suspected it was broken.
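To give a rough idea of what I mean by keying off a string, here is a minimal sketch. The names (wfIndex, PlayerRecord, insert, find) are just placeholders for this post, not the actual wfBase API, and the in-memory map stands in for the on-disk index tree:

#include <cstdint>
#include <map>
#include <string>

struct PlayerRecord {
    std::uint64_t id;    // numeric id the table class keys off of
    std::string   name;  // string we want to look records up by
    std::int32_t  score;
};

class wfIndex {
public:
    // Map a string key to the numeric id stored in the table.
    void insert(const std::string& key, std::uint64_t id) { keys_[key] = id; }

    // Return the id for a key, or 0 if the key is not present.
    std::uint64_t find(const std::string& key) const {
        auto it = keys_.find(key);
        return it != keys_.end() ? it->second : 0;
    }

private:
    std::map<std::string, std::uint64_t> keys_;  // stand-in for the on-disk tree
};

With something like this, looking a record up by name becomes a two-step lookup: ask the index for the id, then ask the table for the record.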

I was correct: I had let a few logic bugs creep into the code. I have spent several days bringing the index class back up to speed, and for the most part I think I am there. I still need to write some more tests to make sure the index system is working as intended, but I am getting good results at the moment.

As part of refactoring the index class, I realized that I was keeping the entire index tree in memory. In my test case of writing a million records with a 50-byte key, the program was using 150 MB of memory. I added some code that flushes the index nodes every 1,000 updates, and now the same program uses only 20 MB. I thought that would slow things down, but the code actually ran a bit faster. When I flush a node, instead of deleting it I now add it to a node cache so it can be reused. That turned out to speed things up enough to counteract the cost of periodically having to reload nodes back into memory.
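The flush-and-reuse idea looks roughly like this. Again, the names here (IndexNode, NodeCache, writeNodeToDisk) are placeholders for the sake of the sketch, not the real wfBase internals:

#include <cstddef>
#include <unordered_map>
#include <vector>

struct IndexNode { /* keys, child offsets, etc. */ };

class NodeCache {
public:
    // Called after every index update; flush resident nodes periodically.
    void onUpdate() {
        if (++updates_ % kFlushInterval == 0) flushAll();
    }

    void flushAll() {
        for (auto& [offset, node] : resident_) {
            writeNodeToDisk(offset, node);   // persist the node
            freeList_.push_back(node);       // recycle instead of delete
        }
        resident_.clear();
    }

    IndexNode* acquire() {
        if (!freeList_.empty()) {            // reuse a flushed node
            IndexNode* n = freeList_.back();
            freeList_.pop_back();
            return n;
        }
        return new IndexNode();              // fall back to a fresh allocation
    }

private:
    static constexpr std::size_t kFlushInterval = 1000;
    std::size_t updates_ = 0;
    std::unordered_map<std::size_t, IndexNode*> resident_;
    std::vector<IndexNode*> freeList_;
    void writeNodeToDisk(std::size_t, IndexNode*) { /* stub for illustration */ }
};

Reusing the flushed nodes avoids a delete/new pair on every reload, which is where the speedup came from.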

Since I was already messing about with the code, I looked to see if there were any other ways to speed things up. I realized that I was spending a lot of time writing a node back to disk every time the node was updated, so I changed the code to delay those writes. Now I only write the changes when the nodes are flushed or the index is closed. This change basically doubled the performance of my index class. However, by delaying the write to disk I increased the chance of corrupting the index in situations like an unexpected power outage. To minimize that issue, I added code to check for corruption, and if the index looks to be corrupted I now rebuild it.
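Here is a rough sketch of the delayed write-back idea. A dirty flag per node defers the disk write until flush or close, and a clean/dirty marker in the file header is one simple way to notice an unclean shutdown and trigger a rebuild; the actual wfBase corruption check may work differently:

#include <vector>

struct IndexNode {
    bool dirty = false;
    // keys, child offsets, etc.
};

class DelayedIndex {
public:
    void updateNode(IndexNode& node) {
        // mutate the node in memory...
        node.dirty = true;            // remember it still needs to hit disk
    }

    void flush() {
        for (IndexNode* n : resident_) {
            if (n->dirty) { writeNode(n); n->dirty = false; }
        }
        markHeaderClean();            // header now says "closed cleanly"
    }

    void open() {
        if (!headerIsClean()) rebuildFromTable();  // e.g. power loss mid-write
        markHeaderDirty();            // clear the flag while the index is live
    }

private:
    std::vector<IndexNode*> resident_;
    void writeNode(IndexNode*) {}
    void markHeaderClean() {}
    void markHeaderDirty() {}
    bool headerIsClean() { return true; }
    void rebuildFromTable() {}
};

The trade-off is exactly what the numbers below suggest: far fewer disk writes per update, at the cost of a rebuild path for the rare case where the index was not closed cleanly.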

The following results are for writing and then reading 1 million records; each record contains 6 fields. Overall I think the test shows respectable results:

Writing 1 million table records with no index: 3.54 seconds
Reading 1 million table records with no index: 2.82 seconds
Writing 1 million table records with 50-byte index: 7.16 seconds
Reading 1 million table records by 50-byte index: 3.37 seconds
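For what it is worth, the timing harness behind numbers like these is nothing fancy; the sketch below shows its rough shape, with the actual table and index calls left as commented placeholders since they are not the real wfBase interface:

#include <chrono>
#include <cstdio>

int main() {
    using Clock = std::chrono::steady_clock;

    auto start = Clock::now();
    for (int i = 0; i < 1'000'000; ++i) {
        // table.write(makeRecord(i));    // write a 6-field record by numeric id
        // index.insert(makeKey(i), i);   // optionally key it by a 50-byte string
    }
    auto elapsed = std::chrono::duration<double>(Clock::now() - start).count();
    std::printf("Writing 1 million records: %.2f seconds\n", elapsed);
    return 0;
}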

I was going to show some comparisons to SQLite, but I decided not to spend the time learning SQLite well enough to build a fair comparison test. One of the main reasons for creating wfBase in the first place was so that I would have a library for persistent storage that just works no matter what platform I compile my code on. wfBase meets that need, and the speed I am getting is more than fast enough for my purposes.
