Edit:
Looking through the article, I found the following passage very interesting:
"One example of this would be using 'read uncommitted' when loading a solar system configuration: we know there won't be any inserts or updates to the data we are reading, so we can allow ourselves to read with no locking. Another trick is that we most often only need to let the user filter data within a day. For example, we do not need to let a user select all records from the player journal between 14:00 and 15:35; it is sufficient to allow filtering by date alone. In that case we simply keep track of the clustered key at 00:00 every day and use that in our queries. This means we don't need to index on date/time columns in every nook and cranny, making things faster and slimmer."

The reason this is interesting is that it implies they are doing some rather odd things to improve performance. The bit about "read uncommitted" is a way of getting around some of the limitations of older (pre-2005) versions of SQL Server. Database performance problems are often related to locking rather than raw speed. Using "read uncommitted" is basically saying: we are going to ignore locks for this transaction and get the data faster, even if that means occasionally reading faulty data. The things that keep DBAs up at night, like "ghost records" showing up in result sets, are often the result.
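To make the two tricks concrete, here is a rough T-SQL sketch. All table, column, and variable names here are my own invention; the article does not describe CCP's actual schema.

```sql
-- Trick 1: read static data with no locking at all.
-- Either set the isolation level for the whole session...
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM dbo.solarSystemConfig WHERE solarSystemID = @solarSystemID;

-- ...or use the per-table NOLOCK hint, which has the same effect:
SELECT * FROM dbo.solarSystemConfig WITH (NOLOCK)
WHERE solarSystemID = @solarSystemID;

-- Trick 2: filter the journal by day using the clustered key instead of
-- a datetime index. A small lookup table records the first clustered-key
-- value seen at 00:00 each day:
SELECT @dayStart = firstJournalID
FROM dbo.journalDayMarker WHERE markerDate = @theDay;
SELECT @dayEnd = firstJournalID
FROM dbo.journalDayMarker WHERE markerDate = DATEADD(day, 1, @theDay);

-- This becomes a range seek on the clustered index alone; no secondary
-- index on the datetime column is ever needed.
SELECT * FROM dbo.playerJournal
WHERE journalID >= @dayStart AND journalID < @dayEnd
  AND characterID = @characterID;
```

The payoff of trick 2 is that every "by day" query rides the index the table is already physically sorted by, instead of paying for an extra datetime index on every insert.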
It is particularly interesting since SQL Server has, since 2005, supported a different isolation model called read committed snapshot. This model uses row versioning in a similar way to how Oracle handles locking. That they are not using this model implies that they are either too scared to try it, or that they have tried it and consider the performance hit it entails (there is some, since the row versions are kept in tempdb) unacceptable.
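For what it's worth, turning this model on is a single database-level switch (the database name here is made up):

```sql
-- Requires exclusive access to the database while it flips the setting.
ALTER DATABASE eveOnline SET READ_COMMITTED_SNAPSHOT ON;

-- From then on, ordinary READ COMMITTED reads see versioned rows instead
-- of taking shared locks, so readers stop blocking behind writers without
-- any change to application queries.
```

The trade-off the comment alludes to is real: version rows have to be generated and stored, which costs write throughput and tempdb space.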
The other thing they mention is the clustered key on the date. This is interesting because it really stresses just how transaction-oriented the database must be. The solution they describe is the sort you need if you want extremely high transaction loads. By using a clustered index (which is what I assume they mean), you can save a lot on insert and update speeds if it is done correctly. Since SQL Server uses a simpler method for storing and organizing data records than Oracle does (I won't bore you with the details), this implies that SQL Server is in fact the ideal solution for EVE, despite what many on the forums have said.
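A minimal sketch of the table layout this implies, assuming an ever-increasing surrogate key (again, the schema is my guess, not CCP's):

```sql
CREATE TABLE dbo.playerJournal (
    journalID   BIGINT IDENTITY(1,1) NOT NULL,  -- only ever grows
    characterID INT       NOT NULL,
    entryDate   DATETIME  NOT NULL,
    amount      MONEY     NOT NULL,
    CONSTRAINT PK_playerJournal PRIMARY KEY CLUSTERED (journalID)
);

-- Because journalID is monotonically increasing, every insert lands at the
-- end of the clustered index: no page splits in the middle of the table,
-- which is exactly what you want under a very heavy insert load.
```

Clustering on a key that correlates with insertion time is also what makes the midnight-key trick above work: rows from the same day sit physically next to each other.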
Anyways I am a database geek so this sort of information is like crack for me.
I'm late finding this article and reading it, but better late than never. Besides, it should have at least one comment.
Your opinion about SQL Server as ideal for Eve caught my eye. I admit my opinion has been tainted by comments about SQL Server on the forums and in other blogs/forums not specific to Eve. There seem to be a lot of negative opinions about SQL Server, and I was ready to believe them. My experiences with Vista made it easy to believe MS could impair their DB product equally badly.
I'm a server and network geek, so this stuff is a magnet to me also. I'd love to know more details about their node allocation algorithms and design constraints/decisions. I've done a bit of work with load balancing in a web server environment, but Eve goes so far beyond that.
On load balancing, I was encouraged to read the recent dev blog about character nodes. It was interesting to read about adding a dedicated character node to the mix of existing nodes. It shows broad thinking about the lag problem: not just code problems like those in the module lag dev blog, but also a back-to-basics look at the work to be done and how it is allocated across nodes.
The Gamasutra article also reinforces what a win the single-shard environment of Eve is.