I am currently working on a C# application that will act as the server side of a multiplayer game, and I am slightly unsure how I should be handling multi-threading issues. Before I continue, it is probably worth mentioning that I am quite new to this topic.
The problem
One of the requirements of the server-side application is that it should hold application-specific data, such as information on the peers that have connected to the server, their locations and so on. The problem is that without some form of thread-safe mechanism, two requests could be reading and writing the same piece of data at the same time, which is obviously problematic.
Solving the problem
Up until now, I have solved the problem by simply wrapping every request inside a lock block, ensuring that every request happens in serial order, so that data is only ever manipulated by one peer at a time.
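For illustration, my current approach looks roughly like this (simplified; the request type and handler are just placeholders):

static readonly object globalLock = new object();

void HandleRequest(Request request)
{
    // Every request serializes on this single lock, so only one
    // peer's data is ever manipulated at a time.
    lock (globalLock)
    {
        ProcessRequest(request);
    }
}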
Recently, after doing some research on the topic, I was introduced to the idea of fibers and of setting up a "fiber pool", which makes it possible to queue actions onto a single fiber as another way of ensuring that requests happen in serial order.
The question
My knowledge of threading and related topics is pretty limited. I would love to know more about this area, in particular the pros and cons of each solution, and ultimately which route I should take.
Any help would be greatly appreciated.
I really can't figure out how fibers would solve your problem, since they basically don't provide a means to reduce contention on a shared memory resource.
I would rather focus on strategies to reduce contention on the resources, to eliminate duplicated computation, and to reduce thread resource usage with asynchronous processing.
Wrapping all request processing in a global lock effectively reduces all processing to a single live thread. As an alternative, you could reduce locking by using one lock per resource.
Disclaimer: the example code presented here is by no means production quality; it is here just to illustrate the concepts.
Reduce contention
You can come up with a granular locking strategy, where you lock only the region of the data that a particular operation touches.
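For instance, in a game server you could keep one lock per connected peer instead of one global lock, so that two requests touching different peers never block each other. A minimal sketch of the idea (the PeerState shape here is hypothetical):

using System.Collections.Concurrent;

class PeerDirectory
{
    // One state object per peer, each carrying its own lock, so
    // requests for different peers can proceed in parallel.
    readonly ConcurrentDictionary<int, PeerState> peers =
        new ConcurrentDictionary<int, PeerState>();

    public void UpdateLocation(int peerId, double x, double y)
    {
        var peer = peers.GetOrAdd(peerId, _ => new PeerState());
        lock (peer.Lock)
        {
            peer.X = x;
            peer.Y = y;
        }
    }
}

class PeerState
{
    public readonly object Lock = new object();
    public double X, Y;
}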
The following example is a Sorting Game with simple rules: each player grabs an item in a list and swaps it with the next one if the left item is less than the right one. The game ends when all items are sorted. Nobody wins, it's just a lot of fun.
using System;
using System.Threading;
using System.Collections.Generic;
using System.Linq;

public class Program
{
    public static void Main()
    {
        var game = new SortingGame();

        // Simulate a few concurrent players.
        for (var i = 0; i < 3; i++)
        {
            // Random is not thread-safe, so each worker gets its own
            // instance with a distinct seed.
            var seed = 234 + i;
            ThreadPool.QueueUserWorkItem(o =>
            {
                var random = new Random(seed);
                while (!game.IsSorted())
                {
                    var x = random.Next(game.Count() - 1);
                    game.PlayAt(x);
                    DumpGame(game);
                }
            });
        }

        Thread.Sleep(4000);
        DumpGame(game);
    }

    static void DumpGame(SortingGame game)
    {
        var items = game.GetBoardSnapshot();
        Console.WriteLine(string.Join(",", items));
    }
}
class SortingGame
{
    List<int> items;
    List<object> lockers;

    // This lock is taken for the entire board to guard against inconsistent reads.
    object entireBoardLock = new object();

    public SortingGame()
    {
        const int N = 10;
        // Initialize the game with items in random order.
        var random = new Random(1235678);
        var setup = Enumerable.Range(0, N).Select(i => new { x = i, position = random.Next(0, 100) }).ToList();
        items = setup.OrderBy(i => i.position).Select(i => i.x).ToList();
        lockers = Enumerable.Range(0, N).Select(i => new object()).ToList();
    }

    public int Count()
    {
        return items.Count;
    }

    public bool IsSorted()
    {
        var currentBoard = GetBoardSnapshot();
        var pairs = currentBoard.Zip(currentBoard.Skip(1), (a, b) => new { a, b });
        return pairs.All(p => p.a <= p.b);
    }

    public IEnumerable<int> GetBoardSnapshot()
    {
        lock (entireBoardLock)
            return new List<int>(items);
    }

    public void PlayAt(int x)
    {
        // Find the resource lockers for the two adjacent cells in question.
        var locker1 = GetLockForCell(x);
        var locker2 = GetLockForCell(x + 1);

        // It's important to take the locks in the same order for all
        // contending writers and readers; otherwise two players could
        // deadlock, each waiting for the lock the other holds.
        // These locks can be held for a long time, but they are granular,
        // so contention is greatly reduced.
        // Try removing one of the following locks and notice the
        // duplicate items in the result.
        lock (locker1)
        lock (locker2)
        {
            var a = items[x];
            var b = items[x + 1];
            if (a > b)
            {
                // Simulate an expensive computation.
                Thread.Sleep(100);

                // This lock protects readers from seeing a half-finished
                // swap. It is held for a very short time.
                lock (entireBoardLock)
                {
                    items[x] = b;
                    items[x + 1] = a;
                }
            }
        }
    }

    object GetLockForCell(int x)
    {
        return lockers[x];
    }
}
Eliminate duplicated computations
If you need some expensive computation to be up to date, but it does not depend on a particular request, computing it for every request would be a waste of resources.
The following approach skips a repeated computation if one has already been started for another request.
It's different from caching, because this way you actually get the best result that could be computed within the time frame:
using System;
using System.Threading;
using System.Threading.Tasks;

public class Program
{
    static ParticularSharedComputation computation = new ParticularSharedComputation();

    public static void Main()
    {
        for (var i = 0; i < 100; i++)
        {
            Thread.Sleep(100);
            var j = i;
            ThreadPool.QueueUserWorkItem(o =>
            {
                // In this example the call blocks because of the Result
                // property access. In a real async method you would await
                // the result instead.
                var result = computation.Get().Result;
                Console.WriteLine("{0} {1}", j, result);
            });
        }

        // Give the last queued work items time to finish.
        Thread.Sleep(5000);
    }
}
abstract class SharedComputation
{
    volatile Task<string> currentWork;
    object resourceLock = new object();

    public async Task<string> Get()
    {
        Task<string> current;

        // We take a lock here, but everything inside it is quick:
        // we only start the task, we never wait for its result.
        lock (resourceLock)
        {
            if (currentWork == null)
            {
                Console.WriteLine("Looks like we have to do the job...");
                currentWork = Compute();
                currentWork.ContinueWith(t =>
                {
                    lock (resourceLock)
                        currentWork = null;
                });
            }
            else
                Console.WriteLine("Someone is already computing. Ok, will wait a bit...");
            current = currentWork;
        }

        return await current;
    }

    protected abstract Task<string> Compute();
}
class ParticularSharedComputation : SharedComputation
{
    protected override async Task<string> Compute()
    {
        // This method is thread-safe as long as it accesses only its own
        // instance data, because the base class allows only one
        // computation at a time per instance.
        Console.WriteLine("Computing...");

        // Simulate a long computation.
        await Task.Delay(2000);
        Console.WriteLine("Computed.");

        return DateTime.Now.ToString();
    }
}
Go async, not just multithreaded
Even if you are doing multithreading, you could be wasting thread resources; threads are expensive because of the stack memory allocated for each one and because of context switching.
A well-designed async application will use only about as many threads as there are CPU cores in your system.
Look into making your application async, not just multithreaded.
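For example, an I/O-bound request handler written with async/await gives its thread back to the pool while it waits, instead of blocking it. A minimal sketch, where LoadPeerDataAsync stands in for a hypothetical database or network call:

using System.Threading.Tasks;

class RequestHandler
{
    // While the awaited call is in flight, no thread is blocked;
    // a continuation runs on a pool thread when the data arrives.
    public async Task<string> HandleAsync(int peerId)
    {
        var data = await LoadPeerDataAsync(peerId);
        return string.Format("peer {0}: {1}", peerId, data);
    }

    // Hypothetical stand-in for a real database or network call.
    async Task<string> LoadPeerDataAsync(int peerId)
    {
        await Task.Delay(50);
        return "location data";
    }
}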