I have an application (the "server") which updates a block of data in memory - around 100k bytes - every second.
There are 1 to 4 other instances of a "client" application running on other workstations on the same network, and these need to read the same 100k image every second.
This has been implemented until now by writing the image to a file on the server and having the clients read from that file across the network. This has worked with no problems for many years, but lately (coincident with a move to Windows 8-based hardware) it has developed a problem where the file becomes inaccessible to all nodes except one. Exiting the client application running on that node frees up the file, and it then becomes accessible again to everyone.
I'm still perplexed as to the cause of this lockout, but I'm wondering if it may be the mechanism discussed here, where a file isn't closed due to a network glitch. I'm thinking that having the clients request the data over TCP/IP would avoid this.
There doesn't need to be any handshaking other than the clients failing to connect or read data - the server just needs to go about its business and respond to requests by grabbing the data and sending it. I'm pretty hazy, however, about the best architecture to achieve this. Are TIdTCPClient and TIdTCPServer going to cut it? I'm assuming the clients would request the data in a thread, but does this mean the server needs to run a thread continuously to respond to requests?
TIdTCPServer is a multi-threaded component. Its clients run in worker threads that it manages for you. All you have to do is implement the OnExecute event to send your data.
TIdTCPClient is not a multi-threaded component. It runs in whatever thread you use it in. So if you need to read data continuously, best to run your own worker thread to handle the reading. Indy has a TIdThreadComponent component that wraps a thread, or you can write your own TThread code manually.
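If you go the manual route, something along these lines would do. This is just a sketch: TReadThread, the fixed 100K read size, and the assumption that the TIdTCPClient is already connected before the thread starts are all illustrative, not anything Indy requires.

uses
  System.Classes, IdGlobal, IdTCPClient;

type
  TReadThread = class(TThread)
  private
    FClient: TIdTCPClient; // assumed to be connected before the thread starts
  protected
    procedure Execute; override;
  public
    constructor Create(AClient: TIdTCPClient);
  end;

constructor TReadThread.Create(AClient: TIdTCPClient);
begin
  FClient := AClient;
  inherited Create(False); // start the thread immediately
end;

procedure TReadThread.Execute;
var
  Buffer: TIdBytes;
begin
  while not Terminated do
  begin
    // blocks until a full 100K image has arrived; an exception here
    // (e.g. on disconnect) ends the thread
    FClient.IOHandler.ReadBytes(Buffer, 1024 * 100, False);
    // hand Buffer off to the main thread (e.g. via TThread.Queue) as needed...
  end;
end;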
100K is not a lot of data, so I would suggest simply forgetting the file altogether and allocating a buffer in memory instead. Your TIdTCPServer.OnExecute event handler can read from that buffer whenever needed. And I wouldn't even bother having the clients request data - just have the server continuously push the latest data to active clients.
Try something like this:
server:
var
  Buffer: TIdBytes; // the shared 100K image (TIdBytes is declared in IdGlobal)
  Lock: TMREWSync;  // multi-read/exclusive-write lock (System.SysUtils)

procedure TForm1.IdTCPServer1Execute(AContext: TIdContext);
begin
  // runs in the worker thread TIdTCPServer manages for this client
  Lock.BeginRead;
  try
    AContext.Connection.IOHandler.Write(Buffer);
  finally
    Lock.EndRead;
  end;
  Sleep(1000); // throttle the push to roughly once per second
end;

procedure TForm1.UpdateBuffer;
begin
  Lock.BeginWrite;
  try
    // update the Buffer content as needed...
  finally
    Lock.EndWrite;
  end;
end;

initialization
  Lock := TMREWSync.Create;
  SetLength(Buffer, 1024 * 100);

finalization
  SetLength(Buffer, 0);
  Lock.Free;
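One detail the snippet above leaves open is where UpdateBuffer gets called and how the server gets switched on. A minimal sketch, assuming the once-a-second update is driven by a TTimer on the same form, that OnExecute is assigned to IdTCPServer1Execute at design time, and that port 6000 is just an illustrative choice:

procedure TForm1.FormCreate(Sender: TObject);
begin
  IdTCPServer1.DefaultPort := 6000; // any free port your clients will connect to
  IdTCPServer1.Active := True;      // start accepting client connections

  Timer1.Interval := 1000;          // regenerate the image once per second
  Timer1.Enabled := True;
end;

procedure TForm1.Timer1Timer(Sender: TObject);
begin
  UpdateBuffer; // takes the write lock and refreshes the 100K image
end;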
client:
procedure TForm1.IdThreadComponent1Run(Sender: TIdThreadComponent);
var
  Buffer: TIdBytes;
begin
  // called repeatedly in TIdThreadComponent's worker thread;
  // blocks until a full 100K image has arrived
  IdTCPClient1.IOHandler.ReadBytes(Buffer, 1024 * 100);
  // use Buffer as needed...
end;

procedure TForm1.Connect;
begin
  IdTCPClient1.Connect;
  try
    IdThreadComponent1.Start;
  except
    IdTCPClient1.Disconnect;
    raise;
  end;
end;

procedure TForm1.Disconnect;
begin
  IdTCPClient1.Disconnect; // unblocks any pending read in the worker thread
  IdThreadComponent1.Stop;
end;
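And for completeness, the client needs to be pointed at the server before Connect is called. A sketch, where the host name, port, and timeout are placeholders (the port just has to match whatever the server is listening on):

procedure TForm1.FormCreate(Sender: TObject);
begin
  IdTCPClient1.Host := 'server-hostname';  // name or IP of the server machine
  IdTCPClient1.Port := 6000;               // must match the server's listening port
  IdTCPClient1.ConnectTimeout := 5000;     // optional: give up on Connect after 5s
end;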