As described in Joe Armstrong's book, a parallel TCP server handles connections like this:
{ok, Listen} = gen_tcp:listen(....),
spawn(fun() -> parallel(Listen) end).

parallel(Listen) ->
    {ok, Socket} = gen_tcp:accept(Listen),
    spawn(fun() -> parallel(Listen) end),
    doSomething(Socket).

doSomething(....) ->
    ....
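For reference, a minimal runnable version of that pattern might look like this (the module name, the socket options, and the echo handler are my own illustrative choices, not the book's exact code):

-module(parallel_server).
-export([start/1]).

start(Port) ->
    {ok, Listen} = gen_tcp:listen(Port, [binary, {packet, 0},
                                         {reuseaddr, true}, {active, true}]),
    spawn(fun() -> parallel(Listen) end).

parallel(Listen) ->
    {ok, Socket} = gen_tcp:accept(Listen),
    %% spawn the next acceptor before serving this connection,
    %% so new clients are never blocked by the current one
    spawn(fun() -> parallel(Listen) end),
    do_something(Socket).

do_something(Socket) ->
    receive
        {tcp, Socket, Bin} ->
            gen_tcp:send(Socket, Bin),   %% trivial echo handler
            do_something(Socket);
        {tcp_closed, Socket} ->
            ok
    end.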
This makes sense: when a listener accepts a connection, it spawns a new process to listen for incoming connections before handling the connection it just accepted. That is the parallelism rule, fine. But in ejabberd's module ejabberd_listener.erl, which represents the network layer of the server, this is what I found:
case listen_tcp(Port, SockOpts) of
    {ok, ListenSocket} ->
        ....
        accept(ListenSocket, Module, State, Sup, Interval, Proxy),
        ....

accept(ListenSocket, Module, ...) ->
    case gen_tcp:accept(ListenSocket) of
        {ok, Socket} ->
            %% a lot of work
            ....
            accept(ListenSocket, Module, ....);
So this is a sequential listener, and it should run slower than the parallel version. Why don't they use the parallel mechanism for better efficiency and performance? I am new to ejabberd, and I may be missing something.
I am assuming that you are talking about this code: [1].
In that case, if you look a bit further, the function start_connection is called [2]. Inside that function a dynamic supervisor is used and a child is added [3]. The spawn primitive is not used here; it is abstracted away by the supervisor:start_child function [4].
So in short, yes, each connection is handled concurrently, except that here each handler is added to a dynamic supervisor instead of being a plain process created with the spawn primitive.
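To make that concrete, here is a rough sketch of the same structure (this is not ejabberd's actual code; the module name, and the connection_handler worker with its start_link(Socket) callback, are hypothetical): a sequential accept loop that hands every accepted socket to a child started through supervisor:start_child, so the handling itself still runs in a separate process per connection.

-module(listener_sup).
-behaviour(supervisor).
-export([start_link/1, init/1]).

start_link(Port) ->
    {ok, Sup} = supervisor:start_link({local, ?MODULE}, ?MODULE, []),
    {ok, ListenSocket} = gen_tcp:listen(Port, [binary, {active, false},
                                               {reuseaddr, true}]),
    %% simplification: the acceptor is linked to the caller here,
    %% whereas in ejabberd the acceptor is itself a supervised process
    spawn_link(fun() -> accept_loop(Sup, ListenSocket) end),
    {ok, Sup}.

init([]) ->
    %% simple_one_for_one: children are added dynamically, one per accepted
    %% socket; connection_handler is a hypothetical worker module exporting
    %% start_link(Socket)
    {ok, {{simple_one_for_one, 10, 1},
          [{connection, {connection_handler, start_link, []},
            temporary, brutal_kill, worker, [connection_handler]}]}}.

accept_loop(Sup, ListenSocket) ->
    {ok, Socket} = gen_tcp:accept(ListenSocket),
    %% supervisor:start_child/2 plays the role of spawn in the book example:
    %% it starts connection_handler:start_link(Socket) under the supervisor
    {ok, Pid} = supervisor:start_child(Sup, [Socket]),
    ok = gen_tcp:controlling_process(Socket, Pid),
    accept_loop(Sup, ListenSocket).

The benefit over a plain spawn is that the connection processes are part of the supervision tree, so they can be monitored, counted, and shut down cleanly together with the rest of the listener.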