I am developing a video conferencing application for education purposes that uses WebRTC. It needs to be done in a star topology as it connects up to 20 participants.
Conceptually it is easy to understand, but I don’t know how to start, as I do not have any examples.
All clients will connect to a server using WebRTC, and the server will mix the video streams in a specific layout and send it back to all clients. Here are my questions/difficulties:
How to implement the server part? What’s the best technology (e.g. NodeJS)? Are there simple examples of a star topology application like that?
How can we start writing the MCU code? Are there examples? Or is it easier to customize an open source MCU like Licode/Lynckia?
How can I estimate the right AWS EC2 instance type that we will use as the MCU server?
How can I estimate the data transfer cost (the size, in GB/TBs) which will be transferred in 1h of conference?
Thanks a lot in advance, Carlos
My two cents on your various questions:
Personally, I prefer NodeJS, but from what I have seen, the application server does not play much of a role in WebRTC communication beyond passing signaling messages between peers and media servers, so go with a technology you are comfortable with (see the sketch below).
That said, for examples, you can check out Kurento's tutorials in both Java and Node.js, the Licode examples (using NodeJS), and Jitsi Meet in Java.
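To make the "passing messages" part concrete, here is a minimal signaling relay sketch in Node.js using the ws package. The room name and message fields (type, room) are just assumptions for illustration; the actual shape of your signaling messages is up to you, and the server never touches the media itself.

```js
// Minimal WebRTC signaling relay (npm install ws).
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });
const rooms = new Map(); // roomId -> Set of connected sockets

wss.on('connection', (socket) => {
  let joinedRoom = null;

  socket.on('message', (data) => {
    const msg = JSON.parse(data);

    if (msg.type === 'join') {
      // Remember which room this client belongs to.
      joinedRoom = msg.room;
      if (!rooms.has(joinedRoom)) rooms.set(joinedRoom, new Set());
      rooms.get(joinedRoom).add(socket);
      return;
    }

    // Relay SDP offers/answers and ICE candidates to everyone else
    // in the same room; media flows peer-to-server over WebRTC, not here.
    const peers = rooms.get(joinedRoom) || new Set();
    for (const peer of peers) {
      if (peer !== socket && peer.readyState === WebSocket.OPEN) {
        peer.send(JSON.stringify(msg));
      }
    }
  });

  socket.on('close', () => {
    if (joinedRoom && rooms.has(joinedRoom)) rooms.get(joinedRoom).delete(socket);
  });
});
```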
Yes, I think going with an existing MCU is a good idea. A better option is an SFU; the difference is that an SFU just forwards streams rather than mixing them. Mixing streams is a costly process, so an MCU needs a lot of processing power, whereas SFUs are comparatively light: all you really need is good bandwidth on the server.
About the last two points, I don't have much to offer; it depends on your use case: the video resolution of the streams, how many people join, and so on. You need to run some tests and gauge it, but a back-of-the-envelope calculation can get you started, as sketched below.
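Here is a rough estimate of server bandwidth and hourly data transfer for 20 participants, comparing the MCU and SFU topologies. The per-stream bitrates are assumptions I picked for illustration (roughly 720p-class video); plug in whatever you actually measure, and note that cloud providers typically bill only outbound transfer.

```js
// Back-of-the-envelope bandwidth / data-transfer estimate (assumed bitrates).
const participants = 20;
const uplinkKbps = 1000;  // each client sends ~1 Mbps (assumption)
const mixedKbps  = 1500;  // MCU sends one mixed/composited stream per client (assumption)

// MCU: every client uploads one stream and downloads one mixed stream.
const mcuInKbps  = participants * uplinkKbps;
const mcuOutKbps = participants * mixedKbps;

// SFU: every client uploads one stream and downloads the other N-1 streams.
const sfuInKbps  = participants * uplinkKbps;
const sfuOutKbps = participants * (participants - 1) * uplinkKbps;

// Convert a sustained rate in kbps to GB transferred in one hour.
const toGBPerHour = (kbps) => (kbps * 1000 / 8) * 3600 / 1e9;

console.log(`MCU: ${(mcuInKbps + mcuOutKbps) / 1000} Mbps total, ` +
            `${toGBPerHour(mcuInKbps + mcuOutKbps).toFixed(1)} GB per hour`);
console.log(`SFU: ${(sfuInKbps + sfuOutKbps) / 1000} Mbps total, ` +
            `${toGBPerHour(sfuInKbps + sfuOutKbps).toFixed(1)} GB per hour`);
```

With these assumed numbers the MCU moves about 50 Mbps (~22.5 GB/h) while the SFU moves about 400 Mbps (~180 GB/h), which is the trade-off mentioned above: the SFU saves CPU but spends bandwidth. Sizing the EC2 instance then comes down to whether the bottleneck is transcoding/mixing CPU (MCU) or network throughput (SFU).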
Simulcast is another interesting idea; unfortunately, I believe it is still in development.