I see some documentation about running MarkLogic clusters in Docker containers, but I'm not sure whether anyone is running MarkLogic Docker containers in production.
Is anyone using MarkLogic in Docker containers for production services?
If you are asking whether there are people using MarkLogic in Docker in production: yes, I know of people successfully doing so. If you are asking whether it is 'ready for production' (the title), or whether MarkLogic is 'supported', tested, or certified in Docker, that would be a question for your sales or support rep. Usually that kind of open-ended question gets a similarly open-ended answer. For example, there are many Linux distributions that are neither 'supported' nor 'certified' nor tested but do in fact work, and you may find that support helps you with them or fixes bugs you report. Similarly, there are 'supported' operating systems that, depending on the configuration and use case, do not work well or require more than the usual amount of customization.
Things to watch out for in Docker -- which are similar to the issues you hit in VMs (not all of which are supported either, yet are used at production sites). Many of these can be worked around.
Memory tuning. Many Docker containers 'see' the memory configuration of the host machine (or VM). MarkLogic makes several initial internal configuration choices based on that. So, for example, if you have a huge host (say 512 GB RAM) but a small container (say 1 GB), MarkLogic may tune its memory use as if it were running with 512 GB. That can lead to problems as extreme as crashing on startup.
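A minimal sketch of one mitigation, assuming a hypothetical image name `marklogicdb/marklogic-db` (substitute whatever image you build or pull): run the container with an explicit memory limit, and be aware that the container's `/proc/meminfo` may still show the host's figures.

```shell
# Cap the container so the kernel enforces a realistic ceiling (4 GB here).
docker run -d --name ml1 --memory=4g --memory-swap=4g marklogicdb/marklogic-db

# Caveat: /proc/meminfo inside the container may still report the HOST's
# total memory, which is what auto-sizing logic typically reads -- so you
# may also need to tune MarkLogic's group-level memory settings to match
# the limit you set above.
docker exec ml1 grep MemTotal /proc/meminfo
```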
Core/processor use. Similar to memory, Docker containers may 'see' the total core count of the host server/OS/VM and over-subscribe CPUs.
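A sketch of the distinction that matters here (image name is illustrative, as above): pinning the container to specific cores with `--cpuset-cpus` changes the core count visible to tools like `nproc`, whereas a plain `--cpus` quota throttles scheduling but leaves the visible count at the host's value.

```shell
# Pin the container to two specific cores; the visible core count becomes 2.
docker run -d --name ml1 --cpuset-cpus=0,1 marklogicdb/marklogic-db
docker exec ml1 nproc    # reports 2

# By contrast, a CFS quota limits CPU time but not the visible core count:
docker run -d --name ml2 --cpus=2 marklogicdb/marklogic-db
docker exec ml2 nproc    # still reports the host's core count
```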
IO performance. As with all MarkLogic installations, it is up to you to provision IO resources efficiently. It is generally harder to do so for Docker than for the host.
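One common approach, sketched here with an illustrative path and image name: keep MarkLogic's data off the container's copy-on-write filesystem and on storage you have provisioned yourself.

```shell
# Bind-mount a host directory on known-fast storage (e.g. local NVMe)
# over /var/opt/MarkLogic, bypassing the overlay filesystem entirely.
docker run -d --name ml1 \
  -v /mnt/nvme/ml-data:/var/opt/MarkLogic \
  marklogicdb/marklogic-db
```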
Networking. Docker is evolving very quickly. The network layer is highly complex and varies across implementations, and within a single implementation across configurations. MarkLogic's requirements are simple -- the most relevant being that the hostname set in each MarkLogic server must resolve to the same host from every other MarkLogic server in the cluster. It is up to you to configure Docker so that this requirement is met. As when running in the cloud, MarkLogic by default ties the lifecycle of a MarkLogic server to the lifecycle of its data (/var/opt/MarkLogic). If you stop and start Docker instances, not only may the hostname change but the data volumes may change as well. It is up to you to configure things so they do not.
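Both requirements -- a stable, cluster-resolvable hostname and data that outlives the container -- can be addressed at `docker run` time. A hedged sketch; the network, volume, hostname, and image names are all illustrative:

```shell
# User-defined network: containers attached to it resolve each other by name.
docker network create ml-cluster
# Named volume: the data survives container stop/rm/recreate.
docker volume create ml1-data

docker run -d --name ml1 \
  --hostname ml1 \
  --network ml-cluster \
  -v ml1-data:/var/opt/MarkLogic \
  marklogicdb/marklogic-db
# Recreating the container with the same --hostname and the same volume
# keeps the MarkLogic host identity and its data consistent across restarts.
```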
There are many other considerations. In general they are very similar in nature to running servers in a VM or cloud environment. There are some basic assumptions about how the host system behaves that are generally true of 'bare metal', known-OS installations, but that in VMs and other more software-defined platforms are not easy to pin down and have much higher variability -- so they are harder to test, or even to discuss.
References: Discussed at MarkLogic World 2017 https://www.marklogic.com/resources/chicago-lightning-talks/
Developer Site: https://developer.marklogic.com/code/docker
My personal favorite: https://github.com/DALDEI/ml-docker