I'm executing parallel_tests within a docker container, i.e. as a single host command.
parallel_tests inspects the available resources and spawns one process per core, and in my case 8 cores are available:
# docker info
Containers: 5
Images: 75
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 85
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.16.0-49-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 8
Total Memory: 15.61 GiB
Name: camacho
ID: ZOYN:QGDO:UGMJ:TDDM:WEEM:ZEHJ:4OKB:V5WR:RGCL:NOKG:F5W5:SDEL
WARNING: No swap limit support
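Since parallel_tests sizes its worker pool from the core count it detects, a first sanity check is to compare what the host and the container each report. A minimal sketch (run the same commands inside the container, e.g. prefixed with docker run):

```shell
# Number of CPUs this environment exposes to processes (respects affinity):
nproc
# Cross-check against the kernel's full processor list:
grep -c ^processor /proc/cpuinfo
```

If the container prints 1 here while the host prints 8, the slowdown is a CPU-visibility problem rather than a parallel_tests problem.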
On the same machine without docker, it is clear that these tests are running in parallel and use up all the available resources (exactly what we want for CI).
When executed within docker, everything appears to run in a single process, with results arriving sequentially from each test runner (and the run is much slower by comparison).
Do I need to run parallel host commands for it to use all the resources? Or is there an option that allows my docker command to fork more parallel processes?
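On the second question: the parallel_test CLI accepts an explicit worker count via its -n flag, so auto-detection can be bypassed entirely. A minimal sketch (the spec/ path is an assumption about your suite layout; the parallel_test line is commented out so the snippet runs even where the gem isn't installed):

```shell
# Detect the core count ourselves, then pass it to parallel_tests
# explicitly instead of trusting its detection inside docker.
CORES=$(nproc)
echo "will run with ${CORES} workers"
# parallel_test spec/ -n "${CORES}"   # uncomment inside your test image
```

If this runs your suite in parallel while the flag-less invocation does not, detection inside the container is the culprit.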
Part of the problem was that I was running on OS X with VirtualBox, and the VirtualBox VM was set up with only one core. The fix: run dinghy halt, then in VirtualBox open Settings > System > Processor and set the processor count to the number of hardware threads, i.e. 8 threads for a quad-core hyperthreaded CPU.
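The same change can be scripted rather than clicked through, which helps when provisioning CI hosts. A sketch assuming the VirtualBox VM is named "dinghy" (check the real name with VBoxManage list vms); it is guarded so it does nothing on machines without that VM:

```shell
#!/bin/sh
VM="dinghy"   # assumption: the name of the VM that dinghy created
CPUS=8        # hardware threads to give the VM

if command -v VBoxManage >/dev/null 2>&1 && VBoxManage list vms | grep -q "\"${VM}\""; then
  # The VM must be powered off before its CPU count can change.
  VBoxManage controlvm "${VM}" poweroff 2>/dev/null || true
  VBoxManage modifyvm "${VM}" --cpus "${CPUS}"
  echo "set ${VM} to ${CPUS} cpus"
else
  echo "VM ${VM} not found; nothing changed"
fi
```

Restart the VM afterwards (dinghy up) and re-run nproc inside the container to confirm docker now sees all the cores.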