Puzzled by the cpu-share setting on Docker
I wrote a test program, cputest.py, in Python:

```python
import time

while True:
    for _ in range(10120 * 40):
        pass
    time.sleep(0.008)
```
It costs about 80% CPU when running in a container (without interference from other running containers).
I then ran the program in two containers with these two commands:

```shell
docker run -d -c 256 --cpuset=1 imagename python /cputest.py
docker run -d -c 1024 --cpuset=1 imagename python /cputest.py
```
and used 'top' to view the CPU costs. It turned out they cost 30% and 67% CPU respectively. I'm pretty puzzled by this result. Could someone kindly explain it to me? Many thanks!
I sat down last night and tried to figure this out on my own, but ended up not being able to explain the 70/30 split either. So I sent an email to some other devs and got a response that I think makes sense:
I think you are misunderstanding how task scheduling works, which is why the maths doesn't work out. I'll try and dig out an article, but at a basic level the kernel assigns slices of time to each task that needs to execute, and allocates those slices to tasks according to their given priorities.
So with those priorities and tight-looped code (no sleep), the kernel assigns 4/5 of the slots to A and 1/5 to B. Hence the 80/20 split.
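The 4/5 vs 1/5 allocation follows from CPU shares being relative weights: each container gets its weight divided by the total weight contending for the CPU. A minimal sketch of that arithmetic, with the container names as placeholders:

```python
# CPU shares (-c) are relative weights, not absolute limits.
# When both containers are pinned to the same CPU and both are busy,
# each gets weight / total_weight of the CPU time.
shares = {"container_a": 1024, "container_b": 256}
total = sum(shares.values())  # 1280

for name, weight in shares.items():
    print(f"{name}: {weight / total:.0%}")
# container_a: 80%
# container_b: 20%
```

Note that shares only matter under contention; a lone busy container can use the whole CPU regardless of its `-c` value.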
However, when you add in a sleep it gets more complex. Sleep tells the kernel to yield the current task, with execution returning to that task after the sleep time has elapsed. It may be given longer than that if there are higher-priority tasks running. When nothing else is running, the kernel simply sits idle for the sleep time.
But when you have two tasks, the sleeps let the two tasks interweave: while one sleeps, the other can execute. This leads to complex execution that can't be modelled with simple maths. Feel free to prove me wrong there!
I think the other reason for the 70/30 split is the way you are generating "80% load". The loop and sleep numbers you have chosen happen to work on your PC with a single task executing. Try basing the loop on elapsed time instead: loop for 0.8 s, then sleep for 0.2 s. That might give you something closer to 80/20, but I don't know.
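The elapsed-time suggestion could be sketched as below. The 0.8/0.2 durations come from the email above; the `burn` helper and cycle count are my own illustrative assumptions:

```python
import time

def burn(busy_s, sleep_s, cycles):
    """Busy-loop for busy_s seconds, then sleep for sleep_s, per cycle."""
    for _ in range(cycles):
        deadline = time.monotonic() + busy_s
        # Spin until the deadline instead of counting iterations, so the
        # duty cycle stays ~busy_s/(busy_s+sleep_s) regardless of CPU speed.
        while time.monotonic() < deadline:
            pass
        time.sleep(sleep_s)

# ~80% duty cycle: busy for 0.8 s, sleep for 0.2 s, for a few cycles
burn(0.8, 0.2, cycles=3)
```

Unlike the original `range(10120 * 40)` loop, whose wall-clock duration depends on how fast (and how contended) the CPU is, this version pins the busy/idle ratio by the clock.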
So in essence, the time.sleep() call is skewing your expected numbers; removing time.sleep() brings the CPU load far closer to what you'd expect.