28 October 2015

Stress Test New VMware Infrastructure

I was running some stress tests on Linux VMware machines in a new environment to find the resource bottlenecks and to see how the SAN storage behaves in terms of CPU I/O wait, before putting the environment into production.

I got some interesting results and would like to share them with you guys.

The "stress" tool has been used with options shown below:


# stress -c 65 -i 60 -m 25 --vm-bytes 256M -t 120m -v


-c: number of CPU workers
-i: number of I/O workers
-m: number of memory workers
--vm-bytes: memory to allocate per memory worker (default 256MB)
-t: duration of the test
-v: verbose output
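If you want to capture how the load average and I/O wait develop while stress runs, a small wrapper script along the following lines does the job (a sketch only; the file names and sampling intervals are my own choices, and sar comes from the sysstat package):

#!/bin/bash
# Start the stress run in the background
stress -c 65 -i 60 -m 25 --vm-bytes 256M -t 120m -v &
STRESS_PID=$!

# Sample CPU usage every 5 seconds; the %iowait column shows how
# long the vCPUs sit waiting on the SAN
sar -u 5 > cpu_samples.txt &
SAR_PID=$!

# Record the 1/5/15-minute load averages once a minute
while kill -0 "$STRESS_PID" 2>/dev/null; do
    uptime >> load_samples.txt
    sleep 60
done
kill "$SAR_PID"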

With these options, the 1-minute load average climbed past 150 and the 15-minute average reached almost 100. On a 2-vCPU guest, a load of 150 means roughly 75 runnable tasks per core, which is pretty amazing.

The test was carried out in a VM with 2 vCPUs and 4GB of memory.


# stress -c 65 -i 60 -m 25 --vm-bytes 256M -t 120m -v

top - 17:44:56 up 55 min,  3 users,  load average: 151.21, 124.73, 99.78
Tasks: 254 total,  67 running, 187 sleeping,   0 stopped,   0 zombie
Cpu(s): 74.5%us, 24.2%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  1.3%si,  0.0%st
Mem:   3924688k total,  3837772k used,    86916k free,      260k buffers
Swap:  3354616k total,  2968112k used,   386504k free,     4040k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 3309 root      20   0  6516  124  104 R  3.2  0.0   0:06.40 stress
 3327 root      20   0  6516  124  104 R  3.2  0.0   0:06.40 stress
 3330 root      20   0  6516  124  104 R  3.2  0.0   0:06.40 stress
 3342 root      20   0  6516  124  104 R  3.2  0.0   0:06.40 stress
 3372 root      20   0  6516  124  104 R  3.2  0.0   0:06.41 stress
 3385 root      20   0  6516  124  104 R  3.2  0.0   0:06.41 stress
 3387 root      20   0  6516  124  104 R  3.2  0.0   0:06.42 stress
 3391 root      20   0  6516  124  104 R  3.2  0.0   0:06.41 stress
 3393 root      20   0  6516  124  104 R  3.2  0.0   0:06.41 stress
 3395 root      20   0  6516  124  104 R  3.2  0.0   0:08.21 stress
 3403 root      20   0  6516  124  104 R  3.2  0.0   0:06.41 stress
 3405 root      20   0  6516  124  104 R  3.2  0.0   0:06.41 stress
 3409 root      20   0  6516  124  104 R  3.2  0.0   0:06.41 stress
 3413 root      20   0  6516  124  104 R  3.2  0.0   0:06.41 stress
 3417 root      20   0  6516  124  104 R  3.2  0.0   0:06.41 stress
 3437 root      20   0  6516  124  104 R  3.2  0.0   0:06.41 stress
 3315 root      20   0  6516  124  104 R  2.8  0.0   0:06.40 stress
 3324 root      20   0  6516  124  104 R  2.8  0.0   0:06.39 stress
 3333 root      20   0  6516  124  104 R  2.8  0.0   0:06.40 stress
 3336 root      20   0  6516  124  104 R  2.8  0.0   0:06.39 stress
 3339 root      20   0  6516  124  104 R  2.8  0.0   0:06.39 stress
 3354 root      20   0  6516  124  104 R  2.8  0.0   0:08.21 stress
 3381 root      20   0  6516  124  104 R  2.8  0.0   0:06.40 stress
 3389 root      20   0  6516  124  104 R  2.8  0.0   0:06.41 stress
 3397 root      20   0  6516  124  104 R  2.8  0.0   0:06.41 stress
 3401 root      20   0  6516  124  104 R  2.8  0.0   0:06.40 stress
 3407 root      20   0  6516  124  104 R  2.8  0.0   0:06.41 stress
 3411 root      20   0  6516  124  104 R  2.8  0.0   0:06.41 stress
 3415 root      20   0  6516  124  104 R  2.8  0.0   0:06.41 stress
 3429 root      20   0  6516  124  104 R  2.8  0.0   0:06.41 stress
 3447 root      20   0  6516  124  104 R  2.8  0.0   0:06.40 stress
 3449 root      20   0  6516  124  104 R  2.8  0.0   0:06.40 stress
 3453 root      20   0  6516  124  104 R  2.8  0.0   0:06.41 stress
 3454 root      20   0  6516  124  104 R  2.8  0.0   0:06.61 stress
   39 root      20   0     0    0    0 D  1.6  0.0   0:23.33 kswapd0
 3306 root      20   0  6516  124  104 R  1.6  0.0   0:06.74 stress
 3312 root      20   0  6516  124  104 R  1.6  0.0   0:06.74 stress
 3318 root      20   0  6516  124  104 R  1.6  0.0   0:06.74 stress
 3321 root      20   0  6516  124  104 R  1.6  0.0   0:06.74 stress

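Note from the output that the guest is also deep into swap (around 2.9GB of the 3.2GB swap is in use) and kswapd0 appears among the busiest processes, so part of the load comes from memory pressure rather than pure CPU work. A quick way to watch swap activity during such a run (interval chosen arbitrarily) is vmstat:

# vmstat 2

The si and so columns show how much memory is being swapped in from and out to disk per second; sustained non-zero values confirm the guest is thrashing.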

The following demonstrates the results of writing 1GB to the SAN with dd, using block sizes of 1MB, 10MB and 100MB.

The bigger the block size gets, the slower the write to the SAN becomes:

# dd if=/dev/zero of=/tmp/test22 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 77.165 s, 13.6 MB/s

# dd if=/dev/zero of=/tmp/test22 bs=10M count=100
100+0 records in
100+0 records out
1048576000 bytes (1.0 GB) copied, 104.555 s, 10.0 MB/s

# dd if=/dev/zero of=/tmp/test22 bs=100M count=10
10+0 records in
10+0 records out
1048576000 bytes (1.0 GB) copied, 1204.23 s, 871 kB/s
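One caveat: these dd runs go through the page cache, and with the guest's memory already saturated, each 100MB block has to be buffered in a machine that is busy swapping, which probably accounts for much of the slowdown. To measure the SAN itself more directly, the test could be repeated with the cache bypassed (a sketch using standard GNU dd flags):

# dd if=/dev/zero of=/tmp/test22 bs=1M count=1000 oflag=direct conv=fdatasync

oflag=direct writes with O_DIRECT so the data skips the page cache, and conv=fdatasync forces a flush before dd reports the elapsed time, so the throughput figure reflects what actually reached the storage.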
