28 October 2015

Stress Test New VMware Infrastructure

I ran some stress tests on Linux VMware machines in a new environment to find the resource bottlenecks and to see how the SAN storage behaves in terms of CPU I/O wait, before putting the environment into production.

The results were interesting, so I would like to share them with you guys.

The "stress" tool was used with the options shown below:


# stress -c 65 -i 60 -m 25 --vm-bytes 256M -t 120m -v


-c: number of CPU workers
-i: number of I/O workers
-m: number of memory workers
--vm-bytes: memory allocated per memory worker (default 256MB)
-t: duration of the test
-v: run the test in verbose mode

With these options, the 5-minute load average went up to about 150 and the 15-minute average reached almost 100, which is pretty amazing. The peak roughly matches the total number of workers (65 CPU + 60 I/O + 25 memory = 150 runnable or blocked processes).

The test was carried out on a VM with 2 vCPUs and 4GB of memory.


# stress -c 65 -i 60 -m 25 --vm-bytes 256M -t 120m -v

top - 17:44:56 up 55 min,  3 users,  load average: 151.21, 124.73, 99.78
Tasks: 254 total,  67 running, 187 sleeping,   0 stopped,   0 zombie
Cpu(s): 74.5%us, 24.2%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  1.3%si,  0.0%st
Mem:   3924688k total,  3837772k used,    86916k free,      260k buffers
Swap:  3354616k total,  2968112k used,   386504k free,     4040k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 3309 root      20   0  6516  124  104 R  3.2  0.0   0:06.40 stress
 3327 root      20   0  6516  124  104 R  3.2  0.0   0:06.40 stress
 3330 root      20   0  6516  124  104 R  3.2  0.0   0:06.40 stress
 3342 root      20   0  6516  124  104 R  3.2  0.0   0:06.40 stress
 3372 root      20   0  6516  124  104 R  3.2  0.0   0:06.41 stress
 3385 root      20   0  6516  124  104 R  3.2  0.0   0:06.41 stress
 3387 root      20   0  6516  124  104 R  3.2  0.0   0:06.42 stress
 3391 root      20   0  6516  124  104 R  3.2  0.0   0:06.41 stress
 3393 root      20   0  6516  124  104 R  3.2  0.0   0:06.41 stress
 3395 root      20   0  6516  124  104 R  3.2  0.0   0:08.21 stress
 3403 root      20   0  6516  124  104 R  3.2  0.0   0:06.41 stress
 3405 root      20   0  6516  124  104 R  3.2  0.0   0:06.41 stress
 3409 root      20   0  6516  124  104 R  3.2  0.0   0:06.41 stress
 3413 root      20   0  6516  124  104 R  3.2  0.0   0:06.41 stress
 3417 root      20   0  6516  124  104 R  3.2  0.0   0:06.41 stress
 3437 root      20   0  6516  124  104 R  3.2  0.0   0:06.41 stress
 3315 root      20   0  6516  124  104 R  2.8  0.0   0:06.40 stress
 3324 root      20   0  6516  124  104 R  2.8  0.0   0:06.39 stress
 3333 root      20   0  6516  124  104 R  2.8  0.0   0:06.40 stress
 3336 root      20   0  6516  124  104 R  2.8  0.0   0:06.39 stress
 3339 root      20   0  6516  124  104 R  2.8  0.0   0:06.39 stress
 3354 root      20   0  6516  124  104 R  2.8  0.0   0:08.21 stress
 3381 root      20   0  6516  124  104 R  2.8  0.0   0:06.40 stress
 3389 root      20   0  6516  124  104 R  2.8  0.0   0:06.41 stress
 3397 root      20   0  6516  124  104 R  2.8  0.0   0:06.41 stress
 3401 root      20   0  6516  124  104 R  2.8  0.0   0:06.40 stress
 3407 root      20   0  6516  124  104 R  2.8  0.0   0:06.41 stress
 3411 root      20   0  6516  124  104 R  2.8  0.0   0:06.41 stress
 3415 root      20   0  6516  124  104 R  2.8  0.0   0:06.41 stress
 3429 root      20   0  6516  124  104 R  2.8  0.0   0:06.41 stress
 3447 root      20   0  6516  124  104 R  2.8  0.0   0:06.40 stress
 3449 root      20   0  6516  124  104 R  2.8  0.0   0:06.40 stress
 3453 root      20   0  6516  124  104 R  2.8  0.0   0:06.41 stress
 3454 root      20   0  6516  124  104 R  2.8  0.0   0:06.61 stress
   39 root      20   0     0    0    0 D  1.6  0.0   0:23.33 kswapd0
 3306 root      20   0  6516  124  104 R  1.6  0.0   0:06.74 stress
 3312 root      20   0  6516  124  104 R  1.6  0.0   0:06.74 stress
 3318 root      20   0  6516  124  104 R  1.6  0.0   0:06.74 stress
 3321 root      20   0  6516  124  104 R  1.6  0.0   0:06.74 stress
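To watch the load averages climb while stress runs, a simple logger like the following can be left running in a second terminal. This is a minimal sketch; the 10-second interval and the "load.log" file name are arbitrary choices, not part of the original test:

```shell
#!/bin/sh
# Sample the 1-, 5- and 15-minute load averages from /proc/loadavg
# every 10 seconds and append them, timestamped, to a log file.
while sleep 10; do
    printf '%s %s\n' "$(date '+%H:%M:%S')" "$(cut -d' ' -f1-3 /proc/loadavg)"
done >> load.log
```

The log makes it easy to plot how quickly the load ramps up after the workers start.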


The following demonstrates the results of writing 1GB to the SAN with dd, using block sizes of 1MB, 10MB and 100MB.

The bigger the block size gets, the slower the writing to the SAN becomes:

# dd if=/dev/zero of=/tmp/test22 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 77.165 s, 13.6 MB/s

# dd if=/dev/zero of=/tmp/test22 bs=10M count=100
100+0 records in
100+0 records out
1048576000 bytes (1.0 GB) copied, 104.555 s, 10.0 MB/s

# dd if=/dev/zero of=/tmp/test22 bs=100M count=10
10+0 records in
10+0 records out
1048576000 bytes (1.0 GB) copied, 1204.23 s, 871 kB/s
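One caveat worth keeping in mind: dd writes to /tmp go through the Linux page cache, so with the VM already deep in swap these figures mix SAN throughput with memory pressure. A variant that bypasses the cache via O_DIRECT gives a more direct measure of the storage itself. This is a sketch, assuming the target filesystem supports direct I/O; if it doesn't, the fallback at least forces a flush at the end:

```shell
# Write 1GB bypassing the page cache (O_DIRECT); if direct I/O is not
# supported on this filesystem, fall back to fsync-ing before dd exits.
dd if=/dev/zero of=/tmp/test22 bs=1M count=1000 oflag=direct \
    || dd if=/dev/zero of=/tmp/test22 bs=1M count=1000 conv=fsync
```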






6 October 2015

IBM Storwize v3700 Configuration Backup

Since the IBM Storwize storage system doesn't use a typical command-line interface, you need to know its special commands in order to get things done via the CLI.

I needed to create a backup of the storage configuration and transfer it to my local computer. Things didn't go quite so easily: after logging in to the CLI, you will notice that even commands like "ls" or "cd" do not exist.

Here is how I carried out this task, together with the commands and their output.

lsdumps


To check previously backed-up files, use "lsdumps":

IBM_2072:STGHostname:superuser>lsdumps
id filename
0  ethernet.7806628-1.trc
1  livedump.7806628-1.140819.163538
2  snap.7806628.tgz
3  7806628-1.trc
4  svc.config.cron.bak_7806628-1
5  svc.config.cron.sh_7806628-1
6  svc.config.cron.xml_7806628-1
7  svc.config.cron.log_7806628-1



svcconfig


To create the backup, "svcconfig" is the command:

IBM_2072:STGHostname:superuser>svcconfig backup
................................................................................
................................................................................
................................................................................
................................................................................
CMMVC6155I SVCCONFIG processing completed successfully


After the backup finishes successfully, three new files are created, shown as ids 8, 9 and 10 below:

IBM_2072:STGHostname:superuser>lsdumps
id filename
0  ethernet.7806628-1.trc
1  livedump.7806628-1.140819.163538
2  snap.7806628-1.tgz
3  7806628-1.trc
4  svc.config.cron.bak_7806628-1
5  svc.config.cron.sh_7806628-1
6  svc.config.cron.xml_7806628-1
7  svc.config.cron.log_7806628-1
8  svc.config.backup.xml_7806628-1
9  svc.config.backup.log_7806628-1
10 svc.config.backup.sh_7806628-1
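If key-based SSH login has been set up for the superuser account, the backup can also be triggered non-interactively from a workstation. A sketch, reusing the Storwize address that appears later in this post:

```shell
#!/bin/sh
# Run the config backup on the Storwize node over SSH, then list the
# dump files to confirm the three svc.config.backup.* entries appeared.
STORWIZE=superuser@10.8.224.163
REMOTE_CMD='svcconfig backup && lsdumps'
ssh "$STORWIZE" "$REMOTE_CMD"
```

This makes it easy to schedule the backup from a cron job on an ordinary Linux host.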



PSCP


It is actually not possible to use WinSCP to transfer the config backups off-site, due to the Storwize's lack of basic commands such as "cd". But "pscp" does the trick.

You need to download it from its official website (http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html), then use the following command to transfer the config backup files to your computer:

Assuming you downloaded "pscp.exe" to your Windows desktop folder,
"superuser" is the default admin account of the Storwize,
10.8.224.163 is the IP of the Storwize, and
you want to transfer the backups to your desktop folder, then
the command would be:

C:\Users\username\Desktop>pscp superuser@10.8.224.163:/tmp/svc.config.backup.* C:\Users\username\Desktop/
superuser@10.8.224.163's password:
svc.config.backup.log     | 27 kB |  27.8 kB/s | ETA: 00:00:00 | 100%
svc.config.backup.sh      | 13 kB |  14.0 kB/s | ETA: 00:00:00 | 100%
svc.config.backup.xml     | 221 kB | 221.2 kB/s | ETA: 00:00:00 | 100%
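From a Linux or macOS workstation, plain scp should do the same job, since pscp speaks the same SCP protocol and only the Storwize's interactive shell is restricted. A sketch, assuming the same address and account as above:

```shell
# Copy the config backup files from the Storwize /tmp directory to the
# current directory; the wildcard also matches node-suffixed names.
scp 'superuser@10.8.224.163:/tmp/svc.config.backup.*' .
```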