Monday 11 December 2017

Ansible - multiple if/else statements or a case alternative

Ansible prefers the when: statement and does not provide a classic if/else or case construct.

To chain multiple if conditions you can use a simple trick. The example below checks several variables and, treating a variable as set when it differs from the empty string '', works out the highest-numbered NETWORK_* variable that is set (i.e. the number of NICs).


- name: count number of NICs
  vars:
    count_nic: "{{ '5' if NETWORK_5 != '' else '4' if NETWORK_4 != '' else '3' if NETWORK_3 != '' else '2' if NETWORK_2 != '' else '1' }}"
  # import the tasks file matching the detected NIC count
  include: tasks/nic_{{ count_nic }}.yml
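
The same selection can also be written as separate include tasks guarded by when:, which is closer to the usual Ansible style. A minimal sketch, assuming the same NETWORK_* variables and tasks files:

- include: tasks/nic_5.yml
  when: NETWORK_5 != ''

- include: tasks/nic_4.yml
  when: NETWORK_5 == '' and NETWORK_4 != ''

- include: tasks/nic_3.yml
  when: NETWORK_5 == '' and NETWORK_4 == '' and NETWORK_3 != ''

# ...and so on down to tasks/nic_1.yml

The inline if/else expression keeps the whole decision in a single task, which is the point of the trick above.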

Ansible - Restart server and wait until available

Since Ansible 2.3 you can use the wait_for_connection module to check when a host becomes reachable again.

Example:

- name: Reboot server
  shell: ( sleep 2 && shutdown -r now & )
  async: 1
  poll: 0
  ignore_errors: true

- name: Waiting for host {{ ansible_default_ipv4.address }} after reboot
  wait_for_connection:
    timeout: 300
    delay: 30


This example solves the problem :-)
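
On Ansible versions older than 2.3, a similar effect can be achieved with the wait_for module run from the control machine. A minimal sketch, assuming SSH listens on port 22:

- name: Waiting for host {{ ansible_default_ipv4.address }} after reboot (pre-2.3)
  wait_for:
    host: "{{ ansible_default_ipv4.address }}"
    port: 22
    delay: 30
    timeout: 300
  delegate_to: localhost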

Ansible "The destination directory (/etc) is not writable"

I used the following Ansible playbook:

- name: copy resolv.conf
  become: root
  become_method: sudo
  copy:
    src: "resolv.conf"
    dest: "/etc/resolv.conf"
    owner: root
    group: root
    mode: 0644
    force: yes


Error "The destination directory (/etc) is not writable" occur

The problem is the become keyword: it expects a boolean value (yes/no or true/false), not a user name.

The working configuration is (SOLVED):

- name: copy resolv.conf
  become: yes
  become_method: sudo
  copy:
    src: "resolv.conf"
    dest: "/etc/resolv.conf"
    owner: root
    group: root
    mode: 0644
    force: yes

Thursday 11 May 2017

IBM V7000 compression performance - dedicated card for compression

Configuration of storage:
IBM V7000 Gen2+
10x 4 TB SSD
Interface: 10 GbE (iSCSI)
Dedicated compression card

Configuration of test VM:
2 vCPU, 4 GB RAM
Debian Linux on ESXi 6
Datastore mapped to ESXi

With compression enabled, read IOPS are higher and write IOPS only slightly lower. CPU usage on the storage for compression was about 30%.
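
The exact fio invocation is not part of the output below. Judging from the job lines (4 KB blocks, libaio, iodepth 64, a 40 GB test file, roughly a 75/25 read/write split), the three tests were presumably started with something close to the commands below; --direct=1 is an assumption, since the output does not show it.

# mixed random read/write, ~75% read (assumed invocation)
fio --name=test --size=40960M --bs=4k --ioengine=libaio --iodepth=64 --direct=1 --rw=randrw --rwmixread=75
# 100% random read
fio --name=test --size=40960M --bs=4k --ioengine=libaio --iodepth=64 --direct=1 --rw=randread
# 100% random write
fio --name=test --size=40960M --bs=4k --ioengine=libaio --iodepth=64 --direct=1 --rw=randwrite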

Without compression:
test: (g=0): rw=randrw, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.9
Starting 1 process
test: Laying out IO file(s) (1 file(s) / 40960MB)

test: (groupid=0, jobs=1): err= 0: pid=3987: Wed Mar 29 07:57:35 2017
  read : io=30723MB, bw=139125KB/s, iops=34781 , runt=226126msec
  write: io=10237MB, bw=46360KB/s, iops=11589 , runt=226126msec
  cpu          : usr=9.58%, sys=37.75%, ctx=709816, majf=0, minf=4
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=7864963/w=2620797/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=30723MB, aggrb=139125KB/s, minb=139125KB/s, maxb=139125KB/s, mint=226126msec, maxt=226126msec
  WRITE: io=10237MB, aggrb=46359KB/s, minb=46359KB/s, maxb=46359KB/s, mint=226126msec, maxt=226126msec

Disk stats (read/write):
  sdb: ios=7856666/2618202, merge=0/45, ticks=10471096/3310356, in_queue=13777900, util=100.00%
test read 100%
test: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.9
Starting 1 process

test: (groupid=0, jobs=1): err= 0: pid=3991: Wed Mar 29 08:00:59 2017
  read : io=40960MB, bw=206382KB/s, iops=51595 , runt=203230msec
  cpu          : usr=9.88%, sys=38.44%, ctx=665551, majf=0, minf=68
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=10485760/w=0/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=40960MB, aggrb=206382KB/s, minb=206382KB/s, maxb=206382KB/s, mint=203230msec, maxt=203230msec

Disk stats (read/write):
  sdb: ios=10475310/2, merge=0/1, ticks=11981140/0, in_queue=12170624, util=100.00%
test write 100% 
test: (g=0): rw=randwrite, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.9
Starting 1 process

test: (groupid=0, jobs=1): err= 0: pid=3995: Wed Mar 29 08:05:59 2017
  write: io=40960MB, bw=139572KB/s, iops=34892 , runt=300512msec
  cpu          : usr=7.17%, sys=29.45%, ctx=546661, majf=0, minf=4
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=0/w=10485760/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
  WRITE: io=40960MB, aggrb=139571KB/s, minb=139571KB/s, maxb=139571KB/s, mint=300512msec, maxt=300512msec

Disk stats (read/write):
  sdb: ios=0/10480731, merge=0/60, ticks=0/17984528, in_queue=17979712, util=100.00%


With compression:

test: (g=0): rw=randrw, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.9
Starting 1 process
test: Laying out IO file(s) (1 file(s) / 40960MB)

test: (groupid=0, jobs=1): err= 0: pid=4021: Wed Mar 29 08:54:43 2017
  read : io=30719MB, bw=164117KB/s, iops=41029 , runt=191668msec
  write: io=10241MB, bw=54715KB/s, iops=13678 , runt=191668msec
  cpu          : usr=10.62%, sys=45.65%, ctx=419306, majf=0, minf=4
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=7863971/w=2621789/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=30719MB, aggrb=164116KB/s, minb=164116KB/s, maxb=164116KB/s, mint=191668msec, maxt=191668msec
  WRITE: io=10241MB, aggrb=54715KB/s, minb=54715KB/s, maxb=54715KB/s, mint=191668msec, maxt=191668msec

Disk stats (read/write):
  sdb: ios=7853399/2618338, merge=0/38, ticks=7420120/3239660, in_queue=10970028, util=100.00%
test read 100%
test: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.9
Starting 1 process

test: (groupid=0, jobs=1): err= 0: pid=4024: Wed Mar 29 08:56:51 2017
  read : io=40960MB, bw=328339KB/s, iops=82084 , runt=127743msec
  cpu          : usr=13.15%, sys=60.54%, ctx=294145, majf=0, minf=68
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=10485760/w=0/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=40960MB, aggrb=328339KB/s, minb=328339KB/s, maxb=328339KB/s, mint=127743msec, maxt=127743msec


Disk stats (read/write):
  sdb: ios=10466773/3, merge=0/1, ticks=6400748/8700, in_queue=6405540, util=100.00%
test write 100% 
test: (g=0): rw=randwrite, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.9
Starting 1 process

test: (groupid=0, jobs=1): err= 0: pid=4028: Wed Mar 29 09:02:27 2017
  write: io=40960MB, bw=125056KB/s, iops=31264 , runt=335393msec
  cpu          : usr=6.91%, sys=31.93%, ctx=491626, majf=0, minf=4
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=0/w=10485760/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
  WRITE: io=40960MB, aggrb=125056KB/s, minb=125056KB/s, maxb=125056KB/s, mint=335393msec, maxt=335393msec 

IBM V7000 performance DRAID5 vs DRAID6 on SSD

Configuration of storage:
IBM V7000 Gen2+
10x 4 TB SSD
Interface: 10 GbE (iSCSI)

Configuration of test VM:
2 vCPU, 4 GB RAM
Debian Linux on ESXi 6
Datastore mapped to ESXi

Performance is limited by the disks; CPU usage on the storage stayed under 40%.

DRAID5 (8D + 1P + 1S)

test: (g=0): rw=randrw, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.9
Starting 1 process
test: Laying out IO file(s) (1 file(s) / 40960MB)

test: (groupid=0, jobs=1): err= 0: pid=3987: Wed Mar 29 07:57:35 2017
  read : io=30723MB, bw=139125KB/s, iops=34781 , runt=226126msec
  write: io=10237MB, bw=46360KB/s, iops=11589 , runt=226126msec
  cpu          : usr=9.58%, sys=37.75%, ctx=709816, majf=0, minf=4
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=7864963/w=2620797/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=30723MB, aggrb=139125KB/s, minb=139125KB/s, maxb=139125KB/s, mint=226126msec, maxt=226126msec
  WRITE: io=10237MB, aggrb=46359KB/s, minb=46359KB/s, maxb=46359KB/s, mint=226126msec, maxt=226126msec

Disk stats (read/write):
  sdb: ios=7856666/2618202, merge=0/45, ticks=10471096/3310356, in_queue=13777900, util=100.00%
test read 100%
test: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.9
Starting 1 process

test: (groupid=0, jobs=1): err= 0: pid=3991: Wed Mar 29 08:00:59 2017
  read : io=40960MB, bw=206382KB/s, iops=51595 , runt=203230msec
  cpu          : usr=9.88%, sys=38.44%, ctx=665551, majf=0, minf=68
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=10485760/w=0/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=40960MB, aggrb=206382KB/s, minb=206382KB/s, maxb=206382KB/s, mint=203230msec, maxt=203230msec

Disk stats (read/write):
  sdb: ios=10475310/2, merge=0/1, ticks=11981140/0, in_queue=12170624, util=100.00%
test write 100% 
test: (g=0): rw=randwrite, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.9
Starting 1 process

test: (groupid=0, jobs=1): err= 0: pid=3995: Wed Mar 29 08:05:59 2017
  write: io=40960MB, bw=139572KB/s, iops=34892 , runt=300512msec
  cpu          : usr=7.17%, sys=29.45%, ctx=546661, majf=0, minf=4
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=0/w=10485760/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
  WRITE: io=40960MB, aggrb=139571KB/s, minb=139571KB/s, maxb=139571KB/s, mint=300512msec, maxt=300512msec

Disk stats (read/write):
  sdb: ios=0/10480731, merge=0/60, ticks=0/17984528, in_queue=17979712, util=100.00%


DRAID6 (7D + 2P + 1S)
test: (g=0): rw=randrw, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.9
Starting 1 process
test: Laying out IO file(s) (1 file(s) / 40960MB)

test: (groupid=0, jobs=1): err= 0: pid=4021: Tue Mar 28 19:43:26 2017
  read : io=30719MB, bw=127238KB/s, iops=31809 , runt=247222msec
  write: io=10241MB, bw=42419KB/s, iops=10604 , runt=247222msec
  cpu          : usr=9.06%, sys=33.66%, ctx=787004, majf=0, minf=4
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=7864034/w=2621726/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=30719MB, aggrb=127238KB/s, minb=127238KB/s, maxb=127238KB/s, mint=247222msec, maxt=247222msec
  WRITE: io=10241MB, aggrb=42418KB/s, minb=42418KB/s, maxb=42418KB/s, mint=247222msec, maxt=247222msec

Disk stats (read/write):
  sdb: ios=7862987/2621478, merge=0/49, ticks=11209100/3770780, in_queue=14975764, util=100.00%
test read 100%
test: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.9
Starting 1 process

test: (groupid=0, jobs=1): err= 0: pid=4025: Tue Mar 28 19:47:06 2017
  read : io=40960MB, bw=190892KB/s, iops=47723 , runt=219721msec
  cpu          : usr=9.46%, sys=35.56%, ctx=756041, majf=0, minf=68
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=10485760/w=0/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=40960MB, aggrb=190892KB/s, minb=190892KB/s, maxb=190892KB/s, mint=219721msec, maxt=219721msec

Disk stats (read/write):
  sdb: ios=10478253/3, merge=0/1, ticks=13149764/169304, in_queue=13315696, util=100.00%
test write 100% 
test: (g=0): rw=randwrite, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.9
Starting 1 process

test: (groupid=0, jobs=1): err= 0: pid=4029: Tue Mar 28 19:53:33 2017
  write: io=40960MB, bw=108597KB/s, iops=27149 , runt=386225msec
  cpu          : usr=5.93%, sys=22.56%, ctx=660059, majf=0, minf=4
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=0/w=10485760/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
  WRITE: io=40960MB, aggrb=108597KB/s, minb=108597KB/s, maxb=108597KB/s, mint=386225msec, maxt=386225msec

Disk stats (read/write):
  sdb: ios=0/10484791, merge=0/83, ticks=0/23609532, in_queue=23605968, util=100.00%