20 April 2015

Git Bash Aliases

Here is what my Git Bash aliases look like:

$ cat ~/.bash_profile
alias _cd="cd ~/downloads/git/puppet"
alias gs="git status"
alias ga="git add -A"
alias gc="git commit"
alias gd="git diff origin/master"
alias gp="git pull --rebase origin master"
alias gpu="git push"

And in my shared home directory, an alias to run puppet:

alias pa="sudo puppet agent -t"
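
New aliases only take effect in new shells; to load them into the current session, source the profile:

$ source ~/.bash_profile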





9 April 2015

Install Puppet Automation on Red Hat Enterprise Linux / CentOS 6.5

Intro

What is Puppet?

If you are managing hundreds or thousands of servers, automation is meaningful to you.
And Puppet is a server/client tool that orchestrates your infrastructure. Puppet works like this: you define the desired state of your systems (services, files, directories) on the puppet master, and the rest of your servers, running as puppet agents, check in every 30 minutes by default and fetch any new configuration that exists for them. So Puppet essentially holds control of the config files on your servers, and since you can consider almost everything in Linux a file, Puppet is a tool for managing files.

Some of the Puppet capabilities:

  • Set cron jobs
  • Install or remove packages
  • Ensure services are running
  • Control files
  • Execute commands
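
For a quick taste of this before the full setup, once puppet is installed you can inspect and manage a single resource straight from the shell with "puppet resource" (the sshd service and ntp package here are only illustrative examples):

# puppet resource service sshd
# puppet resource package ntp ensure=installed

The first prints the current state of the sshd service as puppet code; the second installs the ntp package ad hoc through the same resource abstraction.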

Implementation 

Below I demonstrate how to implement puppet in your environment both online and, for servers that are not connected to the internet, offline via local repositories.

Online Method

Installing Puppet Master 

1. Download and install the puppetlabs rpm to add the appropriate yum repository for the puppet installation:

# rpm -ivh http://yum.puppetlabs.com/el/6.5/products/x86_64/puppetlabs-release-6-11.noarch.rpm


2. Now we are ready to install puppet master:

# yum -y install puppet-server
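
Optionally, confirm the package landed and check the installed version:

# rpm -q puppet-server
# puppet --version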


3. After the installation is done, it's time to generate the certificates and start the puppet master:

# puppet master --verbose --no-daemonize

Info: Creating a new SSL key for ca
Info: Creating a new SSL certificate request for ca
Info: Certificate Request fingerprint (SHA256): 64:A3:74:65:14:80:A0:A9:7F:6A:A5:C2:48:D3:57:98:12:B9:E7:65:7D:5A:45:7E:0F:36:59:77:06:0B:7E:F9
Notice: Signed certificate request for ca
Info: Creating a new certificate revocation list
Info: Creating a new SSL key for centos6.lab.local
Info: csr_attributes file loading from /etc/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for centos6.lab.local
Info: Certificate Request fingerprint (SHA256): 17:95:82:A2:B7:68:F1:3E:D9:89:7E:01:C5:F6:AA:9B:CA:72:C4:9D:28:E9:C0:6E:5D:2D:93:7C:F6:EB:93:19
Notice: centos6.lab.local has a waiting certificate request
Notice: Signed certificate request for centos6.lab.local
Notice: Removing file Puppet::SSL::CertificateRequest centos6.lab.local at '/var/lib/puppet/ssl/ca/requests/centos6.lab.local.pem'
Notice: Removing file Puppet::SSL::CertificateRequest centos6.lab.local at '/var/lib/puppet/ssl/certificate_requests/centos6.lab.local.pem'
Notice: Starting Puppet master version 3.8.2

We can see a new SSL key and certificate have been created. Now press Ctrl+C to exit the setup:

^CNotice: Caught INT; storing stop


4. Add the certificate name to the puppet config file. To check the certificate name:

# puppet cert list --all
+ "centos6.lab.local" (SHA256) A3:AE:51:03:A9:01:3C:CE:36:B1:FB:C7:C7:7D:A3:0F:66:6F:99:1F:BE:E4:C8:DB:BB:08:6E:7C:20:DA:24:B0 (alt names: "DNS:centos6.lab.local", "DNS:puppet", "DNS:puppet.lab.local")

In my case the cert name is "centos6.lab.local" (same as my hostname). Now add the cert name to the puppet config under the [main] section:

# vim /etc/puppet/puppet.conf

[main]
    # The Puppet log directory.
    # The default value is '$vardir/log'.
    logdir = /var/log/puppet

    # Where Puppet PID files are kept.
    # The default value is '$vardir/run'.
    rundir = /var/run/puppet

    # Where SSL certificates are kept.
    # The default value is '$confdir/ssl'.
    ssldir = $vardir/ssl

    certname = centos6.lab.local


5. If the hostname of your server is not "puppet", you need to add dns_alt_names to the puppet config:

# vim /etc/puppet/puppet.conf

[main]
    # The Puppet log directory.
    # The default value is '$vardir/log'.
    logdir = /var/log/puppet

    # Where Puppet PID files are kept.
    # The default value is '$vardir/run'.
    rundir = /var/run/puppet

    # Where SSL certificates are kept.
    # The default value is '$confdir/ssl'.
    ssldir = $vardir/ssl

    certname = centos6.lab.local
    dns_alt_names = puppet, puppet.example.com 


6. Install the Apache httpd server plus the development packages needed to build the Ruby Passenger application server:

# yum install httpd httpd-devel mod_ssl openssl-devel libcurl-devel zlib-devel rubygems ruby-devel apr-devel apr-util-devel gcc gcc-c++


7. Install the rack and passenger ruby gems:

# gem install rack passenger
Successfully installed rack-1.6.4
Building native extensions.  This could take a while...
Successfully installed rake-10.4.2
Successfully installed passenger-5.0.16
3 gems installed
Installing ri documentation for rack-1.6.4...
Installing ri documentation for rake-10.4.2...
Installing ri documentation for passenger-5.0.16...
Installing RDoc documentation for rack-1.6.4...
Installing RDoc documentation for rake-10.4.2...
Installing RDoc documentation for passenger-5.0.16...

8. Install the puppet master rack application:

# mkdir -p /usr/share/puppet/rack/puppetmasterd/public /usr/share/puppet/rack/puppetmasterd/tmp

# cp /usr/share/puppet/ext/rack/config.ru /usr/share/puppet/rack/puppetmasterd/

# chown puppet:puppet /usr/share/puppet/rack/puppetmasterd/config.ru


9. Create puppet apache virtual host:

# cp /usr/share/puppet/ext/rack/example-passenger-vhost.conf /etc/httpd/conf.d/puppetmaster.conf 

Now edit the puppetmaster config file and modify the SSL certificate and DocumentRoot directives:

# vim /etc/httpd/conf.d/puppetmaster.conf 

# This Apache 2 virtual host config shows how to use Puppet as a Rack
# application via Passenger. See
# http://docs.puppetlabs.com/guides/passenger.html for more information.

# You can also use the included config.ru file to run Puppet with other Rack
# servers instead of Passenger.

# you probably want to tune these settings
PassengerHighPerformance on
PassengerMaxPoolSize 12
PassengerPoolIdleTime 1500
# PassengerMaxRequests 1000
PassengerStatThrottleRate 120
# Note: the RackAutoDetect and RailsAutoDetect directives were removed in
# Passenger 4+, so keep them commented out with passenger-5.0.16 or Apache
# will fail to start on an unknown directive.
# RackAutoDetect Off
# RailsAutoDetect Off

Listen 8140

<VirtualHost *:8140>
        SSLEngine on
        SSLProtocol             ALL -SSLv2 -SSLv3
        SSLCipherSuite          EDH+CAMELLIA:EDH+aRSA:EECDH+aRSA+AESGCM:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH:+CAMELLIA256:+AES256:+CAMELLIA128:+AES128:+SSLv3:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!DSS:!RC4:!SEED:!IDEA:!ECDSA:kEDH:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
        SSLHonorCipherOrder     on

        SSLCertificateFile      /var/lib/puppet/ssl/certs/centos6.lab.local.pem
        SSLCertificateKeyFile   /var/lib/puppet/ssl/private_keys/centos6.lab.local.pem
        SSLCertificateChainFile /var/lib/puppet/ssl/ca/ca_crt.pem 
        SSLCACertificateFile    /var/lib/puppet/ssl/ca/ca_crt.pem
        # If Apache complains about invalid signatures on the CRL, you can try disabling
        # CRL checking by commenting the next line, but this is not recommended.
        SSLCARevocationFile    /var/lib/puppet/ssl/ca/ca_crl.pem 
        # Apache 2.4 introduces the SSLCARevocationCheck directive and sets it to none
        # which effectively disables CRL checking; if you are using Apache 2.4+ you must
        # specify 'SSLCARevocationCheck chain' to actually use the CRL.
        # SSLCARevocationCheck chain
        SSLVerifyClient optional
        SSLVerifyDepth  1
        # The `ExportCertData` option is needed for agent certificate expiration warnings
        SSLOptions +StdEnvVars +ExportCertData

        # This header needs to be set if using a loadbalancer or proxy
        RequestHeader unset X-Forwarded-For

        RequestHeader set X-SSL-Subject %{SSL_CLIENT_S_DN}e
        RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
        RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e

        DocumentRoot /usr/share/puppet/rack/puppetmasterd/public
        RackBaseURI /
        <Directory /usr/share/puppet/rack/puppetmasterd/>
                Options None
                AllowOverride None
                Order allow,deny
                allow from all
        </Directory>
</VirtualHost>


10. Now it's time to install the apache module for passenger:

# passenger-install-apache2-module 

Welcome to the Phusion Passenger Apache 2 module installer, v5.0.16.

This installer will guide you through the entire installation process. It
shouldn't take more than 3 minutes in total.

Here's what you can expect from the installation process:

 1. The Apache 2 module will be installed for you.
 2. You'll learn how to configure Apache.
 3. You'll learn how to deploy a Ruby on Rails application.

Don't worry if anything goes wrong. This installer will advise you on how to
solve any problems.

Press Enter to continue, or Ctrl-C to abort. 

Press Enter to continue:

Installation through RPMs recommended

It looks like you are on a Red Hat or CentOS operating system, with SELinux
enabled. SELinux is a security mechanism for which special Passenger-specific
configuration is required. We supply this configuration as part of
our Passenger RPMs.

However, Passenger is currently installed through gem or tarball and does not
include any SELinux configuration. Therefore, we recommend that you:

 1. Uninstall your current Passenger install.
 2. Reinstall Passenger through the RPMs that we provide:
    https://www.phusionpassenger.com/library/install/apache/yum_repo/

What would you like to do?

Press Ctrl-C to exit this installer so that you can install RPMs (recommended)
  -OR-
Press Enter to continue using this installer anyway

This message means SELinux is enabled on the server. We can disable it by changing "enforcing" to "disabled" in the SELinux config file:

# vim /etc/selinux/config
...
SELINUX=disabled

You need to reboot the server now to completely disable SELinux, then run "passenger-install-apache2-module" again after the server comes up.
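
If a full reboot is not convenient right away, note that "setenforce 0" only switches SELinux to permissive mode for the running session; the SELINUX=disabled setting above still requires a reboot to take full effect. You can check the current state with:

# getenforce
# sestatus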

At this stage, choose 'Ruby' in the list below and press Enter:

Which languages are you interested in?

Use <space> to select.
If the menu doesn't display correctly, press '!'

 > (*)  Ruby
   ( )  Python
   ( )  Node.js
   ( )  Meteor


Once you see the following message:


Almost there!



Please edit your Apache configuration file, and add these lines:

   LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-5.0.16/buildout/apache2/mod_passenger.so
   <IfModule mod_passenger.c>
     PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-5.0.16
     PassengerDefaultRuby /usr/bin/ruby
   </IfModule>


Open another terminal to the server and add the mentioned block at the end of your httpd.conf file:

# vim /etc/httpd/conf/httpd.conf
...
...
#    ServerName dummy-host.example.com
#    ErrorLog logs/dummy-host.example.com-error_log
#    CustomLog logs/dummy-host.example.com-access_log common
#</VirtualHost>

LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-5.0.16/buildout/apache2/mod_passenger.so
<IfModule mod_passenger.c>
  PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-5.0.16
  PassengerDefaultRuby /usr/bin/ruby
</IfModule>

Save the file, return to the previous terminal with the message, and hit Enter. Then you should see a confirmation similar to the one below:

Validating installation...

 * Checking whether this Passenger install is in PATH... ✓
 * Checking whether there are no other Passenger installations... ✓
 * Checking whether Apache is installed... ✓
 * Checking whether the Passenger module is correctly configured in Apache... ✓

Everything looks good. :-)
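
Before restarting Apache in the next step, you can optionally check the configuration syntax; it should report "Syntax OK":

# httpd -t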

11. Finally, restart apache and make sure the puppetmaster service is stopped:

# /etc/init.d/puppetmaster stop

# /etc/init.d/httpd restart
Stopping httpd:                                            [FAILED]
Starting httpd:                                              [  OK  ]

# chkconfig puppetmaster off
# chkconfig httpd on
# chkconfig --list|egrep 'puppetmaster|httpd'
httpd           0:off   1:off   2:on    3:on    4:on    5:on    6:off
puppetmaster    0:off   1:off   2:off   3:off   4:off   5:off   6:off
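
To verify the master is now served through Apache/Passenger, check that httpd is listening on port 8140:

# netstat -tlnp | grep 8140

You should see httpd (not ruby/puppetmasterd) as the owning process.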

12. If you are using a firewall, make sure to open port 8140 so the puppet master is accessible to the other servers:

# iptables -I INPUT -p tcp -m tcp --dport 8140 -j ACCEPT
# iptables-save > /etc/sysconfig/iptables
# /etc/init.d/iptables restart

Installing Puppet Agent

1. Install the puppet client on the other servers that you would like to be managed by the puppet master:

# rpm -ivh http://yum.puppetlabs.com/el/6.5/products/x86_64/puppetlabs-release-6-11.noarch.rpm

# yum -y install puppet


2. If you are not using a DNS server, make sure to manually add the puppet master entry to the agent's hosts file, and the agent's hostname to the master's hosts file, so both master and agent can ping each other via hostname:


In the agent server: 

# echo "10.8.8.254 centos6.lab.local centos6 puppet.example.com puppet " >> /etc/hosts 

In puppet master:

# echo "10.8.8.238 puppetagent.lab.local puppetagent " >> /etc/hosts 


3. Add the master server entry to the agent's puppet config file:

# echo "server = puppet.example.com" >> /etc/puppet/puppet.conf


4. Start the puppet agent:

# /etc/init.d/puppet start

Starting puppet agent:                                   [  OK  ]

# chkconfig puppet on



5. At this point, if everything went well, you should be able to see the agent's certificate request on the puppet master. Run the following on the master:

# puppet cert list --all

  "puppetagent.lab.local" (SHA256) 8A:29:5D:25:22:34:D5:D1:7A:E9:87:00:2F:45:4B:47:17:22:ED:0E:53:2A:F3:0F:A6:2B:8F:C4:4C:1F:CF:31
+ "centos6.lab.local"  (SHA256) A3:AE:51:03:A9:01:3C:CE:36:B1:FB:C7:C7:7D:A3:0F:66:6F:99:1F:BE:E4:C8:DB:BB:08:6E:7C:20:DA:24:B0 (alt names: "DNS:centos6.lab.local", "DNS:puppet", "DNS:puppet.lab.local")


6. We can see our agent's certificate is seen by the master, but the "+" is missing next to it, meaning the master still needs to sign the agent's certificate to officially accept it as a managed server:

# puppet cert sign --all
Notice: Signed certificate request for puppetagent.lab.local

Notice: Removing file Puppet::SSL::CertificateRequest puppetagent.lab.local at '/var/lib/puppet/ssl/ca/requests/puppetagent.lab.local.pem'


7. On the agent, issue the following command to check whether it can connect to the master:

# puppet agent -tv 
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for puppetagent.lab.local
Info: Applying configuration version '1440784754'
Notice: Finished catalog run in 0.05 seconds



8. Now that communication has been established, we can try our first bit of magic with puppet. On the master, create the main manifest as below:

# vim /etc/puppet/manifests/site.pp

file {'/tmp/puppet_test':
  ensure  => present,
  mode    => '0644',
  content => "Hello from Puppet! I am : ${hostname}!\n",
}
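
You can check the manifest for syntax errors on the master before the agents pull it:

# puppet parser validate /etc/puppet/manifests/site.pp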

9. Now on the agent server, run the command below:

# puppet agent -tv
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for puppetagent.lab.local
Info: Applying configuration version '1440784977'
Notice: /Stage[main]/Main/File[/tmp/puppet_test]/ensure: created
Info: Creating state file /var/lib/puppet/state/state.yaml
Notice: Finished catalog run in 0.14 seconds

and check that /tmp/puppet_test was created with the desired content:

# cat /tmp/puppet_test
Hello from Puppet! I am : puppetagent!


And bingo! Congratulations! You have just set up puppet automation for your environment.


Offline Method 

If your servers have no access to the internet for the puppet installation, you need to download the whole puppet repository applicable to your specific Linux distribution, e.g. the puppet repository for CentOS 6.5.

Then create a local repo accessible to the puppet master and agents so that the "yum" package installation method can be used; see the sketch below.
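
How you build the local repo depends on your setup; here is a minimal sketch, assuming you mirror the puppetlabs repository onto a host that exports /var/ftp/pub over FTP (the paths are assumptions chosen to match the repo file below):

# yum -y install createrepo
# wget --mirror --no-parent --no-host-directories -P /var/ftp/pub/PuppetRepo http://yum.puppetlabs.com/el/6.5/products/x86_64/
# wget --mirror --no-parent --no-host-directories -P /var/ftp/pub/PuppetRepo http://yum.puppetlabs.com/el/6.5/dependencies/x86_64/
# createrepo /var/ftp/pub/PuppetRepo/el/6.5/products/x86_64
# createrepo /var/ftp/pub/PuppetRepo/el/6.5/dependencies/x86_64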

After you have set up your local repo, you need to create a yum repo config file in the yum.repos.d directory of both master and agent. Here is my sample yum repo file for my local puppet repository:

# cat /etc/yum.repos.d/puppet.repo

[puppet-products]
name=Puppet products
#baseurl=file:///puppetsrc/el/6.5/products/x86_64/
baseurl=ftp://10.8.227.50/pub/PuppetRepo/el/6.5/products/x86_64
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[puppet-deps]
name=Puppet deps
baseurl=ftp://10.8.227.50/pub/PuppetRepo/el/6.5/dependencies/x86_64
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[puppet-devel]
name=Puppet devel
baseurl=ftp://10.8.227.50/pub/PuppetRepo/el/6.5/devel/x86_64
enabled=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[puppet-pc1]
name=Puppet pc1
baseurl=ftp://10.8.227.50/pub/PuppetRepo/el/6.5/PC1/x86_64
enabled=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release


and then refresh the yum metadata to make sure yum can see the puppet packages:

# yum update
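
You can then verify that yum sees the puppet packages from the local repo:

# yum list puppet puppet-server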

After this is done, the rest is the same as the installation instructions in the online method section above.





6 April 2015

VMware VM with High CPU usage becomes unresponsive

Today one of our production VMs became unresponsive, with the VMware graphs showing 100% CPU usage and high-CPU alerts in the vCenter logs.

After diving deep into the logs, below are my observations from the VMware and Linux logs for the period before the crash.


From the ESXi side,
we can see the host lost access to the SAP01 datastore, but our VM resides in DS01, so it should not affect this VM's operation.

Lost access to volume 547f2eaa-1a00d590-44f0-001018b3dc14 
(SAP01) due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly. info 7/17/2013 6:56:47 AM SAP01

Successfully restored access to volume 547f2eaa-1a00d590-44f0-001018b3dc14 (SAP01) following connectivity issues. info 7/17/2013 6:56:47 AM 10.8.225.211


From the VM guest side,
only one entry related to the CPU status was recorded:

Alarm 'Virtual machine CPU usage' changed from Green to Red info 7/16/2015 1:59:10 PM PRODvm


From the VM guest's vmware.log file,
only one entry related to high CPU was recorded, at 13:25:

2013-07-16T13:25:25.109Z| vmx| I120: SnapshotVMX_TakeSnapshot start: 'high cpu', deviceState=1, lazy=1, logging=0, quiesced=0, forceNative=0, tryNative=0, sibling=0 saveAllocMaps=0 cb=CDA3C80, cbData=3236B4E0
2015-07-16T13:25:25.127Z| vcpu-0| I120: Destroying virtual dev for scsi0:0 vscsi=8371
2015-07-16T13:25:25.127Z| vcpu-0| I120: VMMon_VSCSIStopVports: No such target on adapter



From within the Linux guest,
we can see from the SAR report below that the Linux CPU was almost totally idle before the crash:

# sar -f /var/log/sa/sa16
01:10:01 PM CPU %user %nice %system %iowait %steal %idle
01:20:02 PM all 0.12 0.00 0.27 0.06 0.00 99.55
01:30:01 PM all 0.12 0.00 0.28 0.06 0.00 99.54
01:40:01 PM all 0.20 0.00 0.32 0.10 0.00 99.38
01:50:01 PM all 0.10 0.00 0.27 0.04 0.00 99.58
Average: all 0.14 0.00 0.28 0.32 0.00 99.26

06:05:18 PM LINUX RESTART


06:10:01 PM CPU %user %nice %system %iowait %steal %idle
06:20:01 PM all 0.16 0.00 0.46 0.29 0.00 99.09
06:30:01 PM all 0.05 0.00 0.11 0.10 0.00 99.74
06:40:01 PM all 0.16 0.00 0.52 1.19 0.00 98.13


Therefore, the Linux CPU was not busy, as opposed to VMware, which showed 100% CPU usage.

Linux didn't report any complaints about high CPU I/O wait, so the disks were accessible to Linux in a normal way.

And the CPU run queue was normal as well:

# sar -q -f /var/log/sa/sa16
01:10:01 PM runq-sz plist-sz ldavg-1 ldavg-5 ldavg-15
01:20:02 PM 0 147 0.00 0.00 0.00
01:30:01 PM 0 148 0.00 0.00 0.00
01:40:01 PM 0 159 0.00 0.00 0.00
01:50:01 PM 0 147 0.00 0.00 0.00
Average: 0 147 0.01 0.01 0.00

06:05:18 PM LINUX RESTART

06:10:01 PM runq-sz plist-sz ldavg-1 ldavg-5 ldavg-15
06:20:01 PM 0 161 0.00 0.00 0.00
06:30:01 PM 0 153 0.00 0.00 0.00
06:40:01 PM 0 152 0.00 0.01 0.00
06:50:01 PM 0 151 0.01 0.01 0.00
07:00:01 PM 0 148 0.00 0.01 0.00



Memory usage was also fine from within Linux:

# sar -r -f /var/log/sa/sa16
01:10:01 PM kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit
01:20:02 PM 1248036 806492 39.25 133084 439992 274044 4.39
01:30:01 PM 1248028 806500 39.25 133092 440012 274044 4.39
01:40:01 PM 1234796 819732 39.90 133108 440064 290796 4.65
01:50:01 PM 1247664 806864 39.27 133108 440080 274044 4.39
Average: 1250279 804249 39.15 132749 439035 270872 4.33

06:05:18 PM LINUX RESTART


06:10:01 PM kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit
06:20:01 PM 1766932 287596 14.00 33744 88596 289664 4.64
06:30:01 PM 1766764 287764 14.01 34080 94268 274408 4.39
06:40:01 PM 1634960 419568 20.42 107564 115004 274408 4.39
06:50:01 PM 1634232 420296 20.46 107920 120440 266008 4.26
07:00:01 PM 1633088 421440 20.51 109056 118656 261192 4.18


Swap usage was normal as well:

# sar -S -f /var/log/sa/sa16
01:10:01 PM kbswpfree kbswpused %swpused kbswpcad %swpcad
01:20:02 PM 4194296 0 0.00 0 0.00
01:30:01 PM 4194296 0 0.00 0 0.00
01:40:01 PM 4194296 0 0.00 0 0.00
01:50:01 PM 4194296 0 0.00 0 0.00
Average: 4194296 0 0.00 0 0.00

06:05:18 PM LINUX RESTART


06:10:01 PM kbswpfree kbswpused %swpused kbswpcad %swpcad
06:20:01 PM 4194296 0 0.00 0 0.00
06:30:01 PM 4194296 0 0.00 0 0.00
06:40:01 PM 4194296 0 0.00 0 0.00
06:50:01 PM 4194296 0 0.00 0 0.00


Linux disk reads/writes were normal before the crash, and after 1:50 PM nothing more was recorded:

# sar -b -f /var/log/sa/sa16
01:10:01 PM tps rtps wtps bread/s bwrtn/s
01:20:02 PM 1.73 0.00 1.73 0.00 16.30
01:30:01 PM 1.73 0.00 1.73 0.00 16.22
01:40:01 PM 3.44 0.00 3.44 0.03 37.70
01:50:01 PM 1.72 0.00 1.72 0.00 16.28
Average: 1.87 0.00 1.87 0.00 18.12

06:05:18 PM LINUX RESTART


06:10:01 PM tps rtps wtps bread/s bwrtn/s
06:20:01 PM 4.19 2.51 1.68 147.58 17.06
06:30:01 PM 0.92 0.34 0.59 38.68 5.66
06:40:01 PM 72.39 56.18 16.21 627.77 225.36
06:50:01 PM 3.35 0.01 3.34 0.11 47.39



Hence, based on the above observations, CPU, memory, and disk from within the OS all reported normal behavior.


As VMware shows a weird spike in CPU, I suppose it was something between the ESXi host and the guest VM, resource allocation and CPU shares, or a fight for some resource, that sent the VM into an unresponsive mode.
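
If I hit this again, one thing worth checking on the ESXi side (a suggestion, not something I captured this time) is the VM's CPU ready time, which measures how long its vCPUs wait for physical CPU. On the host:

# esxtop

Then press 'c' for the CPU view; a consistently high %RDY value for the VM would support the resource-contention theory.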