24 July 2016

Running Docker Wildfly/JBoss Application Server in Debug mode via Eclipse

I recently built the whole development environment for a Java Enterprise Edition (J2EE) project on multiple Docker containers. It has saved the hassle of setting up a dev environment for new developers: all it takes is to clone our GitHub repositories and run the Docker images.

I will explain the setup in a future post.

Build Wildfly Docker Image with Debugging Enabled

So basically, in order to run Wildfly in debug mode, we need to add the --debug flag to the Wildfly standalone.sh startup script.

We also need to expose port 8787 in our image, as it's the default port for the Wildfly JVM debug mode.

1. Create or change your Wildfly Dockerfile as follows:
$ cat Dockerfile
FROM jboss/wildfly:10.0.0.Final
EXPOSE 8787 
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0", "--debug"]
2. Build and tag the image:
$ docker build -t wildfly-debug .
3. Run the image either via  docker run:
$ docker run -it -p 8080:8080 -p 8787:8787 --name wfdebug wildfly-debug 
or via docker-compose:
$ cat docker-compose.yml
version: '2'
services:
  wfdebug:
    image: wildfly-debug
    ports:
      - "8080:8080"
      - "8787:8787"
$ docker-compose up wfdebug
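For reference, Wildfly's --debug flag works by adding a JDWP agent argument to the JVM options (with suspend=n, so the server starts without waiting for a debugger to attach). A minimal sketch of composing that argument; the debug_port variable is just the default from the steps above:

```shell
# Build the JDWP agent option that standalone.sh --debug enables
# (suspend=n means the JVM does not block waiting for a debugger).
debug_port=8787
jdwp_opt="-agentlib:jdwp=transport=dt_socket,address=${debug_port},server=y,suspend=n"
echo "$jdwp_opt"
```

This is also why 8787 has to be both exposed in the image and published with -p: the JDWP socket listens there inside the container.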

Eclipse Settings:

In Eclipse, go to the following path:

Window -> Perspective -> Open Perspective -> Debug

Now click the arrow next to the Debug button and choose "Debug Configurations".

On the left sidebar, right-click "Remote Java Application" and choose New.

- Give it a name
- In the Connect tab, leave the Project field blank
- Choose "Standard (Socket Attach)" as the Connection Type
- Connection Properties: Host: "localhost", Port: "8787"
- In the Source tab, click Add, choose Java Project, and select the project(s) you want to debug

Apply the changes, and click on Debug!

Boom! It should now connect to the remote Wildfly server running on docker in Debug mode.

I have faced the following error several times whenever I didn't properly follow the steps above:
"Failed to connect to remote VM. Connection refused."
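Before attaching Eclipse, it's worth confirming the debug port is actually reachable from the host; a small helper sketch using bash's /dev/tcp (the host and port are just the defaults from the steps above):

```shell
# Return 0 and print OK when a TCP connection to host:port succeeds.
check_debug_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "debug port ${port} on ${host} - OK"
  else
    echo "cannot reach ${host}:${port} - check your -p 8787:8787 mapping"
    return 1
  fi
}
check_debug_port localhost 8787 || true
```

If this prints the "cannot reach" message, the "Connection refused" error in Eclipse is a port-mapping or container problem, not an Eclipse configuration problem.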

19 June 2016

Jenkins Docker Container Problem Connecting to JFrog Artifactory Docker Container

So I was setting up Jenkins in Docker to build Gradle projects and push the WAR artifacts to JFrog Artifactory Docker container.

I started the Jenkins and Artifactory containers as follows:

Jenkins via  docker run:
$ docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v /home/arvin/jenkins:/var/jenkins_home jenkins
Jenkins via  docker-compose:
version: '2'
services:
  myjenkins:
    image: jenkins:latest
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - /home/arvin/jenkins:/var/jenkins_home
Artifactory via  docker run:
$ docker run -d --name myartifact -p 8081:8081 docker.bintray.io/jfrog/artifactory-oss:latest
Artifactory via docker-compose:
version: '2'
services:
  myartifact:
    image: docker.bintray.io/jfrog/artifactory-oss:latest
    ports:
      - "8081:8081"
    volumes:
      - /home/arvin/artifact/data:/var/opt/jfrog/artifactory/data
      - /home/arvin/artifact/logs:/var/opt/jfrog/artifactory/logs
      - /home/arvin/artifact/etc:/var/opt/jfrog/artifactory/etc

The Artifactory plugin for Jenkins has been installed. In the main Jenkins configuration, when I try to configure the Artifactory server with the following settings, I get an error:
URL: http://localhost:8081/artifactory
Default Deployer Credentials
Username: admin
Password: password

"Error occurred while requesting version information: Connection refused (Connection refused)"

Even though the default Artifactory credentials admin/password were correct.


After much hassle, it turned out that, for some unknown reason, the Jenkins Artifactory plugin couldn't resolve http://localhost:8081/artifactory, even though the Docker port mappings were correct and the Artifactory web console was reachable at the same URL.

Replacing "localhost" with the Docker container's IP did the trick. You can find the container IP via docker inspect:
$ docker inspect myartifact
or simply read the hosts file inside the running container:
$ docker exec -it myartifact cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
<container IP> b92facb1d664
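Pulling the IP out of such a hosts file can be done with a one-line awk filter; a sketch (the hosts content below is a sample — your container ID and IP will differ):

```shell
# Print the IPv4 address from the hosts line naming the container's hostname.
container_ip_from_hosts() {
  awk -v h="$1" '$2 == h {print $1}'
}
# Sample /etc/hosts content; the ID and IP here are illustrative.
sample_hosts='127.0.0.1 localhost
172.17.0.2 b92facb1d664'
echo "$sample_hosts" | container_ip_from_hosts b92facb1d664
# prints 172.17.0.2
```

Alternatively, docker inspect can print the IP directly with a format template, e.g. docker inspect -f '{{ .NetworkSettings.IPAddress }}' myartifact for the default bridge network.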

Eventually I was able to authenticate the Jenkins plugin with the Artifactory container's IP.

17 May 2016

Terraforming Amazon AWS Lambda function and related API Gateway

I've recently been working on terraforming an AWS Lambda function and its related API Gateway.

Using Terraform with the AWS API Gateway is fairly new at this point.

If you have no idea what an AWS Lambda function is, here is a quick intro:

AWS Lambda is a serverless compute service that runs your code in response to events and automatically manages the underlying compute resources for you. You can use AWS Lambda to extend other AWS services with custom logic, or create your own back-end services that operate at AWS scale, performance, and security.

The Terraform code that I've come up with looks like this:
resource "aws_iam_role" "test_role" {
    name               = "test_role"
    count              = "${var.enable_redirect-lambda}"
    assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy" "test-policy" {
    name   = "test-policy"
    role   = "${aws_iam_role.test_role.id}"
    count  = "${var.enable_redirect-lambda}"
    policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [...],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [...],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [...],
      "Resource": "*"
    }
  ]
}
POLICY
}

resource "aws_lambda_function" "test" {
    function_name    = "test"
    description      = "test description"
    role             = "${aws_iam_role.test_role.arn}"
    handler          = "file_name.handler"
    memory_size      = "128"
    timeout          = "3"
    filename         = "file_name.zip"
    source_code_hash = "${base64sha256(file("file_name.zip"))}"
    runtime          = "nodejs4.3"
    count            = "${var.enable_redirect-lambda}"
}

resource "aws_lambda_permission" "allow_api_gateway" {
    function_name = "${aws_lambda_function.test.function_name}"
    statement_id  = "AllowExecutionFromApiGateway"
    action        = "lambda:InvokeFunction"
    principal     = "apigateway.amazonaws.com"
    source_arn    = "arn:aws:execute-api:${var.region}:${var.account_id}:${aws_api_gateway_rest_api.test_api.id}/*/${aws_api_gateway_integration.test-integration.integration_http_method}${aws_api_gateway_resource.test_resource.path}"
    count         = "${var.enable_redirect-lambda}"
}

resource "aws_api_gateway_rest_api" "test_api" {
  name        = "test_api"
  description = "API for test"
  count       = "${var.enable_redirect-lambda}"
}

resource "aws_api_gateway_resource" "test_resource" {
  rest_api_id = "${aws_api_gateway_rest_api.test_api.id}"
  parent_id   = "${aws_api_gateway_rest_api.test_api.root_resource_id}"
  path_part   = "test_resource"
  count       = "${var.enable_redirect-lambda}"
}

resource "aws_api_gateway_method" "test-get" {
  rest_api_id   = "${aws_api_gateway_rest_api.test_api.id}"
  resource_id   = "${aws_api_gateway_resource.test_resource.id}"
  http_method   = "GET"
  authorization = "NONE"
  count         = "${var.enable_redirect-lambda}"
}

resource "aws_api_gateway_integration" "test-integration" {
  rest_api_id             = "${aws_api_gateway_rest_api.test_api.id}"
  resource_id             = "${aws_api_gateway_resource.test_resource.id}"
  http_method             = "${aws_api_gateway_method.test-get.http_method}"
  type                    = "AWS"
  uri                     = "arn:aws:apigateway:${var.region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${var.region}:${var.account_id}:function:${aws_lambda_function.test.function_name}/invocations"
  credentials             = "arn:aws:iam::${var.account_id}:role/test_role"
  integration_http_method = "${aws_api_gateway_method.test-get.http_method}"
  count                   = "${var.enable_redirect-lambda}"
}

resource "aws_api_gateway_method_response" "302" {
  rest_api_id                 = "${aws_api_gateway_rest_api.test_api.id}"
  resource_id                 = "${aws_api_gateway_resource.test_resource.id}"
  http_method                 = "${aws_api_gateway_method.test-get.http_method}"
  status_code                 = "302"
  count                       = "${var.enable_redirect-lambda}"
  response_parameters_in_json = <<PARAMS
{
    "method.response.header.Location": true
}
PARAMS
}

resource "aws_api_gateway_integration_response" "test_IntegrationResponse" {
  rest_api_id                 = "${aws_api_gateway_rest_api.test_api.id}"
  resource_id                 = "${aws_api_gateway_resource.test_resource.id}"
  http_method                 = "${aws_api_gateway_method.test-get.http_method}"
  status_code                 = "${aws_api_gateway_method_response.302.status_code}"
  count                       = "${var.enable_redirect-lambda}"
  depends_on                  = ["aws_api_gateway_integration.test-integration"]
  response_parameters_in_json = <<PARAMS
{
    "method.response.header.Location": "integration.response.body.location"
}
PARAMS
}

resource "aws_api_gateway_deployment" "test_deploy" {
  depends_on  = ["aws_api_gateway_integration.test-integration"]
  stage_name  = "beta"
  rest_api_id = "${aws_api_gateway_rest_api.test_api.id}"
  count       = "${var.enable_redirect-lambda}"
}
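The source_arn in the aws_lambda_permission resource is just string interpolation, and assembling it by hand can help when debugging API Gateway permission errors. A sketch with hypothetical region, account, and API values (none of these IDs come from the post):

```shell
# Compose the execute-api source ARN the Lambda permission grants access to.
region="eu-west-1"          # hypothetical
account_id="123456789012"   # hypothetical
api_id="abc123"             # hypothetical REST API id
http_method="GET"
resource_path="/test_resource"
source_arn="arn:aws:execute-api:${region}:${account_id}:${api_id}/*/${http_method}${resource_path}"
echo "$source_arn"
```

The /*/ segment matches any stage, so the permission keeps working after redeployments to new stage names.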

15 April 2016

Datadog Integration with Postgres/MongoDB/Apache/Elasticsearch

Datadog has integrations with almost any popular service. I have set up our Datadog to monitor the specific services we need.

Here is how I did it.

Step 1: Datadog Agent

Install Datadog agent for your specific OS in your server. You can download the specific agent in the Datadog UI under Integrations -> Agent.

Step 2: Configure Agent

You need to configure the YAML file for each specific service you would like to be monitored.


Postgres:

1. Create a read-only Postgres role with access to pg_stat_database

In our case, we have a three-server Postgres cluster with PG-Pool, so you need to create the Postgres role and the PG-Pool user on all three servers.

A. Create the PG role
Connect to your PG instance on each server and run the following queries:
$ psql -h server -U postgres -W
postgres# create user datadog with password 'datadog';
postgres# grant SELECT ON pg_stat_database to datadog;
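Since the role has to exist on all three servers, the two queries can be generated in a loop; a sketch (the host names pg01/pg02/pg03 are placeholders, not from the post):

```shell
# Emit the psql invocation needed on each cluster node.
gen_role_cmds() {
  local sql="create user datadog with password 'datadog'; grant SELECT ON pg_stat_database to datadog;"
  local s
  for s in "$@"; do
    printf 'psql -h %s -U postgres -c "%s"\n' "$s" "$sql"
  done
}
gen_role_cmds pg01 pg02 pg03
```

Each emitted line is the same pair of queries from above, targeted at one node of the cluster.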
B. Create an MD5-encrypted password for PG-Pool
Log in to each server and run pg_md5:
$ sudo pg_md5 -p -m -u datadog
Supply the same password you provided when creating the role.

This command creates an MD5 hash for the user and updates the following PG-Pool config file:
$ cat /etc/pgpool-II/pool_passwd
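For reference, the hash pg_md5 writes follows PostgreSQL's md5 scheme: the MD5 of the password concatenated with the username, prefixed with "md5". A sketch reproducing it with coreutils (using the sample datadog/datadog credentials from above):

```shell
# PostgreSQL-style md5 password: "md5" + md5(password || username).
pg_style_md5() {
  local user=$1 pass=$2
  printf 'md5%s' "$(printf '%s%s' "$pass" "$user" | md5sum | cut -d' ' -f1)"
}
pg_style_md5 datadog datadog
echo
```

The result is always 35 characters: the literal "md5" followed by 32 hex digits.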
Test if you are able to connect with your created role:
$ psql -h server -U datadog postgres -c \
  "select * from pg_stat_database LIMIT(1);" \
  && echo -e "\e[0;32mPostgres connection - OK\e[0m" \
  || echo -e "\e[0;31mCannot connect to Postgres\e[0m"
2. Configure the Datadog agent

Create or open datadog postgres.yaml config file:
$ sudo vi /etc/dd-agent/conf.d/postgres.yaml
Fill it as per your environment values


init_config:

instances:
  - host: <IP>
    port: 5432
    username: datadog
    password: datadog
    ssl: true

In my case, I had to set ssl: true so that Datadog connects to my Postgres via SSL, since connections to my Postgres are defined to always be SSL encrypted. When I first left this out, Datadog failed to connect with this error:
Error: md5 authentication for user datadog failed.
Based on my pool_hba.conf file, all connections to all databases from all sources need to use MD5 authentication:
$ sudo cat /etc/pgpool-II/pool_hba.conf
# "local" is for Unix domain socket connections only
local   all         postgres                          trust
local   all         all                               md5
# IPv4 local connections:
host    all         postgres          trust
host    all         all          md5
host    all         all             md5
3. Restart the Datadog agent and check the status:

$ sudo /etc/init.d/datadog-agent restart
$ sudo /etc/init.d/datadog-agent info


MongoDB:

As with Postgres, we need a user with a read-only role on the admin database.

1. Create Mongo user
Connect to mongo server on admin database and create the datadog user:

$ mongo <server>/admin -ssl --sslAllowInvalidCertificates -u root -p <passwd>

> db.createUser({
  "user": "datadog",
  "pwd": "datadog",
  "roles": [
    { role: "read", db: "admin" },
    { role: "clusterMonitor", db: "admin" },
    { role: "read", db: "local" }
  ]
})

2. Configure mongo.yaml

$ sudo vi /etc/dd-agent/conf.d/mongo.yaml
Fill it as per your environment values


init_config:

instances:
  - server: mongodb://datadog:datadog@localhost:27017/admin
    ssl: true
    ssl_cert_reqs: 0
    additional_metrics:
      - durability
      - locks
      - top
In my case, Mongo is using SSL but the certificate has expired, so I had to set ssl: true and ssl_cert_reqs: 0 so that the SSL cert is ignored. Without those settings, I received a `Mongodb: Connection Closed` error in Datadog.

3. Restart the Datadog agent and check the status:
$ sudo /etc/init.d/datadog-agent restart
$ sudo /etc/init.d/datadog-agent info


Apache:

The Apache integration is quite straightforward, except that you need to make sure mod_status is loaded in your Apache service. Another requirement is that ExtendedStatus needs to be set to On, which is the default from Apache version 2.3.6 onward.

1. Configure apache.yaml

$ sudo vi /etc/dd-agent/conf.d/apache.yaml
Fill it as per your environment values


init_config:

instances:
  - apache_status_url: http://<IP>:6666/server-status
    disable_ssl_validation: true
2. Restart the Datadog agent and check the status:

$ sudo /etc/init.d/datadog-agent restart
$ sudo /etc/init.d/datadog-agent info


Elasticsearch:

There is no trick to the Elasticsearch integration.

Configure and restart the agent:
$ sudo vim /etc/dd-agent/conf.d/elastic.yaml
Fill it as per your environment values
init_config:

instances:
  - url: http://localhost:9200
    tags:
      - role:production

$ sudo /etc/init.d/datadog-agent restart
$ sudo /etc/init.d/datadog-agent info

23 March 2016

Terraforming Amazon AWS for giving a Group the Restart Policy of EC2 instances

We are managing our Amazon AWS infrastructure with Terraform.

If you have no idea what Terraform is, below is a short description:

"Terraform is a tool for building, changing, and versioning Cloud infrastructure.
Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. 
The infrastructure Terraform can manage includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features.
Infrastructure as Code: Infrastructure is described using a high-level configuration syntax. This allows a blueprint of your datacenter to be versioned and treated as you would any other code. Additionally, infrastructure can be shared and re-used."

We usually assign a read-only policy to our DevOps team for safety reasons. Our needs recently changed, and we needed to allow the DevOps team to start, stop, and restart EC2 instances.

Following is what I came up with in Terraform:

Terraform file for defining new IAM policy and attaching it to our target DevOps group:

resource "aws_iam_policy" "devops-aws-EC2RestartAccess" {
    name        = "devops-aws-EC2RestartAccess"
    description = "Allowing devops to Restart EC2 instances"
    path        = "/"
    policy      = "${file("${path.module}/aws-EC2RestartAccess.policy")}"
}

resource "aws_iam_policy_attachment" "devops-aws-EC2RestartAccess" {
    name       = "devops-aws-EC2RestartAccess"
    groups     = ["${aws_iam_group.devops.name}"]
    policy_arn = "${aws_iam_policy.devops-aws-EC2RestartAccess.arn}"
}
And the policy file:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [...],
      "Resource": "*"
    }
  ]
}

15 February 2016

MongoDB Auth Failed

So I logged in to a new MongoDB server via the currently installed mongo shell. Even though the provided credentials were correct, I was receiving "exception: login failed":

$ mongo mongoserver:27017/admin -ssl --sslAllowInvalidCertificates -u root -p password
MongoDB shell version: 2.6.10
connecting to: mongoserver:27017/admin
2016-05-26T15:53:45.952+0100 Error: 18 { ok: 0.0, errmsg: "auth failed", code: 18 } at src/mongo/shell/db.js:1287
exception: login failed
After much struggle, I came across a forum thread and got the clue that the Mongo server and client versions might be mismatched.

I had 2.6.10 installed:

$ mongo --version
MongoDB shell version: 2.6.10
And it was the latest version in the Ubuntu repositories:

$ sudo apt show mongodb-clients 
Package: mongodb-clients
Version: 1:2.6.10-0ubuntu1
Priority: optional
Section: universe/database
Source: mongodb
Origin: Ubuntu
Maintainer: Ubuntu Developers 
Therefore, I had to install the package from the official MongoDB repos.

Adding the mongo repo key:

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6
Executing: /tmp/tmp.3VAqGKcmyO/gpg.1.sh --keyserver
gpg: requesting key A15703C6 from hkp server keyserver.ubuntu.com
gpg: key A15703C6: public key "MongoDB 3.4 Release Signing Key " imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
Adding the mongo official repo to apt sources:

$ echo "deb [ arch=amd64,arm64 ] http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.4.list
deb [ arch=amd64,arm64 ] http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 multiverse
Update the apt packages and install the latest mongo shell:

$ sudo apt-get update
$ sudo apt-get install -y mongodb-org-shell
We have the latest version now:

$ mongo --version
MongoDB shell version v3.4.1
git version: 5e103c4f5583e2566a45d740225dc250baacfbd7
OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
allocator: tcmalloc
modules: none
build environment:
    distmod: ubuntu1604
    distarch: x86_64
    target_arch: x86_64
And eventually we are able to connect:

$ mongo mongoserver:27017/admin -u root -p password
MongoDB shell version v3.4.1
connecting to: mongodb://mongoserver:27017/admin
MongoDB server version: 3.2.11
> show collections

MongoDB Dump and Restore

First we need to have the Mongo tools installed. If you have already added the official Mongo repo mentioned above, then issue the following command:

$ sudo apt install mongodb-org-tools
Now we will back up all of our databases and collections via mongodump to backup folder:

$ mongodump --host mongoserver --port 27017 --username root --password passwd --out ./backup
And restore the mongo dump via mongorestore from backup folder:

$ mongorestore -vvvvv --drop --host localhost --port 27017 ./backup
We set the -vvvvv flag for maximum verbosity and the --drop flag to drop collections if they already exist in the database.
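A common refinement is to stamp the backup directory with the date so repeated dumps don't overwrite each other; a hedged wrapper sketch (the host and credentials are the same placeholders as above):

```shell
# Build a dated backup directory name, e.g. ./backup-2016-06-19.
make_backup_dir_name() {
  printf './backup-%s' "$(date +%F)"
}
backup_dir=$(make_backup_dir_name)
echo "$backup_dir"
# mongodump --host mongoserver --port 27017 --username root --password passwd --out "$backup_dir"
```

Keeping one directory per day also makes it trivial to point mongorestore at a specific day's dump.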

14 January 2016

Puppet Vgextend LVM module

I've recently extended our lvm2 Puppet module with the vgextend feature.

Below is what I've come up with: a Puppet define and a Bash script that checks whether the VG needs to be extended.

You have to define VG name and PVs in the node YAML file.


# == Define: lvm2::vgextend
# Extend an LVM Volume Group (VG)
# === Parameters
# [*name*]
#   Volume Group Name
# [*physicalvolumes*]
#   Array of physical volumes (PV) to extend the VG

define lvm2::vgextend($physicalvolumes) {

  include '::lvm2'

  $pv = join($physicalvolumes, ' ')
  $onlyif_check = $::lvm2::vg_onlyif_check
  $cmd = "${onlyif_check} apply ${name} ${pv}"
  $pipe = $::lvm2::dryrun_pipe_command
  $command = $::lvm2::dryrun ? {
    false   => $cmd,
    default => "echo '${cmd}' ${pipe}",
  }

  exec { "extend_volume_group_${name}":
    command => $command,
    path    => '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
    tag     => 'lvm2',
    onlyif  => "${onlyif_check} check ${name} ${pv}",
  }
}



#!/bin/bash
# This is used by the exec onlyif Puppet parameter to check if the vgextend
# should be done (i.e., vgextend will be attempted only when the exit code is 0)
# Arguments (matching $cmd in the Puppet define): <check|apply> <vg_name> <pv> [<pv> ...]
set -u
set -e

FLAG=$1
VG_NAME=$2
shift 2
ALL_PV=("$@")

CUR_PV=($(vgdisplay -v 2> /dev/null | awk '/PV Name/ {print $3}'))
NEW_PV=$(echo "${ALL_PV[@]}" "${CUR_PV[@]}" | tr ' ' '\n' | sort | uniq -u)
CMD="vgextend $VG_NAME $NEW_PV"

case $FLAG in
check )
    if [[ "${#ALL_PV[@]}" -gt "${#CUR_PV[@]}" ]]; then
        echo "New Physical Volume(s) $NEW_PV found!"
        exit 0
    elif [[ "${#ALL_PV[@]}" -eq "${#CUR_PV[@]}" ]]; then
        if [[ -z "$NEW_PV" ]]; then
            echo "No change in Physical Volume(s) of $VG_NAME Volume Group"
            exit 1
        else
            echo "We don't remove $NEW_PV Physical Volume from Volume Group $VG_NAME !"
            exit 1
        fi
    else
        echo "We don't remove $NEW_PV Physical Volume from Volume Group $VG_NAME !"
        exit 1
    fi
    ;;
apply )
    $CMD
    ;;
esac
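The NEW_PV computation leans on sort | uniq -u: concatenating both PV lists and keeping only the lines that appear exactly once yields their symmetric difference. A standalone demonstration with sample device names:

```shell
# Mirror of the NEW_PV computation: lines appearing exactly once across
# both lists form the symmetric difference of the two PV sets.
ALL_PV=(/dev/sdb1 /dev/sdc1 /dev/sdd1)
CUR_PV=(/dev/sdb1 /dev/sdc1)
NEW_PV=$(echo "${ALL_PV[@]}" "${CUR_PV[@]}" | tr ' ' '\n' | sort | uniq -u)
echo "$NEW_PV"
# prints /dev/sdd1
```

Because this is a symmetric difference, a PV that is in the VG but missing from the node data also shows up in NEW_PV — which is exactly what the script's "We don't remove" branches guard against.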

12 December 2015

Gluster filesystem mount issue in LXC container

I was setting up a Gluster clustered file system across two LXC containers and ran into an error.

Implementation process

1. Install glusterfs-server,

in both containers:

$ sudo apt-get install glusterfs-server

2. Peer the gluster nodes,

in container01:

$ sudo gluster peer probe container02
peer probe: success

in container02:

$ sudo gluster peer probe container01
peer probe: success

3. Create the glusterfs manager folder,

in both containers:

$ sudo mkdir /gluster_data

4. Create vol1 volume,

in container01:

$ sudo gluster volume create vol1 replica 2 transport tcp container01:/gluster_data container02:/gluster_data force
volume create: vol1: success: please start the volume to access data

5. Start the volume,

in container01:

$ sudo gluster volume start vol1
volume start: vol1: success

6. Mount the newly created shared storage vol1:

$ sudo mkdir /pool1
$ sudo mount -t glusterfs container01:/vol1 /pool1

And here, when I tried to mount vol1, I got the following error:

[2015-12-12 21:19:37.277176] I [glusterfsd.c:1910:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.4.2 (/usr/sbin/glusterfs --volfile-id=/vol1 --volfile-server=container01 /pool1)
[2015-12-12 21:19:37.277756] E [mount.c:267:gf_fuse_mount] 0-glusterfs-fuse: cannot open /dev/fuse (No such file or directory)
[2015-12-12 21:19:37.277768] E [xlator.c:390:xlator_init] 0-fuse: Initialization of volume 'fuse' failed, review your volfile again

As per the error, it seems Gluster is looking for the fuse device /dev/fuse, which apparently doesn't exist:

$ sudo ls /dev/fuse
ls: cannot access /dev/fuse: No such file or directory

To create the fuse device manually,

in both containers:

$ sudo mknod /dev/fuse c 10 229

Now try to mount again:

$ sudo mount -t glusterfs container01:/vol1 /pool1
$ df -h /pool1
Filesystem            Size  Used Avail Use% Mounted on
container01:/vol1     30G   26G  2.5G  92% /pool1

And if you want to make the mount persist across reboots, add the following line to /etc/fstab in each container respectively:

$ cat /etc/fstab
container01:/vol1 /pool1 glusterfs defaults,_netdev 0 0 

10 December 2015

Create Snappy Ubuntu as a Docker Image

Snappy Ubuntu Core is the latest member of the Ubuntu family, specifically designed to run on Linux containers. To put it simply, it's a stripped-down Ubuntu with some advanced features, such as transactional upgrades/rollbacks for more stability, security with AppArmor, and the new snappy package manager instead of apt-get.

I wanted to run Snappy Ubuntu in a Docker container; however, I couldn't find a base Snappy image on Docker Hub to pull. So I decided to build it and push it to Docker Hub myself.

First we need to download the Snappy image. It can be downloaded from https://developer.ubuntu.com/en/snappy/start/#try-x86

# wget http://releases.ubuntu.com/15.04/ubuntu-15.04-snappy-amd64-generic.img.xz

It comes as an XZ-compressed IMG filesystem image. We will decompress it with unxz:

# unxz ubuntu-15.04-snappy-amd64-generic.img.xz

For ISO images we simply can loop mount and access them:

# mkdir /mnt/iso
# mount -o loop image.iso /mnt/iso

And proceed to read the mounted directory as tar and create the docker image from STDIN:

# tar -C /mnt/iso -c . | docker import - yourrepo/yourimage:anytag

However, for IMG dump images it's a bit more of a hassle:

1. Load the nbd kernel module with the max_part option if you are on a Debian-based machine:

# modprobe nbd max_part=16

2. Mount the image to /dev/nbd0 with qemu:

# qemu-nbd -c /dev/nbd0 ubuntu-15.04-snappy-amd64-generic.img

3. Ask kernel to re-read the partition table:

# partprobe /dev/nbd0

We can see the filesystems within this image via fdisk:

# fdisk -l /dev/nbd0

Disk /dev/nbd0: 3,6 GiB, 3899999744 bytes, 7617187 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 374F5ED9-98FE-47B5-B430-A8983D7CB41B

Device Start End Sectors Size Type

/dev/nbd0p1 8192 16383 8192 4M BIOS boot
/dev/nbd0p2 16384 278527 262144 128M EFI System
/dev/nbd0p3 278528 2375679 2097152 1G Linux filesystem
/dev/nbd0p4 2375680 4472831 2097152 1G Linux filesystem
/dev/nbd0p5 4472832 7614463 3141632 1,5G Linux filesystem

Our desired contents are located in nbd0p3, so we will mount it:

# mount /dev/nbd0p3 /mnt/snappy/
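As an aside, if qemu-nbd isn't available, a partition inside an IMG file can also be loop-mounted directly by byte offset: multiply the partition's start sector from the fdisk output by the 512-byte sector size. A sketch using the numbers above:

```shell
# /dev/nbd0p3 starts at sector 278528; sectors are 512 bytes each.
start_sector=278528
sector_size=512
offset=$((start_sector * sector_size))
echo "$offset"
# prints 142606336
# mount -o loop,offset=$offset ubuntu-15.04-snappy-amd64-generic.img /mnt/snappy
```

The mount line is commented out because it needs root and the actual image file; the offset arithmetic is the portable part.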

There we are! We have all that is required to create our docker image:

# tar -C /mnt/snappy/ -c . | docker import - arvinep/ubuntu:snappy

To check the newly created image, let's list the local docker images:

# docker images
arvinep/ubuntu-snappy snappy 2e796faa9adc  9 minutes ago  604.8 MB
ubuntu                latest d55e68e6cc9c  26 hours ago   187.9 MB

Cool! Let's run it and access to our docker container:

# docker run -it arvinep/ubuntu:snappy /bin/bash

It's up and running alongside the other Docker containers:

# docker ps
CONTAINER ID   IMAGE                 COMMAND      CREATED             STATUS              PORTS  NAMES
25d60fa0238d   arvinep/ubuntu:snappy "/bin/bash"  About a minute ago  Up About a minute          sleepy_banach       
1cff710b5604   ubuntu:latest         "/bin/bash"  2 hours ago         Up 2 hours                 angry_davinci       
afe31fbcbf7d   ubuntu:latest         "/bin/bash"  2 hours ago         Up 2 hours                 serene_lovelace     
74a46b4b203f   ubuntu:latest         "/bin/bash"  2 hours ago         Up 2 hours                 tender_bartik    

I have pushed the snappy ubuntu to docker hub so you can pull and enjoy it:

# docker run -it arvinep/ubuntu-snappy /bin/bash

5 December 2015

OpenSSH client access issues after patching to version 7

After OpenSSH was patched from the vulnerable version 5 to the latest secure version 7.1p, we encountered connection issues with some clients.

# tail -f /var/log/messages 
fatal: Unable to negotiate with no matching cipher found. 
Their offer: aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc,blowfish-cbc,arcfour [preauth]

Root Cause:
Based on the version 7.1 release note, many ciphers have been disabled due to security issues:

OpenSSH 7.1 release note: 
 * Several ciphers will be disabled by default: blowfish-cbc,
   cast128-cbc, all arcfour variants and the rijndael-cbc aliases
   for AES.

We need to add the legacy ciphers to sshd_config in order to support these SSH clients:

# vim /etc/ssh/sshd_config
Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com,blowfish-cbc,aes128-cbc,3des-cbc,cast128-cbc,arcfour,aes192-cbc,aes256-cbc
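Hand-assembled Ciphers lists like this easily collect duplicate entries; sshd tolerates them, but they make the line hard to audit. A quick duplicate check (the sample list below is illustrative):

```shell
# Print any cipher names that appear more than once in a Ciphers value.
dup_ciphers() {
  tr ',' '\n' | sort | uniq -d
}
printf '%s' "aes128-ctr,blowfish-cbc,aes128-cbc,blowfish-cbc" | dup_ciphers
# prints blowfish-cbc
```

Running the real Ciphers value through this before restarting sshd keeps the config list minimal.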

After adding the ciphers and restarting the daemon, the same clients encountered a different error:

# tail -f /var/log/messages 
fatal: Unable to negotiate with no matching key exchange method found. Their offer: 
diffie-hellman-group1-sha1,diffie-hellman-group-exchange-sha1 [preauth]

Root Cause:
Based on the version 7.0 release note, some of the key exchange methods have been disabled

OpenSSH 7.0 release note: 
 * Support for the 1024-bit diffie-hellman-group1-sha1 key exchange
   is disabled by default at run-time. It may be re-enabled using
   the instructions at http://www.openssh.com/legacy.html

 * ssh(1), sshd(8): extend Ciphers, MACs, KexAlgorithms,
   HostKeyAlgorithms, PubkeyAcceptedKeyTypes and HostbasedKeyTypes
   options to allow appending to the default set of algorithms
   instead of replacing it. Options may now be prefixed with a '+'
   to append to the default, e.g. "HostKeyAlgorithms=+ssh-dss".

To add the legacy MAC and key exchange algorithms back:

# vim /etc/ssh/sshd_config
MACs hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-sha1-96,hmac-md5-96

KexAlgorithms +diffie-hellman-group1-sha1,diffie-hellman-group-exchange-sha1