Running Docker Wildfly/JBoss Application Server in Debug mode via Eclipse

I have recently built the whole development environment of a Java Enterprise Edition (J2EE) project on multiple Docker containers. It has saved the hassle of setting up a dev environment for new developers: all it takes to set up the environment is to clone our GitHub repositories and run the Docker images. I will explain the setup in a future post.

Build Wildfly Docker Image with Debugging Enabled

So basically, in order to run Wildfly in debug mode, we need to add the --debug flag to the Wildfly startup script. We also need to expose port 8787 in our image, as it is the default port for Wildfly's JVM debug mode.

1. Create or change your Wildfly Dockerfile as follows:

$ cat Dockerfile
FROM jboss/wildfly:10.0.0.Final
EXPOSE 8787
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0", "--debug"]

2. Build and tag the image:

$ docker build -t wildfly-debug .

3.
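To attach the Eclipse debugger, the container must publish port 8787 to the host. A minimal docker-compose fragment for running the image built above (the service name is illustrative, and the port list assumes the usual Wildfly HTTP port alongside the debug port):

```yaml
wildfly:
  image: wildfly-debug
  ports:
    - "8080:8080"   # Wildfly HTTP
    - "8787:8787"   # JVM remote-debug port
```

With the container up, an Eclipse "Remote Java Application" debug configuration pointing at localhost:8787 should be able to attach.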

Jenkins Docker Container Problem Connecting to JFrog Artifactory Docker Container

So I was setting up Jenkins in Docker to build Gradle projects and push the WAR artifacts to a JFrog Artifactory Docker container. I started the Jenkins and Artifactory containers as follows:

Jenkins via docker run:

$ docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v /home/arvin/jenkins:/var/jenkins_home jenkins

Jenkins via docker-compose:

jenkins:
  image: jenkins:latest
  ports:
    - "8080:8008"
    - "50000:50000"
  volumes:
    - /home/arvin/jenkins:/var/jenkins_home

Artifactory via docker run:

$ docker run -d --name myartifact -p 8081:8081

Artifactory via docker-compose:

artifactory:
  image:
  ports:
    - "8081:8001"
  volumes:
    - /home/arvin/artifact/data:/var/opt/jfrog/artifactory/data
    - /home/arvin/artifact/logs:/var/opt/jfrog/artifactory/logs
    - /home/arvin/artifact/etc:/v
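A common cause of this kind of connection problem is Jenkins addressing Artifactory as localhost, which inside the Jenkins container points at Jenkins itself. One possible fix, sketched below (not necessarily the one this post ended with), is to run both services from a single compose file so they can reach each other by service name:

```yaml
# Jenkins would then reach Artifactory at http://artifactory:8081
jenkins:
  image: jenkins:latest
  ports:
    - "8080:8080"
  links:
    - artifactory
artifactory:
  image: <your-artifactory-image>   # placeholder; the excerpt omits the image name
  ports:
    - "8081:8081"
```

In compose v1 syntax (as used above), the links entry gives the Jenkins container a hostname entry for the Artifactory container.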

Terraforming Amazon AWS Lambda function and related API Gateway

I've recently been working on terraforming an AWS Lambda function and its related API Gateway. Using Terraform with AWS API Gateway is fairly new at this point. If you have no idea what an AWS Lambda function is, here is a quick intro:

"AWS Lambda is a serverless compute service that runs your code in response to events and automatically manages the underlying compute resources for you. You can use AWS Lambda to extend other AWS services with custom logic, or create your own back-end services that operate at AWS scale, performance, and security."

The Terraform code that I've come up with looks like this:

resource "aws_iam_role" "test_role" {
  name  = "test_role"
  count = "${var.enable_redirect-lambda}"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect"
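For context, a minimal Lambda resource that assumes the role above might look like the following sketch (the function name, artifact filename, and handler are placeholders, not from the original post, and the count conditional is ignored for brevity):

```hcl
resource "aws_lambda_function" "redirect_lambda" {
  function_name = "redirect-lambda"   # placeholder name
  filename      = "lambda.zip"        # placeholder deployment package
  handler       = "index.handler"     # placeholder handler
  runtime       = "nodejs4.3"
  role          = "${aws_iam_role.test_role.arn}"
}
```

The API Gateway side then wires a resource, method, and integration to this function's ARN; I'll show the full wiring in the post body.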

Datadog Integration with Postgres/MongoDB/Apache/Elasticsearch

Datadog has integrations with almost every popular service. I have set up our Datadog to monitor the specific services we need. Here is how I did it.

Step 1: Install the Datadog Agent

Install the Datadog agent for your server's specific OS. You can find the OS-specific instructions in the Datadog UI under Integrations -> Agent.

Step 2: Configure the Agent

You need to configure a YAML file for each specific service you would like to be monitored.

Postgres:

1. Create a read-only Postgres role with access to pg_stat_database

In our case, we have a 3-server clustered Postgres with PG-Pool. So you need to create the Postgres role and the PG-Pool user on each of the 3 servers.

A. Create the PG role

Connect to your PG instance on each server and run the following queries:

$ psql -h server -U postgres -W
postgres# create user datadog with password 'datadog';
postgres# grant SELECT ON pg_stat_database to datadog;

B. Create an encrypted password with MD5 for PG-Pool

Lo
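For step B, PG-Pool ships a pg_md5 utility that prints the MD5 hex digest of the password you pass it. If pg_md5 is not at hand, the same digest can be produced with standard tools; a sketch, using the example 'datadog' password from above:

```shell
# Equivalent of `pg_md5 datadog`: the plain MD5 hex digest of the password.
# printf '%s' avoids hashing a trailing newline.
printf '%s' 'datadog' | md5sum | awk '{print $1}'
```

The resulting 32-character hex string is what goes into the PG-Pool password entry.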

Terraforming Amazon AWS for giving a Group the Restart Policy of EC2 instances

We are managing our Amazon AWS infrastructure with Terraform. If you have no idea what Terraform is, below is a short description:

"Terraform is a tool for building, changing, and versioning cloud infrastructure. Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. The infrastructure Terraform can manage includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries and SaaS features.

Infrastructure as Code: Infrastructure is described using a high-level configuration syntax. This allows a blueprint of your datacenter to be versioned and treated as you would any other code. Additionally, infrastructure can be shared and re-used."

We usually assign a read-only policy to our DevO
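A sketch of what such a restart policy could look like in Terraform (the group resource, group name, and policy name here are illustrative, not from our actual code):

```hcl
resource "aws_iam_group_policy" "devops_restart" {
  name  = "devops-restart-ec2"                 # placeholder policy name
  group = "${aws_iam_group.devops.id}"         # placeholder group resource

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:RebootInstances",
        "ec2:StartInstances",
        "ec2:StopInstances"
      ],
      "Resource": "*"
    }
  ]
}
POLICY
}
```

The ec2:RebootInstances, ec2:StartInstances, and ec2:StopInstances actions cover restarting instances without granting broader write access.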

MongoDB Auth Failed

So I logged in to a new MongoDB server via the currently installed mongo shell. Even though the provided credentials were correct, I was receiving "exception: login failed":

$ mongo mongoserver:27017/admin --ssl --sslAllowInvalidCertificates -u root -p password
MongoDB shell version: 2.6.10
connecting to: mongoserver:27017/admin
2016-05-26T15:53:45.952+0100 Error: 18 { ok: 0.0, errmsg: "auth failed", code: 18 } at src/mongo/shell/db.js:1287
exception: login failed

After much struggle, I came across a forum thread and got the clue that the Mongo server and client versions might be mismatched. I had 2.6.10 installed:

$ mongo --version
version 2.6.10

And it was the latest version in the Ubuntu repositories:

$ sudo apt show mongodb-clients
Package: mongodb-clients
Version: 1:2.6.10-0ubuntu1
Priority: optional
Section: universe/database
Source: mongodb
Origin: Ubuntu
Maintainer: Ubuntu Developers

Therefore, I had to download the package from Mongo's official repos. Adding the mongo repo key:
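The excerpt cuts off before the actual commands; the generic shape of adding MongoDB's official apt repo looks like the template below. The key ID, Ubuntu release, and MongoDB version are placeholders, not values from the post; take the exact ones from MongoDB's installation docs for your release:

```shell
# <KEY_ID>, <ubuntu-release>, and <version> are placeholders
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv <KEY_ID>
echo "deb http://repo.mongodb.org/apt/ubuntu <ubuntu-release>/mongodb-org/<version> multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org.list
sudo apt-get update
sudo apt-get install -y mongodb-org-shell
```

After installing the newer shell from the official repo, the version mismatch with the server goes away.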

Puppet Vgextend LVM module

I've recently extended our lvm2 Puppet module with the vgextend feature. Below is what I've come up with. There are a Puppet file and a bash script for checking if the VG needs to be extended. You have to define the VG name and PVs in the node YAML file.

vgextend.pp

# == Define: lvm2::vgextend
# Extend an LVM Volume Group (VG)
#
# === Parameters
# [*name*]
#   Volume Group Name
#
# [*physicalvolumes*]
#   Array of physical volumes (PV) to extend the VG
define lvm2::vgextend($physicalvolumes) {
  validate_array($physicalvolumes)
  include '::lvm2'

  $pv = join($physicalvolumes, ' ')
  $onlyif_check = $::lvm2::vg_onlyif_check
  $cmd = "${onlyif_check} apply ${name} ${pv}"
  $pipe = $::lvm2::dryrun_pipe_command
  $command = $::lvm2::dryrun ? {
    false   => $cmd,
    default => "echo '${cmd}' ${pipe}",
  }

  exec { "extend_volume_group_${name}":
    command => $command,
    path    =>
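The bash side is truncated in this excerpt; below is a sketch of what the check/apply helper invoked as "${onlyif_check} apply ${name} ${pv}" could look like. The interface is guessed from that call, not taken from the original script:

```shell
#!/bin/bash
# Usage: vg_check.sh apply <vg_name> <pv>...
# For each PV not already in <vg_name>, extend the VG with it.
# Exits 0 if at least one PV was missing (work was needed), 1 otherwise.
action=$1; vg=$2; shift 2
missing=0
for pv in "$@"; do
  current_vg=$(pvs --noheadings -o vg_name "$pv" 2>/dev/null | tr -d ' ')
  if [ "$current_vg" != "$vg" ]; then
    [ "$action" = "apply" ] && vgextend "$vg" "$pv"
    missing=1
  fi
done
[ "$missing" -eq 1 ]
```

The exit code makes the same script usable either as the exec command itself or as an onlyif guard, which matches how the Puppet define above builds $cmd.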