Improving upon the Containerization Process at Terrax Micro-Brewery Inc

High time to continue our blog series about containerization. We’re gonna improve upon our previous setup and throw even more of our brews into containers. In fact, the whole setup in this blogpost will be containerized.

We’ll start from the bottom up. First we’ll containerize our MySQL database (using a docker volume to persist the data), next we’ll improve a little upon our Spring Boot container and as an added bonus we’ll load balance the traffic to our application containers with the help of NGINX, which – needless to say – will be run in a container as well.

By the end of this blog we’ll roughly have the setup depicted in the figure below.

Along the way we’ll dive deep into the Docker CLI and build some images using Dockerfiles. The images we’ll build and use in this blogpost can be pulled from Docker Hub, while the code of the three upper layers in the picture can be viewed on GitHub. See the references section for the links.

Network

First things first! In the old days of Docker one used links to connect containers, but they are a legacy feature now and should be avoided. Instead, you should use docker networks. So let’s create a network now:

docker network create tb-network-shared

That’s it! There’s not much more to it. If at any time you want to get an overview of the current network setup, you can use the docker inspect command:

docker network inspect tb-network-shared

Not much to see right now, except a name, a subnet and a gateway.

Note that the Docker CLI can autocomplete a lot of commands when you press the <Tab> key.

Database layer

This time we’ll put the MySQL database in a proper container. Since containers are by definition stateless, immutable and replaceable on the fly, we need a way to persist the data so it survives the container being stopped or removed: enter Docker volumes!

Database volume

Creating a docker volume is as simple as creating a network. All you need is a name:

docker volume create tb-mysql-shared

A volume can also be inspected with the docker volume inspect command. It’ll give you the Mountpoint, i.e. the place on the host where the volume’s data is stored, in my case /var/lib/docker/volumes/tb-mysql-shared/_data.
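For our volume that boils down to:

docker volume inspect tb-mysql-shared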

Containerizing MySQL

In our previous blogpost we ran MySQL on the Docker Host. Here we’re gonna pull the official MySQL image from Docker Hub. For a good reference on how to set up a MySQL database in a docker container, you can check out this blogpost and this one.

For a first test run, spin up a MySQL container and initialize it with a user and a database:

docker run --name=test-mysql --detach \
--env="MYSQL_ROOT_PASSWORD=root" \
--env="MYSQL_USER=tb_admin" \
--env="MYSQL_PASSWORD=tb_admin" \
--env="MYSQL_DATABASE=db_terrax" \
mysql

If all goes well, you’ll have a running MySQL container now. Let’s check whether the database and user have been created. First connect to the container and open a bash shell:

docker exec --tty --interactive test-mysql bash

You should get a root prompt now from where you can start a mysql client session:

mysql -utb_admin -ptb_admin

If the connection succeeds, it means the user tb_admin has been added successfully. Issue the following command to check if the db_terrax database has been created as well:

show databases;

If all went well, you should get the following result:
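Roughly, that result will look like this (the exact set of system schemas you see depends on your MySQL version and the user’s grants):

+--------------------+
| Database           |
+--------------------+
| db_terrax          |
| information_schema |
+--------------------+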

Let’s stop this container and move on:

docker stop test-mysql

Now for the Spring Boot container (next section) to be able to play well with the db_terrax database, the database should contain the necessary tables, i.e. beer and brewery.

So how do we get those tables into the database container? As it happens, MySQL provides an option for initializing a database: you can put an initialization script at a fixed location (/docker-entrypoint-initdb.d/). On the first startup of the container (i.e. when the data directory is still empty), any script in there gets executed. Let’s build an image with a proper initialization script in place. I’ve put the source code (just a Dockerfile and the sql script) on GitHub.

The sql script just creates the beer and brewery tables – if not already present – joined by a foreign key:

create table if not exists brewery (
  id bigint not null auto_increment,
  country varchar(255),
  name varchar(255) not null,
  primary key (id)
) engine=InnoDB;

create table if not exists beer (
  id bigint not null auto_increment,
  beer_type varchar(255) not null,
  name varchar(255) not null,
  brewery_id bigint,
  primary key (id),
  FOREIGN KEY (brewery_id)
  REFERENCES brewery(id)
) engine=InnoDB;

The Dockerfile is as simple as can be. It just takes a base MySQL image (version 8.0.15) from Docker Hub, sets the environment variables to their proper values and copies the initialize script to its proper location:

FROM mysql:8.0.15
ENV MYSQL_ROOT_PASSWORD=root \
  MYSQL_USER=tb_admin \
  MYSQL_PASSWORD=tb_admin \
  MYSQL_DATABASE=db_terrax
COPY ./create.sql /docker-entrypoint-initdb.d/create.sql

Let’s build the image and push it to Docker Hub. The tag version tb-docker-2.0 is a fixed version number we’ll be using for all of the images we build in this blogpost.

docker build --tag rphgoossens/tb-mysql-docker:tb-docker-2.0 .
docker push rphgoossens/tb-mysql-docker:tb-docker-2.0

Now we can start up a container (tb-mysql-db) from the image we’ve built and inspect it to see if our initialization script did its work properly:

docker run --name tb-mysql-db --detach \
--network tb-network-shared \
--mount source=tb-mysql-shared,target=/var/lib/mysql \
rphgoossens/tb-mysql-docker:tb-docker-2.0

Note that we’re using the docker network and volume we created earlier here.

Now check the database again:

docker exec -it tb-mysql-db bash

Then start a mysql client session:

mysql -utb_admin -ptb_admin

Connect to the db_terrax database and check its tables.

connect db_terrax;
show tables;

Now if all went well, you’ll see that the tables have been added to the database by the initialization script:
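For the image built above, the listing should look something like this:

+---------------------+
| Tables_in_db_terrax |
+---------------------+
| beer                |
| brewery             |
+---------------------+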

That’s it for the database layer! We now have a running MySQL container that’s ready for business (based on a proper base image). Now let’s see if we can let our Spring Boot application interact with it.

Spring Boot application

We’ll use the same Spring Boot application that we developed in our previous blog post, with one slight adjustment: we’ll add the Spring Boot Actuator to the pom.xml:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

And we’ll expose the env endpoint of the actuator. With this we can check whether the NGINX load balancer that we’ll add in the next section is doing its job properly. Add the following to the application-docker.yaml file:

management:
  endpoints:
    web:
      exposure:
        include: env

The final code can be found on GitHub.

Last step is building and pushing the new image to Docker Hub:

mvn clean install
mvn dockerfile:build
mvn dockerfile:push

Now spin up two containers on different ports (use two different publish ports, e.g. 8091 and 8092) and POST some data via the Swagger UI at http://localhost:809x/swagger-ui.html to see if the connection with the MySQL container works as expected.

docker run --name tb-springboot-app-x --detach \
--network tb-network-shared \
--publish 809x:8090 \
--env "SERVER_PORT=8090" \
--env "DB_USERNAME=tb_admin" \
--env "DB_PASSWORD=tb_admin" \
--env "DB_URL=mysql://tb-mysql-db:3306/db_terrax" \
rphgoossens/tb-springboot-docker:tb-docker-2.0

To check whether the data survives a shutdown of the MySQL container, stop and remove the container and spin up a new one:

docker stop tb-mysql-db
docker rm tb-mysql-db
docker run --name tb-mysql-db --detach \
--network tb-network-shared \
--mount source=tb-mysql-shared,target=/var/lib/mysql \
rphgoossens/tb-mysql-docker:tb-docker-2.0

If you followed all the steps closely, the Spring Boot apps should work and a GET on the brewery resource should return the already persisted data.

NGINX load balancer

Now that we have multiple Spring Boot apps running on different ports, it’s time to put a load balancer in place. We’re gonna use NGINX for this. And just like we did in the MySQL section, we’re also gonna build a custom NGINX image and push it to Docker Hub.

The code for this project can be found on GitHub and, like the MySQL image, it’s fairly simple: just a Dockerfile and a configuration file.

The Dockerfile’s main task is to load the configuration into the container (besides that, it also exposes the 8080 port and fires up NGINX):

FROM nginx:1.15.12
LABEL maintainer="Roger Goossens"
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 8080
CMD ["nginx", "-g", "daemon off;"]

The NGINX configuration file can get pretty complex if you really wanna do fancy stuff with it. For a detailed description on the NGINX configuration, please go here. This is the one we’ll be using to build the image:

events { worker_connections 1024; }

http {
  upstream tb-springboot-app {
    server tb-springboot-app-1:8090;
    server tb-springboot-app-2:8090;
  }

  server {
    listen 8080;
    server_name localhost;

    location / {
      proxy_pass http://tb-springboot-app;
      proxy_set_header Host $http_host;
    }
  }
}

The server section makes NGINX listen on localhost:8080. The http://localhost:8080/ location proxies requests to the upstream locations of both our Spring Boot applications (taken care of by the proxy_pass directive and the corresponding upstream section). The proxy_set_header directive ensures that the base URL of the Swagger UI served via NGINX keeps the proper value of http://localhost:8080, so all calls made from the UI will be served by one of our two Spring Boot containers.

For now that should be enough information. Let’s build and push the image (it can be found on Docker Hub now):

docker build --tag rphgoossens/tb-nginx-docker:tb-docker-2.0 .
docker push rphgoossens/tb-nginx-docker:tb-docker-2.0

And finally spin up the last container based on this image:

docker run --detach \
--network tb-network-shared \
--publish 8080:8080 \
rphgoossens/tb-nginx-docker:tb-docker-2.0

Note that both Spring Boot containers need to be up so that the corresponding upstream section in the NGINX configuration is valid. If one or both containers are down, or running on different ports or with different names, the NGINX container won’t start and will throw an error instead (if someone knows a fix for this, please let me know!). After startup, however, we can shut down Spring Boot containers without affecting the NGINX container.
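For what it’s worth, one workaround that is sometimes suggested (just a sketch, not tested here) is to let NGINX resolve the hostname at request time via Docker’s embedded DNS server instead of at startup. Note that this drops the upstream block, so you’d lose the explicit round-robin over the two named containers:

events { worker_connections 1024; }

http {
  server {
    listen 8080;
    server_name localhost;
    # Docker's embedded DNS server
    resolver 127.0.0.11 valid=10s;

    location / {
      # using a variable defers the DNS lookup to request time,
      # so NGINX starts even if the app container isn't up yet
      set $springboot_app http://tb-springboot-app-1:8090;
      proxy_pass $springboot_app;
      proxy_set_header Host $http_host;
    }
  }
}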

After the NGINX container has been started, the Swagger UI will be available on http://localhost:8080/swagger-ui.html. You can play around with it and stop containers to see how it behaves. Shutting down one Spring Boot container won’t impact the system: the NGINX container will forward all subsequent requests to the one remaining container. If both Spring Boot containers are down, you’ll eventually get a 502 error.

One other way to inspect the load balancing is to check the environment via the Spring Boot Actuator. To query the HOSTNAME, enter: http://localhost:8080/actuator/env/HOSTNAME. If both containers are running, you’ll see that the HOSTNAME value will alternate between two values:
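Instead of the browser you can also hit the endpoint from the command line and watch the value change between calls:

curl http://localhost:8080/actuator/env/HOSTNAME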

These values correspond to the hostname that can be found via the docker container inspect command:

Again, if you shut down one container and refresh, you’ll notice that the HOSTNAME will stick to the value of the one container that is still running.

Also of interest (we mentioned it earlier) is the docker network inspect command to get an overview of the entire network:

docker network inspect tb-network-shared

Clean up

After you’ve played around enough and want to clean up all the containers, networks, volumes and images on your system, first stop all running containers and then issue the following commands (note that this will remove all Docker data on your system!):

docker system prune --volumes
docker image prune --all

Summary

We covered a lot of ground in this quite extensive blog post. We’ve shown how to create a multilayered application with all layers running in Docker containers. To quickly recreate the whole setup, here’s a recap of all the statements needed to get the container constellation up and running:

docker volume create tb-mysql-shared
docker network create tb-network-shared
docker run --name tb-mysql-db --detach --network tb-network-shared --mount source=tb-mysql-shared,target=/var/lib/mysql rphgoossens/tb-mysql-docker:tb-docker-2.0
docker run --name tb-springboot-app-1 --detach --network tb-network-shared -e "SERVER_PORT=8090" -e "DB_USERNAME=tb_admin" -e "DB_PASSWORD=tb_admin" -e "DB_URL=mysql://tb-mysql-db:3306/db_terrax" rphgoossens/tb-springboot-docker:tb-docker-2.0
docker run --name tb-springboot-app-2 --detach --network tb-network-shared -e "SERVER_PORT=8090" -e "DB_USERNAME=tb_admin" -e "DB_PASSWORD=tb_admin" -e "DB_URL=mysql://tb-mysql-db:3306/db_terrax" rphgoossens/tb-springboot-docker:tb-docker-2.0
docker run --detach --network tb-network-shared --publish 8080:8080 rphgoossens/tb-nginx-docker:tb-docker-2.0

Note that we no longer publish the Spring Boot app ports. Since all containers are attached to the same Docker network, they can reach each other’s ports without us publishing them explicitly, and we want the NGINX load balancer to handle all the traffic and be the only gateway to our application anyway.

Working with all these individual docker cli statements is cumbersome and luckily there is a better way to orchestrate such a container constellation, i.e. Docker Compose.

In our next (hopefully a lot shorter) blog post, we’ll deep-dive into Docker Compose and use it to rebuild the setup from this blog post in a much more manageable way.

Till then, stay tuned and grab another beer!!!

References

Source code and containers

Other


Delivering crafts in containers – HelloBeer merges with TerraX Micro-Brewery Inc

Alright, the owners of HelloBeerTM have rested on their laurels for quite some time now. All their profits have been spent on the finer things in life. Time to get back in business again. And time to get more commercial while doing so.

For this HelloBeer recently joined Terra 10 – a startup specialized in containers and cloud beers. Their respective breweries have been merged to form Terrax Micro-Brewery Inc, a collab that promises to take the beer brewing business by storm.

This blog will be the start of a series of blogs where we will examine Docker, Containerization, Kubernetes and finally see our brewery startup making its move to the Amazon Cloud.

In this first blog of the series we’ll start at the basics. We’ll just run a simple Spring Boot Rest service in a Docker container and connect to a MySQL database running on the same host as where Docker is running.

First things first, let’s set up all the dependencies before we dive into the code. Oh and the final code, as always, can be found on GitHub here.

Docker installation

There are lots of good guides out there. I used the official guide found here.

MySQL installation

For the MySQL installation (on Ubuntu, what else?) I recommend following this guide here.

After installation we can set up a schema that will hold the data for our Spring Boot service. The initial setup we’re using is explained in detail here.

First we create the database and user that we’ll use to store our beer data.

create database db_terrax; -- Create the new database
create user 'tb_admin'@'%' identified by 'tb_admin'; -- Creates the user
grant all on db_terrax.* to 'tb_admin'@'%'; -- Gives all the privileges to the new user on the newly created database

To make our lives easier, we just use the Hibernate database initialization option to initially create our database tables.

Spring Boot application

The goal of this blog is to demonstrate the Docker features, so I won’t explain the application in detail here. The application consists of a couple of REST services that expose simple CRUD operations for the Beer and Brewery entities:

For the code, like I said, check GitHub. For inspiration on how to create such an application, I’ve put quite a few blogs online. You can create it from scratch using the Spring Initializr, like I did in this blog here (and that’s actually the way I created the services for this blog). Or to get even more of a headstart you could use JHipster, like I did in this blog here.

First run (without Docker)

In this first run we’re gonna run the application locally against the mysql database. This will create the tables and give us a chance to put some data in place.

For this we’ll make use of Spring profiles. The production profile has the necessary properties needed to connect with the mysql database. These properties are set in the application-prod.yml file:

server:
  port: 8090
spring:
  jpa:
    properties:
      javax:
        persistence:
          schema-generation:
            create-source: metadata
            database:
              action: update
            scripts:
              action: drop-and-create
              create-target: ./create.sql
              drop-target: ./drop.sql
  datasource:
    url: jdbc:mysql://localhost:3306/db_terrax
    username: tb_admin
    password: tb_admin

Optionally adjust the settings to make them reflect your environment. The database.action=update property ensures that the database tables are created in the mysql database. I’ve also put in place some properties to generate ddl scripts.

I’ll add these scripts to the GitHub source code so we could basically skip this whole section and just run the create.sql script against our database schema. That would be enough to lay the groundwork to directly work with the Docker images we’ll build next (see the next paragraph).

To set the profile to production, set the following environment variable. As an alternative you could also set this in the application.yml file. Note that if you don’t set an active profile at all, the default dev profile will be used and your application will start up against an in-memory H2 database.

export SPRING_PROFILES_ACTIVE=prod
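For reference, the application.yml alternative mentioned above would look roughly like this:

spring:
  profiles:
    active: prod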

Now run the application:

mvn spring-boot:run

If all goes well the application will startup successfully (adding the necessary database objects in the mysql database while doing so) and we can test the application. The easiest way to do so is via the Swagger UI that should be available at http://localhost:8090/swagger-ui.html.

Just POST a few breweries and eventually issue a GET to see what’s been created:

Enter Docker

Alright, now that we’ve put some data in our mysql database, let’s containerize our Spring Boot services and see if we can connect them to our database running on the Docker host.

For the containerized application, we will use a different Spring profile. The properties will be set in the application-docker.yml file:

server:
  port: ${SERVER_PORT}
spring:
  jpa:
    properties:
      javax:
        persistence:
          schema-generation:
            database:
              action: none
  datasource:
    url: jdbc:${DB_URL}
    username: ${DB_USERNAME}
    password: ${DB_PASSWORD}

The biggest difference with the production properties is that we’ve externalized the server port and database properties (these will be passed when we run a container based on the Docker image we’ll be creating soon). And we also set the schema-generation.database.action to none here (we don’t want to introduce database changes when we spin up a new container instance).

Dockerfile

The Dockerfile serves as a basis for building a Docker image. For the contents of the file in case of Dockerizing a Spring Boot application, I’ll refer to the information provided here.

FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG DEPENDENCY=target/dependency
ENV SPRING_PROFILES_ACTIVE=docker
COPY ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY ${DEPENDENCY}/META-INF /app/META-INF
COPY ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","nl.terrax.tbspringbootdocker.TbSpringbootDockerApplication"]

Notice that I set the active Spring profile to docker in the Dockerfile.

Deploy the image to DockerHub

Alright we’re almost there. Let’s build the application and the docker image and subsequently push the image to DockerHub:

To build the image:

mvn clean install dockerfile:build

Now when you enter docker image ls (or docker images) at the command line, you’ll see that the image is available locally, together with the openjdk image it depends upon (notice that rphgoossens is my Docker Hub repository):

Now let’s push the image to Docker Hub, so it can be shared:

mvn dockerfile:push

The image is now available on Docker Hub here, so there’s no need to build it from scratch anymore. When you want to run a container based on this image, Docker will download it for you from Docker Hub.

Running the application

Network-wise there are two ways of running the application as a Docker container: bridge networking (the default) and host networking. The host variant exposes your host’s network in its entirety to the container and is therefore not really recommended. It is the easiest to set up though, so I’ll mention it here briefly.

host networking

The main advantage here is that the mysql database is directly available on localhost. The image we pushed to Docker Hub can be run directly in a container via docker like this:

docker run --network host -t -e "SERVER_PORT=8090" -e "DB_USERNAME=tb_admin" -e "DB_PASSWORD=tb_admin" -e "DB_URL=mysql://localhost:3306/db_terrax" rphgoossens/tb-springboot-docker

The -e option is used to set the environment variables, while -t (as opposed to -d) keeps the container attached to the terminal window and shows you all the logging upon startup.

After the container has been started up, you can verify that the Swagger page is available on http://localhost:8090/swagger-ui.html.

You can spin up another container like this on a different port.

When you hit docker container ls after that you’ll see two containers running:

Alright, this is not the preferred way of running the application in a container, since it’s way less secure than using bridge networking. So let’s stop the containers (docker stop <container>) and remove them (docker container rm <container>) before heading to bridge networking.

bridge networking

This is the default and far more secure way of connecting your container to the mysql database. In this setting the database is NOT available on localhost (since localhost refers to the container itself when using bridge networking).

So we will need to bind the mysql database to the IP address of the host (the container will then be able to connect via that IP address).

Check the IP address your host is running on (sudo ifconfig; look for the docker0 interface) and add the following line to your mysql configuration (available at /etc/mysql/mysql.conf.d/mysqld.cnf):

bind-address = <ip-address-host>

That’s it. After a restart of mysql (sudo service mysql restart), the database will be bound to the host’s IP address. You can check this by running sudo netstat -tln (you should see a <ip-address-host>:3306 line there).

Now let’s spin up a container in bridge mode:

docker run -p 8091:8090 -t -e "SERVER_PORT=8090" -e "DB_USERNAME=tb_admin" -e "DB_PASSWORD=tb_admin" -e "DB_URL=mysql://<ip-address-host>:3306/db_terrax" rphgoossens/tb-springboot-docker

The biggest difference with the host networking variant is that you have to bind the port that the application is running on in the container (8090 in the example) to a port available on your host (8091 in the example); you do this with the -p (or --publish) option.

After startup the Swagger page of the application will be available on http://localhost:8091/swagger-ui.html.

Deploying a Camel Spring DSL integration on Red Hat JBoss EAP

The guys from HelloBeer are still recuperating from a massive hangover after recently celebrating their 15th blog post. Massive amounts of quality beers were consumed and great new ideas for future blog posts were discussed during the festivities.

Time to take a small pause! Before continuing our HelloBeer series, let’s do a quick tutorial on running a Camel integration on a standalone Red Hat JBoss EAP server.

So, in this blog post we’re gonna build a simple File-to-File integration with some content-based-routing added for fun. The integration is based on the default project generated from the maven archetype camel-archetype-spring. For deployment we’re gonna use some integrated features in IntelliJ provided by the JBoss Integration plugin.

The final code can be downloaded from GitHub here.

EAP 7 installation

For this blog I’m using EAP 7.1.0, which can be downloaded here.

Installation instructions are available here.

For simplicity’s sake I just installed the server in a directory in my local home, i.e. ~/EAP-7.1.0.

The project

Like I already said, the project is a simple file-to-file camel integration with a little content-based routing using the Spring DSL. Some pointers for setting up the project can be found here and here. Let’s break it down.

pom.xml

The most interesting parts of the maven pom file are the camel and spring dependencies.

<dependencyManagement>
  <dependencies>
    <!-- Camel BOM -->
    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-parent</artifactId>
      <version>2.22.2</version>
      <scope>import</scope>
      <type>pom</type>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-core</artifactId>
    <version>${camel.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-spring</artifactId>
    <version>${camel.version}</version>
  </dependency>
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
    <version>5.1.2.RELEASE</version>
  </dependency>
</dependencies>

We’re importing the camel-parent and are only using camel-core (the File component is part of the core). The camel-spring dependency is needed to use the Spring DSL and the spring-web dependency is needed to run the application as a Spring web application.

web.xml

Apart from the spring-web dependency you also need to add a listener to the web.xml file (present in the src/main/webapp/WEB-INF folder) to enable Spring:

<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
         http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
         version="2.5">
  <listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
  </listener>
</web-app>

applicationContext.xml

Now the last piece of the puzzle is the actual integration. The route is coded in Spring DSL and has been put in the (default) applicationContext.xml file present in the src/main/webapp/WEB-INF folder.

<camelContext xmlns="http://camel.apache.org/schema/spring">
  <!-- here is a sample which processes the input files
       (leaving them in place - see the 'noop' flag)
       then performs content based routing on the message using XPath -->
  <route>
  <from uri="file:{{env:HOME}}/Work/data?noop=true"/>
    <choice>
      <when>
        <xpath>/person/city = 'London'</xpath>
        <log message="UK message"/>
        <to uri="file:{{env:HOME}}/Work/data/messages/uk"/>
      </when>
      <otherwise>
        <log message="Other message"/>
        <to uri="file:{{env:HOME}}/Work/data/messages/others"/>
      </otherwise>
    </choice>
  </route>
</camelContext>

This is the default route generated by the maven archetype camel-archetype-spring. The route hopefully speaks for itself. I’ve adjusted it so it checks the $HOME/Work/data directory for new xml files to process instead of a data directory present in the project.
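For reference, a minimal input file that would trigger the UK branch of the route could look like this (any well-formed XML with a /person/city element equal to London will do):

<?xml version="1.0" encoding="UTF-8"?>
<person>
  <city>London</city>
</person>

Dropping this file into $HOME/Work/data should land it in the messages/uk subdirectory; anything else ends up in messages/others.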

jetty

To enable quick testing without having to deploy to EAP every time, I’ve also put the jetty plugin in the maven pom. So with all the pieces now in place, we can verify if our basic integration actually works:

mvn jetty:run

Now if you copy the example xml files (available in the data directory) in your $HOME/Work/data directory you can see the magic happen:

Deploying

For deployment to the EAP server we’re gonna use the JBoss Integration Plugin in IntelliJ.

After we’ve installed the plugin, we can add a configuration to the project to fire up the server and deploy the war to EAP. For pointers see here and here.

First let’s add an artifact to the project pointing to a deployable (File > Project Structure > Artifacts).

Select Web Application Exploded > From Modules…, select the project and, after creation, change the Output directory to a subdirectory of the target directory (so a mvn clean will clean it up):

Now that we have a deployable artifact defined, the last step is to add a configuration to the project to deploy the artifact to EAP as part of the build.

Click on Add Configuration… and add a configuration based on the JBoss Server Local template.

Give the configuration a meaningful name, configure the Application Server to point to the EAP installation home and add the artifact created in the previous step to the Deployment tab:

Testing

Now let’s run our new configuration and see what happens:

Looking at the last three log entries, we can see that our integration is working nicely after deployment to EAP.

The management console being opened automatically, also shows our exploded archive in the Deployments section:

Summary

In this blog post we’ve built a simple camel integration using the Spring DSL, packaged it as a Spring Web application and deployed it to a local JBoss EAP server with a little help from IntelliJ.

References

Hello Beer Camel Quality Control

In our previous blog post we saw our Camels smuggling along their craft beer contraband to our thirsty customers. We can expect them to expand our craft business quite rapidly in the near future and open up new black and white markets (hopefully this will keep our shareholders happy and quiet for the time being!). For all this expansion to succeed however, we need to get our quality control in order pronto! The last thing we want is for dromedaries disguised as camels to deliver imitation crafts to our customers and thereby compromise our highly profitable trade routes. So, high time we put some unit testing in place and make our implementation a little more flexible and maintainable.

In this blog post we’ll unit test our Spring REST controller and our Camel route. We also get rid of those hardcoded endpoints and replace them with proper environment-specific properties. So buckle up, grab a beer and let’s get started!

Oh and as always, final code can be viewed online.

Unit testing the controller

Though technically this has nothing to do with Camel, it’s good practice to unit test all important classes, so let’s first tackle and unit test our Spring Boot REST controller.

We only need the basic Spring Boot Starter Test dependency for this guy:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>

Let’s test the most interesting part of the controller, i.e. the saveOrder method.

@RunWith(SpringRunner.class)
@WebMvcTest(OrderController.class)
public class OrderControllerTest {

    @Autowired
    private MockMvc mockMvc;

    @MockBean
    private OrderRepository orderRepositoryMock;

    @Test
    public void saveOrder() throws Exception {
        OrderItem orderItem1 = new OrderItemBuilder().setInventoryItemId(1L).setQuantity(100L).build();
        OrderItem orderItem2 = new OrderItemBuilder().setInventoryItemId(2L).setQuantity(50L).build();
        Order order = new OrderBuilder().setCustomerId(1L).addOrderItems(orderItem1, orderItem2).build();

        OrderItem addedItem1 = new OrderItemBuilder().setId(2L).setInventoryItemId(1L).setQuantity(100L).build();
        OrderItem addedItem2 = new OrderItemBuilder().setId(3L).setInventoryItemId(2L).setQuantity(50L).build();
        Order added = new OrderBuilder().setId(1L).setCustomerId(1L).addOrderItems(addedItem1, addedItem2).build();

        when(orderRepositoryMock.save(any(Order.class))).thenReturn(added);

        mockMvc.perform(post("/hello-camel/1.0/order")
            .contentType(TestUtil.APPLICATION_JSON_UTF8)
            .content(TestUtil.convertObjectToJsonBytes(order)))
            .andExpect(status().isOk())
            .andExpect(content().contentType(TestUtil.APPLICATION_JSON_UTF8))
            .andExpect(jsonPath("$.id", is(1)))
            .andExpect(jsonPath("$.customerId", is(1)))
            .andExpect(jsonPath("$.orderItems[0].id", is(2)))
            .andExpect(jsonPath("$.orderItems[0].inventoryItemId", is(1)))
            .andExpect(jsonPath("$.orderItems[0].quantity", is(100)))
            .andExpect(jsonPath("$.orderItems[1].id", is(3)))
            .andExpect(jsonPath("$.orderItems[1].inventoryItemId", is(2)))
            .andExpect(jsonPath("$.orderItems[1].quantity", is(50)));

        ArgumentCaptor<Order> orderCaptor = ArgumentCaptor.forClass(Order.class);
        verify(orderRepositoryMock, times(1)).save(orderCaptor.capture());
        verifyNoMoreInteractions(orderRepositoryMock);

        Order orderArgument = orderCaptor.getValue();
        assertNull(orderArgument.getId());
        assertThat(orderArgument.getCustomerId(), is(1L));
        assertEquals(orderArgument.getOrderItems().size(), 2);
    }
}

Hopefully most of this code speaks for itself. Here are some pointers:

  • The WebMvcTest(OrderController.class) annotation ensures that you can test the OrderController in isolation. With this guy you can autowire a MockMvc instance that basically has all you need to unit test a controller;
  • The controller has a dependency on the OrderRepository, which we will mock in this unit test using the @MockBean annotation;
  • We first use some helper builder classes to fluently build our test Order instances;
  • Next we configure our mock repository to return a full fledged Order object when the save method is called with an Order argument;
  • Now we can actually POST an Order object to our controller and test the JSON being returned;
  • Next check is whether the mock repository was called and ensure that it was called only once;
  • Finally we check the Order POJO that was sent to our mock repository.

Running the test will show us we built a high-quality controller here. There’s also a unit test available for the GET method. You can view it on GitHub. The GET method is a lot easier to unit test, so let’s skip it to keep this blog post from getting too verbose.

Testing the Camel route

Now for the most interesting part. We want to test the Camel route we built in our previous blog post. Let’s first revisit it again:

from("ftp://localhost/hello-beer?username=anonymous&move=.done&moveFailed=.error")
    .log("${body}")
    .unmarshal().jacksonxml(Order.class)
    .marshal(jacksonDataFormat)
    .log("${body}")
    .setHeader(Exchange.HTTP_METHOD, constant("POST"))
    .to("http://localhost:8080/hello-camel/1.0/order");

There’s a lot going on in this route. Ideally I would like to perform two tests:

  • One to check if the XML consumed from the ftp endpoint is being properly unmarshalled to an Order POJO;
  • One to check the quality of the subsequent marshalling of said POJO to JSON and also to check if it’s being sent to our REST controller.

So let’s split our route into two routes to reflect this:

from("ftp://localhost/hello-beer?username=anonymous&move=.done&moveFailed=.error")
    .routeId("ftp-to-order")
    .log("${body}")
    .unmarshal().jacksonxml(Order.class)
    .to("direct:new-order").id("new-order");

from("direct:new-order")
    .routeId("order-to-order-controller")
    .marshal(jacksonDataFormat)
    .log("${body}")
    .setHeader(Exchange.HTTP_METHOD, constant("POST"))
    .to("http://localhost:8080/hello-camel/1.0/order").id("new-order-controller");

Note that we added ids to our routes as well as our producer endpoints. You’ll see later on – when we’re gonna replace the producer endpoints with mock endpoints – why we need these. Also note that we’ve set up direct endpoints in the middle of our original route. This will allow us to split the route in two.

Testing camel routes requires one additional dependency:

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-test-spring</artifactId>
    <version>${camel.version}</version>
    <scope>test</scope>
</dependency>

Alright now let’s get straight down to business and unit test those routes:

@RunWith(CamelSpringBootRunner.class)
@SpringBootTest
public class FtpOrderToOrderControllerTest {

    private static boolean adviced = false;
    @Autowired
    private CamelContext camelContext;
    @EndpointInject(uri = "direct:input")
    private ProducerTemplate ftpEndpoint;
    @EndpointInject(uri = "direct:new-order")
    private ProducerTemplate orderEndpoint;
    @EndpointInject(uri = "mock:new-order")
    private MockEndpoint mockNewOrder;
    @EndpointInject(uri = "mock:new-order-controller")
    private MockEndpoint mockNewOrderController;

    @Before
    public void setUp() throws Exception {
        if (!adviced) {
            camelContext.getRouteDefinition("ftp-to-order")
                .adviceWith(camelContext, new AdviceWithRouteBuilder() {
                    @Override
                    public void configure() {
                        replaceFromWith(ftpEndpoint.getDefaultEndpoint());
                        weaveById("new-order").replace().to(mockNewOrder.getEndpointUri());
                    }
                });

            camelContext.getRouteDefinition("order-to-order-controller")
                .adviceWith(camelContext, new AdviceWithRouteBuilder() {
                    @Override
                    public void configure() {
                         weaveById("new-order-controller").replace().to(mockNewOrderController.getEndpointUri());
                    }
                });

            adviced = true;
        }
    }

    @Test
    public void ftpToOrder() throws Exception {
        String requestPayload = TestUtil.inputStreamToString(getClass().getResourceAsStream("/data/inbox/newOrder.xml"));
        ftpEndpoint.sendBody(requestPayload);

        Order order = mockNewOrder.getExchanges().get(0).getIn().getBody(Order.class);
        assertNull(order.getId());
        assertThat(order.getCustomerId(), is(1L));
        assertNull(order.getOrderItems().get(0).getId());
        assertThat(order.getOrderItems().get(0).getInventoryItemId(), is(1L));
        assertThat(order.getOrderItems().get(0).getQuantity(), is(100L));
        assertNull(order.getOrderItems().get(1).getId());
        assertThat(order.getCustomerId(), is(1L));
        assertThat(order.getOrderItems().get(1).getInventoryItemId(), is(2L));
        assertThat(order.getOrderItems().get(1).getQuantity(), is(50L));
    }

    @Test
    public void orderToController() {
        OrderItem orderItem1 = new OrderItemBuilder().setInventoryItemId(1L).setQuantity(100L).build();
        OrderItem orderItem2 = new OrderItemBuilder().setInventoryItemId(2L).setQuantity(50L).build();
        Order order = new OrderBuilder().setCustomerId(1L).addOrderItems(orderItem1, orderItem2).build();
        orderEndpoint.sendBody(order);

        String jsonOrder = mockNewOrderController.getExchanges().get(0).getIn().getBody(String.class);
        assertThat(jsonOrder, hasNoJsonPath("$.id"));
        assertThat(jsonOrder, hasJsonPath("$.customerId", is(1)));
        assertThat(jsonOrder, hasNoJsonPath("$.orderItems[0].id"));
        assertThat(jsonOrder, hasJsonPath("$.orderItems[0].inventoryItemId", is(1)));
        assertThat(jsonOrder, hasJsonPath("$.orderItems[0].quantity", is(100)));
        assertThat(jsonOrder, hasNoJsonPath("$.orderItems[1].id"));
        assertThat(jsonOrder, hasJsonPath("$.orderItems[1].inventoryItemId", is(2)));
        assertThat(jsonOrder, hasJsonPath("$.orderItems[1].quantity", is(50)));
        assertThat(jsonOrder, hasNoJsonPath("$.orderItems[1].id"));
    }
}

Again a few pointers to the code above:

  • We’re using the recommended CamelSpringBootRunner here;
  • We autowire an instance of the CamelContext. This context is needed in order to alter the route later on;
  • Next we inject the Consumer and Producer endpoints we’re gonna use in our unit tests;
  • The Setup is the most important part of the puzzle. It is here we replace our endpoints with mocks (and our ftp consumer endpoint with a direct endpoint). It is also here we will use the ids we placed in our routes. They let us point to the endpoints (and the routes they’re in) we wish to replace;
  • Ideally we would have annotated this setUp code with the @BeforeClass annotation to let it run only once. Unfortunately that guy can only be placed on a static method. And static methods don’t play well with our autowired camelContext instance variable. So we use a static boolean to run this code only once (you can’t run it twice because the second time it’ll try to replace stuff that isn’t there anymore);
  • In the ftpToOrder unit test we shove an Order xml into the first route (using the direct endpoint) and check our mockNewOrder endpoint to see if a proper Order POJO has arrived there;
  • In the orderToController unit test we shove an Order POJO in the second route (again using a direct endpoint) and check our mockNewOrderController endpoint to see if a proper Order JSON String has arrived there.

Please note that the json assertion code in the OrderToController Test has a dependency on the json-path-assert library:

<dependency>
    <groupId>com.jayway.jsonpath</groupId>
    <artifactId>json-path-assert</artifactId>
    <version>2.4.0</version>
    <scope>test</scope>
</dependency>

This library is not really necessary. As an alternative you could write expressions like:

assertThat(JsonPath.read(jsonOrder,"$.customerId"), is("1"));

I think the json-path-assert notation is a bit more readable, but that’s just a matter of taste, I guess.

You can run the tests now (mvn clean test) and you will see that all tests are passing.

Externalizing properties

Alright we’re almost there. Only one last set of changes left to make the route a bit more flexible. Let’s introduce Camel properties to replace those hardcoded URIs in the endpoints. Camel and Spring Boot play along quite nicely here and Camel properties work out-of-the-box without further configuration.

So let’s introduce a property file (application-dev.properties) for the development environment and put those two endpoint URIs in it:

endpoint.order.ftp = ftp://localhost/hello-beer?username=anonymous&move=.done&moveFailed=.error
endpoint.order.http = http://localhost:8080/hello-camel/1.0/order

Add one line to the application.properties file to set development as the default Spring profile:

spring.profiles.active=dev

And here’s the final route after putting those endpoint properties in place:

from("{{endpoint.order.ftp}}")
    .routeId("ftp-to-order")
    .log("${body}")
    .unmarshal().jacksonxml(Order.class)
    .to("direct:new-order").id("new-order");

from("direct:new-order")
    .routeId("order-to-order-controller")
    .marshal(jacksonDataFormat)
    .log("${body}")
    .setHeader(Exchange.HTTP_METHOD, constant("POST"))
    .to("{{endpoint.order.http}}").id("new-order-controller");

And that’s it. You can run the application again to see that everything works like before.

Summary

This blog post was all about quality. We showed you how to set up testing in a Spring Boot Camel application and we built a couple of unit tests, one to test our Spring Boot REST controller and one to test our Camel route. As a small bonus we also externalized the endpoint URIs in our Camel route with the help of Camel properties.

Now all that’s left is to grab a beer and think about our next blog post.

References

HelloBeer’s first Camel ride

HelloBeerTM recently got some complaints from the Alcoholics Anonymous community. As it turns out, it’s very difficult to order a fine collection of craft beers online without ones wife finding out about it. Browser histories were scanned and some particularly resourceful spouses even installed HTTP sniffers to confront their husbands with their drinking problem. So in order to keep on top of the beer selling game, HelloBeer needs an obscure backdoor where orders can be placed lest it risks losing an important part of its clientele.

One of HelloBeer’s founding fathers has an old server residing in the attic of his spacious condo. He suggested to use that guy to serve as an old school FTPS server where customers can upload their orders to without their wives finding out about it.

In this blogpost we’re gonna build the integration between an FTPS server and our OrderService REST API (implemented in Spring Boot). To build the integration we’ll be relying on Apache Camel. It’s a great way for embedding Enterprise Integration Patterns in a Java based application, it’s lightweight and it’s very easy to use. Camel also plays nicely with Spring Boot as this blogpost will show.

To keep our non-hipster customers on board (and to make this blogpost a little more interesting), the order files placed on the FTP server, will be in plain old XML and hence have to be transformed to JSON. Now that we have a plan, let’s get to work!

Oh and as always, the finished code has been published on GitHub here.

Installing FTP

I’m gonna build the whole contraption on my Ubuntu-based laptop and I’m gonna use vsftpd to act as an FTPS server. As a first prototype I’m gonna make the setup as simple as possible and allow anonymous users to connect and do everything they shouldn’t be able to do in any serious production environment.

These are the settings I had to tweak in the vsftpd.conf file after default installation:

# Enable any form of FTP write command.
write_enable=YES
# Allow anonymous FTP? (Disabled by default).
anonymous_enable=YES
# Allow the anonymous FTP user to upload files.
anon_upload_enable=YES
# Files PUT by anonymous users will be GETable
anon_umask=022
# Allow the anonymous FTP user to move files
anon_other_write_enable=YES

Also make sure the permissions on the directory where the orders will be PUT are non-restrictive enough:

Contents of /srv directory:

Contents of /srv/ftp directory:

Contents of /srv/ftp/hello-beer directory:

The .done and .error directories are where the files will be moved to after processing.
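As a rough sketch (the exact ownership and modes are assumptions and depend on how vsftpd is configured on your machine), preparing those directories could look like this:

sudo mkdir -p /srv/ftp/hello-beer/.done /srv/ftp/hello-beer/.error
# the anonymous ftp user needs write access to the upload directory and its subdirectories
sudo chown -R ftp:ftp /srv/ftp/hello-beer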

Spring Booting the OrderService

The OrderService implementation is just plain old Spring Boot. For a good tutorial, check one of my previous blog posts here. The REST controller exposes a GET method for retrieving a list of orders and a POST method for adding a new order:

@RestController
@RequestMapping("/hello-camel/1.0")
public class OrderController {

    private final OrderRepository orderRepository;

    @Autowired
    public OrderController(OrderRepository orderRepository) {
        this.orderRepository = orderRepository;
    }

    @RequestMapping(value = "/order", method = RequestMethod.POST, produces = "application/json")
    public Order saveOrder(@RequestBody Order order) {
        return orderRepository.save(order);
    }

    @RequestMapping(value = "/orders", method = RequestMethod.GET, produces = "application/json")
    public List<Order> getAllOrders() {
        return orderRepository.findAll();
    }
}

Most of the heavy lifting is done in the domain classes. I wanted the Order to be one coherent entity including its Order Items, so I’m using a bidirectional OneToMany relationship here. To get this guy to play nicely with the REST controller and the Swagger APIs generated by the springfox-swagger2 plugin, I had to annotate the living daylights out of the entities. I consulted a lot of tutorials to finally get the configuration right. Please check the references section for some background material. These are the finalized classes that worked for me (please note that I’ve omitted the getters and setters for brevity):

The Order class:

@Entity
@Table(name = "hb_order")
public class Order {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @ApiModelProperty(readOnly = true)
    @JsonProperty(access = JsonProperty.Access.READ_ONLY)
    private Long id;

    @NotNull
    private Long customerId;

    @OneToMany(
        mappedBy = "order",
        cascade = CascadeType.ALL,
        orphanRemoval = true)
    @JsonManagedReference
    private List<OrderItem> orderItems;
}

The ApiModelProperty is used by the generated Swagger definitions and takes care that the id field only pops up in the response messages of the GET and POST methods, not in the POST request message (since the id is generated). The JsonProperty annotation takes care that id fields sent to the API aren’t unmarshalled from the JSON message into the entity POJO instance. In the OneToMany annotation the mappedBy attribute is crucial for the bidirectional setup to work properly (again: check the references!). The JsonManagedReference annotation is needed to avoid circular reference errors. It goes hand in hand with the JsonBackReference annotation on the Order Item (stay tuned!).

The OrderItem class:

@Entity
@Table(name = "hb_order_item")
public class OrderItem {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @JsonProperty(access = JsonProperty.Access.READ_ONLY)
    @ApiModelProperty(readOnly = true)
    private Long id;

    @ManyToOne(fetch = FetchType.LAZY, cascade = CascadeType.ALL)
    @JoinColumn(name = "order_id")
    @JsonBackReference
    private Order order;

    @NotNull
    private Long inventoryItemId;

    @NotNull
    private Long quantity;
}

Again here the id field is made read-only for the API and for the Swagger definition. The ManyToOne and JoinColumn annotations are key to properly implement the bidirectional OneToMany relationship between the Order and OrderItem. And equally key is the JsonBackReference annotation on the Order field. Without this guy (and its corresponding JsonManagedReference annotation on the Order.orderItems field) you get errors when trying to POST a new Order (one last time: check the references!).

The rest of the code is available on the aforementioned GitHub location. If you give it a spin, you can check out the API on the Swagger page (http://localhost:8080/swagger-ui.html) and test it a bit. You should be able to POST and GET orders to and from the in-memory database.

Camelling out the integration

Now that we have a working OrderService running, let’s see if we can build a flow from the FTP server to the OrderService using Camel.

First step is adding the necessary dependencies to our pom.xml:

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-spring-boot-starter</artifactId>
    <version>${camel.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-core-starter</artifactId>
    <version>${camel.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-ftp-starter</artifactId>
    <version>${camel.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-jacksonxml-starter</artifactId>
    <version>${camel.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-jackson-starter</artifactId>
    <version>${camel.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-http-starter</artifactId>
    <version>${camel.version}</version>
</dependency>

The camel-spring-boot-starter is needed when you’re gonna work with Camel in a Spring Boot application. As for the other dependencies: it’s not that different from building a non-Spring Boot Camel application. For every Camel component you need, you add the corresponding dependency; the big difference is that you use the variant with the -starter suffix.

Alright so what are all these dependencies needed for:

  • camel-core-starter: used for core functionality, you basically always need this guy;
  • camel-ftp-starter: contains the ftp component;
  • camel-jacksonxml-starter: used to unmarshal the XML in the Order to our Order POJO;
  • camel-jackson-starter: used to marshal the Order POJO to JSON;
  • camel-http-starter: used to issue a POST request to the OrderController REST API.

Believe it or not, now the only thing we have to take care of is to build our small Camel integration component utilizing all these dependencies:

@Component
public class FtpOrderToOrderController extends RouteBuilder {

    @Override
    public void configure() throws Exception {

    JacksonDataFormat jacksonDataFormat = new JacksonDataFormat();
    jacksonDataFormat.setInclude("NON_NULL");
    jacksonDataFormat.setPrettyPrint(true);

    from("ftp://localhost/hello-beer?username=anonymous&move=.done&moveFailed=.error")
        .log("${body}")
        .unmarshal().jacksonxml(Order.class)
        .marshal(jacksonDataFormat)
        .log("${body}")
        .setHeader(Exchange.HTTP_METHOD, constant("POST"))
        .to("http://localhost:8080/hello-camel/1.0/order");
    }
}

Some pointers to the above code:

  • The .done and .error directories are where successfully and unsuccessfully processed Orders end up. If you don’t take care of moving the orders, they will be processed again and again;
  • The NON_NULL clause added to the JacksonDataFormat, filters out the id fields when marshalling the POJO to JSON;
  • The XML and JSON will be logged so you can verify that the transformations are working as expected.

The rest of the route imho is self-explanatory.

Oh and one more thing. I like my XML elements to be capitalized. So our Order XML element contains a CustomerId element, not a customerId element. This only works if you give the jacksonxml mapper some hints in the form of annotations on the Order (and OrderItem) POJO (note that I’ve omitted the other annotations in the code below):

public class Order {
    
    private Long id;

    @JacksonXmlProperty(localName="CustomerId")
    private Long customerId;

    @JacksonXmlProperty(localName="OrderItems")
    private List<OrderItem> orderItems;
}

The same applies to the OrderItem; see GitHub for the definitive code. A rough sketch is shown below.
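For illustration, the annotated OrderItem could look roughly like this (other annotations omitted, just like above; the localName values follow the element names used in the XML example further down):

public class OrderItem {

    private Long id;

    @JacksonXmlProperty(localName = "InventoryItemId")
    private Long inventoryItemId;

    @JacksonXmlProperty(localName = "Quantity")
    private Long quantity;
}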

Testing the beasty

Now as always the proof is in the tasting of the beer. Time to fire up the Spring Boot application and place our first Order on the FTP server.

I’ve created a small newOrder.xml file and put it in a local directory. It looks like this:

<?xml version="1.0" encoding="UTF-8" ?>
<Order>
    <CustomerId>1</CustomerId>
    <OrderItems>
        <OrderItem>
            <InventoryItemId>1</InventoryItemId>
            <Quantity>100</Quantity>
        </OrderItem>
        <OrderItem>
            <InventoryItemId>2</InventoryItemId>
            <Quantity>50</Quantity>
        </OrderItem>
    </OrderItems>
</Order>

Now when I connect to my local FTP server, change to the hello-beer directory and issue a PUT of that local newOrder.xml file, I can see the logging of the Camel component appearing in my IntelliJ IDE:

As you can see the first log statement has been executed and the XML content of the file is displayed. The second log statement has been executed as well and nicely displays the message body after it has been transformed into JSON.

You will also notice that the file has been moved to the .done directory. You can also do this test with an invalid xml file and notice that it ends up in the .error directory.

One last test needed. Let’s issue a GET against the hello-camel/1.0/orders endpoint with the help of the Swagger UI. And lo and behold the response:
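If you prefer the command line over the Swagger UI, the same GET can be issued with curl:

curl http://localhost:8080/hello-camel/1.0/orders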

Great, so our newOrder.xml that arrived on our FTP server has been nicely stored in our database. Our first prototype is working. Our AA customers will be delighted to hear this.

Summary

In this blog post we’ve seen how easy it is to integrate with Apache Camel in a Spring Boot application. We coded an FTP-to-REST integration flow in no time and even put some XML-to-JSON transformation into the mix. I like the fact that we can keep the integration code nice and clean and separated from the rest of the application.

Testing is still a bit of trial and error though. Let’s see if we can put some proper unit tests in place in the next blog post. For now: happy drinking!

References

JHipster – Making things a little less hip

Just like a good old Belgian beer can make for a nice change of pace after you’ve filled up on all those crafty IPAs and Stouts, it’s not always necessary to go for the latest and greatest. Last post saw us using Kafka as a message broker. In this blog post we’ll put a more traditional broker in between our thirsty beer clients and our brewery pumping out the happy juice! This blog is all about RabbitMQ! So let’s end this introduction and get started!
The final version of the code can be found here. Instead of building the whole thing from scratch like we did in the Kafka blog, we’ll be using a JHipster generator module this time.

JHipster Spring Cloud Stream generator

The JHipster Spring Cloud Stream generator can add RabbitMQ/Spring Cloud Stream support to our HelloBeer application. It uses the Yeoman Generator to do this.

Installation

Installing and running the generator is pretty straightforward. The steps are explained in the page’s README.md:

  • First install the generator:
yarn global add generator-jhipster-spring-cloud-stream
  • Next run the generator (from the directory of our JHipster application) and accept the defaults:
yo jhipster-spring-cloud-stream
  • Finally spin up the generated RabbitMQ docker-compose file to start the RabbitMQ message broker:
docker-compose -f src/main/docker/rabbitmq.yml up -d

Generated components

You can actually run the application now and see the queue in action. But before we do that, let’s first take a look at what the generator did to our JHipster application:

  • application-dev.yml/application-prod.yml: modified to add RabbitMQ topic configuration;
  • pom.xml: modified to add the Spring Cloud Stream dependencies;
  • rabbitmq.yml: the docker-compose file to spin up the RabbitMQ broker;
  • CloudMessagingConfiguration: configures a RabbitMQ ConnectionFactory;
  • JhiMessage: domain class to represent a message (with a title and a body) to be put on the RabbitMQ topic;
  • MessageResource: REST controller to POST a message onto the RabbitMQ topic and GET the list of posted messages;
  • MessageSink: service class that subscribes to the topic and puts the received messages in a List (the list that gets read when issuing a GET via the MessageResource); see the sketch below this list.
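
To give an idea of what that sink boils down to, here’s a rough sketch (not the generator’s literal output) of a service that listens on the default Sink channel and collects the incoming messages:

@Service
@EnableBinding(Sink.class)
public class MessageSink {

    private final Logger log = LoggerFactory.getLogger(MessageSink.class);

    // messages received from the topic; exposed through the GET operation of the MessageResource
    private final List<JhiMessage> jhiMessages = new ArrayList<>();

    @StreamListener(Sink.INPUT)
    public void receive(JhiMessage jhiMessage) {
        log.debug("Received message: {}", jhiMessage);
        jhiMessages.add(jhiMessage);
    }

    public List<JhiMessage> getJhiMessages() {
        return jhiMessages;
    }
}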

Running and testing

Alright, let’s test the RabbitMQ broker the generator set up for us. Run the JHipster application, log in as the admin user and go to the API page. You’ll see that a new message-resource REST service has been added to the list of services:


Call the POST operation a few times to post some messages to the RabbitMQ topic (which fills up the jhiMessages list):


Now, issue the GET operation to retrieve all the messages you POSTed in the previous step:


Cool! Working as expected. Now let’s get to work and put another RabbitMQ topic in place to decouple our OrderService again (like we did with Kafka in our previous blog).

Replacing Kafka with RabbitMQ


Now we’re gonna put another RabbitMQ topic in between the Order REST service and the Order Service, just like we did with Kafka in our previous blogpost. Let’s leave the topic that the generator created in place. Since that guy is using the default channels, we’ll have to add some custom channels for our new topic that will handle the order processing.

First add a channel for publishing to a new RabbitMQ topic – we’ll be configuring the topic in a later step – and call it orderProducer:

import org.springframework.cloud.stream.annotation.Output;
import org.springframework.messaging.MessageChannel;

public interface OrderProducerChannel {
  String CHANNEL = "orderProducer";

  @Output
  MessageChannel orderProducer();
}

We also need a channel for consuming orders from that topic. Let’s call that one orderConsumer:

import org.springframework.cloud.stream.annotation.Input;
import org.springframework.messaging.SubscribableChannel;

public interface OrderConsumerChannel {
  String CHANNEL = "orderConsumer";

  @Input
  SubscribableChannel orderConsumer();
}

Now link those two channels to a new topic called topic-order in the application-dev.yml configuration file:

spring:
    cloud:
        stream:
            default:
                contentType: application/json
            bindings:
                input:
                    destination: topic-jhipster
                output:
                    destination: topic-jhipster
                orderConsumer:
                    destination: topic-order
                orderProducer:
                    destination: topic-order

The changes needed in the OrderResource controller are similar to the ones we made for the Kafka setup. The biggest difference is in the channel names, since the default channels are already taken by the generated example code.
Another difference is that we put the EnableBinding annotation directly on this class instead of on a Configuration class. This way the Spring DI framework can figure out that the injected MessageChannel should be the orderProducer channel. If you put the EnableBinding on a Configuration class – like we did in our Kafka setup – you need to use a Qualifier or inject the interface – OrderProducerChannel – instead, since there are now multiple MessageChannel beans and Spring wouldn’t know which one to inject.
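
To illustrate that alternative: with the EnableBinding on a Configuration class you could inject the bindable interface itself and pull the channel out of it, roughly like this (a sketch, not what the repo does):

  // sketch: constructor injection of the channel interface instead of a MessageChannel
  public OrderResource(final OrderProducerChannel orderProducerChannel) {
    this.orderProducer = orderProducerChannel.orderProducer();
  }

Anyway, here’s the new OrderResource with the EnableBinding placed directly on the controller: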

@RestController
@RequestMapping("/api/order")
@EnableBinding(OrderProducerChannel.class)
public class OrderResource {

  private final Logger log = LoggerFactory.getLogger(OrderResource.class);
  private static final String ENTITY_NAME = "order";
  private MessageChannel orderProducer;

  public OrderResource (final MessageChannel orderProducer) {
    this.orderProducer = orderProducer;
  }

  @PostMapping("/process-order")
  @Timed
  public ResponseEntity<OrderDTO> processOrder(@Valid @RequestBody OrderDTO order) {
    log.debug("REST request to process Order : {}", order);
    if (order.getOrderId() == null) {
        throw new InvalidOrderException("Invalid order", ENTITY_NAME, "invalidorder");
    }
    orderProducer.send(MessageBuilder.withPayload(order).build());

    return ResponseEntity.ok(order);
  }
}

In our OrderService we also add the EnableBinding annotation. And again we use the StreamListener annotation to consume orders from the topic, but this time we point the listener at our custom orderConsumer channel:

@Service
@Transactional
@EnableBinding(OrderConsumerChannel.class)
public class OrderService {
  ....
  @StreamListener(OrderConsumerChannel.CHANNEL)
  public void registerOrder(OrderDTO order) throws InvalidOrderException {
    ....
  }
  ....
}

Building unit/integration tests for the RabbitMQ setup is not much different from the techniques we’ve used in the Kafka setup. Check my previous blog post for the examples.
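
Just to give an idea – this is a sketch, not the actual test in the repo – the MessageCollector approach from the Kafka post, pointed at the custom orderProducer channel, could look something like this:

@RunWith(SpringRunner.class)
@SpringBootTest(classes = HelloBeerApp.class)
public class OrderResourceTest {

    @Autowired
    private OrderProducerChannel orderProducerChannel;

    @Autowired
    private MessageCollector messageCollector;

    @Test
    public void orderIsPublishedToTheOrderChannel() {
        OrderDTO order = new OrderDTO();
        order.setOrderId(1L);
        order.setCustomerId(1L);

        new OrderResource(orderProducerChannel.orderProducer()).processOrder(order);

        // the test binder captures the message, so no running RabbitMQ broker is needed
        Message<?> received = messageCollector.forChannel(orderProducerChannel.orderProducer()).poll();
        assertNotNull(received);
    }
}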

Testing the setup

Alright, let’s test our beast again. These are the stock levels before:

[Screenshot: Item Stock Levels]

Now let’s call the OrderResource and place an order of 20 Small bottles of Dutch Pilsner:


Check the stock levels again:

[Screenshot: Item Stock Levels]

Notice the new item stock level line! The inventory item went down from 90 to 70. Our RabbitMQ setup is working! Cheers!

Summary

In this blog post we saw how easy it is to switch from Kafka to RabbitMQ. Spring Cloud Stream abstracts away most of the differences, so the code barely changed. We also used a generator this time to do most of the heavy lifting. Time for a little vacation in which I’m gonna think about my next blog post. More JHipster, a look at Spring Cloud Stream’s error handling possibilities, or should I switch to some posts about other Spring Cloud modules? Let’s drink a few HelloBeerTM crafts and ponder that!

References

JHipster – Streaming beer with Kafka and Spring Cloud Stream

Now that our OrderService is up and running, it’s time to make it a little more robust and decoupled. In this blog post we’re gonna put Kafka in between the OrderResource controller and our Spring Boot back-end system and use Spring Cloud Stream to ease development:


When creating a JHipster application you’re given the option to select Asynchronous messages using Apache Kafka. After generation your pom file and application.yml will be all set up for using Kafka and Spring Cloud Stream. You’ll also get a docker-compose file to spin up Kafka (and Zookeeper), and a MessagingConfiguration class will be generated. That’s where you declare your input and output channels (channels are Spring Cloud Stream abstractions; they’re the connection between the application and the message broker). If you follow the JHipster documentation on Kafka here – right after generating a virgin JHipster app – you should have a working flow up in no time.

Now, I wanna further improve upon the current HelloBeerTM application we finished in my previous blog post, and I didn’t check the Asynchronous messages option when I initially created the application. It’s not possible to add the option afterwards via the CLI, but luckily it’s not that hard to add the necessary components manually. So let’s get started and make those beer orders flow through a Kafka topic straight into our back-end application.
As always the finished code can be found on GitHub.

Kafka Docker image

Alright, this guy I just ripped from a new JHipster app with the messaging option enabled. Add this kafka.yml file to the src/main/docker directory:

version: '2'
services:
    zookeeper:
        image: wurstmeister/zookeeper:3.4.6
        ports:
          - 2181:2181
    kafka:
        image: wurstmeister/kafka:1.0.0
        environment:
            KAFKA_ADVERTISED_HOST_NAME: localhost
            KAFKA_ADVERTISED_PORT: 9092
            KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
            KAFKA_CREATE_TOPICS: "topic-jhipster:1:1"
        ports:
            - 9092:9092

You can spin up Kafka now with this file by issuing the following command:

docker-compose -f src/main/docker/kafka.yml up -d

Adding the dependencies

The following dependencies are needed to enable Spring Cloud Stream and have it integrate with Kafka:

<!-- Kafka support -->
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream-test-support</artifactId>
  <scope>test</scope>
</dependency>

Configuring the channels

Since we only need one Kafka Topic, we can use the default channels that Spring Cloud Stream has to offer. We need one input and one output channel, so we can use the combined Processor interface. For a more complex setup with multiple topics, you can write your own custom interfaces for the channels (this is also the practice in the JHipster documentation example). For more information about channels check the Spring Cloud Stream Reference Guide.
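
For reference, the default interfaces that ship with Spring Cloud Stream (in org.springframework.cloud.stream.messaging) boil down to roughly this:

public interface Source {
    String OUTPUT = "output";

    @Output(Source.OUTPUT)
    MessageChannel output();
}

public interface Sink {
    String INPUT = "input";

    @Input(Sink.INPUT)
    SubscribableChannel input();
}

// Processor simply combines the two, giving us one input and one output channel
public interface Processor extends Source, Sink {
}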

MessagingConfiguration

First add the configuration for the Processor channel. This is done in the
MessagingConfiguration class. We’ll add this guy to the config package, the place where JHipster stores all Spring Boot configuration.

package nl.whitehorses.hellobeer.config;

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Processor;

@EnableBinding(value = Processor.class)
public class MessagingConfiguration {
}

application-dev.yml

The overall configuration needs a few properties to let the application know where to find Kafka and to let Spring Cloud Stream channels bind to a Kafka topic. Let’s call the topic hb-orders. I’ll only put the configuration in the development configuration – application-dev.yml – for now:

spring:
    cloud:
        stream:
            kafka:
                binder:
                    brokers: localhost
                    zk-nodes: localhost
            bindings:
                output:
                    destination: hb-orders
                    content-type: application/json
                input:
                    destination: hb-orders

Note that input and output are the default channel names when working with the default channel interfaces.
That’s it for the channel configuration. Now we can use them in our back-end code.

OrderResource – Publishing to Kafka

Let’s alter our OrderResource so it publishes the OrderDTO object to the output channel instead of calling the OrderService directly:

@RestController
@RequestMapping("/api/order")
public class OrderResource {

  private static final String ENTITY_NAME = "order";
  private final Logger log = LoggerFactory.getLogger(OrderResource.class);
  private MessageChannel channel;

  public OrderResource(final Processor processor) {
    this.channel = processor.output();
  }

  @PostMapping("/process-order")
  @Timed
  public ResponseEntity<OrderDTO> processOrder(@Valid @RequestBody OrderDTO order) {
    log.debug("REST request to process Order : {}", order);
    if (order.getOrderId() == null) {
      throw new BadRequestAlertException("Error processing order", ENTITY_NAME, "orderfailure");
    }
    channel.send(MessageBuilder.withPayload(order).build());

    return ResponseEntity.ok(order);
  }
}

Not much going on here. Just inject the Processor and its channel and send the OrderDTO object through it.

OrderService – Subscribing to Kafka

@Service
@Transactional
public class OrderService {
  ....
  @StreamListener(Processor.INPUT)
  public void registerOrder(OrderDTO order) throws InvalidOrderException {
    ....
  }
  ....
}

Even simpler. The only change is adding the StreamListener annotation to the registerOrder method, making sure that guy fires every time an order arrives on the topic.

Testing code

The spring-cloud-stream-test-support dependency (test-scoped) enables testing without a connected messaging system. Messages published to topics can be inspected via the MessageCollector class. I’ve rewritten the OrderResourceTest class to check if the OrderDTO is published to the message channel when calling the OrderResource:

@RunWith(SpringRunner.class)
@SpringBootTest(classes = HelloBeerApp.class)
public class OrderResourceTest {

  @SuppressWarnings("SpringJavaInjectionPointsAutowiringInspection")
  @Autowired
  private Processor processor;

  @SuppressWarnings("SpringJavaInjectionPointsAutowiringInspection")
  @Autowired
  private MessageCollector messageCollector;

  private MockMvc restMockMvc;

  @Before
  public void setUp() {
    MockitoAnnotations.initMocks(this);

    OrderResource orderResource = new OrderResource(processor);
    restMockMvc = MockMvcBuilders
      .standaloneSetup(orderResource)
      .build();
  }

  @Test
  public void testProcessOrder() throws Exception {
    OrderItemDTO orderItem1 = new OrderItemDTO(1L, 50L);
    OrderItemDTO orderItem2 = new OrderItemDTO(2L, 50L);
    OrderDTO order = new OrderDTO();
    order.setCustomerId(1L);
    order.setOrderId(1L);
    order.setOrderItems(Arrays.asList(orderItem1, orderItem2));

    restMockMvc.perform(
      post("/api/order/process-order")
        .contentType(TestUtil.APPLICATION_JSON_UTF8)
        .content(TestUtil.convertObjectToJsonBytes(order)))
        .andExpect(status().isOk());

    Message<?> received = messageCollector.forChannel(processor.output()).poll();
    assertNotNull(received);
    assertEquals(order, received.getPayload());

  }

}

In the OrderServiceIntTest I changed one of the test methods so it publishes an OrderDTO message on the (test) channel that the OrderService is subscribed to:

@Test
@Transactional
public void assertOrderOK() throws InvalidOrderException {
  ....
  //orderService.registerOrder(order);
  Message<OrderDTO> message = new GenericMessage<OrderDTO>(order);
  processor.input().send(message);
  ....
}

More information about Spring Cloud Stream testing can be found here.

Wiring it all up

Now let’s see if our beers will flow. So here are our stock levels before:
[Screenshot: Item Stock Levels]

Now post a new (valid) order with Postman:

And behold our new stock levels:
[Screenshot: Item Stock Levels]

It still works! So our new setup with a Kafka topic in the middle is working like a charm! Note that this is a very simplistic example. To make it more robust – for one, what about failed orders?! – the first step would be to move the topic consumer code away from the OrderService and put it in a separate class. That consumer class can delegate processing to an injected OrderService and deal with possible errors, e.g. by moving the order to another topic. And with another topic you need custom interfaces for your channels as well.
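
Purely as a sketch of that idea – none of this is in the repo, and the FailedOrderChannel interface, its failedOrderProducer channel and the class names are made up for illustration – such a consumer could look like this:

// FailedOrderChannel.java (hypothetical): channel for a separate failed-order topic;
// it would also need its own binding in application-dev.yml
public interface FailedOrderChannel {
    String CHANNEL = "failedOrderProducer";

    @Output(FailedOrderChannel.CHANNEL)
    MessageChannel failedOrderProducer();
}

// OrderConsumer.java (hypothetical): consumes orders from the existing input channel
// and delegates the actual processing to the OrderService
@Service
@EnableBinding(FailedOrderChannel.class)
public class OrderConsumer {

    private final OrderService orderService;
    private final MessageChannel failedOrderProducer;

    public OrderConsumer(OrderService orderService, FailedOrderChannel failedOrderChannel) {
        this.orderService = orderService;
        this.failedOrderProducer = failedOrderChannel.failedOrderProducer();
    }

    @StreamListener(Processor.INPUT)
    public void onOrder(OrderDTO order) {
        try {
            orderService.registerOrder(order);
        } catch (InvalidOrderException e) {
            // park the failed order on its own topic instead of losing it
            failedOrderProducer.send(MessageBuilder.withPayload(order).build());
        }
    }
}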

Summary

In this blog post we introduced a Kafka topic to separate our Order clients from our Order processing. With the help of Spring Cloud Stream this is very easy to do. We also looked at a few ways to test messaging with Spring Cloud Stream.
The plan was to say goodbye to JHipster for now, but maybe I’ll do one more blog post. I wanna find out how easy it is to switch from Kafka to RabbitMQ, or maybe improve upon this version and introduce a failed-order topic. I also wanna test how easy it is to upgrade this JHipster app to the latest version. So many ideas, so little time! Anyhow, let’s grab a beer first and think about that next blog!

References