Deploying a Camel Spring DSL integration on Red Hat JBoss EAP

The guys from HelloBeer are still recuperating from a massive hangover after recently celebrating their 15th blog post. Copious amounts of quality beers were consumed and great new ideas for future blog posts were discussed during the festivities.

Time to take a small pause! Before continuing our HelloBeer series, let’s do a quick tutorial on running a Camel integration on a standalone Red Hat JBoss EAP server.

So, in this blog post we’re gonna build a simple File-to-File integration with some content-based routing added for fun. The integration is based on the default project generated from the maven archetype camel-archetype-spring. For deployment we’re gonna use some integrated features in IntelliJ provided by the JBoss Integration plugin.

The final code is available on GitHub here.

EAP 7 installation

For this blog I’m using EAP 7.1.0, which can be downloaded here.

Installation instructions are available here.

For simplicity’s sake I just installed the server in a directory in my local home, i.e. ~/EAP-7.1.0.
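If you’re following along, the whole installation boils down to something like this (the exact archive and folder names depend on your download, so treat these as an assumption):

unzip jboss-eap-7.1.0.zip -d ~/
mv ~/jboss-eap-7.1 ~/EAP-7.1.0
~/EAP-7.1.0/bin/standalone.sh

The last command starts the server in standalone mode, handy for a quick smoke test of the installation.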

The project

Like I already said, the project is a simple file-to-file camel integration with a little content-based routing using the Spring DSL. Some pointers for setting up the project can be found here and here. Let’s break it down.

pom.xml

The most interesting parts of the maven pom file are the Camel and Spring dependencies.

<dependencyManagement>
  <dependencies>
    <!-- Camel BOM -->
    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-parent</artifactId>
      <version>2.22.2</version>
      <scope>import</scope>
      <type>pom</type>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-core</artifactId>
    <version>${camel.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-spring</artifactId>
    <version>${camel.version}</version>
  </dependency>
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
    <version>5.1.2.RELEASE</version>
  </dependency>
</dependencies>

We’re importing the camel-parent BOM and are only using camel-core (the File component is part of the core). The camel-spring dependency is needed to use the Spring DSL and the spring-web dependency is needed to run the application as a Spring web application.
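One thing to note: the BOM hardcodes version 2.22.2 while the individual dependencies reference ${camel.version}. Defining the version once as a maven property keeps the two in sync (a small sketch, assuming 2.22.2 is the version you want everywhere):

<properties>
  <camel.version>2.22.2</camel.version>
</properties>

The dependencyManagement section can then reference ${camel.version} as well.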

web.xml

Apart from the spring-web dependency you also need to add a listener to the web.xml file (present in the src/main/webapp/WEB-INF folder) to enable Spring:

<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
         http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
         version="2.5">
  <listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
  </listener>
</web-app>

applicationContext.xml

Now the last piece of the puzzle is the actual integration. The route is coded in Spring DSL and has been put in the (default) applicationContext.xml file present in the src/main/webapp/WEB-INF folder.

<camelContext xmlns="http://camel.apache.org/schema/spring">
  <!-- here is a sample which processes the input files
       (leaving them in place - see the 'noop' flag)
       then performs content based routing on the message using XPath -->
  <route>
    <from uri="file:{{env:HOME}}/Work/data?noop=true"/>
    <choice>
      <when>
        <xpath>/person/city = 'London'</xpath>
        <log message="UK message"/>
        <to uri="file:{{env:HOME}}/Work/data/messages/uk"/>
      </when>
      <otherwise>
        <log message="Other message"/>
        <to uri="file:{{env:HOME}}/Work/data/messages/others"/>
      </otherwise>
    </choice>
  </route>
</camelContext>

This is the default route generated by the maven archetype camel-archetype-spring. The route hopefully speaks for itself. I’ve adjusted it so it checks the $HOME/Work/data directory for new XML files to process instead of a data directory present in the project.

Jetty

To enable quick testing without having to deploy to EAP every time, I’ve also put the Jetty plugin in the maven pom.
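For reference, a minimal plugin declaration looks something like this (the exact version here is an assumption on my part; check the GitHub repo for the one actually used):

<plugin>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-maven-plugin</artifactId>
  <version>9.4.12.v20180830</version>
</plugin>

With all the pieces now in place, we can verify that our basic integration actually works: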

mvn jetty:run

Now if you copy the example XML files (available in the data directory) into your $HOME/Work/data directory you can see the magic happen:
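In case you don’t have the example files at hand, any XML file matching the structure the XPath expression expects will do. An illustrative one (the fields other than city are my own invention):

<person>
  <firstName>John</firstName>
  <lastName>Doe</lastName>
  <city>London</city>
</person>

This guy should end up in the messages/uk directory; give it a city other than London and it lands in messages/others.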

Deploying

For deployment to the EAP server we’re gonna use the JBoss Integration Plugin in IntelliJ.

After we’ve installed the plugin, we can add a configuration to the project to fire up the server and deploy the war to EAP. For pointers see here and here.

First let’s add an artifact to the project pointing to a deployable (File > Project Structure > Artifacts).

Select Web Application Exploded > From Modules…, select the project and, after creation, change the Output directory to a subdirectory of the target directory (so a mvn clean will clean it up):

Now that we have a deployable artifact defined, the last step is to add a configuration to the project to deploy the artifact to EAP as part of the build.

Click on Add Configuration… and add a configuration based on the JBoss Server Local template.

Give the configuration a meaningful name, configure the Application Server to point to the EAP installation home and add the artifact created in the previous step to the Deployment tab:

Testing

Now let’s run our new configuration and see what happens:

Looking at the last three log entries, we can see that our integration is working nicely after deployment to EAP.

The management console, which is opened automatically, also shows our exploded archive in the Deployments section:

Summary

In this blog post we’ve built a simple camel integration using the Spring DSL, packaged it as a Spring Web application and deployed it to a local JBoss EAP server with a little help from IntelliJ.

References


Hello Beer Camel Quality Control

In our previous blog post we saw our Camels smuggling along their craft beer contraband to our thirsty customers. We can expect them to expand our craft business quite rapidly in the near future and open up new black and white markets (hopefully this will keep our shareholders happy and quiet for the time being!). For all this expansion to succeed however, we need to get our quality control in order pronto! The last thing we want is for dromedaries disguised as camels to deliver imitation crafts to our customers and thereby compromise our highly profitable trade routes. So, high time we put some unit testing in place and make our implementation a little more flexible and maintainable.

In this blog post we’ll unit test our Spring REST controller and our Camel route. We also get rid of those hardcoded endpoints and replace them with proper environment-specific properties. So buckle up, grab a beer and let’s get started!

Oh and as always, final code can be viewed online.

Unit testing the controller

Though technically this has nothing to do with Camel, it’s good practice to unit test all important classes, so let’s first tackle and unit test our Spring Boot REST controller.

We only need the basic Spring Boot Starter Test dependency for this guy:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>

Let’s test the most interesting part of the controller, i.e. the saveOrder method.

@RunWith(SpringRunner.class)
@WebMvcTest(OrderController.class)
public class OrderControllerTest {

    @Autowired
    private MockMvc mockMvc;

    @MockBean
    private OrderRepository orderRepositoryMock;

    @Test
    public void saveOrder() throws Exception {
        OrderItem orderItem1 = new OrderItemBuilder().setInventoryItemId(1L).setQuantity(100L).build();
        OrderItem orderItem2 = new OrderItemBuilder().setInventoryItemId(2L).setQuantity(50L).build();
        Order order = new OrderBuilder().setCustomerId(1L).addOrderItems(orderItem1, orderItem2).build();

        OrderItem addedItem1 = new OrderItemBuilder().setId(2L).setInventoryItemId(1L).setQuantity(100L).build();
        OrderItem addedItem2 = new OrderItemBuilder().setId(3L).setInventoryItemId(2L).setQuantity(50L).build();
        Order added = new OrderBuilder().setId(1L).setCustomerId(1L).addOrderItems(addedItem1, addedItem2).build();

        when(orderRepositoryMock.save(any(Order.class))).thenReturn(added);

        mockMvc.perform(post("/hello-camel/1.0/order")
            .contentType(TestUtil.APPLICATION_JSON_UTF8)
            .content(TestUtil.convertObjectToJsonBytes(order)))
            .andExpect(status().isOk())
            .andExpect(content().contentType(TestUtil.APPLICATION_JSON_UTF8))
            .andExpect(jsonPath("$.id", is(1)))
            .andExpect(jsonPath("$.customerId", is(1)))
            .andExpect(jsonPath("$.orderItems[0].id", is(2)))
            .andExpect(jsonPath("$.orderItems[0].inventoryItemId", is(1)))
            .andExpect(jsonPath("$.orderItems[0].quantity", is(100)))
            .andExpect(jsonPath("$.orderItems[1].id", is(3)))
            .andExpect(jsonPath("$.orderItems[1].inventoryItemId", is(2)))
            .andExpect(jsonPath("$.orderItems[1].quantity", is(50)));

        ArgumentCaptor<Order> orderCaptor = ArgumentCaptor.forClass(Order.class);
        verify(orderRepositoryMock, times(1)).save(orderCaptor.capture());
        verifyNoMoreInteractions(orderRepositoryMock);

        Order orderArgument = orderCaptor.getValue();
        assertNull(orderArgument.getId());
        assertThat(orderArgument.getCustomerId(), is(1L));
        assertEquals(2, orderArgument.getOrderItems().size());
    }
}

Hopefully most of this code speaks for itself. Here are some pointers:

  • The WebMvcTest(OrderController.class) annotation ensures that you can test the OrderController in isolation. With this guy you can autowire a MockMvc instance that basically has all you need to unit test a controller;
  • The controller has a dependency on the OrderRepository, which we will mock in this unit test using the @MockBean annotation;
  • We first use some helper builder classes to fluently build our test Order instances;
  • Next we configure our mock repository to return a full fledged Order object when the save method is called with an Order argument;
  • Now we can actually POST an Order object to our controller and test the JSON being returned;
  • Next we check whether the mock repository was called and ensure that it was called only once;
  • Finally we check the Order POJO that was sent to our mock repository.

Running the test will show us we built a high-quality controller here. There’s also a unit test available for the GET method; the full version is on GitHub, and since it’s a lot easier, a quick sketch below will do to keep this blog post from getting too verbose.
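A minimal sketch of what such a GET test could look like (the real one on GitHub may differ in its details):

    @Test
    public void getAllOrders() throws Exception {
        Order order = new OrderBuilder().setId(1L).setCustomerId(1L).build();
        when(orderRepositoryMock.findAll()).thenReturn(Collections.singletonList(order));

        mockMvc.perform(get("/hello-camel/1.0/orders"))
            .andExpect(status().isOk())
            .andExpect(content().contentType(TestUtil.APPLICATION_JSON_UTF8))
            .andExpect(jsonPath("$", hasSize(1)))
            .andExpect(jsonPath("$[0].customerId", is(1)));

        verify(orderRepositoryMock, times(1)).findAll();
        verifyNoMoreInteractions(orderRepositoryMock);
    }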

Testing the Camel route

Now for the most interesting part. We want to test the Camel route we built in our previous blog post. Let’s first revisit it again:

from("ftp://localhost/hello-beer?username=anonymous&move=.done&moveFailed=.error")
    .log("${body}")
    .unmarshal().jacksonxml(Order.class)
    .marshal(jacksonDataFormat)
    .log("${body}")
    .setHeader(Exchange.HTTP_METHOD, constant("POST"))
    .to("http://localhost:8080/hello-camel/1.0/order");

There’s a lot going on in this route. Ideally I would like to perform two tests:

  • One to check if the XML consumed from the ftp endpoint is being properly unmarshalled to an Order POJO;
  • One to check the quality of the subsequent marshalling of said POJO to JSON and also to check if it’s being sent to our REST controller.

So let’s split our route into two routes to reflect this:

from("ftp://localhost/hello-beer?username=anonymous&move=.done&moveFailed=.error")
    .routeId("ftp-to-order")
    .log("${body}")
    .unmarshal().jacksonxml(Order.class)
    .to("direct:new-order").id("new-order");

from("direct:new-order")
    .routeId("order-to-order-controller")
    .marshal(jacksonDataFormat)
    .log("${body}")
    .setHeader(Exchange.HTTP_METHOD, constant("POST"))
    .to("http://localhost:8080/hello-camel/1.0/order").id("new-order-controller");

Note that we added ids to our routes as well as our producer endpoints. You’ll see later on – when we’re gonna replace the producer endpoints with mock endpoints – why we need these. Also note that we’ve set up direct endpoints in the middle of our original route. This will allow us to split the route in two.

Testing Camel routes requires one additional dependency:

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-test-spring</artifactId>
    <version>${camel.version}</version>
    <scope>test</scope>
</dependency>

Alright now let’s get straight down to business and unit test those routes:

@RunWith(CamelSpringBootRunner.class)
@SpringBootTest
public class FtpOrderToOrderControllerTest {

    private static boolean adviced = false;
    @Autowired
    private CamelContext camelContext;
    @EndpointInject(uri = "direct:input")
    private ProducerTemplate ftpEndpoint;
    @EndpointInject(uri = "direct:new-order")
    private ProducerTemplate orderEndpoint;
    @EndpointInject(uri = "mock:new-order")
    private MockEndpoint mockNewOrder;
    @EndpointInject(uri = "mock:new-order-controller")
    private MockEndpoint mockNewOrderController;

    @Before
    public void setUp() throws Exception {
        if (!adviced) {
            camelContext.getRouteDefinition("ftp-to-order")
                .adviceWith(camelContext, new AdviceWithRouteBuilder() {
                    @Override
                    public void configure() {
                        replaceFromWith(ftpEndpoint.getDefaultEndpoint());
                        weaveById("new-order").replace().to(mockNewOrder.getEndpointUri());
                    }
                });

            camelContext.getRouteDefinition("order-to-order-controller")
                .adviceWith(camelContext, new AdviceWithRouteBuilder() {
                    @Override
                    public void configure() {
                         weaveById("new-order-controller").replace().to(mockNewOrderController.getEndpointUri());
                    }
                });

            adviced = true;
        }
    }

    @Test
    public void ftpToOrder() throws Exception {
        String requestPayload = TestUtil.inputStreamToString(getClass().getResourceAsStream("/data/inbox/newOrder.xml"));
        ftpEndpoint.sendBody(requestPayload);

        Order order = mockNewOrder.getExchanges().get(0).getIn().getBody(Order.class);
        assertNull(order.getId());
        assertThat(order.getCustomerId(), is(1L));
        assertNull(order.getOrderItems().get(0).getId());
        assertThat(order.getOrderItems().get(0).getInventoryItemId(), is(1L));
        assertThat(order.getOrderItems().get(0).getQuantity(), is(100L));
        assertNull(order.getOrderItems().get(1).getId());
        assertThat(order.getOrderItems().get(1).getInventoryItemId(), is(2L));
        assertThat(order.getOrderItems().get(1).getQuantity(), is(50L));
    }

    @Test
    public void orderToController() {
        OrderItem orderItem1 = new OrderItemBuilder().setInventoryItemId(1L).setQuantity(100L).build();
        OrderItem orderItem2 = new OrderItemBuilder().setInventoryItemId(2L).setQuantity(50L).build();
        Order order = new OrderBuilder().setCustomerId(1L).addOrderItems(orderItem1, orderItem2).build();
        orderEndpoint.sendBody(order);

        String jsonOrder = mockNewOrderController.getExchanges().get(0).getIn().getBody(String.class);
        assertThat(jsonOrder, hasNoJsonPath("$.id"));
        assertThat(jsonOrder, hasJsonPath("$.customerId", is(1)));
        assertThat(jsonOrder, hasNoJsonPath("$.orderItems[0].id"));
        assertThat(jsonOrder, hasJsonPath("$.orderItems[0].inventoryItemId", is(1)));
        assertThat(jsonOrder, hasJsonPath("$.orderItems[0].quantity", is(100)));
        assertThat(jsonOrder, hasNoJsonPath("$.orderItems[1].id"));
        assertThat(jsonOrder, hasJsonPath("$.orderItems[1].inventoryItemId", is(2)));
        assertThat(jsonOrder, hasJsonPath("$.orderItems[1].quantity", is(50)));
        assertThat(jsonOrder, hasNoJsonPath("$.orderItems[1].id"));
    }
}

Again a few pointers to the code above:

  • We’re using the recommended CamelSpringBootRunner here;
  • We autowire an instance of the CamelContext. This context is needed in order to alter the route later on;
  • Next we inject the Consumer and Producer endpoints we’re gonna use in our unit tests;
  • The setUp method is the most important part of the puzzle. It is here that we replace our producer endpoints with mocks (and our ftp consumer endpoint with a direct endpoint). It is also here that we use the ids we placed in our routes; they let us point to the endpoints (and the routes they’re in) we wish to replace;
  • Ideally we would have annotated this setUp code with the @BeforeClass annotation to let it run only once. Unfortunately that guy can only be placed on a static method. And static methods don’t play well with our autowired camelContext instance variable. So we use a static boolean to run this code only once (you can’t run it twice because the second time it’ll try to replace stuff that isn’t there anymore);
  • In the ftpToOrder unit test we shove an Order xml into the first route (using the direct endpoint) and check our mockNewOrder endpoint to see if a proper Order POJO has arrived there;
  • In the orderToController unit test we shove an Order POJO in the second route (again using a direct endpoint) and check our mockNewOrderController endpoint to see if a proper Order JSON String has arrived there.

Please note that the JSON assertion code in the orderToController test has a dependency on the json-path-assert library:

<dependency>
    <groupId>com.jayway.jsonpath</groupId>
    <artifactId>json-path-assert</artifactId>
    <version>2.4.0</version>
    <scope>test</scope>
</dependency>

This library is not really necessary. As an alternative you could write expressions like:

assertThat(JsonPath.read(jsonOrder,"$.customerId"), is(1));

I think the json-path-assert notation is a bit more readable, but that’s just a matter of taste, I guess.

You can run the tests now (mvn clean test) and you will see that all tests are passing.

Externalizing properties

Alright, we’re almost there. Only one last set of changes is needed to make the route a bit more flexible. Let’s introduce Camel properties to replace those hardcoded URIs in the endpoints. Camel and Spring Boot play along quite nicely here: Camel properties work out of the box without further configuration.

So let’s introduce a property file (application-dev.properties) for the development environment and put those two endpoint URIs in it:

endpoint.order.ftp = ftp://localhost/hello-beer?username=anonymous&move=.done&moveFailed=.error
endpoint.order.http = http://localhost:8080/hello-camel/1.0/order

Add one line to the application.properties file to set development as the default Spring profile.

spring.profiles.active=dev

And here’s the final route after putting those endpoint properties in place:

from("{{endpoint.order.ftp}}")
    .routeId("ftp-to-order")
    .log("${body}")
    .unmarshal().jacksonxml(Order.class)
    .to("direct:new-order").id("new-order");

from("direct:new-order")
    .routeId("order-to-order-controller")
    .marshal(jacksonDataFormat)
    .log("${body}")
    .setHeader(Exchange.HTTP_METHOD, constant("POST"))
    .to("{{endpoint.order.http}}").id("new-order-controller");

And that’s it. You can run the application again to see that everything works like before.

Summary

This blog post was all about quality. We showed you how to set up testing in a Spring Boot Camel application and we built a couple of unit tests: one to test our Spring Boot REST controller and one to test our Camel routes. As a small bonus we also externalized the endpoint URIs in our Camel route with the help of Camel properties.

Now all that’s left is to grab a beer and think about our next blog post.

References

HelloBeer’s first Camel ride

HelloBeerTM recently got some complaints from the Alcoholics Anonymous community. As it turns out, it’s very difficult to order a fine collection of craft beers online without one’s wife finding out about it. Browser histories were scanned and some particularly resourceful spouses even installed HTTP sniffers to confront their husbands with their drinking problem. So in order to keep on top of the beer selling game, HelloBeer needs an obscure backdoor where orders can be placed, lest it risk losing an important part of its clientele.

One of HelloBeer’s founding fathers has an old server residing in the attic of his spacious condo. He suggested using that guy as an old-school FTPS server where customers can upload their orders without their wives finding out about it.

In this blogpost we’re gonna build the integration between an FTPS server and our OrderService REST API (implemented in Spring Boot). To build the integration we’ll be relying on Apache Camel. It’s a great way for embedding Enterprise Integration Patterns in a Java based application, it’s lightweight and it’s very easy to use. Camel also plays nicely with Spring Boot as this blogpost will show.

To keep our non-hipster customers on board (and to make this blogpost a little more interesting), the order files placed on the FTP server will be in plain old XML and hence have to be transformed to JSON. Now that we have a plan, let’s get to work!

Oh and as always, the finished code has been published on GitHub here.

Installing FTP

I’m gonna build the whole contraption on my Ubuntu-based laptop and I’m gonna use vsftpd to act as an FTPS server. As a first prototype I’m gonna make the setup as simple as possible and allow anonymous users to connect and do everything they shouldn’t be able to do in any serious production environment.

These are the settings I had to tweak in the vsftpd.conf file after default installation:

# Enable any form of FTP write command.
write_enable=YES
# Allow anonymous FTP? (Disabled by default).
anonymous_enable=YES
# Allow the anonymous FTP user to upload files.
anon_upload_enable=YES
# Files PUT by anonymous users will be GETable
anon_umask=022
# Allow the anonymous FTP user to move files
anon_other_write_enable=YES

Also make sure the permissions on the directory where the orders will be PUT are non-restrictive enough:

Contents of /srv directory:

Contents of /srv/ftp directory:

Contents of /srv/ftp/hello-beer directory:

The .done and .error directories are where the files will be moved to after processing.
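Since the directory listings are hard to reproduce here, this is roughly one way to arrive at that structure (an assumption on my part; on Ubuntu the anonymous vsftpd user is typically ftp):

sudo mkdir -p /srv/ftp/hello-beer/.done /srv/ftp/hello-beer/.error
sudo chown -R ftp:ftp /srv/ftp/hello-beer
sudo chmod -R 777 /srv/ftp/hello-beer

Permissions this wide are of course only acceptable for a local prototype.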

Spring Booting the OrderService

The OrderService implementation is just plain old Spring Boot. For a good tutorial, check one of my previous blog posts here. The REST controller exposes a GET method for retrieving a list of orders and a POST method for adding a new order:

@RestController
@RequestMapping("/hello-camel/1.0")
public class OrderController {

    private final OrderRepository orderRepository;

    @Autowired
    public OrderController(OrderRepository orderRepository) {
        this.orderRepository = orderRepository;
    }

    @RequestMapping(value = "/order", method = RequestMethod.POST, produces = "application/json")
    public Order saveOrder(@RequestBody Order order) {
        return orderRepository.save(order);
    }

    @RequestMapping(value = "/orders", method = RequestMethod.GET, produces = "application/json")
    public List<Order> getAllOrders() {
        return orderRepository.findAll();
    }
}

Most of the heavy lifting is done in the domain classes. I wanted the Order to be one coherent entity including its Order Items, so I’m using a bidirectional OneToMany relationship here. To get this guy to play along nicely with the REST controller and the Swagger API definitions generated by the springfox-swagger2 plugin, I had to annotate the living daylights out of the entities. I consulted a lot of tutorials to finally get the configuration right. Please check the references section for some background material. These are the finalized classes that worked for me (please note that I’ve omitted the getters and the setters for brevity):

The Order class:

@Entity
@Table(name = "hb_order")
public class Order {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @ApiModelProperty(readOnly = true)
    @JsonProperty(access = JsonProperty.Access.READ_ONLY)
    private Long id;

    @NotNull
    private Long customerId;

    @OneToMany(
        mappedBy = "order",
        cascade = CascadeType.ALL,
        orphanRemoval = true)
    @JsonManagedReference
    private List<OrderItem> orderItems;
}

The ApiModelProperty annotation is used by the generated Swagger definitions and takes care that the id field only pops up in the response messages of the GET and POST methods, not in the POST request message (since the id is generated). The JsonProperty annotation ensures that id fields sent to the API aren’t unmarshalled from the JSON message into the entity POJO instance. In the OneToMany annotation the mappedBy attribute is crucial for the bidirectional setup to work properly (again: check the references!). The JsonManagedReference annotation is needed to avoid circular reference errors. It goes hand in hand with the JsonBackReference annotation on the OrderItem (stay tuned!).

The OrderItem class:

@Entity
@Table(name = "hb_order_item")
public class OrderItem {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @JsonProperty(access = JsonProperty.Access.READ_ONLY)
    @ApiModelProperty(readOnly = true)
    private Long id;

    @ManyToOne(fetch = FetchType.LAZY, cascade = CascadeType.ALL)
    @JoinColumn(name = "order_id")
    @JsonBackReference
    private Order order;

    @NotNull
    private Long inventoryItemId;

    @NotNull
    private Long quantity;
}

Again here the id field is made read-only for the API and for the Swagger definition. The ManyToOne and JoinColumn annotations are key to properly implement the bidirectional OneToMany relationship between the Order and OrderItem. And equally key is the JsonBackReference annotation on the Order field. Without this guy (and its corresponding JsonManagedReference annotation on the Order.orderItems field) you get errors when trying to POST a new Order (one last time: check the references!).

The rest of the code is available on the aforementioned GitHub location. If you give it a spin, you can check out the API on the Swagger page (http://localhost:8080/swagger-ui.html) and test it a bit. You should be able to POST and GET orders to and from the in-memory database.
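For example, POSTing a body along these lines (values are illustrative) should return the saved order including its generated ids:

{
  "customerId": 1,
  "orderItems": [
    { "inventoryItemId": 1, "quantity": 100 },
    { "inventoryItemId": 2, "quantity": 50 }
  ]
}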

Camelling out the integration

Now that we have a working OrderService running, let’s see if we can build a flow from the FTP server to the OrderService using Camel.

First step is adding the necessary dependencies to our pom.xml:

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-spring-boot-starter</artifactId>
    <version>${camel.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-core-starter</artifactId>
    <version>${camel.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-ftp-starter</artifactId>
    <version>${camel.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-jacksonxml-starter</artifactId>
    <version>${camel.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-jackson-starter</artifactId>
    <version>${camel.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-http-starter</artifactId>
    <version>${camel.version}</version>
</dependency>

The camel-spring-boot-starter is needed when you’re gonna work with Camel in a Spring Boot application. As for the other dependencies: it’s not that different from building a non-Spring Boot Camel application. For every Camel component you need, you have to add the necessary dependency; the big difference is that you use the corresponding dependency with the -starter suffix.

Alright so what are all these dependencies needed for:

  • camel-core-starter: used for core functionality, you basically always need this guy;
  • camel-ftp-starter: contains the ftp component;
  • camel-jacksonxml-starter: used to unmarshal the XML in the Order to our Order POJO;
  • camel-jackson-starter: used to marshal the Order POJO to JSON;
  • camel-http-starter: used to issue a POST request to the OrderController REST API.

Believe it or not, now the only thing we have to take care of is to build our small Camel integration component utilizing all these dependencies:

@Component
public class FtpOrderToOrderController extends RouteBuilder {

    @Override
    public void configure() throws Exception {

        JacksonDataFormat jacksonDataFormat = new JacksonDataFormat();
        jacksonDataFormat.setInclude("NON_NULL");
        jacksonDataFormat.setPrettyPrint(true);

        from("ftp://localhost/hello-beer?username=anonymous&move=.done&moveFailed=.error")
            .log("${body}")
            .unmarshal().jacksonxml(Order.class)
            .marshal(jacksonDataFormat)
            .log("${body}")
            .setHeader(Exchange.HTTP_METHOD, constant("POST"))
            .to("http://localhost:8080/hello-camel/1.0/order");
    }
}

Some pointers to the above code:

  • The .done and .error directories are where successfully and unsuccessfully processed Orders end up. If you don’t take care of moving the orders, they will be processed again and again;
  • The NON_NULL clause added to the JacksonDataFormat, filters out the id fields when marshalling the POJO to JSON;
  • The XML and JSON will be logged so you can verify that the transformations are working as expected.

The rest of the route imho is self-explanatory.

Oh and one more thing. I like my XML elements to be capitalized. So our Order XML element contains a CustomerId element, not a customerId element. This only works if you give the jacksonxml mapper some hints in the form of annotations on the Order (and OrderItem) POJO (note that I’ve omitted the other annotations in the code below):

public class Order {
    
    private Long id;

    @JacksonXmlProperty(localName="CustomerId")
    private Long customerId;

    @JacksonXmlProperty(localName="OrderItems")
    private List<OrderItem> orderItems;
}

The same applies to the OrderItem; see GitHub for the definitive code.

Testing the beasty

Now as always the proof is in the tasting of the beer. Time to fire up the Spring Boot application and place our first Order on the FTP server.

I’ve created a small newOrder.xml file and put it in a local directory. It looks like this:

<?xml version="1.0" encoding="UTF-8" ?>
<Order>
    <CustomerId>1</CustomerId>
    <OrderItems>
        <OrderItem>
            <InventoryItemId>1</InventoryItemId>
            <Quantity>100</Quantity>
        </OrderItem>
        <OrderItem>
            <InventoryItemId>2</InventoryItemId>
            <Quantity>50</Quantity>
        </OrderItem>
    </OrderItems>
</Order>

Now when I connect to my local FTP server, change to the hello-beer directory and issue a PUT of that local newOrder.xml file, I can see the logging of the Camel component appearing in my IntelliJ IDE:

As you can see the first log statement has been executed and the XML content of the file is displayed. The second log statement has been executed as well and nicely displays the message body after it has been transformed into JSON.

You’ll also notice that the file has been moved to the .done directory. Repeat the test with an invalid XML file and you’ll see it end up in the .error directory.

One last test needed. Let’s issue a GET against the hello-camel/1.0/orders endpoint with the help of the Swagger UI. And lo and behold the response:

Great, so our newOrder.xml that arrived on our FTP server has been nicely stored in our database. Our first prototype is working. Our AA customers will be delighted to hear this.

Summary

In this blog post we’ve seen how easy it is to integrate with Apache Camel in a Spring Boot application. We coded an FTP-to-REST integration flow in no time and even put some XML-to-JSON transformation into the mix. I like the fact that we can keep the integration code nice and clean and separated from the rest of the application.

Testing is still a bit of trial and error though. Let’s see if we can put some proper unit tests in place in the next blog post. For now: happy drinking!

References

JHipster – Making things a little less hip

Just like a good old Belgian beer can make for a nice change of pace after you’ve filled up on all those crafty IPAs and Stouts, it’s not always necessary to go for the latest and greatest. Last post saw us using Kafka as a message broker. In this blog post we’ll put a more traditional broker in between our thirsty beer clients and our brewery pumping out the happy juice! This blog is all about RabbitMQ! So let’s end this introduction and get started!
The final version of the code can be found here. Instead of building the whole thing from scratch like we did in the Kafka blog, we’ll be using a JHipster generator module this time.

JHipster Spring Cloud Stream generator

The JHipster Spring Cloud Stream generator can add RabbitMQ/Spring Cloud Stream support to our HelloBeer application. It uses the Yeoman Generator to do this.

Installation

Installation and running the generator is pretty straightforward. The steps are explained in the page’s README.md:

  • First install the generator:
    yarn global add generator-jhipster-spring-cloud-stream
  • Next run the generator (from the directory of our JHipster application) and accept the defaults:
    yo jhipster-spring-cloud-stream
  • Finally spin up the generated RabbitMQ docker file to start the RabbitMQ message broker:
    docker-compose -f src/main/docker/rabbitmq.yml up -d

Generated components

You can actually run the application now and see the queue in action. But before we do that let’s first take a look at what the generator did to our JHipster application:

  • application-dev.yml/application-prod.yml: modified to add RabbitMQ topic configuration;
  • pom.xml: modified to add the Spring Cloud Stream dependencies;
  • rabbitmq.yml: the docker file to spin up the RabbitMQ broker;
  • CloudMessagingConfiguration: configures a RabbitMQ ConnectionFactory;
  • JhiMessage: domain class to represent a message (with a title and a body) to be put on the RabbitMQ topic;
  • MessageResource: REST controller to POST a message onto the RabbitMQ topic and GET the list of posted messages;
  • MessageSink: Service class that subscribes to the topic and puts received messages in a List variable (the variable that gets read when issuing a GET via the MessageResource).

Running and testing

Alright, let’s test the RabbitMQ broker the generator set up for us. Run the JHipster application, login as admin user and go to the API page. You’ll see that a new message-resource REST service has been added to the list of services:


Call the POST operation a few times to post some messages to the RabbitMQ topic (which fills up the jhiMessages list):


Now, issue the GET operation to retrieve all the messages you POSTed in the previous step:


Cool! Working as expected. Now let’s get to work to put another RabbitMQ topic in place to decouple our OrderService (like we did with Kafka in our previous blog) again.

Replacing Kafka with RabbitMQ


Now we’re gonna put another RabbitMQ topic in between the Order REST service and the Order Service, just like we did with Kafka in our previous blogpost. Let’s leave the topic that the generator created in place. Since that guy is using the default channels, we’ll have to add some custom channels for our new topic that will handle the order processing.

First add a channel for publishing to a new RabbitMQ topic – we’ll be configuring the topic in a later step – and call it orderProducer:

public interface OrderProducerChannel {
  String CHANNEL = "orderProducer";

  @Output
  MessageChannel orderProducer();
}

We also need a channel for consuming orders for our topic. Let’s call that one orderConsumer:

public interface OrderConsumerChannel {
  String CHANNEL = "orderConsumer";

  @Input
  SubscribableChannel orderConsumer();
}

Now link those two channels to a new topic called topic-order in the application-dev.yml configuration file:

spring:
    cloud:
        stream:
            default:
                contentType: application/json
            bindings:
                input:
                    destination: topic-jhipster
                output:
                    destination: topic-jhipster
                orderConsumer:
                    destination: topic-order
                orderProducer:
                    destination: topic-order

The changes needed in the OrderResource controller are similar to the changes we made for the Kafka setup. The biggest difference is in the channel names, since the default channels are already taken by the generated example code.
Another difference is that we put the EnableBinding annotation directly on this class instead of on a Configuration class. This way the Spring DI framework can figure out that the injected MessageChannel should be of type orderProducer. If you put the EnableBinding on the Configuration class – like we did in our Kafka setup – you need to use Qualifiers or inject the interface – OrderProducerChannel – instead, else Spring won’t know which Bean to inject, since there are multiple MessageChannel Beans now.
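That alternative would boil down to something like this sketch (with the EnableBinding annotation sitting on a configuration class):

  public OrderResource(final OrderProducerChannel orderProducerChannel) {
    // obtaining the channel via the interface leaves no doubt about which MessageChannel we want
    this.orderProducer = orderProducerChannel.orderProducer();
  }

And here’s the controller as we actually built it, with the binding on the class itself: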

@RestController
@RequestMapping("/api/order")
@EnableBinding(OrderProducerChannel.class)
public class OrderResource {

  private final Logger log = LoggerFactory.getLogger(OrderResource.class);
  private static final String ENTITY_NAME = "order";
  private MessageChannel orderProducer;

  public OrderResource (final MessageChannel orderProducer) {
    this.orderProducer = orderProducer;
  }

  @PostMapping("/process-order")
  @Timed
  public ResponseEntity<OrderDTO> processOrder(@Valid @RequestBody OrderDTO order) {
    log.debug("REST request to process Order : {}", order);
    if (order.getOrderId() == null) {
        throw new InvalidOrderException("Invalid order", ENTITY_NAME, "invalidorder");
    }
    orderProducer.send(MessageBuilder.withPayload(order).build());

    return ResponseEntity.ok(order);
  }
}

In our OrderService we also added the EnableBinding annotation. And again we use the StreamListener annotation to consume orders from the topic, but this time we point the listener at our custom orderConsumer channel:

@Service
@Transactional
@EnableBinding(OrderConsumerChannel.class)
public class OrderService {
  ....
  @StreamListener(OrderConsumerChannel.CHANNEL)
  public void registerOrder(OrderDTO order) throws InvalidOrderException {
    ....
  }
  ....
}

Building unit/integration tests for the RabbitMQ setup is not much different from the techniques we’ve used in the Kafka setup. Check my previous blog post for the examples.

Testing the setup

Alright, let’s test our beast again. These are the stock levels before:


Now let’s call the OrderResource and place an order of 20 Small bottles of Dutch Pilsner:


Check the stock levels again:


Notice the new item stock level line! The inventory item went down from 90 to 70. Our RabbitMQ setup is working! Cheers!

Summary

In this blog post we saw how easy it is to switch from Kafka to RabbitMQ. The Spring Cloud Stream code mostly abstracts away the differences and didn’t change much. We also used a generator this time to do most of the heavy lifting. Time for a little vacation in which I’m gonna think about my next blog post. More JHipster, a look at Spring Cloud Stream’s error handling possibilities, or should I switch to some posts about other Spring Cloud modules? Let’s drink a few HelloBeerTM crafts and ponder about that!

References

JHipster – Streaming beer with Kafka and Spring Cloud Stream

Now that our OrderService is up and running, it’s time to make it a little more robust and decoupled. In this blog post we’re gonna put Kafka in between the OrderResource controller and our Spring Boot back-end system and use Spring Cloud Stream to ease development:


Upon creation of a JHipster application you’re given the option to select Asynchronous messages using Apache Kafka. After generation your pom file and application.yml will be all set up for using Kafka and Spring Cloud Stream. You’ll also get a docker file to spin up Kafka (and Zookeeper), and a MessagingConfiguration class will be generated. There you need to declare your input and output channels (channels are Spring Cloud Stream abstractions; they’re the connection between the application and the message broker). If you follow the JHipster documentation on Kafka here – right after generating a virgin JHipster app – you should get a working flow up in no time.

Now, I wanna further improve upon the current HelloBeerTM application we finished in my previous blog post, and I didn’t check the Asynchronous messages option when I initially created the application. It’s not possible to add the option afterwards via the CLI, but luckily it’s not really that hard to add the necessary components manually. So let’s get started and make those beer orders flow through a Kafka topic straight into our back-end application.
As always the finished code can be found on GitHub.

Kafka Docker image

Alright this guy I just ripped from a new JHipster app with the messaging option enabled. Add this kafka.yml file to the src/main/docker directory:

version: '2'
services:
    zookeeper:
        image: wurstmeister/zookeeper:3.4.6
        ports:
          - 2181:2181
    kafka:
        image: wurstmeister/kafka:1.0.0
        environment:
            KAFKA_ADVERTISED_HOST_NAME: localhost
            KAFKA_ADVERTISED_PORT: 9092
            KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
            KAFKA_CREATE_TOPICS: "topic-jhipster:1:1"
        ports:
            - 9092:9092

You can spin up Kafka now with this file by issuing the following command:

docker-compose -f src/main/docker/kafka.yml up -d

Adding the dependencies

The following dependencies are needed to enable Spring Cloud Stream and have it integrate with Kafka:

<!-- Kafka support -->
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream-test-support</artifactId>
  <scope>test</scope>
</dependency>

Configuring the channels

Since we only need one Kafka Topic, we can use the default channels that Spring Cloud Stream has to offer. We need one input and one output channel, so we can use the combined Processor interface. For a more complex setup with multiple topics, you can write your own custom interfaces for the channels (this is also the practice in the JHipster documentation example). For more information about channels check the Spring Cloud Stream Reference Guide.
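For reference, the combined Processor interface boils down to this (paraphrased from the Spring Cloud Stream sources):

public interface Source {
  @Output("output")
  MessageChannel output();
}

public interface Sink {
  @Input("input")
  SubscribableChannel input();
}

public interface Processor extends Source, Sink {
}

So by binding Processor we get one default input channel and one default output channel for free.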

MessagingConfiguration

First add the configuration for the Processor channel. This is done in the MessagingConfiguration class. We’ll add this guy to the config package, the place where JHipster stores all Spring Boot configuration.

package nl.whitehorses.hellobeer.config;

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Processor;

@EnableBinding(value = Processor.class)
public class MessagingConfiguration {
}

application-dev.yml

The overall configuration needs a few properties to let the application know where to find Kafka and to let Spring Cloud Stream channels bind to a Kafka topic. Let’s call the topic hb-orders. I’ll only put the configuration in the development configuration – application-dev.yml – for now:

spring:
    cloud:
        stream:
            kafka:
                binder:
                    brokers: localhost
                    zk-nodes: localhost
            bindings:
                output:
                    destination: hb-orders
                    content-type: application/json
                input:
                    destination: hb-orders

Note that input and output are the default channel names when working with the default channel interfaces.
That’s it for the channel configuration. Now we can use them in our back-end code.

OrderResource – Publishing to Kafka

Let’s alter our OrderResource so it publishes the OrderDTO object to the output channel instead of calling the OrderService directly:

@RestController
@RequestMapping("/api/order")
public class OrderResource {

  private static final String ENTITY_NAME = "order";
  private final Logger log = LoggerFactory.getLogger(OrderResource.class);
  private MessageChannel channel;

  public OrderResource(final Processor processor) {
    this.channel = processor.output();
  }

  @PostMapping("/process-order")
  @Timed
  public ResponseEntity processOrder(@Valid @RequestBody OrderDTO order) {
    log.debug("REST request to process Order : {}", order);
    if (order.getOrderId() == null) {
      throw new BadRequestAlertException("Error processing order", ENTITY_NAME, "orderfailure");
    }
    channel.send(MessageBuilder.withPayload(order).build());

    return ResponseEntity.ok(order);
  }
}

Not much going on here. Just inject the Processor and its channel and send the OrderDTO object through it.

OrderService – Subscribing to Kafka

@Service
@Transactional
public class OrderService {
  ....
  @StreamListener(Processor.INPUT)
  public void registerOrder(OrderDTO order) throws InvalidOrderException {
    ....
  }
  ....
}

Even simpler. The only change is adding the StreamListener annotation to the registerOrder method, making sure that guy sets off every time an order arrives at the topic.

Testing code

The spring-cloud-stream-test-support dependency (test-scoped) enables testing without a connected messaging system. Messages published to topics can be inspected via the MessageCollector class. I’ve rewritten the OrderResourceTest class to check if the OrderDTO is published to the message channel when calling the OrderResource:

@RunWith(SpringRunner.class)
@SpringBootTest(classes = HelloBeerApp.class)
public class OrderResourceTest {

  @SuppressWarnings("SpringJavaInjectionPointsAutowiringInspection")
  @Autowired
  private Processor processor;

  @SuppressWarnings("SpringJavaInjectionPointsAutowiringInspection")
  @Autowired
  private MessageCollector messageCollector;

  private MockMvc restMockMvc;

  @Before
  public void setUp() {
    MockitoAnnotations.initMocks(this);

    OrderResource orderResource = new OrderResource(processor);
    restMockMvc = MockMvcBuilders
      .standaloneSetup(orderResource)
      .build();
  }

  @Test
  public void testProcessOrder() throws Exception {
    OrderItemDTO orderItem1 = new OrderItemDTO(1L, 50L);
    OrderItemDTO orderItem2 = new OrderItemDTO(2L, 50L);
    OrderDTO order = new OrderDTO();
    order.setCustomerId(1L);
    order.setOrderId(1L);
    order.setOrderItems(Arrays.asList(orderItem1, orderItem2));

    restMockMvc.perform(
      post("/api/order/process-order")
        .contentType(TestUtil.APPLICATION_JSON_UTF8)
        .content(TestUtil.convertObjectToJsonBytes(order)))
        .andExpect(status().isOk());

    Message<?> received = messageCollector.forChannel(processor.output()).poll();
    assertNotNull(received);
    assertEquals(order, received.getPayload());

  }

}

In the OrderServiceIntTest I changed one of the test methods so it publishes an OrderDTO message on the (test) channel where the OrderService is subscribed to:

@Test
@Transactional
public void assertOrderOK() throws InvalidOrderException {
  ....
  //orderService.registerOrder(order);
  Message<OrderDTO> message = new GenericMessage<OrderDTO>(order);
  processor.input().send(message);
  ....
}

More information about Spring Cloud Stream testing can be found here.

Wiring it all up

Now let’s see if our beers will flow. So here are our stock levels before:

Now post a new (valid) order with Postman:

And behold our new stock levels:

It still works! So our new setup with a Kafka topic in the middle is working like a charm! Note that this is a very simplistic example. To make it more robust – for one, what about failed orders?! – the first step would be to move the topic consumer code away from the OrderService and put it in a separate class. That consumer class can delegate processing to an injected OrderService and deal with possible errors, e.g. by moving the order to another topic. And with another topic you’d need custom interfaces for your channels as well. A rough sketch of such a consumer is shown below.
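Such a consumer could look something like this sketch (class and method names are made up for illustration):

@Service
public class OrderTopicConsumer {

  private final OrderService orderService;

  public OrderTopicConsumer(final OrderService orderService) {
    this.orderService = orderService;
  }

  @StreamListener(Processor.INPUT)
  public void onOrder(OrderDTO order) {
    try {
      orderService.registerOrder(order);
    } catch (InvalidOrderException e) {
      // here you’d publish the failed order to a failed-order topic via a custom output channel
    }
  }
}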

Summary

In this blog post we introduced a Kafka topic to separate our Order clients from our Order processing. With the help of Spring Cloud Stream this is very easy to do. We also looked at a few ways to test messaging with Spring Cloud Stream.
The plan was to say goodbye to JHipster for now, but maybe I’ll do one more blog post. I wanna find out how easy it is to switch from Kafka to RabbitMQ or maybe improve upon this version and introduce a failed-order topic. I also wanna test how easy it is to upgrade this JHipster app to the latest version. So many ideas, so little time! Anyhow, let’s grab a beer first and think about that next blog!

References

JHipster – Adding some service

In our last blog post we focused on the Angular side of the generated application. This blog post is all about the Spring Boot server side part. In this post we’ll be adding some service to our HelloBeerTM app.
We’ll be developing on the app we’ve built in our previous JHipster blogs. Code can be found here.

But first let’s take a look at what’s in the server side part of our JHipster app.

Spring Boot architecture


All the entities we’ve added to our domain model will be exposed via REST operations. JHipster generates a layered architecture to make that happen: REST resources on top, with the repositories and domain objects underneath.

The domain (or entity) object will be placed in the domain package. The corresponding repository will serve as the DAO and is placed in the repository package. Now if you’ve stuck to the defaults during generation, like I did, there will be no service and DTO layer for your entities (you can override this during generation). JHipster’s makers have the philosophy of omitting redundant layers. The service (and DTO) layers should be used for building complex (or composite) services that – for example – combine multiple repositories. The REST controllers by default just expose the domain objects directly and are placed in the web.rest package. JHipster calls them resources.

JHipster – Adding an OrderService

Eventually we wanna push our HelloBeer enterprise to the next level and start selling beers over the internet. So we need a REST service our customers can use for placing orders.

Order Service

So let us begin by adding an Order Service. The JHipster CLI has a command for this:

jhipster spring-service Order

Accept the defaults and in no time JHipster has generated the skeleton for our new OrderService:

package nl.whitehorses.hellobeer.service;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
@Transactional
public class OrderService {

    private final Logger log = LoggerFactory.getLogger(OrderService.class);

}

Order DTO

We also need an OrderDTO object to capture the JSON supplied to the exposed REST controller (which we’ll build in one of the next steps). Let’s keep it simple for now. Our OrderDTO contains an order reference (order id), a reference to our customer (customerId), and a list of inventory items (inventoryItemId and quantity) that the customer wants to order.

public class OrderDTO {
  private Long orderId;
  private Long customerId;
  private List<OrderItemDTO> orderItems;
  ....
}

public class OrderItemDTO {
  private Long inventoryItemId;
  private Long quantity;
  ..
}

Order Service implementation

For our OrderService implementation, we’re just gonna add a few checks making sure our inventory levels won’t run into the negative. If all checks are passed, our item stock levels are updated (i.e. new levels with new stock dates are inserted) according to the order.

@Service
@Transactional
public class OrderService {

  private final ItemStockLevelRepository itemStockLevelRepository;
  private final InventoryItemRepository inventoryItemRepository;

  private static final String ENTITY_NAME = "order";

  private final Logger log = LoggerFactory.getLogger(OrderService.class);

  public OrderService(ItemStockLevelRepository itemStockLevelRepository, InventoryItemRepository inventoryItemRepository) {
    this.itemStockLevelRepository = itemStockLevelRepository;
    this.inventoryItemRepository = inventoryItemRepository;
  }

  public void registerOrder(OrderDTO order) throws InvalidOrderException {
    // List to store new item stock levels
    List<ItemStockLevel> itemStockLevelList = new ArrayList<>();

    for (OrderItemDTO orderItem : order.getOrderItems()) {
      ItemStockLevel itemStockLevelNew = processOrderItem(orderItem.getInventoryItemId(), orderItem.getQuantity());
      itemStockLevelList.add(itemStockLevelNew);
    }

    itemStockLevelRepository.save(itemStockLevelList);
    log.debug("Order processed");
  }

  // validate order items before processing
  // - assuming there are no multiple entries for one inventory item in the order
  // - if one order item entry fails, the whole order fails.
  private ItemStockLevel processOrderItem(Long inventoryItemId, Long qtyOrdered) {

    final InventoryItem inventoryItem = inventoryItemRepository.findOne(inventoryItemId);
    if (inventoryItem == null) {
      throw new InvalidOrderException("Invalid order", ENTITY_NAME, "invalidorder");
    }

    // find item stock level
    final Optional<ItemStockLevel> itemStockLevel = itemStockLevelRepository.findTopByInventoryItemOrderByStockDateDesc(inventoryItem);
    if (!itemStockLevel.isPresent()) {
      throw new InvalidOrderException("Invalid order", ENTITY_NAME, "invalidorder");
    }

    // check if quantity available
    Long qtyCurrent = itemStockLevel.get().getQuantity();
    Long newqty = qtyCurrent - qtyOrdered;
    if (newqty < 0L) {
      throw new InvalidOrderException("Invalid order", ENTITY_NAME, "invalidorder");
    }

    // construct new item stock level
    ItemStockLevel itemStockLevelNew = new ItemStockLevel();
    itemStockLevelNew.setInventoryItem(inventoryItem);
    itemStockLevelNew.setQuantity(newqty);
    itemStockLevelNew.setStockDate(ZonedDateTime.now(ZoneId.systemDefault()));
    return itemStockLevelNew;
  }

}

The code hopefully speaks for itself. In a nutshell: for every order item we first get the inventory item belonging to the inventory item id and check if it exists. Next we get the current item stock level for the inventory item. For this we had to add the findTopByInventoryItemOrderByStockDateDesc method to the ItemStockLevelRepository first:

@SuppressWarnings("unused")
@Repository
public interface ItemStockLevelRepository extends JpaRepository<ItemStockLevel, Long> {

  Optional<ItemStockLevel> findTopByInventoryItemOrderByStockDateDesc(InventoryItem inventoryItem);
}

This gets us the item stock level at the most recent stock date (note that we get the implementation for free thanks to Spring). If such a level exists, we deduct the quantity ordered from the current quantity and if the current quantity is sufficient, we construct a new item stock level entry. After all order items are processed without validation errors, we store the new set of stock levels.
Not shown here are the OrderServiceIntTest in the nl.whitehorses.hellobeer.service package to test the new service and the new InvalidOrderException. Please check the GitHub code for the details.

Order Controller

Now let us add the controller for the Order Service. The JHipster CLI also has a command for this one:

jhipster spring-controller Order

Just add one POST action called processOrder, and see this controller (and a corresponding test class) being generated:

package nl.whitehorses.hellobeer.web.rest;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

/**
 * Order controller
 */
@RestController
@RequestMapping("/api/order")
public class OrderResource {

  private final Logger log = LoggerFactory.getLogger(OrderResource.class);

  /**
  * POST processOrder
  */
  @PostMapping("/process-order")
  public String processOrder() {
    return "processOrder";
  }

}

Order Controller implementation

Alright. Find the code for the implementation below. Not much is going on in here: the OrderService does the heavy lifting and the controller just delegates to it.
We let the POST method return the supplied Order object after processing, along with a 200 status code:

@RestController
@RequestMapping("/api/order")
public class OrderResource {

  private final Logger log = LoggerFactory.getLogger(OrderResource.class);

  private static final String ENTITY_NAME = "order";

  private final OrderService orderService;

  public OrderResource (final OrderService orderService) {
    this.orderService = orderService;
  }

  /**
  * POST processOrder
  */
  @PostMapping("/process-order")
  @Timed
  public ResponseEntity<OrderDTO> processOrder(@Valid @RequestBody OrderDTO order) {
    log.debug("REST request to process Order : {}", order);
    if (order.getOrderId() == null) {
      throw new InvalidOrderException("Invalid order", ENTITY_NAME, "invalidorder");
    }
    orderService.registerOrder(order);

    return ResponseEntity.ok(order);
  }

}

I’ve also implemented the generated test class OrderResourceTest. I’ve renamed it from OrderResourceIntTest. Since I’m integration testing the OrderService, I can suffice by unit testing the controller and just mocking the OrderService dependency. See the GitHub code for the test class implementation.

Beer tasting

All right. All pieces are in place to test our new service. If you fire up the application again and check the API menu (admin login required!), you’ll see the Swagger definition of our new service being displayed quite nicely:

[Screenshot: the API page showing the Swagger definition of our new order service]

You can test the APIs from the API page itself, but I prefer to use Postman. You can get the URLs from the Swagger definition. Please note that all APIs are secured, so you need to send a header parameter along with each request. The parameter (called Authorization) can be found in the CURL section of the definition (click the Try it out! button first to see it):

[Screenshot: the CURL section showing the Authorization header parameter]
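
The value is a JWT bearer token, so the header looks something like this (token abbreviated):

Authorization: Bearer eyJhbGciOi...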

Note that this parameter will change every time you reboot the server part of the application.

Now let’s first check our initial item stock levels:

[Screenshot: GET response showing the initial item stock levels]

As you can see, inventory items 1 and 2 both have a current stock level of 100.

Let’s order 5 item 1 items and 10 item 2 items and see what happens!

[Screenshot: POST process-order request returning status 200 with the order in the response body]

Looking good so far: a response with a 200 status code containing the Order we put in our request.

Now for the final check: call the item stock level GET again:

[Screenshot: GET response showing the updated item stock levels]

And there you have it: two new item stock level entries with quantities of 95 for inventory item 1 and 90 for inventory item 2.

For completeness, I'll also show you what happens when you try to order too much of one inventory item:

[Screenshot: error response for an order exceeding the available stock]

Great! So our validation also kicks in when needed.

JHipster – Contract-First

Now what about API-First development? As it turns out, JHipster supports that as well. If you paid attention during setup, you'll have noticed the API-First option you can select. I didn't select it when I generated the hb-jhipster application, but if I had, I would have seen a familiar entry in the generated Maven pom:

<plugin>
  <!--
    Plugin that provides API-first development using swagger-codegen to
    generate Spring-MVC endpoint stubs at compile time from a swagger definition file
  -->
  <groupId>io.swagger</groupId>
  <artifactId>swagger-codegen-maven-plugin</artifactId>
  <version>${swagger-codegen-maven-plugin.version}</version>
  <executions>
    <execution>
      <goals>
        <goal>generate</goal>
      </goals>
      <configuration>
        <inputSpec>${project.basedir}/src/main/resources/swagger/api.yml</inputSpec>
        <language>spring</language>
        <apiPackage>nl.whitehorses.hellobeer.web.api</apiPackage>
        <modelPackage>nl.whitehorses.hellobeer.web.api.model</modelPackage>
        <generateSupportingFiles>false</generateSupportingFiles>
        <configOptions>
          <interfaceOnly>true</interfaceOnly>
          <java8>true</java8>
        </configOptions>
      </configuration>
    </execution>
  </executions>
</plugin>

Yes that’s right: the swagger-codegen-maven-plugin. If you put your Swagger definition in a src/main/resources/swagger/api.yml file, you can generate the Spring Boot code according to the contract by running the following Maven command:

./mvnw generate-sources
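
For illustration, a minimal api.yml contract for our process-order endpoint could look something like this (a hypothetical sketch, not a file JHipster generates for you):

swagger: '2.0'
info:
  title: HelloBeer API
  version: '1.0'
basePath: /api
paths:
  /order/process-order:
    post:
      operationId: processOrder
      parameters:
        - name: order
          in: body
          required: true
          schema:
            $ref: '#/definitions/OrderDTO'
      responses:
        '200':
          description: the processed order
          schema:
            $ref: '#/definitions/OrderDTO'
definitions:
  OrderDTO:
    type: object
    properties:
      orderId:
        type: integer
        format: int64
      orderItems:
        type: array
        items:
          $ref: '#/definitions/OrderItemDTO'
  OrderItemDTO:
    type: object
    properties:
      inventoryItemId:
        type: integer
        format: int64
      quantity:
        type: integer
        format: int64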

See the JHipster documentation for more information.

Feel free to experiment; I've already done so in one of my previous blogs here.

[Image: Swagger codegen, interfaceOnly versus delegate]

The main difference between my approach and JHipster's is that they use the interfaceOnly option, while I used the delegate option.
It's a subtle difference and mainly a matter of taste. With the interfaceOnly option, you have to implement the generated API interface and build a controller class, complete with annotations, yourself. With the delegate option, the ApiController is generated for you and you only need to implement a simple delegate interface. See the picture above for the difference (the yellow objects are generated, the blue ones you need to implement yourself).

Summary

In this blog post we’ve explored the server layer of our JHipster app a bit. We used the CLI to generate a new Service and Controller for our HelloBeerTM application. It’s nice to see JHipster helping us as much as possible along the way.
Before exploring the glorious world of microservices and Spring Cloud, I’ll put out 1 last JHipster blog for now (I’ll probably come back to check out their microservice options). In that blog I’m gonna check out the Kafka integration option JHipster provides. So for now: happy drinking and stay tuned!


Tweaking the JHipster App – Show me your IDs please, oh wait, don't!

Like every craft beer has to have a cool name and a flashy label, and needs to be poured into the right glass to please the hipster drinking it, the same applies to serving them a JHipster application: it's all about presentation!
So let’s dig right into it and shave of some of ’em rough edges of our JHipster app. We’re building on the application we generated in the previous blog post. Code can be found here.


Presenting the relationships

Alright. One of the most annoying things that can happen when you're in the middle of ordering a great craft on a warm summer day is some big dude demanding your ID right then and there, and you discovering that you left the darn thing at home. So let us get rid of those IDs! Like the ones on the Beer page, for example:
[Screenshot: Beer overview page displaying brewery ids]
Those brewery ids don't mean squat to your average refined beer drinker, so let's tackle them first. We wanna swap the displayed ids for the corresponding brewery names. But before diving into the code, let us take a look at the generated components to see what we're dealing with.

For every entity, JHipster generates a folder in the webapp/app/entities folder. For the Beer entity, for example, we've got a beer subfolder. In it we find beer.component.html, which serves as our overview page. The beer-detail.component.html is what is displayed when you press View, beer-dialog.component.html when you press Create or Edit, and beer-delete-dialog.component.html when you press Delete. They all have corresponding TypeScript classes.

[Screenshot: contents of the beer entity folder]
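
In plain text, the beer folder contains roughly the following files:

webapp/app/entities/beer
├── beer.component.ts / .html               (overview page)
├── beer-detail.component.ts / .html        (View)
├── beer-dialog.component.ts / .html        (Create and Edit)
├── beer-delete-dialog.component.ts / .html (Delete)
├── beer.model.ts
├── beer.service.ts
├── beer-popup.service.ts
├── beer.route.ts
├── beer.module.ts
└── index.ts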

The beer.model.ts file holds the model classes, the beer.service.ts and beer-popup.service.ts classes handle the REST calls to the underlying Spring Boot layer, and beer.route.ts takes care of all routes regarding the Beer entity (think of menu items, bookmark urls, foreign-key hyperlinks and the Create, View, Edit and Delete links). Everything is packed in a separate Angular module (beer.module.ts), and index.ts just exports all TypeScript classes in the Beer entity folder upwards in the Angular hierarchy.

Alright. So for changing the overview page that displays our beers with brewery ids, beer.component.html is the guy we need. Since the relationship between Beer and Brewery is represented in the Beer class by the entire Brewery object (so not only by the Brewery id), we have the name there for the taking.
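
For reference, the generated beer.model.ts looks roughly like this (an abbreviated sketch; the exact fields follow from the entity definition):

import { BaseEntity } from './../../shared';

export class Beer implements BaseEntity {
    constructor(
        public id?: number,
        public name?: string,
        // the relationship holds the full Brewery object, not just its id
        public brewery?: BaseEntity
    ) {
    }
}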

Let’s first change the BaseEntity interface (all entities in a relationship are derived from this one) and add an (optional) name to it. The BaseEntity interface is available in the src/main/webapp/shared/model folder.

export interface BaseEntity {
  // using type any to avoid methods complaining of invalid type
  id?: any;
  name?: any;
}

This change is mainly so the IDE won’t complain when we try to use the name property of a relationship somewhere in our html pages.

Now change the beer.component.html so it uses the name of the Brewery instead of the id. First change the table header (for sorting):

<th jhiSortBy="brewery.name">
  <span jhiTranslate="helloBeerApp.beer.brewery">Brewery</span>
  <span class="fa fa-sort"></span>
</th>

Now change the table body (for display):

<td>
  <div *ngIf="beer.brewery">
    <a [routerLink]="['../brewery', beer.brewery?.id ]" >{{beer.brewery?.name}}</a>
  </div>
</td>

Note that we didn’t change the link as it would break the navigation from Brewery to Brewery detail (those bold links are clickable) . That’s it. Refresh the Beer page and revel in the magic! Screenshot-2018-4-2 Beers(3) The view page (beer-detail.component.html) is even simpler, just replace the one line that is displaying the brewery:

<dd>
  <div *ngIf="beer.brewery">
    <a [routerLink]="['/brewery', beer.brewery?.id]">{{beer.brewery?.name}}</a>
  </div>
</dd>

And voilà, the detail page now displays brewery names instead of useless ids:

[Screenshot: Beer detail page displaying the brewery name]

Creating and editing the relationships

The functionality for creating and editing entities is shared on the same html page (beer-dialog.component.html). We want to change the select item linking the beer to the brewery, so that it displays brewery names instead of ids:

[Screenshot: Beer dialog with the brewery select still showing ids]

This one couldn't have been easier. Just head over to the div displaying the select item, keep the code that handles displaying/selecting the right relationship based on the brewery id, and only change the displayed brewery id into the brewery name:

<div class="form-group">
  <label class="form-control-label" jhiTranslate="helloBeerApp.beer.brewery" for="field_brewery">Brewery</label>
  <select class="form-control" id="field_brewery" name="brewery" [(ngModel)]="beer.brewery" >
    <option [ngValue]="null"></option>
    <option [ngValue]="breweryOption.id === beer.brewery?.id ? beer.brewery : breweryOption" *ngFor="let breweryOption of breweries; trackBy: trackBreweryById">{{breweryOption.name}}</option>
  </select>
</div>

Check out the Edit page now:

[Screenshot: Beer dialog with the brewery select showing names]

See how the brewery name is being displayed instead of the id. How cool is that?!

Autosuggesting

Let’s take this one step further. The inventory item page is still displaying the id for the Beers: Screenshot-2018-4-2 Inventory Items We could change this, like we did in the previous steps and display a list of Beer names. But the list of Beer names could become huge, certainly bigger than the list of breweries. So what if we replaced this guy with an auto-complete item? Sounds great, doesn’t it?! But how do we do that? Enter PrimeNG. PrimeNG is a set of UI components for Angular applications.

Installation

First add the PrimeNG lib to your JHipster application

npm install primeng --save

Next, add the auto-complete component to the module where we’re gonna use it, i.e. inventory-item.module.ts:

...
import { AutoCompleteModule } from 'primeng/autocomplete';
...
@NgModule({
    imports: [
        AutoCompleteModule,
        ...
    ],
    ...
})

Typescript code

For this blog post, we’re just gonna filter the complete Beer list already retrieved by the REST call in the NgOnInit() method – another option would be to omit this initial retrieval and add a REST method that can handle a filter. Then, every time you make a change in the auto-complete item, an instant REST call is made retrieving a list based on the then present filter.

Back to our simple approach. These are the changes needed in inventory-item-dialog.component.ts:

export class InventoryItemDialogComponent implements OnInit {
...
  beers: Beer[];
  beerOptions: any[];
  ...
  search(event) {
    this.beerOptions = this.beers.filter((beer) => beer.name.startsWith(event.query));
  }
  ...
}

So basically we just add a method that filters the beers whose names start with the query string. This method will be called every time the input in the auto-complete item changes, and it updates the selectable options accordingly.

HTML page

Now let's add the auto-complete item to the inventory-item-dialog.component.html page (it replaces the select item):

<p-autoComplete id="field_beer" name="beer"
                [(ngModel)]="inventoryItem.beer"
                [suggestions]="beerOptions"
                (completeMethod)="search($event)"
                field="name"
                placeholder="Beer">
</p-autoComplete>

Check the PrimeNG manuals for more information. The most important piece is the field attribute: it tells the auto-complete item which property to display, so it can work with a complete Beer object.

Styling

When you test the JHipster app at this point, you'll notice the auto-complete functionality already works, albeit with horrible styling. Luckily you can get PrimeNG to play nicely with JHipster's styling, which is based on the popular Bootstrap CSS library. Just add a few lines to the vendor.css file:

@import '~bootstrap/dist/css/bootstrap.min.css';
@import '~font-awesome/css/font-awesome.css';
@import '~primeng/resources/primeng.css';
@import '~primeng/resources/themes/bootstrap/theme.css';

This is a major improvement. One last optimization is to expand the auto-complete item to a width of 100%, just like all the other items on the dialog pages. Add these lines to the global.css file:

.ui-autocomplete {
    width: 100%;
}
.ui-autocomplete-input {
    width: 100%;
}

Now, when testing the inventory item Edit page, you'll see a nicely integrated auto-complete item:

[Screenshot: inventory item dialog with the styled auto-complete item]

In the overview and detail pages of the Inventory Item entity we'll just make the same changes we made for the Beer pages, i.e. exchanging the displayed ids for names:

[Screenshot: Inventory Items overview displaying beer names]

Calendar

Alright this beer’s on the house! As a small extra, we’ll add in a calendar item to beautify the item stock level page (and we’ll also change that ugly id). Screenshot-2018-4-16 Item Stock Levels As you can see the Stock Date field could use a good calendar to select the date time, and now we’re on it that ugly xml date presentation we could use without as well.

As we did for the auto-complete item, we'll be using PrimeNG here again. PrimeNG provides a calendar item, which depends on Angular's animations module. So let's first install that guy into our project:

npm install @angular/animations --save

And add the necessary imports to the item-stock-level.module.ts (the module where we’re gonna add the calendar item to).

...
import {CalendarModule} from 'primeng/calendar';
import {BrowserAnimationsModule} from '@angular/platform-browser/animations';
...
@NgModule({
    imports: [
        CalendarModule,
        BrowserAnimationsModule,
        ...
    ],
    ...
})

Next add the PrimeNG calendar item itself (replace the input item representing the Stock Date) to the item-stock-level-dialog.component.html page:

<p-calendar id="field_stockDate" name="stockDate"
            [(ngModel)]="itemStockLevel.stockDate"
            [showIcon]="true"
            showTime="true" hourFormat="24" dateFormat="yy-mm-dd">
</p-calendar>

Now, the date format used by JHipster isn't compatible with this calendar item's date format, so let's change that. Format the stock date returned by the REST service in item-stock-level-popup.service.ts to a format that the calendar item understands, i.e. not the XML date format:

itemStockLevel.stockDate = this.datePipe
  .transform(itemStockLevel.stockDate, 'yyyy-MM-dd HH:mm');

And of course we also need to change the formatting of the stock date when we send it back to the back-end. Alter item-stock-level.service.ts for this; just comment out the formatting line (since we're sending a plain JavaScript Date back):

/**
 * Convert an ItemStockLevel to JSON which can be sent to the server.
 */
private convert(itemStockLevel: ItemStockLevel): ItemStockLevel {
    const copy: ItemStockLevel = Object.assign({}, itemStockLevel);

    // copy.stockDate = this.dateUtils.toDate(itemStockLevel.stockDate);
    return copy;
}

That’s it! Now look at the dialog page when editing an item stock level row. Looks pretty neat (I’ve also changed that id reference into an description reference (not visible in the picture), I’ll not explain it, you can look it up in the code):

[Screenshot: item stock level dialog with the calendar item]

Summary

In this fairly long blog post, we've tweaked the front-end of a generated JHipster application. To make it a bit more presentable, we performed the following changes:

  • Changing relationships, replacing ids with meaningful strings;
  • Adding an auto-complete item;
  • Adding a calendar item.

For the last two steps we used some PrimeNG components.

In the next blog post we’ll take a closer look at the server side of a JHipster application. So grab yourself a fine craft beer and stay tuned!

References