Continuous Integration and Delivery for Maven projects with Jenkins and Docker.

In this article we look at continuous integration for Maven projects using Jenkins and Docker. The project from my previous post, Multiple Databases in Spring Boot, will be built, deployed, and run by Jenkins. The codebase for Multiple Databases in Spring Boot is at https://github.com/gabrieljeremiahcampbell/multipledatabases

First I will discuss the methodology I use to build the Maven project and create the Docker image. There are normally two approaches:

1. Using a Maven plugin for Docker that runs the Docker commands as tasks and goals.

2. Letting Docker build the Maven project.

The problem I see with the first approach is that it creates a maintenance burden: you must keep your Maven plugins up to date, especially if the Docker API changes. I choose the second option because it decouples Maven from Docker. It is a type of Inversion of Control (IoC): Maven should not know about Docker, as its focus is the project. In other words, Docker is just one of many environments that can run our app. If we had other platforms to run on, our POM would need a plugin for every one of them. So I will build the project using Docker.

We open our project in the IDE and create the following Dockerfile:

[Screenshot: the multistage Dockerfile]

This Dockerfile is a multistage build. First I use a Maven container, maven:3.5.2-jdk-8-alpine, to build the project.

The important aspect to note is that Docker is running the Maven commands for building and packaging the application.

After this I use an openjdk:8-jdk-alpine image to create the runtime container for my project. Lastly I expose port 8080, which my application will listen on.
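Putting those steps together, here is a minimal sketch of such a multistage Dockerfile (the JAR name and paths are assumptions on my part; the actual file is in the repository linked above):

    # Stage 1: build and package the Maven project inside a Maven container
    FROM maven:3.5.2-jdk-8-alpine AS build
    WORKDIR /build
    COPY pom.xml .
    COPY src ./src
    RUN mvn clean package

    # Stage 2: copy the packaged JAR into a slim JDK image
    FROM openjdk:8-jdk-alpine
    COPY --from=build /build/target/multipledatabases.jar /app.jar
    # The application listens on port 8080
    EXPOSE 8080
    ENTRYPOINT ["java", "-jar", "/app.jar"]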

In Jenkins we create the build plan with the following steps. I list all the screenshots so you can see the commands involved:

[Screenshots: the Jenkins build steps and their shell commands]
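As a rough sketch, the shell commands behind those steps might look like the following (the image and container names are assumptions; the screenshots show the exact commands used):

    # Build the Docker image from the Dockerfile in the Jenkins workspace
    docker build -t multipledatabases:latest .

    # Remove any container left over from the previous build, then start a new one
    docker rm -f multipledatabases || true
    docker run -d --name multipledatabases -p 8080:8080 multipledatabases:latest

    # Clean up stopped containers and the dangling intermediate images
    # produced by the multistage build
    docker system prune -f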

The last command, prune, deletes the temporary containers created by Docker during the multistage build. It is like a clean-up operation. If our container fails to start and is in an error state, it will also be removed. This reduces the manual effort of cleaning up Docker's container and image registries. However, it should be used with caution.

The Dockerfile is in the GitHub repository linked above.


Multiple Databases with shared entity classes in Spring Boot and Java.

Hello everyone, it has been a few months since my last post. I have been busy travelling and relocating. In this post I want to illustrate how a Spring Boot application can have multiple data sources with shared entity classes. The need for this arose in my current project, where an in-memory database was needed for high performance and a persistent database for storage.

In this blog post I will use H2 for the in-memory database and Postgres for the persistent storage. I will set up the application and show how the entities can be passed from one data source to another.

The application will be a Spring Boot application using Java 8 with REST enabled. I will highlight the important parts and classes of the application. The link to the GitHub code is given below.

The application will save a Person entity to an in-memory database and to a persistent database via a REST POST call. We create our entity, Person.java:

[Code screenshot: Person.java]

For the sake of brevity I leave out the Getters and Setters.
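As a sketch, the entity might look something like this (the field names are my assumptions; see the repository for the real class):

    package com.gabriel.multipledatabaseconnection.model;

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;

    @Entity
    public class Person {

        @Id
        @GeneratedValue(strategy = GenerationType.AUTO)
        private Long id;

        private String firstName;
        private String lastName;

        // Getters and setters omitted for brevity
    }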

Next we set up the connections for the two databases. I will leave out the reading of the properties but highlight the important parts of the classes InMemoryDatabaseConfiguration.java and PersistentDatabaseConfiguration.java:

[Code screenshots: InMemoryDatabaseConfiguration.java and PersistentDatabaseConfiguration.java]

As you can see from the two classes, both use the same data model package, com.gabriel.multipledatabaseconnection.model. This means the two databases share the same entities, and within the application entities can be passed back and forth between the data sources.
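A minimal sketch of the in-memory side, assuming Spring Boot 2 conventions (the bean names, property prefix, and repository package here are assumptions; the persistent configuration mirrors it with its own data source):

    package com.gabriel.multipledatabaseconnection.config;

    import javax.persistence.EntityManagerFactory;
    import javax.sql.DataSource;

    import org.springframework.beans.factory.annotation.Qualifier;
    import org.springframework.boot.context.properties.ConfigurationProperties;
    import org.springframework.boot.jdbc.DataSourceBuilder;
    import org.springframework.boot.orm.jpa.EntityManagerFactoryBuilder;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
    import org.springframework.orm.jpa.JpaTransactionManager;
    import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
    import org.springframework.transaction.PlatformTransactionManager;

    @Configuration
    @EnableJpaRepositories(
            basePackages = "com.gabriel.multipledatabaseconnection.repository.inmemory",
            entityManagerFactoryRef = "inMemoryEntityManagerFactory",
            transactionManagerRef = "inMemoryTransactionManager")
    public class InMemoryDatabaseConfiguration {

        @Bean
        @ConfigurationProperties("inmemory.datasource")
        public DataSource inMemoryDataSource() {
            return DataSourceBuilder.create().build();
        }

        @Bean
        public LocalContainerEntityManagerFactoryBean inMemoryEntityManagerFactory(
                EntityManagerFactoryBuilder builder) {
            return builder
                    .dataSource(inMemoryDataSource())
                    // Both configurations scan the same, shared entity package
                    .packages("com.gabriel.multipledatabaseconnection.model")
                    .persistenceUnit("inMemory")
                    .build();
        }

        @Bean
        public PlatformTransactionManager inMemoryTransactionManager(
                @Qualifier("inMemoryEntityManagerFactory") EntityManagerFactory emf) {
            return new JpaTransactionManager(emf);
        }
    }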

Next we configure our repositories. These are very simple and just extend Spring's JpaRepository:

[Code screenshots: the two repository interfaces]
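For example, the in-memory repository might look like this (the interface name and package are assumptions):

    package com.gabriel.multipledatabaseconnection.repository.inmemory;

    import com.gabriel.multipledatabaseconnection.model.Person;
    import org.springframework.data.jpa.repository.JpaRepository;

    // Bound to the in-memory entity manager via the configuration above
    public interface InMemoryPersonRepository extends JpaRepository<Person, Long> {
    }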

The last step is to set up a controller, which I call the MultipleDatabaseController. We will create a new person in both databases by simply passing the new person entity from the in-memory database to the persistent database:

[Code screenshot: MultipleDatabaseController.java]

Note that the transaction managers are defined per data source and annotate their respective methods.
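As a simplified sketch of the idea (the endpoint path and repository names are assumptions, and the actual controller annotates its methods with the respective transaction managers): each repository save below already runs under the transaction manager of its own data source, as wired up in the configuration classes.

    package com.gabriel.multipledatabaseconnection.controller;

    import com.gabriel.multipledatabaseconnection.model.Person;
    import com.gabriel.multipledatabaseconnection.repository.inmemory.InMemoryPersonRepository;
    import com.gabriel.multipledatabaseconnection.repository.persistent.PersistentPersonRepository;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestBody;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class MultipleDatabaseController {

        private final InMemoryPersonRepository inMemoryRepository;
        private final PersistentPersonRepository persistentRepository;

        public MultipleDatabaseController(InMemoryPersonRepository inMemoryRepository,
                                          PersistentPersonRepository persistentRepository) {
            this.inMemoryRepository = inMemoryRepository;
            this.persistentRepository = persistentRepository;
        }

        @PostMapping("/person")
        public Person createPerson(@RequestBody Person person) {
            // Save the entity to the in-memory database first...
            Person inMemoryPerson = inMemoryRepository.save(person);
            // ...then pass the same entity on to the persistent database
            return persistentRepository.save(inMemoryPerson);
        }
    }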

Last we test. For this I will use SoapUI (the open source version) to make a REST call that creates John Connor:

[Screenshot: the SoapUI REST POST request]
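For reference, the POST request body might look like this (the endpoint path and JSON field names are assumptions matching the sketches above):

    POST http://localhost:8080/person

    {
        "firstName": "John",
        "lastName": "Connor"
    }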

We then check our databases using pgAdmin 4 and the H2 web console:

[Screenshots: the persisted Person in pgAdmin 4 and in the H2 web console]

There you have it: one REST call, two data sources, one entity persisted in both. The GitHub link is https://github.com/gabrieljeremiahcampbell/multipledatabases. Note that I have also added the .idea files in case you are using IntelliJ (this is for the open source version of IntelliJ).

Cheers,

Gabriel

Stanford NLP and Java 9 – Creating an Email Spam Filter

In this article I would like to illustrate how Stanford's Natural Language Processing library and Java 9 can be used to create a spam filter for an email account.

The goal is that all incoming messages will be scanned, and any that contain spam will be moved to a spam folder.

Firstly we download the following:

  1. Stanford NLP 3.9.1
  2. Jsoup 1.11.2
  3. Java EE – we will use the JavaMail API for the connection to and manipulation of the email account.
  4. Eclipse Oxygen 4.7.2

Next we create a project called emailspamfilter in Eclipse Oxygen. We will create an application with the following architecture:

[Diagram: emailspamfilter architecture]

This is similar to the MVC pattern, only instead of a model and view we have the EmailListener and MessageNLP. In the source code below, the class that represents MessageNLP is EmailTextClassifier. The controller enables a clean separation of concerns and carries out all the orchestration.

The project structure is as follows:

[Screenshot: the project structure in Eclipse]

The EmailController runs in an infinite loop, reading the inbox for new mail at a given interval. Here I have set it to five seconds. For personal use one could set the interval to be much larger, such as every few hours.

[Code screenshot: EmailController]

Note that since we are using Java 8 or 9, the stream can be changed to a parallel stream for optimized performance on multicore systems. The beauty is that a threading or concurrency model can be superimposed on the controller, as it delegates functionality to the EmailTextClassifier and EmailListener classes.
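A rough sketch of that loop, assuming EmailListener exposes getEmails and moveEmailToSpamFolder and EmailTextClassifier exposes an isSpam(Message) check (the exact signatures are in the repository):

    import java.util.List;
    import javax.mail.Message;

    public class EmailController {

        private final EmailListener emailListener = new EmailListener();
        private final EmailTextClassifier classifier = new EmailTextClassifier();

        public void run() throws InterruptedException {
            while (true) {
                // Fetch any new messages from the inbox
                List<Message> newEmails = emailListener.getEmails();

                // stream() could be swapped for parallelStream() on multicore systems
                newEmails.stream()
                        .filter(classifier::isSpam)
                        .forEach(emailListener::moveEmailToSpamFolder);

                Thread.sleep(5_000); // poll the inbox every five seconds
            }
        }
    }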

Next we train our application to detect spam. In order to do this we will implement Named Entity Recognition (NER). All we need is the Sentence class from the package edu.stanford.nlp.simple:

[Code screenshot: EmailTextClassifier]

The commented-out code illustrates an extension to the mailLanguageClassifier where we can process the email subject, the email body, and any text attachments. We can then pass these around as a list of string triples, using flatMaps to create spamEmails. For this example I just analyse the text in the email body.

The emailspamfilter_ner.txt contains the spam items that we will look out for in emails. Here is an example:

[Screenshot: example contents of emailspamfilter_ner.txt]

If I get any mail with "Buy peanuts" or "Sale on biscuits", I can now classify it as spam. Note this can be extended with Stanford's NLC, where you can train it to look out for certain phrases or words. In addition, you could just look for NERs like "Sale on" or "Get discounted". As you receive more mail that you don't like, you can add the NERs to this list, and over time your spam filter becomes more intelligent.
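A much-simplified sketch of the idea, assuming the classifier simply matches the phrases from emailspamfilter_ner.txt against the tokenized body text (the real class may wire the file into CoreNLP's NER machinery instead, and the method names here are assumptions):

    import edu.stanford.nlp.simple.Sentence;
    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;
    import javax.mail.Message;
    import javax.mail.MessagingException;

    public class EmailTextClassifier {

        private final List<String> spamPhrases;

        public EmailTextClassifier() {
            try {
                // One spam phrase per line, e.g. "Buy peanuts" or "Sale on biscuits"
                spamPhrases = Files.readAllLines(Paths.get("emailspamfilter_ner.txt"));
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }

        public boolean isSpam(Message message) {
            try {
                // Plain-text bodies only; multipart and attachment handling omitted
                String body = String.valueOf(message.getContent());

                // Tokenize the body with the Stanford simple API and rejoin the words,
                // which normalizes the whitespace and punctuation around each token
                String normalized = String.join(" ", new Sentence(body).words());

                // Flag the mail as spam if any known phrase appears in the body
                return spamPhrases.stream().anyMatch(normalized::contains);
            } catch (IOException | MessagingException e) {
                return false; // an unreadable mail is not classified as spam
            }
        }
    }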

The EmailListener contains the method getEmails, which retrieves all new emails; these are sent by the controller to the EmailTextClassifier. The second important method moves mail to spam: if a mail is spam, it is moved from the inbox folder to a spam folder that I call MySpam (I created a new folder called MySpam in my Gmail inbox for this article).

[Code screenshot: EmailListener]

Note that not all the code in EmailListener utilizes Java 8 or 9's capabilities. This is because some of the methods are from Java 6, so I got some good code reuse. However, moveEmailToSpamFolder makes use of Optionals, a Java 8 capability.
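A sketch of that method, assuming the JavaMail Store and inbox Folder are already set up elsewhere in the class (the folder name follows the description above):

    import java.util.Optional;
    import javax.mail.Flags;
    import javax.mail.Folder;
    import javax.mail.Message;
    import javax.mail.MessagingException;
    import javax.mail.Store;

    public class EmailListener {

        private Store store;   // connected via the JavaMail API; setup omitted
        private Folder inbox;  // the opened INBOX folder; setup omitted

        public void moveEmailToSpamFolder(Message message) {
            try {
                // Look up the spam folder, wrapped in an Optional as described above
                Optional<Folder> spamFolder = Optional.ofNullable(store.getFolder("MySpam"));

                if (spamFolder.isPresent() && spamFolder.get().exists()) {
                    // Copy the message into MySpam, then flag the original for deletion
                    inbox.copyMessages(new Message[] { message }, spamFolder.get());
                    message.setFlag(Flags.Flag.DELETED, true);
                    inbox.expunge();
                }
            } catch (MessagingException e) {
                e.printStackTrace(); // log and carry on so the polling loop stays alive
            }
        }
    }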

The code can all be found on GitHub in the repository: https://github.com/gabrieljeremiahcampbell/emailspamfilter

Happy coding!

Creating a Cloud application that runs on Bluemix and consumes a NoSQL Database and Watson services.

Hello everyone – it has been a long time since I wrote an article. A few years in fact, and work has kept me very busy. I am now the Cloud Architect at IBM for the Bluemix Platform. I wrote a sample application that runs on Bluemix: a web application on a Tomcat server, with web pages written in JavaScript and a backend in Java. It consumes a DBaaS (Database-as-a-Service), Cloudant DB. I also use the Watson Language Translator service.

The problem I created, for which the above application is the solution, is as follows:

World-renowned Chef Gabriel is one of the most famous chefs on the planet. His recipes are much sought after by everyone, from every part of the world.

The demand for his amazing recipes is very high, and because of this he would like to be able to share them with everyone on earth, regardless of the language they speak. Furthermore, Chef Gabriel constantly changes and updates his recipes as new ideas come to him. Chef Gabriel approached IBM looking for a solution to his problem.

Proposed solution:

IBM Architect Gabriel proposes that Chef Gabriel store all his recipes in the SoftLayer Cloud using Cloudant, a NoSQL DB (the exact details of the DB were not shared with the Chef, as Chefs wouldn't understand technical terms).

Chef Gabriel's website can also be hosted within Bluemix (SoftLayer). The website will connect to the DB that houses all of his recipes. Since visitors to his site speak different languages, we will make use of Watson Language Translator. This will enable each visitor to choose the language they want to view a recipe in. The web application will retrieve the recipe from the DB and then translate it into the language the visitor has chosen. The visitor can then view the recipe in their preferred language.

Architecture diagram:

[Diagram: Chef Gabriel application architecture]

Sequence diagram:

[Diagram: sequence of a recipe request]

The web application retrieves the list of languages from Watson Language Translator and the list of stored recipes from the Cloudant DB. These appear on screen, the visitor selects the recipe and the language they want, and the web application retrieves that recipe and has it translated into the relevant language.

My end result is a website that stores recipes. You can request a recipe in a language of your choice, as long as that language is offered by Watson Language Translator.

There was one workaround I had to implement, because Cloudant DB did not seem to allow one, via the Java SDK, to write the ids for the documents in the database. It only allowed them to be auto-generated. Hence I had to retrieve all the recipe documents in order to get the name of a particular recipe. This would be processor intensive and would affect performance in large-scale systems. Fortunately this is a demo. However, please note that I am aware of this limitation.

The technologies I used were:

  • REST-based HTTP/HTTPS calls
  • jQuery – for the web pages
  • Gson – for JSON
  • CSS – for stylesheets
  • Java – for the backend
  • Cloudant NoSQL – for the database
  • Watson Language Translator – for the language translation

I include a UML diagram that shows the relationship between the RecipeRequest, Recipe, and Language objects. Note the RecipeRequest is really an association class that connects a recipe to a language it is to be translated into. This is because a Recipe cannot have a language property: a recipe is purely a method, and in theory even a drawing or a picture could be a recipe. A language is a concept in its own right. Hence I used an association, RecipeRequest, to connect the Recipe to a Language.
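In code, the association might be sketched like this (the field names are my assumptions; the diagram below shows the actual relationships):

    // A recipe is purely a method; it has no notion of language
    class Recipe {
        private String name;
        private String method;
    }

    // A language is a concept in its own right
    class Language {
        private String code;        // e.g. "es"
        private String displayName; // e.g. "Spanish"
    }

    // The association class: connects a Recipe to the Language
    // it should be translated into for a particular request
    class RecipeRequest {
        private Recipe recipe;
        private Language targetLanguage;
    }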

[Diagram: UML of RecipeRequest, Recipe, and Language]

The website URL is https://gabrieljcampbell.mybluemix.net/ – please try it out. The code resides at https://github.com/gabrieljeremiahcampbell/chefgabrielrecipes. You can clone the repository. Remember, you need to use the Cloud Foundry commands to deploy to the Bluemix runtime. Below are screenshots of the website.

A screenshot:

[Screenshot: the recipe website]

When we change the language the recipe is translated:

[Screenshot: the recipe translated into another language]

That ends my post. Happy coding.

Catching Expected And Unexpected Faults In BPEL

In BPELs in WebSphere Integration Developer we can catch exceptions – faults – from a BPEL, for example during an invoke.

[Screenshot: a BPEL with a Catch for MyApplicationFault and a Catch All]

What happens when we want to catch an exception that is not necessarily a fault and could stem from the underlying system or the application behind the invoke (in the diagram it is InvokeSomeOperation)?

As you can see in the above excerpt of a BPEL, we have a known fault called MyApplicationFault in the Catch, and then in the Catch All we deal with any unknown faults that are not MyApplicationFault, as well as exceptions that are not BPEL faults at all.

So if we have a NumberFormatException, for example, it will pass the MyApplicationFault catch and go to the Catch All.

Now, if we want to transform any exception that comes through into a BPEL exception that behaves like a fault, we can use an anonymous class in a snippet:

new BpelException() {

    public String getFaultName() {
        return "MyFaultName";
    }

    public String getMessage() {
        // caughtException is the exception caught by the Catch All
        return caughtException.getMessage();
    }
};

This way we can handle the exception elsewhere in the BPEL process using:

com.ibm.bpe.api.BpelException bpelexception = getCurrentFaultAsException();
logger.debug("Fault Name: " + bpelexception.getFaultName());
bpelexception.printStackTrace(System.out);
Throwable rootCause = bpelexception.getRootCause();

So the Catch All can accept normal Java exceptions as well as BPEL faults :)

Securing a BPEL in WebSphere Integration Developer 7: differences from WebSphere Integration Developer 6

Hi, another day in the windy city :) Today we look at how to add role-based security to a BPEL.

In WebSphere Integration Developer (WID) 6 there was no direct way to add a role to a BPEL at the assembly level.

In WID 7 we can assign a role to the whole BPEL, which could not be done in WID 6. It makes security much easier.

All you have to do is open your assembly diagram as in our example project below:

[Screenshot: the assembly diagram for the example project]

Click anywhere on the assembly diagram, then click Properties on the BPEL and then All Qualifiers.

Expand ExtractionProcess. As you can see, there is a Security Identity qualifier. Here you can set a security role:

[Screenshot: the Security Identity qualifier with a security role assigned]

There you have it: the entire process is secured. On deployment, make sure the role is allocated to a user that should have access to the BPEL application:

[Screenshot: mapping the security role to a user on deployment]

That's all there is to it! Enjoy :)

LTPA Tokens for JAX-RPC and JAX-WS in IBM WebSphere Server Version 7: Client and Provider Communication

HAPPY NEW YEAR to all the followers of this blog! I changed jobs in the latter half of 2011 and as a result neglected my blog. I hope to turn things around this year :)

Today we are going to look at JAX-RPC and JAX-WS communication in IBM WebSphere Server 7. We will look at communication using LTPA tokens, passing LTPA version 1 and LTPA version 2 tokens for authentication.

We will show how to set up the client bindings and provider bindings to enable this communication. This is of particular importance when legacy applications from WebSphere 6 and earlier need to communicate with WebSphere 7 clients.

This will revolve around Provider Policy Set Bindings and Client Policy Set Bindings.

In this article we assume that we have two applications: the first is a legacy provider application from IBM WebSphere Server 6 that uses JAX-RPC (we will call this application LegacyProviderApp1), and the second is a new client application on IBM WebSphere Server 7 that uses JAX-WS.

Now, LTPA version 1 tokens are compatible with JAX-RPC, so here we will show how to set up a client binding that uses LTPA version 1 tokens, as seen below:

[Screenshots: the client policy set bindings]

Go to Services > Policy Sets > General client policy bindings > New:

[Screenshot: creating a new general client policy binding]

From Add select WS-Security.  Under WS-Security we have various options:

[Screenshot: the WS-Security options]

We are interested in Authentication and protection. Note that you will add the necessary information for Keys and certificates, Message expiration, and Custom Properties according to your design specifications.

After clicking Authentication and protection we look at Authentication tokens:

[Screenshot: the Authentication tokens list]

Here we have created two Authentication tokens: gen_signltpaproptoken and gen_signltpatoken.

gen_signltpatoken is configured as follows:

[Screenshot: the gen_signltpatoken configuration]

The Namespace URI ending with 5.0.2 is LTPA version 1, which is compatible with JAX-RPC.

Now, if you want to ensure that only LTPA version 2 tokens are supported and accepted, then select Token type > LTPA Token v2.0:

[Screenshot: selecting LTPA Token v2.0 as the token type]

The gen_signltpaproptoken is configured as follows:

[Screenshot: the gen_signltpaproptoken configuration]

We can actually set up multiple client bindings, so we can have two: one for LTPA version 1 tokens and another for LTPA version 2 tokens.

Now we look at the setup of the provider.

Go to General provider policy set bindings > New and create a new binding:

[Screenshot: creating a new general provider policy binding]

Name it LegacyProviderApp1Provider and then Add > WS-Security. Once again we are interested in Authentication and protection. Note that you will add the necessary information for Keys and certificates, Caller, Message expiration, and Custom Properties according to your design specifications.

[Screenshot: the provider binding's WS-Security options]

Now click on Authentication and protection. Again we are interested only in Authentication tokens:

[Screenshots: the provider's Authentication tokens]

Click on con_ltpatoken (the con prefix means consumer :))

[Screenshot: the con_ltpatoken configuration]

If you select LTPA Token v2.0 but do not check Enforce token version, then this provider will be able to generate tokens that are LTPA version 1 and LTPA version 2 compatible. This also aids JAX-RPC communication for applications designed on WebSphere 6 and below.

The details for con_ltpaproptoken are as follows:

[Screenshot: the con_ltpaproptoken configuration]

There you have it: we have set up Provider Bindings and Client Bindings that enable communication using LTPA version 1 and version 2 tokens :)