More advanced stuff with JMS and AWS SQS

As you might know, the 'SQS' in AWS SQS stands for 'Simple Queue Service'. While playing around with it I recently found one of the reasons why it may be called 'simple'. In two previous posts (here and here) I showed how to use SQS as a JMS queue provider in combination with the Spring Framework. With this basic setup in place I decided to take it a step further and started to experiment with the request-response pattern in combination with JMS (making use of the JMS property 'JMSReplyTo' and temporary queues). This rather classic article nicely explains how it works and why it works that way.
To show how it works I will first present the setup I used with Apache ActiveMQ: a bean that picks a message from a queue, performs an action on its content and sends the reply back to the destination given in the 'JMSReplyTo' JMS header. Since I make use of Spring this sounds harder than it really is.
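The core idea can be illustrated independently of Spring and ActiveMQ with a small in-memory sketch: the request message carries a reference to the queue the reply should go to (in JMS this is the JMSReplyTo header, typically pointing at a temporary queue created by the requester). All class and variable names below are illustrative, not the actual code from the post.

```java
import java.util.concurrent.*;

// Conceptual sketch of the JMS request-response pattern using plain
// BlockingQueues as stand-ins for JMS destinations.
public class RequestReplySketch {

    // A request bundles a payload with the 'reply-to' queue,
    // mimicking the JMSReplyTo header.
    record Request(String payload, BlockingQueue<String> replyTo) {}

    // The 'work' performed by the consumer on the message content.
    static String handle(String payload) {
        return payload.toUpperCase();
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<Request> requestQueue = new LinkedBlockingQueue<>();

        // The message-driven consumer: take a request, do the work, and
        // send the result to the reply-to queue found in the message itself.
        Thread consumer = new Thread(() -> {
            try {
                Request req = requestQueue.take();
                req.replyTo().put(handle(req.payload()));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        // The requester creates its own (temporary) reply queue per request.
        BlockingQueue<String> replyQueue = new SynchronousQueue<>();
        requestQueue.put(new Request("hello", replyQueue));
        System.out.println("reply: " + replyQueue.take()); // prints: reply: HELLO
        consumer.join();
    }
}
```

In real JMS the temporary queue plays exactly the role of `replyQueue` here: it exists only for the lifetime of the requester's connection, so each requester gets its replies and nobody else's.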

Posted in ActiveMQ, AWS, Spring Framework

Creating a Message Driven Bean with AWS SQS in Spring

In my previous post I showed a simple example of how to use AWS SQS with the Spring Framework to put messages on a queue and to read them from the queue. In this post I go one step further and use Spring to create a 'Message Driven Bean', so each message that is put on the queue is picked up and processed 'automatically'. AWS calls this the asynchronous way on their documentation page. To do this I will define a MessageListener in Spring and configure it to listen to my queue as described here. For the initial project setup please see my previous post, as I won't repeat it here.
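As a sketch, the Spring configuration for such a listener boils down to a listener container pointing at the SQS-backed connection factory. The bean names, the listener class and the queue name below are placeholders; I assume the SQSConnectionFactory from the AWS library is registered as the 'connectionFactory' bean.

```xml
<!-- Placeholder names; assumes the Spring 'jms' namespace is declared
     and 'connectionFactory' is an SQSConnectionFactory bean. -->
<bean id="messageListener" class="com.example.MyMessageListener"/>

<jms:listener-container connection-factory="connectionFactory"
                        destination-type="queue">
    <jms:listener destination="MyQueue" ref="messageListener"/>
</jms:listener-container>
```

The listener bean itself just implements javax.jms.MessageListener; the container takes care of polling the queue and invoking onMessage for each incoming message.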

Posted in AWS, Spring Framework

Using AWS SQS as JMS provider with Spring

Recently AWS published a new client library that implements the JMS 1.1 specification and uses their Simple Queue Service (SQS) as the JMS provider (see Jeff Barr's post here). In this post I will show you how to set up your Maven project so you can use this library with the Spring Framework.
We will perform the following steps:

  • Create the queue in the AWS Management Console
  • Set up your AWS credentials on your machine
  • Set up your Maven project
  • Create the Spring configuration
  • Create the Java files to produce and receive messages

This post will only show some basic use of the SQS possibilities but should be good enough to get you started. I assume you have already created your AWS account and that you are familiar with Maven and basic Spring setup.
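For the Maven setup, the dependency on the AWS SQS JMS client library should look roughly like this (check Maven Central for the exact current version; it pulls in the AWS SDK and the JMS API transitively):

```xml
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>amazon-sqs-java-messaging-lib</artifactId>
    <version>1.0.0</version>
</dependency>
```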

Posted in AWS, Spring Framework

Finished the course “Mining Massive Data Sets”

As I mentioned in a previous post I have been following the Coursera course 'Mining Massive Datasets'. Anyone who is not familiar with Coursera should have a look, as they offer a lot of (free) courses that you can follow remotely. This specific course was created by three instructors with a Stanford background: Jure Leskovec, Anand Rajaraman and Jeffrey Ullman. The three of them are also the authors of the corresponding book 'Mining of Massive Datasets', which you can find here.

Posted in Hadoop, MapReduce

Message Content Filtering with WSO2 ESB

Every integration architect or developer should be familiar with the Enterprise Integration Patterns (EIP) as described by Gregor Hohpe and Bobby Woolf. One of these patterns is the 'Content Filter' (not to be confused with the Message Filter pattern).
There are multiple ways to achieve this in WSO2 ESB with different mediators. One way is to use the XSLT Mediator, where you can simply use an XSLT stylesheet to do the filtering. Another one (not so obvious, based on its name) is the Enrich Mediator.
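To illustrate the XSLT approach: a stylesheet that copies the whole message but drops one unwanted element could look like this (the element name 'secret' is just an example, not from the actual post):

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Identity template: copy every attribute and node as-is... -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
  <!-- ...except the element we want to filter out of the message. -->
  <xsl:template match="secret"/>
</xsl:stylesheet>
```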

Posted in WSO2 ESB, XML/XSD/XSLT

Running PageRank Hadoop job on AWS Elastic MapReduce

In a previous post I described an example of performing a PageRank calculation (part of the 'Mining Massive Datasets' course) with Apache Hadoop. In that post I took an existing Hadoop job in Java and modified it somewhat (I added unit tests and made the file paths configurable via a parameter). This post shows how to run this job on a real-life Hadoop cluster: an AWS EMR cluster of 1 master node and 5 core nodes, each backed by an m3.xlarge instance.
The first step is to prepare the input for the cluster. I make use of AWS S3, since this is a convenient way to supply data when working with EMR. I created a new bucket, 'emr-pagerank-demo', with the following subfolders:

  • in: the folder containing the input files for the job
  • job: the folder containing my executable Hadoop jar file
  • log: the folder where EMR will put its log files


Posted in AWS, Hadoop

Calculate PageRanks with Apache Hadoop

Currently I am following the Coursera training 'Mining Massive Datasets'. I have been interested in MapReduce and Apache Hadoop for some time, and with this course I hope to get more insight into when and how MapReduce can help fix some real-world business problems (another way to do so I described here). This Coursera course focuses mainly on the theory of the algorithms used and less on the coding itself. The first week is about PageRank and how Google used it to rank pages. Luckily there is a lot to find about this topic in combination with Hadoop. I ended up here and decided to have a closer look at that code.
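To give an idea of the computation the Hadoop job distributes: one PageRank iteration spreads each page's rank over its out-links and adds a teleport term. This is a minimal, self-contained sketch in plain Java, not the actual Hadoop code from the post; the damping factor of 0.85 and all names are illustrative.

```java
import java.util.Arrays;
import java.util.List;

// One PageRank iteration on an in-memory link graph. In the Hadoop job the
// inner loop becomes the map step (emit rank/outDegree per target) and the
// summation becomes the reduce step.
public class PageRankSketch {

    static final double BETA = 0.85; // damping factor

    // links.get(i) holds the pages that page i links to.
    static double[] iterate(double[] ranks, List<List<Integer>> links) {
        int n = ranks.length;
        double[] next = new double[n];
        // Teleport term: (1 - beta) / n for every page.
        Arrays.fill(next, (1 - BETA) / n);
        for (int i = 0; i < n; i++) {
            // Page i distributes its current rank evenly over its out-links.
            int outDegree = links.get(i).size();
            for (int target : links.get(i)) {
                next[target] += BETA * ranks[i] / outDegree;
            }
        }
        return next;
    }

    public static void main(String[] args) {
        // Tiny 3-page web: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0.
        List<List<Integer>> links = List.of(
                List.of(1, 2), List.of(2), List.of(0));
        double[] ranks = {1.0 / 3, 1.0 / 3, 1.0 / 3};
        for (int step = 0; step < 20; step++) {
            ranks = iterate(ranks, links);
        }
        System.out.printf("ranks = %.3f %.3f %.3f%n",
                ranks[0], ranks[1], ranks[2]);
    }
}
```

Since every page in this toy graph has at least one out-link, the ranks keep summing to 1; handling dangling nodes is one of the things the real job has to take extra care of.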

Posted in Hadoop, MapReduce

Sharing your sources stored in a Git repository

I have been using Git for some time now and so far I like it a lot. Especially the branching model described by Vincent Driessen, in combination with git-flow (and, perhaps even better for lots of Java projects, its Maven implementation), makes using it easy.
However, you might end up in a situation, as I did lately, where you have to share your sources with someone who doesn't use Git. In that case there is a simple Git command to help you out: 'git archive'. You can use it like this:
git archive --format zip --output sources.zip develop
In this case a zip file, here named 'sources.zip', is created containing all the sources that are in the 'develop' branch.
For lots more Git commands see this page, and for more general background information about the way Git works see this book.

Posted in Git

Making use of the open sources of WSO2 ESB

When implementing services using the WSO2 stack (or any other open source Java framework) you will sooner or later run into a situation where the framework behaviour doesn't do what you expect it to do. Or you just want to verify the way a product works. I lately had several of these experiences, and I got around it by setting up a remote debug session so I could step through the code to see what exactly was happening. Of course this only makes sense if you have the source code available (long live open source :-)).
In this post I give an example with the WSO2 ESB (v4.8.1) in combination with IntelliJ IDEA.
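For reference, remote-debugging a JVM generally comes down to starting it with the JDWP agent enabled, for example (port 5005 is just a common choice):

```
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005
```

If I remember correctly, WSO2 Carbon based servers make this even easier: starting them with 'wso2server.sh -debug 5005' sets these options up for you. The IDE then attaches a 'Remote' debug configuration to that port.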

Posted in IntelliJ IDEA, WSO2, WSO2 ESB

Base64 encoding of binary file content

For testing a base64Binary XML type in one of my projects I needed an example of base64 encoded file content. There is a simple command for that (at least when you are working on a Mac). For a file called 'abc.pdf' the command is:

openssl base64 -in abc.pdf -out encoded.txt

The result is a file 'encoded.txt' containing the base64 encoded string.
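If you are not on a Mac, or want the same result from code, the JDK (8+) ships a Base64 codec that produces the same encoding. A minimal sketch, where the file names are just examples:

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

// Read a binary file and write its Base64-encoded form to a text file,
// the Java equivalent of the openssl command above.
public class Base64Encode {

    static String encode(byte[] raw) {
        return Base64.getEncoder().encodeToString(raw);
    }

    public static void main(String[] args) throws Exception {
        byte[] raw = Files.readAllBytes(Paths.get("abc.pdf"));
        Files.writeString(Paths.get("encoded.txt"), encode(raw));
    }
}
```

Note that openssl wraps its output in 64-character lines, while `Base64.getEncoder()` produces one unbroken string; use `Base64.getMimeEncoder()` if you need the line-wrapped form.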

Posted in XML/XSD/XSLT