A few months ago I got back to working with WSO2 products. In the upcoming posts I will describe some of the (small) issues I ran into and how to solve them.
In this post I share an XSLT function that can be used to right-pad the value of an element with a chosen character to a certain length. No rocket science, but it might come in handy again, and by putting it down here I don’t have to reinvent it later. The function itself looks like this:
<xsl:stylesheet version="2.0"
                xmlns:functx="http://my/functions"
                xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:function name="functx:pad-string-to-length" as="xsd:string">
    <xsl:param name="stringToPad" as="xsd:string?"/>
    <xsl:param name="padChar" as="xsd:string"/>
    <xsl:param name="length" as="xsd:integer"/>
    <xsl:sequence select="
      substring(
        string-join(
          ($stringToPad, for $i in (1 to $length) return $padChar),
          ''),
        1, $length)"/>
  </xsl:function>
</xsl:stylesheet>
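For readers more at home in Java, the same logic can be sketched as plain Java (a hypothetical equivalent for illustration, not part of the stylesheet): append `length` pad characters and then truncate the result to `length`.

```java
public class PadString {

    // Right-pad stringToPad with padChar up to length characters,
    // mirroring the XSLT function: first append `length` pad characters,
    // then take the first `length` characters of the combined string.
    static String padStringToLength(String stringToPad, char padChar, int length) {
        StringBuilder sb = new StringBuilder(stringToPad == null ? "" : stringToPad);
        for (int i = 0; i < length; i++) {
            sb.append(padChar);
        }
        return sb.substring(0, length);
    }

    public static void main(String[] args) {
        System.out.println(padStringToLength("abc", '*', 8));        // abc*****
        System.out.println(padStringToLength("abcdefghij", '*', 5)); // abcde
    }
}
```

Note that, just like the XSLT version, a value longer than the requested length is truncated rather than returned unchanged.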
One of the better books I have read so far about MapReduce is ‘MapReduce Design Patterns’, as I mentioned in my previous post. In this post I describe the steps to get started with running the Hadoop source code that goes with the book on Cloudera’s latest Hadoop distribution, CDH5. I decided to make use of HDFS and YARN for testing the patterns. Take the following steps to get it all up and running:
- Get CDH5 and run it
- Install IntelliJ IDEA
- Upgrade Git client
- Create local directory
- Check out source code
- Install source data
- Run the job
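The last four steps above can be sketched as shell commands. This is only an outline: the repository URL, directory names, jar name and driver class are assumptions, so adjust them to your own checkout and to the pattern you want to run.

```shell
# Create a local working directory (path is an arbitrary choice)
mkdir -p ~/mrdp && cd ~/mrdp

# Check out the book's companion source code (URL is an assumption)
git clone https://github.com/adamjshook/mapreducepatterns.git

# Install the source data in HDFS (file and paths are examples)
hdfs dfs -mkdir -p /user/cloudera/mrdp/input
hdfs dfs -put posts.xml /user/cloudera/mrdp/input

# Run a job on YARN; jar and driver class names are placeholders
hadoop jar mrdp-examples.jar MinMaxCountDriver \
  /user/cloudera/mrdp/input /user/cloudera/mrdp/output
```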
Recently I finished my last project, in which I was implementing Mule ESB. This gives me some room in my schedule to dive into the world of Big Data again (more specifically, the Hadoop ecosystem). I have looked into this subject before, which resulted in several blog posts. This time I started with a refresher by taking the AWS online training Big Data Technology Fundamentals, which covers MapReduce, Hadoop, Pig and Hive. After this nice online training I started with the Hadoop training of core-servlets. I had to get used to the form and layout of the training, but now that I have been working with it for a while I realise it contains a lot of information about the way Hadoop works. It comes with a (working!) virtual machine (based on Cloudera’s CDH4) on which Hadoop and the necessary tooling are installed, including all training and exercise materials (and solutions).
Parallel to this (low-level) training I am going through the book MapReduce Design Patterns. This book gives you a good idea of which problems you can solve with the MapReduce framework, and in what way. Especially the recommendations on when not to use a certain pattern can be very handy while working with MapReduce.
In this post I show a simple Mule ESB flow to see the dead letter queue (DLQ) feature of ActiveMQ in action.
I assume you have a running Apache ActiveMQ instance available (if not, you can download a version here). In this example I use Mule ESB 3.4.2 and ActiveMQ 5.9.0. We can create a simple Mule project based on a pom file that pulls in the required Mule and ActiveMQ dependencies.
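As a reminder of what happens on the broker side: when redelivery of a message is exhausted, ActiveMQ moves it to a dead letter queue, by default the shared ActiveMQ.DLQ destination. A minimal sketch of how this can be tuned in conf/activemq.xml (values are illustrative, not taken from the post):

```xml
<!-- Inside the <broker> element of conf/activemq.xml -->
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry queue=">">
        <deadLetterStrategy>
          <!-- Give every queue its own DLQ (e.g. DLQ.myQueue)
               instead of the shared ActiveMQ.DLQ destination -->
          <individualDeadLetterStrategy queuePrefix="DLQ."
                                        useQueueForQueueMessages="true"/>
        </deadLetterStrategy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```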
I was recently invited to follow an online training at Udemy to get up to speed with Spring 4. Most of the time I am not really interested in the subject of such offered trainings, but as an open source Java developer you will sooner or later run into the Spring Framework. In my case I have been dealing a lot with Mule ESB over the last few years, which is based on the Spring Framework. Since it wouldn’t hurt to gather some new knowledge about the Spring Framework, I decided to give it a try.
I have now been through about 70% of the course and so far I like it. The course strikes the right balance between going into depth on some subjects and explaining others more globally, leaving it up to the trainee to find out more about them in detail. Also, the use of videos of approximately 10 minutes makes it easy to watch a few of them one night and leave the rest for later.
So if you are working with an older version of Spring, or you want an introduction to the framework, you should give this training a try. If you use this link you will receive a reduction of over 70%, giving you access to the training for 21 USD instead of 39 USD (only valid until the 5th of July!). You can also watch a few free videos to see if it fits your needs.
After completing the course you even receive a certificate :-)
Recently I started to implement our release process in Jenkins. Until then I had just run the Maven release plugin on my local machine, which did the job, but we decided to move this task to Jenkins. The build/release tool stack was:
- Subversion as source control
- Artifactory as internal Maven repository
- Jenkins for continuous integration
- Java source code as Maven projects
To show my Jenkins configuration I have set up a very basic Maven module named ‘myapp’.
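As a sketch of what such a module needs for the Maven release plugin (this is not the actual pom from the post; group id, SCM URLs and repository URLs are placeholders matching the Subversion/Artifactory stack listed above):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>net.example</groupId>
  <artifactId>myapp</artifactId>
  <version>1.0-SNAPSHOT</version>

  <!-- The release plugin needs the SCM location to tag releases -->
  <scm>
    <connection>scm:svn:http://svn.example.net/repos/myapp/trunk</connection>
    <developerConnection>scm:svn:http://svn.example.net/repos/myapp/trunk</developerConnection>
  </scm>

  <!-- Released artifacts and snapshots go to the internal Artifactory -->
  <distributionManagement>
    <repository>
      <id>artifactory</id>
      <url>http://artifactory.example.net/libs-release-local</url>
    </repository>
    <snapshotRepository>
      <id>artifactory</id>
      <url>http://artifactory.example.net/libs-snapshot-local</url>
    </snapshotRepository>
  </distributionManagement>
</project>
```

With this in place, `mvn release:prepare release:perform` tags the release in Subversion and deploys the artifact, which is the step that is moved into a Jenkins job.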
While developing flows with a recent version of Mule ESB, chances are you will make use of the Mule Expression Language (MEL) in your configuration. Although this feature has great benefits when developing Mule flows, it sometimes drives me crazy. In this post I show two examples that took me some time to get working.
The first issue came up when I was using an expression-transformer to get a part of an XML document as the payload.
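The post's original expression is not reproduced here, but to illustrate the kind of construct involved: in Mule 3.x MEL you can select part of an XML payload with the xpath() function (the element names below are hypothetical):

```xml
<!-- Hypothetical example: extract the <customer> element from an <order> document.
     Note that xpath() returns a DOM node rather than a string, which is a
     common source of confusion when the result is used as the new payload. -->
<expression-transformer expression="#[xpath('//order/customer')]"
                        doc:name="Extract customer"/>
```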
A good practice with Mule ESB is to supply configuration values through a properties file. Most of the time you will end up adding passwords to that properties file, in which case you may want to encrypt them so they are not readable by everyone who has access to the file. MuleSoft has described how to do this in combination with Mule ESB. Although that is a good starting point, I thought it might help to create a complete example, so I have put all the steps in this post. There are two environments that have to be modified: the development environment and the runtime environment.
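As a preview of where this ends up, assuming the Mule secure property placeholder module is used (the property name, encrypted value and key below are illustrative):

```xml
<!-- In app.properties the encrypted value is wrapped in ![...], e.g.
       db.password=![nHWo5JhNAYM+TzxqeHdRDXx15Q5iAZm8]
     The Mule config decrypts such values at startup with a key that is
     supplied to the runtime, for example via -M-Dmule.key=MySecretKey: -->
<secure-property-placeholder:config key="${mule.key}"
                                    location="app.properties"/>
```

The development-environment steps cover producing the encrypted value, the runtime steps cover supplying the key.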