In a previous post I described how to set up a VPC with both private and public subnets in AWS. In that post I showed a basic configuration in which the instances in the private subnet had no internet access, which is needed to run 'yum update', for instance. In this post I will show you one way to solve that: adding a NAT instance to your VPC and having the EC2 instances in the private subnet use it to reach internet resources. Here is the diagram I used in the previous post:
When you use AWS resources like EC2 instances, they are assigned to a default VPC. However, it is quite easy to set up your own VPC. In this post I describe how to set up a basic configuration: a public subnet for the servers that have to be accessible from the outside world (a DMZ) and a private subnet for the (EC2) servers that we would like to keep away from the outside world. However, to be able to access the internet from our private subnet we will set up a NAT instance, which I will show in a separate post. Continue reading
Lately I have created several posts about different areas of what AWS offers. The reason behind these posts is that I was preparing myself for the exam 'AWS Certified Developer-Associate'. Although I have been working with AWS for several years now, I hadn't taken the time to sit down and test my knowledge of it. The best way for me to learn something is to show others how it works, hence these AWS posts.
This morning I took the exam and passed it with a 78% score. So this one is in the bag and I will start working on the next one on my list: 'AWS Certified Solutions Architect'. Now you know what posts you can expect the coming period :-)
Recently AWS announced that using IAM Roles with their EMR service will be mandatory as of June 30 this year. In this post I will show you how to set up the IAM basics when you are starting with AWS.
When you start from scratch with a new AWS account you will see the following management console. Choose the option Identity and Access Management to get started creating users, roles, etc.:
As you might know, SQS in AWS stands for 'Simple Queue Service'. While playing around with it I recently found one of the reasons why it may be called 'simple'. In two previous posts (here and here) I showed how to use SQS as a JMS queue provider in combination with the Spring Framework. With this basic setup in place I decided to take it a step further and started to experiment with the request-response pattern in combination with JMS (making use of the JMS property 'JMSReplyTo' and temporary queues). This rather classic article nicely explains how it works and why it works that way.
To demonstrate how it should work, I first describe the setup I used with Apache ActiveMQ: a bean that picks a message from a queue, performs an action on the content and sends the reply to the destination found in the JMSReplyTo header. Since I make use of Spring this sounds harder than it really is. First the Java code: Continue reading
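To make the mechanics clear without needing a broker, here is a toy in-memory model of the request-response handshake. It is only a sketch: the `Message` type, the queues, and the uppercasing `process` step are all illustrative stand-ins for the real JMS objects, with a `BlockingQueue` playing the role of the temporary reply queue that JMSReplyTo points at.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Toy model of the JMS request-response pattern: the requester attaches a
// "temporary queue" as the reply-to address (the role JMSReplyTo plays in
// real JMS), and the responder sends its answer there.
public class RequestResponseDemo {

    // A message carries a body plus the queue the reply should go to.
    record Message(String body, BlockingQueue<String> replyTo) {}

    // Placeholder for the actual work done on the message content.
    static String process(String body) {
        return body.toUpperCase();
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Message> requestQueue = new ArrayBlockingQueue<>(10);

        // Responder: take a request, process it, reply via its replyTo queue.
        Thread responder = new Thread(() -> {
            try {
                Message request = requestQueue.take();
                request.replyTo().put(process(request.body()));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        responder.start();

        // Requester: create a temporary reply queue, send, wait for the answer.
        BlockingQueue<String> tempReplyQueue = new ArrayBlockingQueue<>(1);
        requestQueue.put(new Message("hello", tempReplyQueue));
        String reply = tempReplyQueue.take();
        System.out.println(reply); // HELLO
        responder.join();
    }
}
```

In real JMS the responder would additionally copy the JMSCorrelationID from request to reply so the requester can match answers to questions; the queue handoff above is the same idea in miniature.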
In my previous post I showed a simple example of how to use AWS SQS with the Spring Framework to put messages on a queue and to read them from the queue. In this post I go one step further and use Spring to create a 'Message Driven Bean' so each message that is put on the queue is picked up and processed 'automatically'. This is what AWS calls the asynchronous way on their documentation page. To do this I will define a MessageListener in Spring and configure it to listen to my queue as described here. For the initial project setup please see my previous post, as I won't show it again here. Continue reading
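The Spring wiring for such a listener might look roughly like this (a sketch: the bean ids, the listener class, and the queue name are illustrative, and the SQS-backed connection factory is assumed to be defined elsewhere in the context):

```xml
<!-- Illustrative wiring; names are placeholders. -->
<bean id="messageListener" class="com.example.MyMessageListener"/>

<bean id="messageListenerContainer"
      class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <!-- The SQS-backed JMS connection factory from the previous post. -->
    <property name="connectionFactory" ref="connectionFactory"/>
    <property name="destinationName" value="MyQueue"/>
    <property name="messageListener" ref="messageListener"/>
</bean>
```

With this in place, Spring's listener container polls the queue and invokes `MyMessageListener.onMessage` for each arriving message, which is what gives the 'Message Driven Bean' behaviour.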
Recently AWS published a new client library that implements the JMS 1.1 specification and uses their Simple Queue Service (SQS) as the JMS provider (see Jeff Barr's post here). In my post I will show you how to set up your Maven project to use this library with the Spring Framework.
We will perform the following steps:
- Create the queue in the AWS Management Console
- Set up your AWS credentials on your machine
- Set up your Maven project
- Create the Spring configuration
- Create the Java files to produce and receive messages
This post will only show some basic use of the SQS possibilities but should be good enough to get you started. I assume you have already created your AWS account and that you are familiar with Maven and basic Spring setup. Continue reading
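For the Maven part of the steps above, the dependency for the SQS JMS client library looks roughly like this (a sketch: the version shown is illustrative, so check Maven Central for the current one):

```xml
<!-- The AWS client library that exposes SQS as a JMS 1.1 provider. -->
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>amazon-sqs-java-messaging-lib</artifactId>
    <version>1.0.0</version>
</dependency>
```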
As I mentioned in a previous post I have been following the Coursera course ‘Mining Massive Datasets‘. Anyone who is not familiair with Coursera should have a look, as they offer a lot of (free) courses that you can follow remotely. This specific course is created by three instructors with a Stanford background: Jure Leskovec, Anand Rajaraman and Jeffrey Ullman. These three are also the authors of the corresponding book ‘Mining of Massive Datasets’ which you can find here. Continue reading
Every integration architect or developer should be familiar with the Enterprise Integration Patterns (EIP) as described by Gregor Hohpe and Bobby Woolf. One of the patterns is the 'Content Filter' (not to be confused with the Message Filter pattern).
There are multiple ways to achieve this in WSO2 with different mediators. One way is using the XSLT Mediator, where you can simply use an XSLT to do the filtering. The other one (not so obvious based on its name) is the Enrich Mediator.
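To illustrate the XSLT approach: a Content Filter stylesheet typically combines an identity transform with an empty template for the element to strip. A minimal sketch, assuming a hypothetical <creditCardNumber> element that must not leave the system:

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <!-- Identity template: copy everything by default. -->
    <xsl:template match="@*|node()">
        <xsl:copy>
            <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
    </xsl:template>
    <!-- Filter: an empty template drops the sensitive element. -->
    <xsl:template match="creditCardNumber"/>
</xsl:stylesheet>
```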
In a previous post I described an example of performing a PageRank calculation (part of the Mining Massive Datasets course) with Apache Hadoop. In that post I took an existing Hadoop job in Java and modified it somewhat (added unit tests and made file paths configurable by a parameter). This post shows how to run this job on a real-life Hadoop cluster. The cluster is an AWS EMR cluster of 1 master node and 5 core nodes, each backed by an m3.xlarge instance.
The first step is to prepare the input for the cluster. I make use of AWS S3, since this is convenient when working with EMR. I created a new bucket, 'emr-pagerank-demo', and made the following subfolders:
- in: the folder containing the input files for the job
- job: the folder containing my executable Hadoop jar file
- log: the folder where EMR will put its log files
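The bucket layout above could be prepared with the AWS CLI along these lines (illustrative commands; the jar and input file names are placeholders, and EMR will create the log files itself once you point the cluster's log URI at the log folder):

```shell
# Create the bucket and upload the job artifacts (names are placeholders).
aws s3 mb s3://emr-pagerank-demo
aws s3 cp pagerank-job.jar s3://emr-pagerank-demo/job/
aws s3 cp input/ s3://emr-pagerank-demo/in/ --recursive
```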