Sunday, January 1, 2017

Composable Architecture Patterns for Serverless Computing Applications on IBM Bluemix with OpenWhisk

This post is the 4th in a series on serverless computing (see Part 1, Part 2, and Part 3) and will focus on the differences between serverless architectures and the more widely known Platform-as-a-Service (PaaS) and Extract-Transform-Load (ETL) architectures. If you are unsure about what serverless computing is, I strongly encourage you to go back to the earlier parts of the series to learn the definition and to review concrete examples of microflows, which are applications based on a serverless architecture. This post will also use the applications developed previously to illustrate a number of serverless architecture patterns.

How's serverless different?

Serverless and cloud. On the surface, the promise of serverless computing sounds similar to the original promise of cloud computing: helping developers abstract away from servers, focus on writing code, and avoid issues related to under- and over-provisioning of capacity, operating system patches, and so on. So what's new in serverless computing? To answer this question, it is important to remember that cloud computing defines three service models: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). Since serverless fits the definition of a PaaS[1], it offers many of the same benefits. However, unlike Cloud Foundry, OpenShift, Heroku, and other traditional PaaSes focused on supporting long-running applications and services, serverless frameworks offer a new kind of platform for running short-lived processes and functions, also called microflows.
The distinction between long-running processes and microflows is subtle but important. When started, long-running processes wait for an input, execute some code when an input is received, and then continue waiting. In contrast, microflows are started once an input is received, execute some code, and are terminated by the platform after the code in the microflow finishes executing. One way to describe microflows is to say that they are reactive, in the sense that they react to incoming requests and release resources after finishing work.
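To make the shape of a microflow concrete, here is a minimal sketch written as an OpenWhisk JavaScript action: the platform invokes main when a request arrives and is free to reclaim all resources once the returned result is delivered. The greeting logic is just a placeholder.

// A minimal microflow sketch as an OpenWhisk JavaScript action.
// The platform calls main() in reaction to an incoming request and
// may reclaim all resources after the result is returned.
function main(params) {
    var name = params.name || 'stranger';   // input arrives as a JSON object
    return { greeting: 'Hello, ' + name };  // returning ends the microflow
}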
Serverless and microservices. Today, an increasing number of cloud-based applications built to run on PaaSes follow a cloud-native, microservices architecture[2]. Unlike microflows, microservices are long-running processes designed to continuously require server capacity (see microservices deployment patterns[3]) while waiting for a request.
For example, a microservice deployed to a container-based PaaS (e.g. Cloud Foundry) consumes memory and some fraction of the CPU even when not servicing requests. In most public cloud PaaSes, this continuous claim on CPU and memory resources translates directly into account charges. Further, microservice implementations may have memory leak bugs that result in growing memory usage the longer a microservice has been running and the more requests it has serviced. In contrast, microflows are designed to be launched on demand, upon the arrival of a specific type of request to the PaaS hosting the microflow. After the code in the microflow finishes executing, the PaaS is responsible for releasing any resources allocated to the microflow at runtime, including memory. Although in practice the hosting PaaS may not fully release memory resources, in order to preserve a reusable, "hot" copy of a microflow for better performance, the PaaS can still prevent runaway memory leaks by monitoring the microflow's memory usage and restarting it.
Microflows naturally complement microservices by providing a means for microservices to communicate asynchronously, as well as to execute one-off tasks, batch jobs, and other operations described later in this post in terms of serverless architecture patterns.
Serverless and ETL. Some may argue that serverless architecture is just a new buzzword for the well-known Extract-Transform-Load (ETL) technologies. The two are related; in fact, AWS advertises its serverless computing service, Lambda, as a solution for ETL-type problems. However, unlike microflows, ETL applications are implicitly about data: they focus on a variety of data-specific tasks, like import, filtering, sorting, transformation, and persistence. Serverless applications are broader in scope: they can extract, transform, and load data (see Part 3), but they are not limited to these operations. In practice, microflows (serverless applications) are as much about data operations as they are about calling services to execute operations like sending a text message (see Part 1 and Part 2) or changing the temperature on an Internet-of-Things enabled thermostat. In short, serverless architecture patterns are not the same as ETL patterns.

Serverless architecture patterns

The following is a non-exhaustive and non-mutually exclusive list of serverless computing patterns. The patterns are composable, in the sense that a serverless application may implement just one of them, as in the examples in Part 1 and Part 2, or may combine any number of them, as in the example in Part 3.
The command pattern describes serverless computing applications designed to orchestrate service requests to one or more services. The requests, which may be handled by microservices, can target a spectrum of services: business services (for example, sending a text message to a customer), application services (such as handling a webhook call), and infrastructure services (for example, provisioning additional virtual servers to deploy an application).
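As a minimal sketch of the command pattern, the following OpenWhisk-style JavaScript action does nothing but forward a request to an external service and report the outcome. The sendSms endpoint and the payload fields are hypothetical stand-ins for a business service like the Twilio-based one from Part 1.

var request = require('request');

// Command pattern sketch: the microflow orchestrates a request to an
// external service. The URL and payload fields are hypothetical.
function main(params) {
    return new Promise(function (resolve, reject) {
        request.post({
            url: 'https://example.com/api/sendSms',  // hypothetical business service
            json: { to: params.to, msg: params.msg }
        }, function (err, response, body) {
            if (err) return reject({ error: err.toString() });
            resolve({ status: response.statusCode, result: body });
        });
    });
}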
The enrich pattern is described in terms of the V's framework popularized by Gartner and other IT vendors to describe the qualities of Big Data[4]. The framework provides a convenient way to describe the features of serverless computing applications (microflows) that are focused on data processing. The enrich pattern increases the Value of a microflow's input data by performing one or more of the following (see the sketch after this list):
  • improving data Veracity, by verifying or validating the data
  • increasing the Volume of the data by augmenting it with additional, relevant data
  • changing the Variety of the data by transforming or transcoding it
  • accelerating the Velocity of the data by splitting it into chunks and forwarding the chunks, in parallel, to other services
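Here is a minimal sketch of the enrich pattern: the action below improves Veracity with an input check and increases the Volume of the data by augmenting it with geocoordinates, loosely modeled on the geocoding example in Part 3. The geocoding endpoint and its response fields are assumptions.

var request = require('request');

// Enrich pattern sketch: validate the input (Veracity), then augment
// it with additional relevant data (Volume). The endpoint is hypothetical.
function main(params) {
    if (!params.address || !params.city) {
        return { error: 'address and city are required' };  // Veracity check
    }
    return new Promise(function (resolve, reject) {
        request.get({
            url: 'https://example.com/api/geocode',  // hypothetical enrichment service
            qs: { address: params.address, city: params.city },
            json: true
        }, function (err, response, body) {
            if (err) return reject({ error: err.toString() });
            params.lat = body.lat;  // augment the original input
            params.lon = body.lon;
            resolve(params);
        });
    });
}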
The persist pattern describes applications that more closely resemble traditional ETL apps than is the case with the other two patterns. When a microflow is based solely on this pattern, the application acts as an adapter or a router, transforming input data arriving at the microflow's service endpoint into records in one or more external data stores, which can be relational databases, NoSQL databases, or distributed in-memory caches. However, as illustrated by the example in Part 3, applications often use this pattern in conjunction with the other patterns, processing input data through the enrich or command patterns and then persisting the data to a data store.
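Taken alone, the persist pattern reduces to mapping the input JSON onto an insert against an external data store. A minimal sketch follows, assuming a Postgres table like the one created in the walkthrough later in this post, the legacy node-postgres (pg) connection-pool API, and a pre-bound connString action parameter (all assumptions):

var pg = require('pg');

// Persist pattern sketch: transform the incoming JSON into a row in an
// external Postgres database. Table and column names are assumptions
// mirroring the example later in this post.
function main(params) {
    return new Promise(function (resolve, reject) {
        pg.connect(params.connString, function (err, client, done) {
            if (err) return reject({ error: err.toString() });
            client.query(
                'INSERT INTO address (address, city, lat, lon) VALUES ($1, $2, $3, $4)',
                [params.address, params.city, params.lat, params.lon],
                function (err, result) {
                    done();  // release the connection back to the pool
                    if (err) return reject({ error: err.toString() });
                    resolve({ inserted: result.rowCount });
                });
        });
    });
}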

IBM Bluemix Node.js app with OpenWhisk for serverless geocoding of postal addresses and saving to Compose Postgres

Until recently, platform as a service (PaaS) clouds offered competing approaches to implementing traditional Extract-Transform-Load (ETL) style workloads in cloud computing environments. Vendors like IBM, AWS, and Google are starting to support serverless computing in their clouds as a way to handle ETL and other stateless, task-oriented applications. Building on the examples from Part 1 and Part 2, which described serverless applications for sending text messages, this post demonstrates how an OpenWhisk action can be used to validate unstructured data, add value to the data using third-party services and APIs, and persist the resulting higher-value data in an IBM Compose managed database server. The stateless, serverless action executed by OpenWhisk is implemented as a Node.js app packaged in a Docker container.

Getting started

To run the code below you will need to sign up for the following services: IBM Bluemix, IBM Compose, Docker Hub, and the Pitney Bowes geocoding API.
NOTE: before proceeding, configure the following environment variables from your command line. Use your Docker Hub username for the USERNAME variable and your Pitney Bowes application ID for the PBAPPID variable.
export USERNAME=''
export PBAPPID=''

Create a Postgres database in IBM Compose

When signing up for a Compose trial, make sure that you choose Postgres as your managed database.
Once you are done with the Compose sign-up process and your Postgres database deployment has completed, open the Deployments tab of the Compose portal and click on the link for your Postgres instance. You may already have a default database called compose in the deployment. To check that this database exists, click on the sub-tab called Browser and verify that there is a link to a database called compose. If the database does not exist, you can create one using the corresponding button on the right.
Next, open the database by clicking on the compose database link and choose the sub-tab named SQL. At the bottom of the SQL textbox, add the following CREATE TABLE statement and click the Run button.
CREATE TABLE address (address text, city text, state text, postalCode text, country text, lat float, lon float);
The output at the bottom of the screen should contain a "Command executed successfully" response.
You also need to export the connection string for your database as an environment variable. Open the Deployments tab, then the Overview sub-tab, and copy the entire connection string with the credentials included. You can reveal the credentials by clicking on the Show / Change link next to the password.
Insert the full connection string between the single quotes below and execute the command.
export CONNSTRING=''
NOTE: This connection string will be needed at a later step when configuring your OpenWhisk action.

Create a Cloudant document database in IBM Bluemix

Download the Cloud Foundry (cf) command line interface for your operating system from https://github.com/cloudfoundry/cli/releases and then install it.
From your command line type in
cf login -a api.ng.bluemix.net
to authenticate with IBM Bluemix and then enter your Bluemix email, password, as well as the deployment organization and space as prompted.
To export your selection of the deployment organization and space as environment variables for the future configuration of the OpenWhisk action:
export ORG=`cf target | grep 'Org:' | awk '{print $2}'`
export SPACE=`cf target | grep 'Space:' | awk '{print $2}'`
To create a new Cloudant database, run the following commands from your console
cf create-service cloudantNoSQLDB Shared cloudant-deployment

cf create-service-key cloudant-deployment cloudant-key

cf service-key cloudant-deployment cloudant-key
The first command creates a new Cloudant deployment in your IBM Bluemix account, the second assigns a set of credentials for your account to the Cloudant deployment. The third command should output a JSON document similar to the following.
{
 "host": "d5695abd-d00e-40ef-1da6-1dc1e1111f63-bluemix.cloudant.com",
 "password": "5555ee55555a555555c8d559e248efce2aa9187612443cb8e0f4a2a07e1f4",
 "port": 443,
 "url": "https://"d5695abd-d00e-40ef-1da6-1dc1e1111f63-bluemix:5555ee55555a555555c8d559e248efce2aa9187612443cb8e0f4a2a07e1f4@d5695abd-d00e-40ef-1da6-1dc1e1111f63-bluemix.cloudant.com",
 "username": "d5695abd-d00e-40ef-1da6-1dc1e1111f63-bluemix"
}
You will need to put these Cloudant credentials in environment variables to create a database and populate the database with documents. Insert the values from the returned JSON document in the corresponding environment variables in the code snippet below.
export USER=''
export PASSWORD=''
export HOST=''
After the environment variables are correctly configured you should be able to create a new Cloudant database by executing the following curl command
curl https://$USER:$PASSWORD@$HOST/address_db -X PUT
On successful creation of a database you should get back a JSON response that looks like this:
{"ok":true}

Clone the OpenWhisk action implementation

The OpenWhisk action is implemented as a Node.js application that will be packaged as a Docker image and published to Docker Hub. You can clone the code for the action from GitHub by running the following from your command line
git clone https://github.com/osipov/compose-postgres-openwhisk.git
This will create a compose-postgres-openwhisk folder in your current working directory.
Most of the code behind the action is in the server/service.js file, in the functions listed below. As the function names suggest, once the action is triggered with a JSON object containing address data, the process first queries the Pitney Bowes geocoding service to validate the address and to obtain its latitude and longitude coordinates. Next, the process retrieves a connection to the Compose Postgres database, runs a SQL INSERT statement to put the address along with the coordinates into the database, and returns the connection to the connection pool.
queryPitneyBowes
connectToCompose
insertIntoCompose
releaseComposeConnection
The code to integrate with the OpenWhisk platform is in the server/app.js file. Once executed, the code starts a server on port 8080 and listens for HTTP POST requests to the server's /init and /run endpoints. Each of these endpoints delegates to the corresponding method implementation in server/service.js. The init method simply logs its invocation and returns an HTTP 200 status code, as expected by the OpenWhisk platform. The run method executes the process described above to query for geocoordinates and insert the retrieved data into Compose Postgres.
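The shape of that integration code is roughly as follows. This is a condensed sketch of the OpenWhisk Docker action contract rather than the repository's exact contents, with service standing in for server/service.js:

var express = require('express');
var bodyParser = require('body-parser');
var service = require('./service');  // queryPitneyBowes, insertIntoCompose, ...

var app = express();
app.use(bodyParser.json());

// OpenWhisk calls /init once when the action container is started
app.post('/init', function (req, res) {
    console.log('init called');
    res.status(200).send('OK');
});

// OpenWhisk calls /run with the action parameters on every invocation
app.post('/run', function (req, res) {
    service.run(req.body.value, function (err, result) {
        if (err) return res.status(500).json({ error: String(err) });
        res.status(200).json(result);  // becomes the activation result
    });
});

app.listen(8080);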

Build and package the action implementation in a Docker image

If you don't have Docker installed, follow the Docker installation instructions for your operating system. Note that if you are using Windows or OS X, you will want to install Docker Toolbox.
Make sure that your Docker Hub account is working correctly by trying to login using
docker login -u $USERNAME
You will be prompted for your Docker Hub password.
Change your working directory to compose-postgres-openwhisk and execute the following commands to build the Docker image with the Node.js-based action implementation and to push the image to Docker Hub.
docker build -t $USERNAME/compose .
docker push $USERNAME/compose


Use your browser to log in to https://hub.docker.com after the docker push command completes. You should be able to see the compose image in the list of your Docker Hub images.

Create a stateless, Docker-based OpenWhisk action

To get started with OpenWhisk, download and install the OpenWhisk command line interface (wsk) using the instructions in the OpenWhisk documentation on Bluemix.
Configure OpenWhisk to use the same Bluemix organization and space as your Cloudant instance by executing the following from your command line
wsk property set --namespace ${ORG}_${SPACE}
If your $ORG and $SPACE environment variables are not set, refer back to the section on creating a Cloudant database.
Next update the list of packages by executing
wsk package refresh
One of the bindings listed in the output should be named Bluemix_cloudant-deployment_cloudant-key
The following commands configure your OpenWhisk instance to run the action whenever a new document is placed in the Cloudant database.
The first command sets up a Docker-based OpenWhisk action called composeInsertAction, implemented using the $USERNAME/compose image from Docker Hub. The second binds the Compose connection string and the Pitney Bowes application ID to the action as default parameters. The third creates a trigger on the Cloudant database's changes feed, and the fourth creates a rule connecting the trigger to the action.
wsk action create --docker composeInsertAction $USERNAME/compose
wsk action update composeInsertAction --param connString "$CONNSTRING" --param pbAppId "$PBAPPID"
wsk trigger create composeTrigger --feed /${ORG}_${SPACE}/Bluemix_cloudant-deployment_cloudant-key/changes --param includeDoc true --param dbname address_db
wsk rule create --enable composeRule composeTrigger composeInsertAction

Test the serverless computing action by creating a document in the Cloudant database

Open a separate console window and execute the following command to monitor the result of running the OpenWhisk action
wsk activation poll

In another console, create a document in Cloudant using the following curl command
curl https://$USER:$PASSWORD@$HOST/address_db -X POST -H "Content-Type: application/json" -d '{"address": "1600 Pennsylvania Ave", "city": "Washington", "state": "DC", "postalCode": "20006", "country": "USA"}'
On success, you should see in the console running wsk activation poll a response similar to the following
[run] 200 status code result
{
  "command": "SELECT",
  "rowCount": 1,
  "oid": null,
  "rows": [
    {
      "address": "1600 Pennsylvania Ave",
      "city": "Washington",
      "state": "DC",
      "postalcode": "20006",
      "country": "USA",
      "lat": 38.8968999990778,
      "lon": -77.0408
    }
  ],
  "fields": [
    {
      "name": "address",
      "tableID": 16415,
      "columnID": 1,
      "dataTypeID": 25,
      "dataTypeSize": -1,
      "dataTypeModifier": -1,
      "format": "text"
    },
    {
      "name": "city",
      "tableID": 16415,
      "columnID": 2,
      "dataTypeID": 25,
      "dataTypeSize": -1,
      "dataTypeModifier": -1,
      "format": "text"
    },
    {
      "name": "state",
      "tableID": 16415,
      "columnID": 3,
      "dataTypeID": 25,
      "dataTypeSize": -1,
      "dataTypeModifier": -1,
      "format": "text"
    },
    {
      "name": "postalcode",
      "tableID": 16415,
      "columnID": 4,
      "dataTypeID": 25,
      "dataTypeSize": -1,
      "dataTypeModifier": -1,
      "format": "text"
    },
    {
      "name": "country",
      "tableID": 16415,
      "columnID": 5,
      "dataTypeID": 25,
      "dataTypeSize": -1,
      "dataTypeModifier": -1,
      "format": "text"
    },
    {
      "name": "lat",
      "tableID": 16415,
      "columnID": 6,
      "dataTypeID": 701,
      "dataTypeSize": 8,
      "dataTypeModifier": -1,
      "format": "text"
    },
    {
      "name": "lon",
      "tableID": 16415,
      "columnID": 7,
      "dataTypeID": 701,
      "dataTypeSize": 8,
      "dataTypeModifier": -1,
      "format": "text"
    }
  ],
  "_parsers": [
    null,
    null,
    null,
    null,
    null,
    null,
    null
  ],
  "rowAsArray": false
}

Use IBM Bluemix, NoSQL Cloudant, OpenWhisk, Docker, and Twilio to send text messages based on a Cloudant feed

This post is Part 2 in a series on serverless computing. The last post described how to build a simple but useful text messaging application written in Python, packaged in a Docker image on Docker Hub, and launched using the OpenWhisk serverless computing framework. The app was implemented to be entirely stateless, which is common in serverless computing but can be limiting for many practical use cases.
For example, applications that send text messages may need to make a record of the text message contents, the date and time when the message was sent, and other useful state information. This post will describe how to extend the application built in Part 1 to persist the text message metadata in Cloudant, a CouchDB-based JSON document database available from IBM Bluemix. Since OpenWhisk integrates with Cloudant, it is possible to set up OpenWhisk to automatically trigger a Docker-based action to send the SMS once the text message contents are in Cloudant. An overview of the process is shown in the following diagram.
[Diagram: a document added to Cloudant triggers the OpenWhisk action, which sends the text message]

Before you start

Make sure that you have completed the steps in Part 1 of the series and have a working textAction in OpenWhisk that can send text messages using Twilio. You will also need to make sure you are registered for IBM Bluemix. To sign up for a 30-day trial Bluemix account, register here: https://console.ng.bluemix.net/registration/
Next, download a Cloud Foundry command line interface for your operating system using the following link
https://github.com/cloudfoundry/cli/releases
and then install it.

Create a Cloudant deployment in IBM Bluemix

In your console, type in

cf login -a api.ng.bluemix.net

to authenticate with IBM Bluemix and then enter your Bluemix email, password, as well as the deployment organization and space as prompted.
To export your selection of the deployment organization and space as environment variables for configuration of the OpenWhisk action:

export ORG=`cf target | grep 'Org:' | awk '{print $2}'`
export SPACE=`cf target | grep 'Space:' | awk '{print $2}'`
To create a new Cloudant database, run the following commands from your console

cf create-service cloudantNoSQLDB Shared cloudant-deployment
cf create-service-key cloudant-deployment cloudant-key
cf service-key cloudant-deployment cloudant-key
The first command creates a new Cloudant deployment in your IBM Bluemix account, the second assigns a set of credentials for your account to the Cloudant deployment. The third command should output a JSON document similar to the following.

{
 "host": "d5555abd-d00e-40ef-1da6-1dc1e1111f63-bluemix.cloudant.com",
 "password": "5555ee55555a555555c8d559e248efce2aa9187612443cb8e0f4a2a07e1f4",
 "port": 443,
 "url": "https://d5695abd-d00e-40ef-1da6-1dc1e1111f63-bluemix:5555ee55555a555555c8d559e248efce2aa9187612443cb8e0f4a2a07e1f4@d5695abd-d00e-40ef-1da6-1dc1e1111f63-bluemix.cloudant.com",
 "username": "d5695abd-d00e-40ef-1da6-1dc1e1111f63-bluemix"
}
You will need to put these Cloudant credentials in environment variables to create a database and populate the database with documents. Insert the values from the returned JSON document in the corresponding environment variables in the code snippet below.

export USER=''
export PASSWORD=''
export HOST=''
After the environment variables are correctly configured you should be able to create a new Cloudant database by executing the following curl command

curl https://$USER:$PASSWORD@$HOST/sms -X PUT
On successful creation of a database you should get back a JSON response that looks like this:

{"ok":true}

Integrate Cloudant with OpenWhisk rules and triggers

Configure OpenWhisk to use the same Bluemix organization and space as your Cloudant instance by executing the following from your command line

wsk property set --namespace ${ORG}_${SPACE}
If your $ORG and $SPACE environment variables are not set, refer back to the section on creating the Cloudant database.
Next update the list of packages by executing

wsk package refresh
One of the bindings listed in the output should be named Bluemix_cloudant-deployment_cloudant-key
Run the following commands to configure OpenWhisk to start the action whenever a new document is placed in the Cloudant sms database.

wsk trigger create textTrigger --feed /${ORG}_${SPACE}/Bluemix_cloudant-deployment_cloudant-key/changes --param includeDoc true --param dbname sms
wsk rule create --enable textRule textTrigger textAction
The first command creates a trigger that listens for changes to the Cloudant database. The second command creates a rule specifying that whenever the trigger is activated by a document in Cloudant, the text messaging action (the textAction created in the previous post) is to be invoked.

Test the OpenWhisk trigger by logging the text message to the Cloudant database

Open a separate console window and execute the following command to monitor the OpenWhisk log

wsk activation poll
In another console, create a document in Cloudant using the following curl command, replacing the to value to specify the phone number and the msg value to specify the text message contents:

curl https://$USER:$PASSWORD@$HOST/sms -X POST -H "Content-Type: application/json" -d '{"from": "'"$TWILIO_NUMBER"'", "to": "867-5309", "msg": "Jenny I got your number"}'
On success, you should see in the console running wsk activation poll a response similar to the following
{
    "status": [
        {
            "success": "true"
        },
        {
            "message_sid": "SM5ecc4ee8c73b4ec29e79c0f1ede5a4c8"
        }
    ]
}

An IBM Bluemix, OpenWhisk, Docker, Python, and Twilio app to get started with serverless computing on IBM Cloud

Once Forbes starts to cover serverless computing[1] you know that it is time to begin paying attention. Today, there are many frameworks that can help you get started with serverless computing, for example OpenWhisk[2], AWS Lambda, and Google Cloud Functions.
This post will help you build a simple but useful serverless computing application with OpenWhisk on IBM Cloud. The app is implemented in Python with Flask and can help you send text messages via the Twilio SMS API[3].
If you would like to skip the introductions and geek out with the code, you can access it in the following GitHub repository: https://github.com/osipov/openwhisk-python-twilio. Otherwise, read on.
So why OpenWhisk? One reason is that it stands out thanks to its elegant, Docker-based architecture, which enables a lot more flexibility than competing frameworks from AWS and Google. For example, AWS Lambda forces developers to choose among Python, Java, and JavaScript[4] for the implementation of serverless computing functions. Google Cloud Functions are JavaScript-only and must be packaged as Node.js modules[5].
OpenWhisk's use of Docker means that any server side programming language supported by Docker can be used for serverless computing. This is particularly important for organizations that target hybrid clouds, environments where legacy, on-premise code needs to be integrated with code running in the cloud. Also, since Docker is a de facto standard for containerizing applications, serverless computing developers don't need to learn yet another packaging mechanism to build applications on IBM Cloud.
You can use the sample app described in this post to figure out whether OpenWhisk works for you.

Overview

The post will walk you through the steps to clone existing Python code and package it as a Docker image. Once the image is in Docker Hub, you will create an OpenWhisk action[6] that knows how to launch a Docker container with your code. To send a text message, you will use OpenWhisk's command line interface to pass it the text message contents. In response, OpenWhisk instantiates the Docker container holding the Python app which connects to Twilio's text messaging service and sends an SMS.

Before you start

The OpenWhisk serverless computing environment is hosted on IBM Bluemix. To sign up for a 30-day trial Bluemix account, register here: https://console.ng.bluemix.net/registration/
This app uses Twilio for text messaging capabilities. To sign up for a Twilio account, visit https://www.twilio.com/try-twilio. Once you have a Twilio account, make sure that you also obtain the account SID and authentication token, and register a phone number with SMS capability.
OpenWhisk uses Docker Hub to execute Docker based actions. You will need a Docker Hub account; to sign up for one use: https://hub.docker.com
NOTE: To make it easier to use the instructions, export your various account settings as environment variables:
  • your Docker Hub username as DOCKER_USER
  • your Twilio Account SID as TWILIO_SID
  • your Twilio Auth Token as TWILIO_TOKEN
  • your Twilio SMS capable phone number as TWILIO_NUMBER
export DOCKER_USER=''
export TWILIO_SID=''
export TWILIO_TOKEN=''
export TWILIO_NUMBER=''

Clone the OpenWhisk action implementation

The OpenWhisk action is implemented as a Python Flask application which is packaged as a Docker image and published to Docker Hub. You can clone the code for the action from GitHub by running the following from your command line

git clone https://github.com/osipov/openwhisk-python-twilio.git
This will create an openwhisk-python-twilio folder in your current working directory.
All of the code for the OpenWhisk action is in the py/service.py file. There are two functions, init and run, that correspond to the Flask app routes /init and /run. The init function is called on an HTTP POST request and returns an HTTP 200 status code, as expected by the OpenWhisk platform. The run function verifies that an incoming HTTP POST request is a JSON document containing the Twilio configuration parameters and the contents of the text message. After configuring a Twilio client and sending the text message, the function returns an HTTP 200 status code and a JSON document with a success status message.

Build and package the action implementation in a Docker image

If you don't have Docker installed, follow the Docker installation instructions for your operating system. Note that if you are using Windows or OS X, you will want to install Docker Toolbox.
Make sure that your Docker Hub account is working correctly by trying to login using
docker login -u $DOCKER_USER
You will be prompted to enter your Docker Hub password.
Run the following commands to build the Docker image with the OpenWhisk action implementation and to push the image to Docker Hub.

cd openwhisk-python-twilio
docker build -t $DOCKER_USER/openwhisk .
docker push $DOCKER_USER/openwhisk
Use your browser to log in to https://hub.docker.com after the docker push command completes. You should be able to see the openwhisk image in the list of your Docker Hub images.

Create a stateless, Docker-based OpenWhisk action

To get started with OpenWhisk, download and install the OpenWhisk command line interface (wsk) using the instructions in the OpenWhisk documentation on Bluemix.
The following commands need to be executed to configure your OpenWhisk action instance:

wsk action create --docker textAction $DOCKER_USER/openwhisk
wsk action update textAction --param account_sid "$TWILIO_SID" --param auth_token "$TWILIO_TOKEN"
The first command sets up a Docker-based OpenWhisk action called textAction that is implemented using the $DOCKER_USER/openwhisk image from Docker Hub. The second command configures the textAction with the Twilio account SID and authentication token so that they don't need to be passed to the action execution environment on every action invocation.

Test the serverless computing action

Open a dedicated console window and execute

wsk activation poll
to monitor the result of running the OpenWhisk action.
In a separate console, execute the following command, replacing the to value to specify the phone number and the msg value to specify the text message contents:

wsk action invoke --blocking --result -p from "$TWILIO_NUMBER" -p to "867-5309" -p msg "Jenny I got your number" textAction
Upon successful action execution your to phone number should receive the text message and you should be able to see an output similar to the following:
{
  "status": [
    {
      "success": "true"
    },
    {
      "message_sid": "SM5ecc4ee8c73b4ec29e79c0f1ede5a4c8"
    }
  ]
}

Monday, December 15, 2014

Choose IBM’s Docker-based Container Service on Bluemix for your I/O intensive code

Roughly a week ago IBM announced a new Docker-based service[1] as part of the Bluemix PaaS[2]. It is still early, but the service looks very promising, especially for I/O-heavy workloads like databases and analytics. This post will help you create your own container instance running on Bluemix and provide some pointers on how to evaluate whether the I/O performance of the instances matches your application’s needs. It will also describe the nuances of using boot2docker[5] if you are running Mac OS X or Windows.

Even if you are not familiar with Docker, chances are you know about virtual machines. When you order a server hosted in a cloud, in most cases you get a virtual machine instance (a guest) running on a physical server (a host) in your cloud provider’s data center. There are many advantages in getting a virtual machine (as opposed to a physical server) from a cloud provider and arguably the top one is quicker delivery. Getting access to a physical server hosted in a data center usually takes hours while you can get a virtual machine in a matter of minutes. However, many workloads like databases and analytics engines are still running on physical servers because virtual machine hypervisors introduce a non-trivial penalty on I/O operations in guest instances.

Enter Linux Containers (LXC) and Docker. Instead of virtualizing the hardware (as is the case with traditional virtual machines), containers virtualize the operating system. Startup time for containers is as good as or better than for virtual machines (think seconds, not minutes), and the I/O overhead all but disappears. In addition, Docker makes it easier to manage both containers and their contents. Containers are not a panacea, and there are situations where virtual machines make more sense, but that’s a topic for another post.

In this post, you can follow along with the examples to learn whether I/O performance of Docker containers in IBM Bluemix matches your application’s needs. Before starting, make sure that you have access to Bluemix[3] and to the Container Service[4]. In short, once you have provisioned an instance of the Container Service, you should have received an email notifying you that you have been approved for access and you should be able to see an API key as described in the example here[1].

Getting Started


The instructions below describe how to use boot2docker[5] for access to a Docker installation. boot2docker deploys a VirtualBox guest with Tiny Core Linux to OS X, and you’ll ssh into this guest to access the docker CLI. The same approach should also work on Windows, although I have yet to try it.

Start of boot2docker specific instructions

Install boot2docker as described here[5]. Make sure that the following commands work correctly

boot2docker init
boot2docker start
$(boot2docker shellinit)
Use the following command to connect to your boot2docker Tiny Core Linux guest

boot2docker ssh

Install Python and the ice tool in your boot2docker guest to interface with the IBM Container Environment. The approach used in these steps to install Python is specific to Tiny Core Linux and shouldn’t be used on other distros.

tce-load -wi python
curl https://bootstrap.pypa.io/get-pip.py -o - | sudo python
curl https://bootstrap.pypa.io/ez_setup.py -o - | sudo python
curl -O https://static-ice.ng.bluemix.net/icecli-1.0.zip
sudo pip install icecli-1.0.zip

End of boot2docker specific instructions


If you are not using boot2docker, you should follow the standard ice CLI installation instructions[6]

Before proceeding, create a public/private key pair which you’ll use to connect to your container. Replace the email address below with yours. The examples below assume that you’ll save your public key file to ~/.ssh/id_rsa.pub


ssh-keygen -t rsa -C "<your_email@example.com>"

Make sure that you have provisioned an instance of the Container Service in Bluemix and copy/paste the API key into the command below. Details on how to obtain the API key are here[1]. Also make sure that you note the registry key you specified when provisioning the Containers service. You’ll need it later in the instructions.


ice login -k <api_key> -H https://api-ice.ng.bluemix.net/v1.0/containers -R registry-ice.ng.bluemix.net

The login command should complete with a Login Succeeded message.
Next, you will pull one of the base IBM Docker images and customize it with your own Docker file:


ice --local pull registry-ice.ng.bluemix.net/ibmnode

Once the image has finished downloading, you will create a Dockerfile that customizes the image with your newly created credentials (so you can ssh into it) and with sysbench scripts for performance testing.

Create a Dockerfile using your favorite editor and the following contents:


# Start from the IBM-provided Node.js base image
FROM registry-ice.ng.bluemix.net/ibmnode:latest
MAINTAINER Carl Osipov
# Authorize your public key for root ssh access to the container
COPY .ssh/id_rsa.pub /root/.ssh/
RUN cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
# Add the benchmark scripts and install sysbench
ADD *.sh /bench/
RUN apt-get update && apt-get install -y sysbench

Next, create io.sh with the following contents

#!/bin/sh
SIZE="$1"
sysbench --test=fileio --file-total-size=$SIZE prepare
sysbench --test=fileio --file-total-size=$SIZE --file-test-mode=rndrw --init-rng=on --max-time=30 --max-requests=0 run
sysbench --test=fileio --file-total-size=$SIZE cleanup
And cpu.sh containing:
#!/bin/sh
PRIME="$1"
sysbench --test=cpu --cpu-max-prime=$PRIME run
Add execute permissions to both scripts:
chmod +x *.sh
At this point your custom Docker image is ready to be built. Run
ice --local build -t example/sysbench .
which should finish with a Successfully built message followed by an ID.

Push your custom Docker image to Bluemix


When you got access to the Container Service, you should have noticed a registry URL, which is shown right above your API key; for an example, see here[1]. The registry URL should end with the postfix you specified when provisioning the Containers service. In the commands below, replace <registry_id> to ensure you are specifying your registry URL.


ice --local tag example/sysbench registry-ice.ng.bluemix.net/<registry_id>/sysbench
ice --local push registry-ice.ng.bluemix.net/<registry_id>/sysbench
ice run -n sysbench registry-ice.ng.bluemix.net/<registry_id>/sysbench

After you have completed executing the commands above, your container should be running. You can verify that by executing
ice ps
Request a public IP address from the Container Service and note its value.
ice ip request
Bind the provided public IP address to your container instance with

ice ip bind <public_ip_address> sysbench
Now you can go ahead and ssh into the container using
ssh -i ~/.ssh/id_rsa root@<public_ip_address>
Once there, notice it is running Ubuntu 14.04
lsb_release -a
on a 32-core server with Intel(R) Xeon(R) E5-2650 v2 @ 2.60GHz CPUs:

cat /proc/cpuinfo
...
processor : 31
vendor_id : GenuineIntel
cpu family : 6
model : 62
model name : Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
Now you can also test out the I/O performance using

. /bench/io.sh 100M
Just for comparison, I ordered a Softlayer virtual machine (running on a Xen hypervisor) and ran the same Docker container and benchmark there. In my experience, the I/O benchmark results were roughly twice as good on the Container Service as on a Softlayer VM. You can also get a sense of relative CPU performance using
. /bench/cpu.sh 5000

Conclusions


Benchmarks are an artificial way of measuring performance, and better benchmark results don’t always mean that your application will run better or faster. However, benchmarks help you understand whether there is potential for better performance and can help you design or redesign your code accordingly.

In the case of the Container Service on IBM Bluemix, I/O benchmark results are significantly better than those from a Softlayer virtual machine. This shouldn’t be surprising, since the Container Service runs on bare-metal Softlayer servers. However, unlike hardware servers, containers can be delivered to you in seconds rather than the hours it takes to provision bare metal. This level of responsiveness and workload flexibility enables Bluemix application designers to create exciting web applications built on novel and dynamic architectures.

 

References

 

[1] https://developer.ibm.com/bluemix/2014/12/04/ibm-containers-beta-docker/
[2] https://console.ng.bluemix.net
[3] http://cloud.foundry.guru/?p=14
[4] https://console.ng.bluemix.net/#/store/orgGuid=5b4b9dfb-8537-48e9-91c9-d95027e2ed77&spaceGuid=c9e6fd6b-088d-4b2f-b47f-9ee229545c09&cloudOEPaneId=store&serviceOfferingGuid=3a79bdf2-a8d3-4b72-b70d-db1650124c73&fromCatalog=true
[5] http://docs.docker.com/installation/mac/
[6] https://www.ng.bluemix.net/docs/#services/Containers/index.html#container
[7] https://www.ibm.com/developerworks/community/blogs/1ba56fe3-efad-432f-a1ab-58ba3910b073/entry/ibm_containers_on_bluemix_quickstart?lang=en&utm_content=buffer894a7&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer

Monday, February 24, 2014

Top 10 reasons why you should choose IBM Bluemix for your Cloud Platform as a Service

This post is going to add some variety to all the technical content you've been seeing on this blog. So without further ado, here are the top 10 reasons why you should choose IBM Bluemix for your cloud Platform as a Service...

1. Partner with IBM to reach into the enterprise for go-to-market opportunities. If you are trying to grow your Software as a Service (API economy) business beyond its initial phases, you know that landing an enterprise customer can really make an impact on your bottom line[1]. IBM operates in more than 170 countries around the world and has direct and inside sales forces that reach out to the top companies in the world. Find a way to work with IBM to jointly pursue opportunities in the cloud and drive enterprise customers to your applications and services running on IBM Bluemix. For example, OnFarm[2], a startup, worked with IBM to receive ongoing coaching, access to technology, mentorship, and connections to venture capitalists, clients, and partners.

2. Leverage a pipeline of talent grown by IBM through university and developer outreach. Even if you have a phenomenal business plan, you will struggle to execute without the right talent in your company. Did you know that IBM University Outreach is working with US universities[3] to get students engaged with Bluemix? Combine that with developer-focused conferences like Innovate@InterConnect[4], along with many other outreach efforts, and you have a growing talent pool of development candidates for your company. If you are a student, or even just a developer interested in growing your skillset, you will want to know that Bluemix is built on Cloud Foundry[5], an open source platform as a service technology. This means that if you pick up IBM Bluemix skills you can apply them to any other Cloud Foundry deployment, including one that you build yourself[6].

3. Monetize your services through the IBM Bluemix services catalog. Shortly after the launch of Bluemix, many IBM business partners, including Ustream[7], started to offer services through the IBM Cloud Marketplace. You too can get a head start on your competitors by contributing your service to the IBM Bluemix catalog. Networked marketplaces like IBM Bluemix have a strong first mover advantage[8]. Develop on Bluemix, create a service plan[9], become an IBM Cloud Marketplace business partner[10], and make your service a category leader!

4. Build on a responsive, global cloud infrastructure with the world’s largest web-hosting market share. Did you know that Softlayer, IBM's cloud Infrastructure as a Service, hosts more websites than GoDaddy[11]? You shouldn't be surprised if you've been reading about the reliability of the IBM infrastructure. IBM uses Softlayer not just for Bluemix, but also for customers like the Australian Open[12] tennis tournament to ensure high availability and a great user experience. In contrast, offerings from Amazon Web Services have suffered numerous outages over the past year[13][14], including one as recently as Christmas time[15].

5. Meet enterprise-class quality of service requirements in the cloud. Not all applications are created equal. Some need high-availability, high-performance relational databases. Others must satisfy complex regulatory compliance requirements. It is easy to become discouraged when you have to say no to a government or an enterprise customer because your service is missing a capability that you cannot afford to build yourself. Bluemix gives you a roadmap for how to evolve your application to introduce enterprise-class capabilities[16] that can help you pursue lucrative opportunities in the large enterprise and government space.

6. Support migration of complex legacy applications to the cloud with IBM’s flexible infrastructure. IBM Softlayer, the underlying infrastructure for Bluemix, has been highlighted by Gartner[17] for its range of features: from bare-metal to cloud servers, all managed through a common API. This broad supply of capabilities is why gaming companies[18] like Multiplay and KUULUU, with complex demands for their resource-intensive games, moved to the same infrastructure as Bluemix.

7. Deliver innovative services by working with IBM Research & Development (R&D) labs. IBM Bluemix is the only Platform as a Service featuring IBM Watson technology, a recent Jeopardy winner[19]. In addition to Watson, you can take advantage of other services created by IBM R&D, including log analysis, auto-scaling, and many others. IBM labs also have customer advocates who can explore how IBM can help you with your opportunities.

8. Discover opportunities for innovation from a palette of IBM business partner services. IBM business partners have adopted Bluemix and made their services available for your use. For example, Zend Technologies provides support for their enterprise-class PHP container[20], which can be deployed to IBM Bluemix. Access to the IBM business partner ecosystem[21] can help you build your application faster, regardless of the technology, industry, or geography you are targeting.

9. Create trusted and secure offerings by reusing services reviewed and certified by IBM. A company can develop a service for Bluemix without any restrictions from IBM. However, for a service to be posted to the Bluemix catalog, IBM has to review and certify the service based on IBM's internal guidelines. If you are interested in this process, you can find out more about IBM's terms and conditions on our partner landing page[22].

10. Depend on IBM’s history of industry leadership for what is essential to your customers. IBM has been around for more than a century, supporting the technology needs of companies around the world. Some have said that Western civilization runs on the mainframe technology[23] that IBM has developed and maintained since the 1960s. With IBM's commitment to Cloud Foundry[24], you can be certain that IBM Bluemix is the right foundation to develop applications for your most valuable customers.

References

[1] http://www.quora.com/Whats-the-best-way-for-a-B2B-startup-to-acquire-enterprise-customers
[2] http://www.onfarm.com/2013/10/21/onfarm-wins-ibm-smartcamp-north-america/
[3] http://www.ece.ncsu.edu/news/24872
[4] http://www-01.ibm.com/software/rational/innovate/
[5] http://en.wikipedia.org/wiki/Cloud_Foundry
[6] http://docs.cloudfoundry.org/deploying/run-local.html
[7] https://www.youtube.com/watch?v=niNz1yVr_hE
[8] http://en.wikipedia.org/wiki/First-mover_advantage
[9] http://docs.cloudfoundry.org/services/services/catalog-metadata.html
[10] https://www.marketplace.ibmcloud.com/joinnow/
[11] https://hostcabi.net/hosting_infographic
[12] http://www-03.ibm.com/press/us/en/pressrelease/42981.wss
[13] http://www.usatoday.com/story/tech/2013/09/13/amazon-cloud-outage/2810257/
[14] http://blogs.wsj.com/digits/2013/08/25/amazon-web-services-outage-cuts-off-big-names/
[15] http://aws.amazon.com/message/680587/
[16] http://www.ibm.com/cloud-computing/us/en/products/managed-cloud.html#cont-highlight
[17] http://www.gartner.com/technology/reprints.do?id=1-1UKQQA6&ct=140528&st=sb
[18] http://www-03.ibm.com/press/us/en/pressrelease/42928.wss
[19] http://www-03.ibm.com/press/us/en/presskit/27297.wss
[20] http://www.zend.com/en/products/server/
[21] https://www-304.ibm.com/partnerworld/wps/bplocator/landing
[22] http://www.ibm.com/cloud-computing/us/en/partner-landing.html
[23] https://www.youtube.com/watch?v=_0uYz75BHBg
[24] http://www-03.ibm.com/press/us/en/pressrelease/41569.wss