A Roadmap for SOA Development and Delivery

This post is part of a series on SOA Development and Delivery.

In this post I will present a roadmap and a target state for SOA Development and Delivery.  This will serve as the basis for an extended open ‘proof of concept’ that we will work through together, over a series of posts, to implement and prove this approach.

Let’s talk first about our target state – the goal we want to achieve; then we will come back and look at the steps on the journey.

The Vision – Continuous Delivery

Continuous Delivery is a set of practices, supported by tools and automation, that is focused on answering the question: ‘How much risk is associated with deploying this new change into production?’

It involves automating everything that needs to happen between a developer committing a change and the release of the software containing that change into production, including:

  • building the software
  • testing the software, and
  • managing the configuration of the environments

The outcome we want to achieve is that we can automate all of our:

  • build
  • component-level test
  • environment provisioning (for development, test and production environments), and
  • acceptance test

for any ‘application’ that consists of ‘projects’ that are targeted to Oracle SOA Suite runtimes (WebLogic Server, ADF, SOA, BPM, OSB, etc.) in a consistent way, using the same tools and techniques commonly applied for other styles of development.

Major Themes for the vision include:

  • simplification
  • standardization
  • adoption of convention over configuration
  • customization capability
  • intuitive/idiomatic (things work the way you think they would/should)

We want to improve quality, visibility, cycle time and repeatability.  We want to enable better testing and more reporting.  We want to reduce risk and cost.

How do we get there?  The Journey

The journey that we are going to take consists of a few major steps; some of these can be taken independently of the others, and some need to be done together.  Let’s talk about what we need to realise continuous delivery for SOA projects:

  • continuous integration
  • provisioning and configuration management
  • deployment pipelines
  • testing and quality

We will discuss each of these in detail in the following sections.

Continuous Integration

We have talked about this before, so let’s just recap.  Continuous Integration is a set of processes and practices aimed at improving software quality – essentially by testing early and often, to identify defects while they are small and easier to correct.

Common practices adopted in continuous integration include:

  • all developers commit to the trunk regularly (at least daily)
  • the software is built on every commit
  • the software is tested on every build
  • building and testing are automated
  • feedback is given to developers (and other stakeholders) quickly
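
In practice, these practices boil down to the CI server checking out the code and running the build and tests on every commit.  For a Maven project, the job a server like Hudson runs is essentially just this (a minimal sketch, assuming a standard Maven project layout):

    # What a CI job typically runs on every commit (minimal sketch;
    # assumes a standard Maven project layout)
    mvn clean verify    # compile, run the unit tests, and package the artifact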

Let’s consider some of the enabling technologies.

Version Control

We have talked about this already too, so let’s recap.  Version Control systems allow us to store a history of changes over time – who changed what, when, and why.  They provide the ability to isolate changes by developing in multiple streams (branches), and we can trigger our builds based on commits to the version control system.  They allow teams of developers to work on the application safely, and they permit the sharing of common artifacts.

In my opinion, the two most common and relevant version control systems we see in SOA environments today are Subversion and git.

Subversion is arguably the de facto standard for version control today.  We have good integration with development tools like JDeveloper, build automation tools like Maven, and continuous integration tools like Hudson.  It is widely understood and has excellent platform support.  It is a logical choice.

But Subversion does present some challenges.  Increasingly, we are seeing collaborative development approaches – organizations outsourcing some of the development to a third party (or multiple third parties) and having teams of developers, testers, build engineers and QA people in different locations.  It is also much more common these days for people to work from home.  Subversion forces developers to have network connectivity to the server for operations like committing and branching, and this can prove challenging in these environments.

Distributed version control systems address this new need.  Arguably the best, or at least the most widely used, of the new distributed version control systems is git, developed by Linus Torvalds and used by many open source projects.

I think that it is time for organizations doing SOA development to take a good hard look at git and what it offers above and beyond Subversion, and to give some consideration to when to migrate.  I know some people will not migrate, but I feel that they will be in the minority – just like those people who never migrated from CVS to Subversion, I think the world has moved on.

So what does git give us?  Let’s expand that to also talk about gitlab, a popular self-hosted (on-premise) environment for git.

git allows developers to commit, branch, and so on while they are offline – a connection to the server is not required for these actions.  It also supports many new workflows: it does not enforce a single master ‘trunk’, but lets you have many distributed repositories that can push and pull commits, tags, branches, etc. to and from each other.
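
To make that concrete, here is a minimal sketch of the kind of workflow git enables – the branch and remote names are purely illustrative:

    git checkout -b feature/credit-check        # create a local branch; no server needed
    git commit -am "add credit check routing"   # commit locally, even offline
    git push origin feature/credit-check        # share the branch when connectivity returns
    git pull origin master                      # fetch and merge changes from the shared repository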

The standard git workflow is undoubtedly more complex than the standard Subversion workflow, but I think it is worth the effort and pain to make the move.  I know from personal experience on many projects using Subversion, Perforce, and other version control systems that git seems to provide (anecdotally) more flexibility, especially when things get messy.  The easy things are not quite as easy as in Subversion, but the hard things are easier, and in some cases possible where they would not be in Subversion or other non-distributed version control systems.

gitlab is a great tool that provides a web-based environment for interacting with git.  It lets you easily visualize projects, branches, merges, and so on, view and diff versions of artifacts, and it has integrated collaboration tools like a wiki, issue tracking, and more.  In my opinion, it dramatically increases the value of git by providing much-needed visibility and visualization tools.

In our POC, we will use git and gitlab.

Dependency Management and Build Automation

Maven is the obvious choice for us here: we have good support for it in the 12c products, it is possible to use it with the 11g products as well, and it is an area that is seeing more investment moving forward.  Maven allows us to define the dependencies between projects declaratively, and it manages the process of collecting those dependencies and simplifies the whole build process for us.
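
For example, declaring that a project depends on a shared artifact is as simple as adding a few lines to its pom.xml – the coordinates below are hypothetical placeholders:

    <!-- hypothetical coordinates for a shared artifact built by another team -->
    <dependency>
      <groupId>com.example.soa</groupId>
      <artifactId>common-schemas</artifactId>
      <version>1.0.0</version>
    </dependency>

Maven then takes care of fetching that artifact (and its own dependencies) from the repository at build time.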

Most of the other tools that we want to use also have good integration with Maven – things like continuous integration, quality tools, binary management, and so on.

We will use Maven in the POC.

Binary Management

This is an important area that we will discuss in more detail in a later post.  Once we have built binaries in our continuous integration server (using Maven), we need to put them somewhere and manage them.  This is (like most things) a little more complicated than it seems, and this is why a set of tools has grown up around binary management – things like Archiva, Artifactory and Nexus for example.

One very important tenet of continuous delivery is that you build the binary only once.  That means we build it, test it, check its quality, and then publish it to the binary repository.  Anything that happens later in the pipeline, like acceptance tests for example, just consumes the binary from the binary repository – we do not go back and rebuild it.  This gives us certainty that we are using the exact same code that was tested earlier in the pipeline.

In the POC, we will use Artifactory as our binary management tool.
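
Publishing with Maven then just means adding a distributionManagement section to the POM and running mvn deploy.  A minimal sketch – the host name is a placeholder, and libs-release-local is Artifactory’s conventional name for a local release repository:

    <distributionManagement>
      <repository>
        <id>releases</id>
        <!-- placeholder host; point this at your Artifactory instance -->
        <url>http://artifactory.example.com/artifactory/libs-release-local</url>
      </repository>
    </distributionManagement>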

Continuous Inspection

Now, automation is great, but if the things we are building automatically are garbage, then automation is not going to help.  So it is important that we include a way to measure quality.  There are some excellent tools in the Java space (PMD, Checkstyle, and FindBugs, for example) but not much in the SOA space, so this is an area we are going to need to look at.  One thing we can do right now, though, is agree to use a framework that allows us to execute various quality tools and consolidate and report on the results and trends.  Sonar is one such framework, and it is what we will use in our POC.
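
Sonar integrates nicely with Maven, so a typical analysis run from the CI server looks something like this (the Sonar host below is a placeholder):

    mvn clean install
    mvn sonar:sonar -Dsonar.host.url=http://sonar.example.com:9000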

Continuous Delivery

There are a number of tools that are starting to appear in this space, but it is arguably still going through its storming phase.  What we really care about in our POC is that we can build and execute flexible pipelines.  For the purposes of this activity, we are going to use Hudson, along with a couple of Hudson plugins and maybe even write a plugin or two of our own.

Provisioning

Arguably the two front runners in the DevOps provisioning space are Chef and Puppet.  Both offer a way to manage the configuration of a system over time, converging it to an approved model.  Both also use a (predominantly) pull-based model, where a client on each managed node connects back to a server periodically to check for updates to the model, though it is relatively straightforward to implement a push model.  That matters to us because we want to provision new environments on the fly and not have to wait around for them.

Some of the environments we want to create (for our acceptance tests) will require more than one node (machine) – for example, a cluster of SOA servers with the database on another node and a load balancer/proxy in front.  Chef and Puppet are pretty good at handling the configuration of an individual node, but they do not provide everything we really need for multi-node environments.

There are a lot of tools in the multi-node space, and that space is also arguably well and truly in the storming phase of development.  There is really no clear front runner, though there is a tool called Marionette Collective (MCollective) that works with both Chef and Puppet.  It was acquired by the Puppet folks, and I believe they use it to power their dashboards in Puppet Enterprise 3.

I would argue that the choice between Chef and Puppet is less important than the choice to use one of them at all, so I am not going to go into a lot of detailed comparison.  When I started working in this space, Chef seemed to have better support for using recipes across virtual, physical and cloud environments, and deeper support across Windows, Linux and Solaris (though I think Puppet has closed the gap now).  Some folks I had worked with also had a lot of experience using Chef in a pretty large cloud environment, so it seemed like a good choice.
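
To give a feel for the model, here is a minimal sketch of a Chef recipe – the package, paths and accounts are illustrative, not taken from any published cookbook:

    # declare the desired state; Chef acts only when the node differs from it
    package 'unzip'

    directory '/u01/app/oracle' do
      owner     'oracle'
      group     'oinstall'
      mode      '0755'
      recursive true
    end

The key idea is that the recipe declares desired state rather than scripting steps, so running it repeatedly is safe – Chef converges the node rather than blindly re-executing commands.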

For our POC we are going to use Chef and MCollective.

Acceptance Testing

First, let’s clear up our terminology.  I am going to use the term ‘acceptance test’ here to mean tests that we execute against artifacts that have been deployed to a (production-like) runtime environment, as opposed to tests that are executed against source code, object code, or undeployed binaries.

I am using this term on purpose for two reasons:

  1. I don’t think that it makes sense to draw the line between unit tests and integration tests – while I would argue that all unit tests could and should be executed as part of the build process, i.e. in the continuous integration environment, I don’t think that integration testing is so clear cut.  Some integration tests could be executed as part of the build, but others are more complex, or take longer to execute, or require more infrastructure than it is reasonable to provision (or wait for) in a build.  So I think that some integration tests should be executed in the acceptance test environment, i.e. after the binary has been built and published.
  2. There is a granularity mismatch between build and test.  We typically build small pieces like composites, web applications, etc. in our continuous integration environment.  But we want to test the system or the whole application – not just the pieces in isolation (we do test them in isolation during the build process, obviously).  So really our acceptance tests are running against a whole set of binaries that were produced by a group of jobs in our continuous integration environment.  And over the life of the project, more (or fewer) of the real artifacts will actually exist – at the start of the project we may be using a lot of mocks for our acceptance tests, but later we may switch them out for binaries we have built, or for test instances of systems we need to interact with, that have by then been provisioned.

There are also a whole bunch of different test tools that we can use to test different kinds of artifacts, and I don’t think it makes sense to try to find a single test tool to execute all of our tests.  So what is important to us, then, is to have a test framework that allows us to use any tool, and provides a way to consolidate the test results for reporting and trending.

Robot Framework is one such test framework.  It is backed by Nokia Siemens Networks, it has a Maven plugin, it is pretty easy to use, and it supports many of the kinds of test tools we would want to use out of the box.  So in our POC, we will use Robot as our test framework.
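
As a trivial sketch, a Robot test case reads almost like prose.  The example below uses Robot’s standard OperatingSystem library; the directory and file name are hypothetical placeholders:

    *** Settings ***
    Library    OperatingSystem

    *** Variables ***
    ${BINARY_DIR}    /tmp/binaries

    *** Test Cases ***
    Deployed Composite Binary Exists
        [Documentation]    Minimal illustrative check only
        File Should Exist    ${BINARY_DIR}/OrderProcessing_rev1.0.jar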

Customization and Configuration Management

Since we are going to build our binaries only once, customization is an important area for us to consider.  How are we going to target a binary to the actual environment we are deploying it into?  While we have tools like Java EE deployment plans and SOA configuration plans, we arguably need something more generic and extensible.

We also need it to integrate with whatever is managing our configuration.  For example, when we provision a set of nodes for our acceptance tests to run in, how do we get all of the topological information and use it to customize the deployment of the binaries?

In this space, for the purposes of this POC, we are going to design and create a custom metadata format for holding this information, a Hudson plugin that integrates with Chef to obtain the configuration information, and a tool to apply the customization.
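
Nothing is designed yet, but to give a flavour of the direction, the metadata might look something like the purely illustrative sketch below – every element and property name here is an assumption, to be refined in a later post:

    <!-- purely illustrative; the real format will be designed later in this series -->
    <customization target="acceptance-test">
      <property name="soa.server.url" value="http://soa-node1.example.com:8001"/>
      <property name="jdbc.url" value="jdbc:oracle:thin:@db-node1.example.com:1521/orcl"/>
    </customization>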

So let’s recap what tools we plan to use:

Summary of Toolsets

Category                      Tool
Version Control               git/gitlab
Build/Dependency Management   Maven
Binary Management             Artifactory
Continuous Integration        Hudson
Continuous Inspection         Sonar
Continuous Delivery           Hudson/custom
Provisioning                  Chef
Acceptance Test               Robot
Customization                 (custom)

Ok, we are ready to get started!  See you in the next post, and please feel free to send comments and feedback – we’d love to hear from you.

About Mark Nelson

Mark Nelson is a Developer Evangelist at Oracle, focusing on microservices and messaging.  Before this role, Mark was an Architect in the Enterprise Cloud-Native Java Team, worked on the Verrazzano Enterprise Container Platform project, Wercker, and WebLogic, was a senior member of the A-Team from 2010, worked in Sales Consulting at Oracle from 2006, and held various roles at IBM from 1994.

7 Responses to A Roadmap for SOA Development and Delivery

  1. In the section on Source Control, you conclude the section with the statement that both SVN and git / gitlab are going to be used, but the table in Summary of Toolsets only lists git / gitlab. Is one section in error?

    • Mark Nelson says:

      Thanks for pointing that out Gary. When I wrote the draft, I was toying with the idea of doing the POC on two separate sets of tools, but I moved away from that 🙂

  2. Pingback: SOA Community Newsletter November 2013 | SOA Community Blog

  3. Really nice read. I have been involved in Continuous Integration & Delivery myself in terms of Oracle Fusion Middleware. Are you planning to write any more detailed blogs on the categories you mentioned?

    • Mark Nelson says:

      Thanks Martijn – yes I am working on more articles. I have built most of it already, just running behind on the paperwork…

  4. island chen says:

    Hi, I’m testing provisioning of FMW environments now, and I found a very good Puppet implementation at http://forge.puppetlabs.com/biemond/orawls, but got nothing while searching http://community.opscode.com/search?query=weblogic&scope=cookbook.

    Would you please share some Chef cookbooks for FMW with me? Thanks.

    • Mark Nelson says:

      Hi, I am working on cleaning up my Chef Cookbooks and adapting them from 12.1.3 to 11.1.1.x so I can publish them.
