Orchestrating Application Delivery (part 3b)

In my last post I started to look at the “how” of orchestrating application delivery, specifically the macro and micro processes. Here, I’ll continue with tools, integrations, interfaces and reports.

Tools
As we have seen, every application development group has a huge existing investment in technologies to support its efforts. Ripping them out and replacing them with something generic, but integrated, is not the answer.

We need to step up our requirements in the identification and selection of tools for application development. In fact, I want you to demand two things from each of your vendors, starting today:

  1. Is the tool process-centric, and does it support a flexible process model? Any tool that insists you follow its model rather than your own should be crossed off the shortlist. Insist on tools that are process-centric and let you customize the process without having to write code to do so.
  2. Is the tool open? Does the tool have an open, standards-based API, and does that include the ability to run the tool without a user interface? Does the API support both a push and a pull model? Can the tool be driven from an event generator? Does the tool generate events? Only with the broadest and most open API will you ever get the depth of integration you need for Orchestrated ALM (see the sketch after this list).
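
To make that second demand concrete, here is a minimal sketch of what driving such a tool through an open API might look like: one function polls for changes (the pull model), one drives a state transition without ever touching the user interface, and a small receiver accepts the events the tool pushes out. The base URL, endpoint paths and payload fields are hypothetical placeholders, not any specific vendor’s API.

# Sketch only: the base URL, endpoints and payload fields below are hypothetical
# placeholders, not any specific vendor's API.
import json
import requests
from http.server import BaseHTTPRequestHandler, HTTPServer

TOOL_API = "https://alm-tool.example.com/api/v1"  # hypothetical base URL

def pull_changed_items(since):
    """Pull model: poll the tool for work items changed since a timestamp."""
    resp = requests.get(f"{TOOL_API}/workitems",
                        params={"changedSince": since},
                        headers={"Accept": "application/json"},
                        timeout=30)
    resp.raise_for_status()
    return resp.json()

def transition_item(item_id, new_state):
    """Headless use: drive the tool's process engine without its user interface."""
    resp = requests.post(f"{TOOL_API}/workitems/{item_id}/transitions",
                         json={"state": new_state},
                         timeout=30)
    resp.raise_for_status()

class EventReceiver(BaseHTTPRequestHandler):
    """Push model: the tool generates events and POSTs them to this endpoint."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length))
        print("received event:", event.get("type"), event.get("itemId"))
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), EventReceiver).serve_forever()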

Make these mandatory requirements in your vendor RFPs and RFIs today. Integration at the process level is essential.

Integrations: the other great myth
Point-to-point integrations are the answer, vendors say. “We can integrate our point tool to their point tool.” But when they do, the point-to-point integrations are usually limited in functionality, barely offer more than automated cut-and-paste, and are all too often very brittle. Upgrade the software at one end of the integration and the integration falls apart. And you, the customer, are left trying to get the vendors to fix it.

Interfaces
Ideally, the tools you use have role-specific user interfaces.

There has been a recent trend to create the all-singing, all-dancing IDE, with every conceivable feature and function buried deep within layers of menus. But the one-size-fits-all myth applies to interfaces too. We need role-specific user interfaces.

Classes and libraries are jargon that is fine for a developer, but resources and collections might be better for a user interface designer. Painting pictures with a stylus might be right for the web designer, but a Java IDE is better for the web developer. One size does not fit all.

So it is essential to use tools that meet your UI needs, not those of the vendor. And if you do not have a tool for that part of the lifecycle, you can create one by automating the process steps in your process automation tool.
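
As a rough illustration of that last point, a process step defined as data can be turned into a simple stakeholder-facing interface with very little effort. The step definition and field names below are invented for the example; in practice they would be supplied by your process automation tool.

# Sketch only: the step definition below is invented for illustration; in
# practice it would come from your process automation tool.
APPROVAL_STEP = {
    "name": "Release readiness review",
    "role": "Release Manager",
    "fields": [
        {"id": "decision", "label": "Approve the release?", "choices": ["approve", "reject"]},
        {"id": "comment",  "label": "Comments",             "choices": None},
    ],
}

def run_step(step):
    """Render a minimal text interface for a process step and collect the answers."""
    print(f"{step['name']} (role: {step['role']})")
    answers = {}
    for field in step["fields"]:
        prompt = field["label"]
        if field["choices"]:
            prompt += " " + "/".join(field["choices"])
        answers[field["id"]] = input(prompt + ": ")
    return answers

if __name__ == "__main__":
    print(run_step(APPROVAL_STEP))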

Reports, controls and measurement
If we implement all of these features and connect them together with the automation tool, we will be able to:

  • Create the kind of reporting dashboards that allow us to manage our business.
  • Implement controls so we can ensure the right stakeholders give informed consent to projects as they move through the lifecycle.
  • Get real-time data, trending over time, so we can see where we are improving development efforts and where we are making them worse.

When we look at most dashboards, they are awash with charts and grids and more colorful than Harlequin’s suit. What we need are the key indicators of performance, and we need to limit them to fewer than ten (a sketch of computing a few of them follows the lists below). For the CIO they might be:

  • Percentage of projects delivered on time
  • Percentage of planned content delivered
  • Percentage of projects delivered to budget
  • Percentage of defects reported post-delivery

For the VP of Application Development they might be:

  • Percentage of requirements changed post-freeze
  • Average number of items in developer queues
  • Average number of closed tickets per day
  • Number of severity 1 issues outstanding more than 24 hours
  • Percentage of automated tests that fail
  • Percentage of automated test coverage
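
To show how a couple of these indicators might be derived once the process data is flowing, here is a minimal sketch. The project records are invented sample data; in a real implementation they would come from the automation tool’s reporting interface.

# Sketch only: the sample records below are invented; real figures would come
# from the process automation tool's reporting interface.
projects = [
    {"name": "Billing revamp", "on_time": True,  "on_budget": True,  "planned": 40, "delivered": 36},
    {"name": "Mobile portal",  "on_time": False, "on_budget": True,  "planned": 25, "delivered": 25},
    {"name": "API gateway",    "on_time": True,  "on_budget": False, "planned": 30, "delivered": 27},
]

def pct(numerator, denominator):
    """Percentage, guarding against an empty denominator."""
    return 100.0 * numerator / denominator if denominator else 0.0

on_time   = pct(sum(p["on_time"] for p in projects), len(projects))
on_budget = pct(sum(p["on_budget"] for p in projects), len(projects))
content   = pct(sum(p["delivered"] for p in projects), sum(p["planned"] for p in projects))

print(f"Projects delivered on time:   {on_time:.0f}%")
print(f"Projects delivered to budget: {on_budget:.0f}%")
print(f"Planned content delivered:    {content:.0f}%")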

Whatever we choose as our key indicators, process automation can now deliver this information to us. Another key benefit of automation is the guarantee that processes will be followed and that the designated individuals can insert themselves into the process to record their approval (or disapproval) at each step of the lifecycle. This is essential for accountability and traceability.
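
A rough sketch of what such an approval gate might look like inside an automated process: the gate refuses to let the lifecycle advance until the designated approver has recorded a decision, and every decision is retained for traceability. The function and record structure are illustrative, not any particular product’s API.

# Sketch only: an illustrative approval gate, not any particular product's API.
from datetime import datetime, timezone

audit_log = []  # every decision is retained for accountability and traceability

def approval_gate(stage, approver, approved, reason=""):
    """Record a stakeholder decision; only let the process advance on approval."""
    record = {
        "stage": stage,
        "approver": approver,
        "approved": approved,
        "reason": reason,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    audit_log.append(record)
    if not approved:
        raise RuntimeError(f"{stage} blocked by {approver}: {reason or 'no reason given'}")
    return record

# Usage: the orchestration engine calls the gate between lifecycle phases.
approval_gate("Promote to QA", "qa.lead@example.com", approved=True)
try:
    approval_gate("Promote to production", "release.manager@example.com",
                  approved=False, reason="Severity 1 defect outstanding")
except RuntimeError as err:
    print("process halted:", err)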

Summary
So you need to define your high-level process, and then your low-level ones.

We implement the high-level process in our automation tool. The low-level ones are implemented in the tool of choice for each phase. If there is no tool, we implement them in the process automation tool.

We connect the low-level tools to the high-level process via web-services-based integrations. We do this based on the needs of the process, not on point-to-point capabilities.
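
As a sketch of how simple that connection can be, the low-level tool (or a small adapter beside it) posts its lifecycle events to the orchestration layer over HTTP and is told what the process requires next. The orchestrator URL and payload shape are assumptions made for the example.

# Sketch only: the orchestrator URL and payload fields are assumptions made for
# this illustration, not a specific product's interface.
import requests

ORCHESTRATOR = "https://orchestrator.example.com/api/events"  # hypothetical

def notify_process(phase, tool, item_id, status):
    """Report a low-level tool event to the high-level orchestration process."""
    payload = {"phase": phase, "tool": tool, "item": item_id, "status": status}
    resp = requests.post(ORCHESTRATOR, json=payload, timeout=30)
    resp.raise_for_status()
    # The orchestrator replies with whatever step the process requires next.
    return resp.json().get("nextStep")

# Example: the build tool reports a finished build; the process decides what follows.
print("next step:", notify_process("build", "ci-server", "REL-1042", "succeeded"))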

Where there are parts of the lifecycle that are not supported by tools, we create the interfaces we need so that every stakeholder is required to participate in the process.

We develop dashboards, controls and key performance indicators based on the automation.

I know it sounds easy, and it really is. It just requires effort and dedication supported by commitment and open-mindedness. Not a lot to ask.


Kevin Parker is a 30-year industry veteran, holder of three technology patents, and VP of Worldwide Marketing at Serena Software. He speaks and writes on application development methodologies, business analysis, quality assurance techniques, governance, open source issues, and tool interoperability, from the mainframe to distributed platforms to the web, mobile and embedded systems.


