Kevin Parker Archive


5 deployment traps we can’t seem to avoid

In this 7-part series we’ll look at some common misconceptions about deploying software in today’s unforgiving world. Over the next few posts we’ll tackle these myths head on and show that there is a better way.

Releasing software into the wild is exciting and terrifying. When it goes well, we party. When it doesn’t, we spend the weekend without sleep, showers, food or sleep. Wait! Did I mention no sleep already?

Too often our deployments fail because we fall into the same traps over and over again. We never have time to step back and do it right, so we keep doing the best we can, and that is where the errors creep in.

Here are five common traps we fall into that are easy to avoid and inexpensive to solve.

1: Every deployment is unique

There’s your whole problem. It is true that what is being deployed is (or at least should be) different each time you deploy, but how it is deployed needs to be standardized and familiar so it becomes repeatable and predictable. Each time you update an application, it is likely that it has the same topology, the same dependencies, the same footprint and the same risks.
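To make the point concrete, here is a minimal Python sketch of that separation (all manifest fields and step names are invented for illustration, not taken from any particular tool): what varies from release to release is the data in the manifest; the deployment procedure itself is one shared, repeatable function.

```python
# A minimal sketch: the *content* of each release differs (the manifest),
# but the *procedure* that deploys it is one standardized function.
# All field names here are illustrative only.

def deploy(manifest):
    """Run the same ordered steps for every release; only the inputs vary."""
    steps = []
    steps.append(f"validate artifact {manifest['artifact']} v{manifest['version']}")
    for dep in manifest["dependencies"]:
        steps.append(f"check dependency {dep}")
    for target in manifest["targets"]:
        steps.append(f"push {manifest['artifact']} to {target}")
    steps.append("run smoke tests")
    return steps

release = {
    "artifact": "billing-service",
    "version": "2.4.1",
    "dependencies": ["postgres>=9.3", "rabbitmq"],
    "targets": ["web-01", "web-02"],
}

for step in deploy(release):
    print(step)
```

Because the procedure never changes, every release exercises the same code path, which is exactly what makes deployments predictable.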

2: Every target is unique

This is a common problem too. In the Internet of Things, every device on every hip, in every pocket and in every home is a unique configuration of versions, patches and operating systems. Every server has custom settings and distinctive considerations that need to be accommodated. It is beyond any human’s ability to track and manage the discrepancies among so many deployment targets. And yet we try, with our spreadsheets and notebooks.

3: Emergency fixes are different

On a Sunday morning, at 3:00 am, no one wants to modify the 27 deployment scripts and the 81 server config files, or follow the defined procedures to stop the 14 databases and quiesce the 8 transaction queues, just to make a simple change. A skilled developer can do what’s needed by writing a simple Perl script, right? But on Monday morning, at 8:00 am, no one wants to explain to the CEO why overnight trading in Tokyo and Hong Kong was down either. In many ways emergency fixes carry more risk because they are usually developed under pressure, tested less, bypass approval levels and get deployed in whatever way seems quickest. There is too great a temptation (or expectation) to do what is expedient over what is right, because the right way is the long way. Yet we all know the automated way is the right way, and the fastest (and safest) way.

4: Each deployment needs me

I remember sitting in a meeting room. Outside I could see two developers peering at a screen and nodding their heads up and down in an erratic and random manner. I asked the client what was happening and she explained they were deploying a release and “watching the script go by in case something bad happened.” If a deployment fails, many release engineers will roll up their sleeves and start to unpick the changes, and the rest will try to fix it on the fly and keep going. This is why deployments need their release engineers to be close by during the process. But what are they going to do if the deployment stops (or, worse, flags an error and doesn’t stop)? They have to find the problem and decide: “fix forward” or “back out”? Then they have to work out which is best and easiest, quickest and safest to do. If this were built right into the scripts from the beginning, they could devote their energies to finding out why it failed and fixing that.
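What “built right into the scripts” could look like is easy to sketch. In this hedged Python illustration (the step names are invented), each deployment step registers its own undo action, so a failure triggers an automatic, ordered backout instead of a middle-of-the-night judgment call.

```python
# A sketch of backout built into the deployment script: every step carries
# its own undo action, and a failure rolls back completed steps in reverse.
# Step names and actions are illustrative only.

def fail(msg):
    raise RuntimeError(msg)

def run_deployment(steps):
    """steps: list of (name, action, undo). Roll back completed steps on failure."""
    log, completed = [], []
    for name, action, undo in steps:
        try:
            action()
            log.append(f"ok: {name}")
            completed.append((name, undo))
        except Exception as exc:
            log.append(f"FAILED: {name} ({exc})")
            # Backout: undo completed steps in reverse order, automatically.
            for done_name, done_undo in reversed(completed):
                done_undo()
                log.append(f"rolled back: {done_name}")
            return False, log
    return True, log

state = []
steps = [
    ("stop queues", lambda: state.append("queues stopped"),
                    lambda: state.remove("queues stopped")),
    ("update config", lambda: fail("bad config"),
                      lambda: None),
]
ok, log = run_deployment(steps)
# ok is False; the queues were stopped and then automatically restarted.
```

With the backout path rehearsed on every run, the engineer’s energy goes into diagnosing the root cause rather than improvising a recovery.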

5: Errors happen: it’s software

There is a sense that errors are inevitable. That something will always get missed and we can correct for that later. If we are agile and iterating in small increments the risks are low and the impact minimal. In our hearts we know that is bogus. One change to one configuration line in a server can stop it from executing and bring our entire system to a halt.


In the next post we will address these issues and look at what is needed to be effective and efficient in deploying software. As this series continues I would welcome your thoughts and experiences. You can add them right here.

Next week: why neither deployments nor target platforms are unique.

Well, it’s here! Registration for xChange15 in Washington, DC opened today. Visit for details.

If you are an xChange alumnus, you will have been sent a discount code giving you a very special price as a thank-you for being a returning attendee.

If you are new to xChange, we have a special promotional code for you that will take $300 off the full price, good through August 29th, 2014. You can contact me directly at to get your discount code.

Looking forward to seeing you in DC!




Two weeks ago I had the good fortune to be at the Serena Customer Day in Frankfurt. There I was able to see the latest version of Dimensions CM demonstrated by Don Irvine, Senior Director of the Dimensions Development Team. After the event I sat down with him to ask him about the work his team had been doing on the performance of Dimensions 14.

KP: Great Demo Don. I heard you mention the great work you’ve been doing on performance of Dimensions. With super-fast computers and high-speed networks why is it still important to optimize for performance?

DI: The modern development environment has changed. Not only do we need to deliver more changes faster than ever before, but we also have to deal with development teams that are heavily distributed on a global scale. As an example, the Dimensions development team is split across two continents and multiple sites, with several home workers for good measure.

KP: How did you determine when you were fast enough?

DI: Good question! When we started CM 14, we set ourselves a goal of being able to match the performance of simple distributed version management tools whilst, at the same time, providing the richness of features and benefits of a centrally managed repository. What we came up with was a clever caching technology that we call the Personal Library Cache Directory (PLCD) which, when coupled with a new and really innovative delta transfer technique, has literally supercharged our file transfers.
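Serena has not published how PLCD or the delta transfer technique works internally, so the following is only a generic Python sketch of the delta-transfer idea itself: the client advertises hashes of the blocks it already caches, and only unknown blocks cross the wire.

```python
# Generic illustration of delta transfer (not PLCD's actual implementation):
# the client holds hashes of cached blocks; the server sends only blocks
# whose hashes the client does not already have.
import hashlib

BLOCK = 4  # tiny block size, purely for illustration

def blocks(data):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def delta(server_data, client_hashes):
    """Return only the (position, block) pairs the client is missing."""
    out = []
    for i, blk in enumerate(blocks(server_data)):
        if hashlib.sha256(blk).hexdigest() not in client_hashes:
            out.append((i, blk))
    return out

old = b"AAAABBBBCCCC"            # what the client cached last time
new = b"AAAAXXXXCCCC"            # what the server now holds
cache = {hashlib.sha256(b).hexdigest() for b in blocks(old)}
missing = delta(new, cache)
# missing -> [(1, b"XXXX")]  — only the changed middle block is transferred
```

On a high-latency, low-bandwidth link, shipping one changed block instead of the whole file is where this style of optimization pays off.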

KP: That sounds impressive. Do you have metrics you can share?

DI: Earlier this week I got to see the results of these changes, and the performance is truly breathtaking! Our own development server is a Dimensions CM server, of course. The production instance of that server is located in Oregon, on the West Coast of the United States, but my development teams are based around the world, with most in our centers in St. Albans in the UK and in Kiev in Ukraine. This network topology leaves my teams with both limited bandwidth and high latency (ping times in excess of 200ms) to the Dimensions server. The entire source code for Dimensions CM is close to 40,000 artifacts and is just over 1.3GB in size. On a busy day, when Dimensions CM version 12.2.2 was our production server, a fetch of all the source code using a Library Cache in our European data center would take over 200 seconds. For developers who were home-based and not using the Library Cache, it could take in excess of 20 minutes. Now, with CM 14, this same operation takes around 70 seconds.

KP: Don, that is really impressive. How does that compare to simple versioning systems like Subversion?

DI: We did run some benchmarks against Subversion and Git. The same fetch from Subversion took over 40 minutes to complete (KP: wow!), while the fetch from Git took 53 seconds, though our instance of Git was a clone of a local repository.

KP: So having a Dimensions repository hosted on the other side of the world now gives similar performance to having a distributed repository on your local machine?

DI: Exactly. But we’re not stopping there. Last week my team came to me with even more ideas for making Dimensions faster still in the next release!

KP: Don, this is great. Congratulations to you and your exceptional team. Thanks for taking the time to chat with me today.


We are excited to announce that Serena’s xChange15 will run from Sunday, March 22nd until Wednesday, March 25th, 2015. We are bringing xChange back to the East Coast and will be in the wonderful Washington, D.C. area in time for the first day of spring and the world-famous Cherry Blossom Festival. So mark your calendars today! We’ve selected the prestigious Ritz-Carlton at Tysons Corner for the event and have secured an amazing rate for accommodation there.

This year we will be focusing on a number of critical IT issues, and we will be showcasing our latest innovations in Release Management and Release Automation. We are also bringing news and updates across our product line and will be launching exciting new releases first at xChange!

We will have more than 60 specialist sessions delivered by you, our customers and partners, as well as by the amazing technical teams from R&D, Customer Support and Professional Services. Deep-dive, hands-on and advanced topics will, once again, lead the content, making xChange the most valuable three days you can spend.

We are looking for speakers right now to deliver the intensive sessions and we invite you to contribute around the topics of:

  • Application Development
  • Software Release and Deployment
  • Human and System Automation
  • Modern Mainframe Application Development
  • Industry Trends and Hot Topics

The usual and unique features of xChange will be back, including the now-famous AnswerZone with its intense one-on-one consulting sessions, and the Birds-of-a-Feather lunches. In addition, this time we are introducing Ignite Sessions and more dynamic, customer-led discussions. You’ll hear Serena executives as well as the technical leadership describe the direction we are forging as the leader in Application Change, Release and Configuration Management, and you will be able to interact with them in private briefing sessions throughout the conference.

Check here for more details and how to register. We will be posting regular updates such as the Agenda, Breakout Sessions, Training Sessions, Hotel Info, Special Events, Expo and Exhibit Hall, and more.

If you have ideas for presentations you want to give or if you have questions you want answered please drop me a note and I’ll share them with the xChange team.

I’m really looking forward to seeing you again at xChange15.

Next month I will be presenting at the SHARE conference in Pittsburgh. The twice-yearly event is the place to be to learn about the trends and tricks for developing modern applications on the mainframe.

No one knows better than the army of Change and Release Managers who guard the mainframe environment just how risky it is to change anything on the mainframe. And no one knows better than they do just how business-threatening it is not to keep pace with the market and customer needs. Balancing these two forces has been at the heart of the mainframe world for five decades now.

My presentation takes a look at how the world of “change” (in all of its forms and meanings) is changing, and suggests that we need to change the way we think about and react to change. If you would like to see the presentation, you can attend in person or watch it online on the SHARE website. After the conference the presentation will be available on the SHARE website, and I will post it here as well.

As well as exhibiting at the conference, we will be meeting with many customers for one-on-one briefings about the exciting new version of ChangeMan ZMF that will be available later this year. If you’d like to schedule a one-on-one briefing, please let me know by emailing me at


The annual DefenceIT conference concluded this week at the Defence Academy in the UK. More than 250 uniformed and civilian technology leaders gathered to talk about the intersection of business solutions and battlespace technology needs.

Defence spending reductions and the prospect of no active engagements beyond December 2014 are reshaping priorities in the UK Ministry of Defence. This is leading to rebrigading (reallocating brigade resources into fewer organizational units), which has the most immediate impact on the armed forces. However, it has the potential to move the focus away from preparing for the future mission profiles Her Majesty’s armed forces may be tasked with.

The massive effort of repatriating war-fighters and their materiel from Afghanistan is well underway. However, with billions of pounds’ worth of equipment and only a few months to complete the redeployment before winter comes, the logistical complexity is huge. Ensuring that vital, sensitive and strategic materials are shipped with priority and shipped securely is just as much of a challenge as shipping the more mundane. The added complexity of an uncertain outcome to the current Afghan elections brings a special frisson to the expression “mission critical.”

Serena’s presence once again underscored our commitment to supporting our military uniformed and civilian customers as well as our defence contractor partners. Just as with our business customers, the pace of change and the imperative for compliance have reached the point where failure is not an option. Technology underpins both peacetime and wartime effectiveness. Our solutions are used today to manage fighter configurations, provide rapid deployment of helicopter spare parts in the theatre of operations, manage development and deployment of software applications by security services, and more. Serena is proud to support the men and women serving around the world who keep the peace and establish global security.

If you have stories around how technology is helping make the world a safer, more secure place, please share in the comments.

So you have all your source code under change control. If it’s under the control of ChangeMan ZMF, you are in the best hands possible.

But what about all those other datasets? How do you control the changes to the SYS1.PARMLIB? How would you know if an authorized application updated SYS1.LINKLIB? How do you keep changes to SYS1.PROCLIB in step on every image?

It’s not only SYS1.** either: what about the configuration datasets for CICS, IMS, DB2 and WebSphere? There are thousands of datasets in our infrastructure that are not under any form of change control other than the security access controls of RACF, ACF2 or Top Secret.

Of course, secure access control is usually enough but errors do occur and they can go unnoticed for hours, even days before their effects are discovered. This is why you need to put your system files under change control too; but a new kind of change control that meets the dynamic needs of systems programmers and the risk parameters of the business continuity team.

ChangeMan SSM provides real-time tracking of system datasets. The systems programmers choose which datasets to track, and changes are reported as they occur. Changes are noted and stored away in a dataset. Later, those changes can be reviewed and, if needed, the datasets can be restored to their original state through a very simple online interface. Changes can also be propagated to other systems where they are needed elsewhere, making multi-system changes easy.
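ChangeMan SSM’s internals are not public; purely to illustrate the underlying idea of baseline tracking, here is a generic Python sketch: record a fingerprint of each tracked dataset, then report any drift from that recorded baseline.

```python
# Generic illustration of baseline change tracking (not SSM's implementation):
# snapshot content hashes of tracked datasets, then detect drift against them.
import hashlib

def fingerprint(contents):
    """Hash a mapping of dataset-name -> bytes into a baseline snapshot."""
    return {name: hashlib.sha256(data).hexdigest()
            for name, data in contents.items()}

def detect_changes(baseline, current):
    """Return names whose content no longer matches the recorded baseline."""
    return sorted(
        name for name, digest in baseline.items()
        if current.get(name) is not None
        and hashlib.sha256(current[name]).hexdigest() != digest
    )

baseline = fingerprint({"SYS1.PARMLIB": b"original parms",
                        "SYS1.PROCLIB": b"procs"})
changed = detect_changes(baseline, {"SYS1.PARMLIB": b"edited parms",
                                    "SYS1.PROCLIB": b"procs"})
# changed -> ["SYS1.PARMLIB"]
```

Keeping the baseline copies around is what makes restore (and propagation to other systems) possible: the tool knows both what changed and what the dataset looked like before.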

Find out more information about ChangeMan ZMF and ChangeMan SSM or contact me,

I trust that all of our customers who attended xChange13, Serena’s global user conference, are back in their offices implementing one or more valuable tips they learned from the conference. And I know there wasn’t any shortage of great information delivered, especially from the Mainframe product team.

The most highly anticipated and rated session at xChange13 came from Bob Yates, Mainframe Account Manager, who showed off the new migration utility. Customers who have old repositories in products that are no longer supported are feeling exposed and left behind. Many of them face serious audit findings when the state of these repositories is discovered. The new migration utility pulls out the code, the relationships, the histories and the versions from old CA-Panvalet, CA-Librarian and CA-Endevor repositories. Along the way it validates the repository and fixes the errors it inevitably finds.

Also in the mainframe track, veteran users showcased how to get the most out of their solutions by taking the technology to new levels. R&D introduced the 2014 roadmap and showed off some advanced features that are coming soon while partners shared how to exploit the Serena mainframe ecosystem. With every session filled and positive feedback received, the mainframe track was, once again, the place to be at xChange13.

Our customer presentations really set the tone for the track. Prakash Balakrishnan, from Nationwide Insurance, showed how they were making use of off-host development by exploiting the client-pack. This was followed by Serena’s Bob Yates describing all the other capabilities that Client Pack had to offer. Long-time user and ChangeMan ZMF guru, Michael Bailey of MetLife, laid out a comprehensive plan of user configurable tweaks that make administration as easy as possible. Many of these great ideas will, one day I’m sure, find their way into the product.

Thank you to all of our customers and partners for informative and entertaining presentations. Liberal sprinklings of Belgian chocolate and a cool demo are always crowd pleasers. If you missed xChange13, contact me and I will be happy to share the presentations with you. Or watch the xChange13 playlist on YouTube to see some of the main stage presentations.

It used to be that the mainframe was an island of technology just as much as the PC once was. But we have seen that distinction blur and nowhere more so than in the world of software development.

Today’s mainframe programmers are just as much at home writing in Java and C as they once were writing in COBOL and PL/I. They are happy editing, compiling and testing in TSO/ISPF on a green screen or debugging, optimizing and tuning in the GUI of Eclipse.

Even the execution environment has morphed into an array of choices that are designed to match the profile of the application and the user experience. z/OS is happy serving up web pages and z/Linux can be your transaction processing hub, Unix System Services (USS) might host your data while CICS serves up web services.

The freedom to select a technology topology to suit our business and application needs is very liberating. But there is a price to pay: managing all this code and ensuring the integrity of those myriad pieces is complex. This is why Serena introduced the ChangeMan ZMF Client Pack (screenshot above) and added support for z/Linux and USS deployments.

The Client Pack is designed to let developers use the same Software Change and Release Management solution they have always used, ChangeMan ZMF, whether they are developing in Eclipse, an Eclipse-based IDE like Rational Developer for z/Series (RDz), or a Windows IDE such as the ones from Micro Focus. Simple plug-in technologies instantly make the ChangeMan ZMF repositories available on your chosen platform, with full access to the code you are working on and the ability to manage change packages right there from the software menus.

Support for long file names and member names was introduced into ChangeMan ZMF two versions ago to enable developers who want to write in Java and C for the mainframe to do so under full change control. Developers can see the full 1,024-byte file names and 256-byte member names on the mainframe and from their Eclipse or Eclipse-based IDEs. This makes ChangeMan ZMF the only solution to give developers complete access through one technology. The previous version of ZMF added support for the Hierarchical File System (HFS) used on USS and for the z/Linux File System (ZFS), which means your release can now be deployed to all the mainframe platforms from one solution.

So whatever your development or execution environment, only ChangeMan ZMF supports where you want to be for all your developers and their applications.

In a very unscientific survey of more than 60 release management customers over the past 18 months, the winner of the “Largest Gantt Chart” award came in at 2.5 meters wide (8’ 2”) by 1.5 meters tall (4’ 11”), with over 90 “deployment tracks” covering the 68 hours of the “go live deployment weekend,” or “GLDW.” One whole track was devoted to catering, and it was on the critical path. More than 400 people are involved in the GLDW from noon on Friday until 8:00 am on Monday morning. This happens four times a year, and many of the 400 employees spend at least one, sometimes two, nights sleeping at their desks.

Releases have become, for many organizations, more complicated than NASA space launches. And, just like John Glenn, you too are now “sitting on top of two million parts … all built by the lowest bidder.” The complexity of releases today is vast when you consider the requirement to deploy software to multiple platforms and geographies. What’s more, that software comes in a myriad of technologies (many of which you have no visibility into and little control over), is developed from a variety of methodologies, and is managed across countless organizations. For many of us, managing this means spreadsheets and project plans, endless meetings and a deluge of email.

Today’s sophisticated, interdependent releases can only happen when you have the infrastructure that allows you complete visibility into the moving parts of the release and the tools that ensure coordinated movement through the lifecycle. At Serena, we have taken this need to the next level by developing the world’s first and leading Enterprise Release Management solution that spans your platforms, connects your teams, manages your calendar and coordinates your deliverables. Working in concert with our proven Change and Configuration Management solutions on the Mainframe (ChangeMan ZMF) and on Open Systems and Windows (Dimensions CM), Serena Release Control not only gives you the flexibility you need to allow your teams to work in the way that best meets the business needs but also brings coordination and control to make sure they arrive at and depart from release milestones as expected.

By exploiting the open, web services-based architecture of our product set, Serena is able to manage your releases, even if you are using third party source-management solutions. We provide the upstream and downstream visibility needed by everyone from request-to-release and from Dev to Ops, including the ability to fully automate the deployment and handle exceptions.

So, if you are spending your next weekend in the office shepherding your next quarterly release, perhaps you should check out the DevOps Drive-In webcast series, past and upcoming. What you learn might just give you a good night’s sleep.