Kevin Parker Archive

The third day of the expo started at 10:30 this morning. It was very quiet on the floor, as we were competing against a very full and interesting agenda in the general sessions.

It was very rewarding to see so many customers come by and talk about their experiences with ChangeMan ZMF. We had one customer, who had been working as a ChangeMan ZMF Administrator for over 20 years, come by with his intern, a young college graduate who was being trained to take over from him when he retires. It was very moving to see one generation handing off to the next.

SHARE runs a fun competition for the attendees with some excellent prizes. They asked us for a question to add to their quiz and so we asked, “ChangeMan ZMF has been the leading mainframe software change, configuration and release management solution for 29 years. What was it originally called?” Do you know the answer? Post your answer in the comments section – sorry no prizes. I’ll post the answer next week.

Keep following the tweets @SerenaSoftware and @KevinParkerUSA.

And don’t forget to watch my presentation live from SHARE on SHARE Live tomorrow at 08:30 Eastern Daylight Time.



Download the community edition for FREE today

The exciting news of the day is the immediate availability of the Community Edition of Serena Deployment Automation.

If you want to simplify and automate software deployments, then download the Community Edition of Serena Deployment Automation. By the end of the day you’ll be enabling continuous delivery for your dev teams and production deployments for your ops teams. You’ll be enabling the deployment pipeline and reducing cycle times faster than you thought possible. With Serena Deployment Automation, you will deliver high-quality, valuable software to your target environments with repeatability and predictability.

This is what customers are already experiencing …

  • Deployment time reduced by 90% and effort by 50%
  • End-to-end deployments in the cloud and on premises – including mobile devices
  • Out-of-the-box integration with your entire DevOps tool-chain
  • Developer self-service enablement

… time for you to be part of the community.

Is it really free?

Yes! And it’s free forever! This is not a limited-time trial and it is not a limited-functionality offer. You can download the fully functional product, templates, samples and quick-start guides. We have limited this version to five end-points so you can experience all the features of the very best in deployment automation.

When you’re ready, you can upgrade to the Professional Version for just $1,499 per end-point, and you still get to keep your original five for free.

Installation is easy!

We have even built a virtual appliance with the operating system, product and samples already configured. You can download your appliance today and be up and running minutes later (you’ll need to download the VirtualBox software for your platform, whether Windows, OS X, Linux or Solaris, to use the appliance).

Included are four common deployment templates and there will be more in the future.

Or download the full product for your platform and follow the simple installation guide to be up and running with your own customized Deployment Automation solution.

What about support?

The Community Edition is supported by the Deployment Automation community of users. You will be directed to the community when you sign up for the download. You can see there are already several threads discussing how to get the most out of Serena Deployment Automation.

Once you’ve downloaded the product we’ll check in with you in a few days to see how it’s going. If you like it you can keep using the Community Edition for as long as you like, and you can even get updates as and when they become available. And it’s still free. Forever. Once you’re ready to upgrade to the Professional version, simply contact us and we’ll help you extend your deployments to your entire enterprise.

We even have support for the mainframe in the Professional Edition.

So? What are you waiting for?

It’s time to try the Community Edition now and start deploying before you go home. Get your deployment automation started and experience the most up-to-date deployment automation technology for free. Download it here today.



Buzz words from today’s sessions

I just slipped away from the conference to pen a few notes about today here at SHARE. There are more than 1,000 attendees from all over the world and it is great to see so many familiar faces once again.

The opening address this morning was a very salutary reminder that we live in a world of constant threat to our computer systems. It isn’t just threats from terrorists without and disgruntled employees within anymore. Now we face more sophisticated challenges from cyber criminals, foreign and domestic security agencies, and kids in dorm rooms. Keeping pace with those threats is probably what most IT/IS departments strive for. Clearly that is a losing proposition, so we need to turn our focus to getting ahead of the threats on one hand and hardening our systems on the other. It’s like a sinking ship: you can bail water, you can try to plug the hole, but it’s best if you do both.

Today I took time to look at mainframe trends, and one thing that caught my eye, and it came up in a number of sessions, was the idea that mainframe skills are not that different from the skills used, in all fields, on non-mainframe platforms. The challenge is getting developers and systems programmers to make the move to the mainframe. The advent of modern user interfaces, especially through Eclipse, is making a new migration of people to the mainframe possible. Here at Serena we have long been advocates of what we call “role-specific user interfaces”, by which we mean that if you are a developer your needs for interaction with a system are going to be very different than if you are a tool administrator. Java developers need different tools from COBOL developers. The idea that “one size fits all” satisfies no one.

Today the Serena booth on the expo floor has been busier than ever. We had a huge crowd last night for the prize drawing of a Nexus 7. I expect the crowd to be just as big tonight as we give away another one.

The new ChangeMan SSM 8.4 is wowing every Systems Programmer who takes the test drive, and most are signing up for the free 90-day trial. You can sign up too.

We’re here again tonight until 7:30 pm and you’ll find us at booth #418.

I’m tweeting and following the hash tag #SHAREorg and you can follow me at @SerenaSoftware and @KevinParkerUSA.

Thanks to Tagxedo for help with the word cloud again.



Myth 1: Every deployment is unique

In the last post we talked about some of the myths about release and deployment. Perhaps the most telling comment there was the belief that “Every deployment is unique.”

Let’s break that apart and see what it really means and why it just doesn’t hold up in reality.

Deploying an application comprises a number of parts:

  • The “payload” – what is actually being deployed: code, scripts, configuration items, SQL, data and so on
  • The “set up” – what you have to do before you can deploy the “payload”, like stopping servers, migrating and reformatting data, and backing up the environment
  • The “verify” – what you have to do to be sure you deployed correctly
  • The “startup” – what you have to do to bring things back on the air after you have verified the deployment was successful, including restarting servers, re-opening telecommunications and resetting log files
  • The “what if” – the steps you need to take if any part of the “set up”, the “verify” or the “startup” doesn’t go as predicted

Deployment to multiple targets

Constant inconsistency is consistently predictable

Your application may be simple and confined to a few identical target platforms, or it may be n-tiered and deployed to a chaotic topology completely out of your control. Irrespective, your deployment will have these five elements.

All of these steps are predictable and any variation in how the steps are executed is determinable. For example, if there are no SQL DDL changes in the payload then there’s no need to stop the database. If the web server won’t stop, abort the deployment and notify the release engineer. The sketch below illustrates the pattern.
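To make that concrete, here is a minimal sketch, in Python, of a deployment built around those five elements with exactly this kind of conditional logic. It is purely illustrative: the class, method and helper names are hypothetical and this is not the Serena Deployment Automation API.

```python
# Minimal illustrative sketch of the five deployment elements.
# All class, method and helper names are hypothetical, not a real SDA API.

class DeploymentError(Exception):
    """Raised when any phase of the deployment cannot complete."""


def notify_release_engineer(error):
    # Stand-in for a real alerting integration (email, chat, paging, ...).
    print(f"ALERT: deployment problem: {error}")


def deploy(payload, target):
    """Run one deployment: set up, apply the payload, verify, start up."""
    backup = target.backup()                    # "set up" always starts with a backup
    db_stopped = False
    try:
        # --- set up: only do the work this payload actually requires ---
        if payload.has_sql_ddl_changes():       # no DDL changes -> no need to stop the DB
            target.stop_database()
            db_stopped = True
        if not target.stop_web_server():        # web server won't stop -> abort
            raise DeploymentError("web server did not stop")

        # --- payload: code, scripts, configuration items, SQL, data ---
        target.copy(payload.artifacts)

        # --- verify: confirm the deployment landed correctly ---
        if not target.smoke_tests_pass():
            raise DeploymentError("verification failed")

        # --- startup: bring things back on the air ---
        if db_stopped:
            target.start_database()
        target.start_web_server()
        target.reset_logs()

    except DeploymentError as error:
        # --- what if: a predictable recovery path instead of ad-hoc heroics ---
        target.restore(backup)
        notify_release_engineer(error)
        raise
```

The point is not the specific calls, but that every branch, including the unhappy ones, is decided in advance rather than improvised at deployment time.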

It might take a release engineer the best part of a day to re-craft a deployment script for each “unique” deployment, and even the very best engineers will only have a 99% success rate. If the script executes for an hour every day, a 1% failure rate means at least three outages a year (0.01 × 365 ≈ 3.7) and more time spent fixing scripts than actually deploying.

With Serena’s Deployment Automation solution you spend less time developing scripts because the whole process is entirely graphical. You get to reuse elements for deploying to standard environments like Oracle databases and the Amazon cloud. You get to spend more time thinking about what to do if the deployment fails, and you get to build in all the logic you need so that even the most diverse and complex deployments become commonplace and predictable. Your release engineers spend their time improving and automating, and Serena Deployment Automation takes care of everything else.

You can learn more about deployment automation at serena.com.




Monday’s buzzwords here at SHARE

It has been pretty hectic here this afternoon at the Serena booth (#418) where we are showcasing the latest version of ChangeMan SSM to the nearly 1,000 attendees. Response has been great and a number of Systems Programmers have taken advantage of the special offer of trying ChangeMan SSM for free for 90 days. You can be part of this great offer too by signing up for the free trial at www.serena.com/freessm.

This morning’s keynote presentation was very thought provoking and looked at the idea of getting the right resources and the right technology in the right place in order to sustain the modernization that businesses need in the 21st century. The message was clear: the mainframe is still at the heart of the enterprise, is just as capable of contributing to, and in many cases leading, an organization’s innovation initiatives, and it will remain relevant for many decades to come.

Away from the general session there were many topics to choose from, and I was drawn to the Big Data and Big Analytics sessions. For more than a decade Serena has been pioneering the delivery of corporate insight from the host to mobile devices. As long ago as 1999 we were alerting Change and Release Managers on their pagers about production deployments and getting their approvals from the web. Now IBM has developed the infrastructure to make that happen with ease and speed for all companies. The recent announcement of an “alliance” between IBM and Apple is proof that the battle is over and smartphones and smart devices have won, and that the data battle is over too: in IBM’s view, data will thrive in the corporate data center on the mainframe.

The expo is winding down right now but we’ll be here until 7:30 pm if you want to stop by booth #418. Also you can track the activities live on Twitter by following the hash tag #SHAREorg and by following @SerenaSoftware and @KevinParkerUSA.

Thanks to Tagxedo for help with the word cloud



5 deployment traps we can’t seem to avoid

In this 7-part series we’ll look at some common misconceptions about the process of deploying software in today’s unforgiving world. Over the next few posts we will tackle these myths head on and show that there is a better way.

Releasing software into the wild is exciting and terrifying. When it goes well, we party. When it doesn’t, we spend the weekend without sleep, showers, food or sleep. Wait! Did I mention no sleep already?

Too often the reason our deployments fail is because we fall into the same traps over and over again. We never have time to step back and do it right so we keep on doing it the best we can and that is where the errors creep in.

Here are five common traps we fall into that are easy to avoid and inexpensive to solve.

1: Every deployment is unique

There’s your whole problem. It is true that what is being deployed is (or at least should be) different each time you deploy, but how it is deployed needs to be standardized and familiar so it becomes repeatable and predictable. Each time you update an application it is likely that it has the same topology, the same dependencies, the same footprint and the same risks.

2: Every target is unique

This is a common problem too. In the Internet of Things, every device on every hip, in every pocket and buried in our phones is a unique configuration of versions, patches and operating systems. Every server has custom settings and distinctive considerations that need to be accommodated. It is beyond any human’s ability to track and manage the discrepancies amongst so many deployment targets. And yet we try, with our spreadsheets and notebooks.

3: Emergency fixes are different

On a Sunday morning, at 3:00 am, no one wants to modify the 27 deployment scripts and the 81 server config files, or follow the defined procedures to stop the 14 databases and quiesce the 8 transaction queues, just to make a simple change. A skilled developer can do what’s needed by writing a simple Perl script, right? But on Monday morning, at 8:00 am, no one wants to explain to the CEO why overnight trading in Tokyo and Hong Kong was down either. In many ways emergency fixes carry more risk because they are usually developed under pressure, tested less, bypass approval levels and get deployed in whatever way seems quickest. There is too great a temptation (or expectation) to do what is expedient over what is right, because the right way is the long way. Yet we all know the automated way is the right way and the fastest (and safest) way.

4: Each deployment needs me

I remember sitting in a meeting room. Outside I could see two developers peering at a screen and nodding their heads up and down in an erratic and random manner. I asked the client what was happening and she explained they were deploying a release and “watching the script go by in case something bad happened.” If a deployment fails, many release engineers will roll up their sleeves and start to unpick the changes, and the rest will try to fix it on the fly and keep going. This is why deployments need their release engineers to be close during the process. But what are they going to do if the deployment stops (or worse, flags an error and doesn’t stop)? They have to find the problem and decide: “fix forward” or “back out”? They then have to work out which is best and easiest, quickest and safest to do. If this were built into the scripts from the beginning, they could devote their energies to finding out why it failed and fixing that, as the sketch below shows.
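Here is a minimal sketch, in Python, of what “built into the scripts” can look like: each step registers its own undo action and a retry policy, so the fix-forward-or-back-out decision is encoded in the process rather than made by whoever happens to be watching. The function and step names are hypothetical, not a Serena API.

```python
# Illustrative sketch only: step-level retry (fix forward) and automatic
# unwind (back out). Step and helper names are hypothetical.

def run_steps(steps, max_retries=1):
    """steps is a list of (name, do, undo) tuples; undo may be None."""
    completed = []                                # steps we may need to unwind
    for name, do, undo in steps:
        attempts = 0
        while True:
            try:
                do()
                completed.append((name, undo))
                break
            except Exception as error:
                attempts += 1
                if attempts <= max_retries:
                    print(f"{name} failed ({error}); retrying (fix forward)")
                    continue
                print(f"{name} failed ({error}); backing out")
                for done_name, done_undo in reversed(completed):
                    if done_undo is not None:
                        print(f"undoing {done_name}")
                        done_undo()
                raise                             # surface the failure for diagnosis


# Hypothetical usage: copy files, update config, restart the service.
# run_steps([
#     ("copy files",    copy_files,    remove_files),
#     ("update config", update_config, restore_config),
#     ("restart",       restart_app,   None),
# ])
```

With something like this in place, a failed step either retries or unwinds automatically, and the release engineer’s job becomes diagnosing why it failed rather than deciding, under pressure, what to do next.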

5: Errors happen: it’s software

There is a sense that errors are inevitable. That something will always get missed and we can correct for that later. If we are agile and iterating in small increments the risks are low and the impact minimal. In our hearts we know that is bogus. One change to one configuration line in a server can stop it from executing and bring our entire system to a halt.

Conclusion

In the next post we will address these issues and look at what is needed to be effective and efficient in deploying software. As this series continues I would welcome your thoughts and experiences. You can add them right here.

Next week: why neither deployments nor target platforms are unique.



Well it’s here! The registration for xChange15 in Washington DC opened today. Visit www.serena.com/xchange for details.

If you are an xChange alumnus, you will have been sent a discount code giving you a very special price as a thank you for being a returning attendee.

If you are new to xChange, we have a special promotional code for you that will take $300 off the full price, and it is good through August 29th, 2014. You can contact me directly at kparker@serena.com to get your discount code.

Looking forward to seeing you in DC!



Two weeks ago I had the good fortune to be at the Serena Customer Day in Frankfurt. There I was able to see the latest version of Dimensions CM demonstrated by Don Irvine, Senior Director of the Dimensions Development Team. After the event I sat down with him to ask him about the work his team had been doing on the performance of Dimensions 14.

KP: Great demo, Don. I heard you mention the great work you’ve been doing on the performance of Dimensions. With super-fast computers and high-speed networks, why is it still important to optimize for performance?

DI: The modern development environment has changed: not only do we need to deliver more changes faster than ever before, but we have to deal with our development teams being heavily distributed on a global scale. As an example, the Dimensions development team is split across two continents and multiple sites, with several home workers for good measure.

KP: How did you determine when you were fast enough?

DI: Good question! When we started CM 14, we set ourselves a goal of being able to match the performance of simple distributed version management tools whilst, at the same time, providing the richness of features and benefits of a centrally managed repository. What we came up with was a clever caching technology that we call the Personal Library Cache Directory (PLCD) which, when coupled with a new and really innovative delta transfer technique, has literally supercharged our file transfers.

KP: That sounds impressive. Do you have metrics you can share?

DI: Earlier this week I got to see the results of these changes, and the performance is truly breathtaking! Our own development server is a Dimensions CM server, of course. The production instance of that server is located in Oregon, on the West Coast of the United States, but my development teams are based around the world, with most being in our centers in St. Albans in the UK and in Kiev in Ukraine. This network topology leaves my teams with both limited bandwidth and high latency (ping times in excess of 200ms) to the Dimensions server. The entire source code for Dimensions CM is close to 40,000 artifacts and just over 1.3GB in size. On a busy day, when Dimensions CM version 12.2.2 was our production server, fetching all the source code using a Library Cache in our European data center would take over 200 seconds. For developers who were home-based and not using the Library Cache it could take in excess of 20 minutes. Now with CM 14 this same operation takes around 70 seconds.

KP: Don, that is really impressive. How does that compare to simple versioning systems like Subversion?

DI: We did do some benchmarks against Subversion and Git. In comparison, the same fetch from Subversion took over 40 minutes to complete (KP: wow!), and from Git took 53 seconds, but our instance of Git was a clone of a local repository.

KP: So having a Dimensions repository hosted on the other side of the world now gives similar performance to having a distributed repository on your local machine?

DI: Exactly. But we’re not stopping there. Last week my team came to me with even more ideas for making Dimensions even faster still in the next release!

KP: Don, this is great. Congratulations to you and your exceptional team. Thanks for taking the time to chat with me today.




We are excited to announce that Serena’s xChange15 will be from Sunday March 22nd until Wednesday March 25th 2015. We are bringing xChange back to the east coast and will be in the wonderful Washington D.C. area in time for the First Day of Spring and the world-famous Cherry Blossom Festival. So mark your calendars today! We’ve selected the prestigious Ritz-Carlton at Tyson’s Corner for the event and we have secured an amazing rate for the accommodation there.

This year we will be focusing on a number of critical IT issues and we will be showcasing our latest innovations in Release Management and Release Automation. We are also bringing news and updates across our product line and will be launching exciting new releases first at xChange!

We will have more than 60 specialist sessions delivered by you, our customers and partners, as well as by the amazing technical teams from R&D, Customer Support and Professional Services. Deep-dive, hands-on and advanced topics will, once again, lead the content, making xChange the most valuable three days you can spend.

We are looking for speakers right now to deliver the intensive sessions and we invite you to contribute around the topics of:

  • Application Development
  • Software Release and Deployment
  • Human and System Automation
  • Modern Mainframe Application Development
  • Industry Trends and Hot Topics

The usual and unique features of xChange will be back, including the now-famous AnswerZone with its intense one-on-one consulting sessions and the Birds-of-a-Feather lunches. In addition, this time we are introducing Ignite Sessions and more dynamic, customer-led discussions. You’ll hear Serena executives as well as the technical leadership describe the direction we are forging as the leader in Application Change, Release and Configuration Management, and you will be able to interact with them in private briefing sessions throughout the conference.

Check here for more details and how to register. We will be posting regular updates such as the Agenda, Breakout Sessions, Training Sessions, Hotel Info, Special Events, Expo and Exhibit Hall, and more.

If you have ideas for presentations you want to give or if you have questions you want answered please drop me a note and I’ll share them with the xChange team.

I’m really looking forward to seeing you again at xChange15.



Next month I will be presenting at the SHARE conference in Pittsburgh. The twice-yearly event is the place to be to learn about the trends and tricks for developing modern applications on the mainframe.

No one knows better than the army of Change and Release Managers that guard the mainframe environment just how risky it is to change anything on the mainframe. And no one knows better than they do just how business-threatening it is not to keep pace with the market and customer needs. Balancing these two forces has been at the heart of the mainframe world for five decades now.

My presentation takes a look at how the world of “change” (in all of its forms and meanings) is changing and suggests that we need to change the way we think about and react to change. If you would like to watch the presentation you can attend in person or you can watch it online on the SHARE website. After the conference the presentation will be available on the SHARE website and I will post the presentation here also.

As well as exhibiting at the conference, we will be meeting with many customers to share one-on-one briefings about the exciting new version of ChangeMan ZMF that will be available later this year. If you’d like to schedule a one-on-one briefing, please let me know by emailing me at kparker@serena.com.