Two weeks ago I had the good fortune to be at the Serena Customer Day in Frankfurt. There I was able to see the latest version of Dimensions CM demonstrated by Don Irvine, Senior Director of the Dimensions Development Team. After the event I sat down with him to ask him about the work his team had been doing on the performance of Dimensions 14.
KP: Great Demo Don. I heard you mention the great work you’ve been doing on performance of Dimensions. With super-fast computers and high-speed networks why is it still important to optimize for performance?
DI: The modern development environment has changed: not only do we need to deliver more changes faster than ever before, but we also have to deal with development teams that are heavily distributed on a global scale. As an example, the Dimensions development team is split across two continents and multiple sites, with several home workers for good measure.
KP: How did you determine when you were fast enough?
DI: Good question! When we started work on CM 14, we set ourselves a goal of matching the performance of simple distributed version management tools whilst, at the same time, providing the richness of features and benefits of a centrally managed repository. What we came up with was a clever caching technology that we call the Personal Library Cache Directory (PLCD) which, coupled with a new and really innovative delta transfer technique, has supercharged our file transfers.
KP: That sounds impressive. Do you have metrics you can share?
DI: Earlier this week I got to see the results of these changes, and the performance is truly breathtaking! Our own development server is a Dimensions CM server, of course. The production instance of that server is located in Oregon, on the West Coast of the United States, but my development teams are based around the world, with most in our centers in St. Albans in the UK and in Kiev in Ukraine. This network topology leaves my teams with both limited bandwidth and high latency (ping times in excess of 200ms) to the Dimensions server. The entire source code for Dimensions CM is close to 40,000 artifacts and just over 1.3GB in size. On a busy day when Dimensions CM version 12.2.2 was our production server, a fetch of all the source code, using a Library Cache in our European data center, would take over 200 seconds. For developers who were home-based and not using the Library Cache it could take in excess of 20 minutes. Now with CM 14 this same operation takes around 70 seconds.
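A quick back-of-envelope check on the numbers quoted above (using the cached-fetch figures; effective throughput here means repository size divided by wall-clock fetch time, ignoring protocol overhead):

```python
# Figures quoted in the interview.
repo_mb = 1300            # ~1.3 GB of source artifacts
old_s, new_s = 200, 70    # cached fetch time: CM 12.2.2 vs CM 14

print(f"CM 12.2.2 effective throughput: {repo_mb / old_s:.1f} MB/s")
print(f"CM 14 effective throughput:     {repo_mb / new_s:.1f} MB/s")
print(f"speedup: {old_s / new_s:.1f}x")
```

That works out to roughly a 2.9x improvement for the cached case; against the 20-minute uncached home-worker fetch, CM 14's 70 seconds is better than a 17x improvement.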
KP: Don, that is really impressive. How does that compare to those simple versioning systems like Subversion?
DI: We did run some benchmarks against Subversion and Git. In comparison, the same fetch from Subversion took over 40 minutes to complete (KP: wow!), and the fetch from Git took 53 seconds, though our Git instance was a clone of a local repository.
KP: So having a Dimensions repository hosted on the other side of the World now gives similar performance to having a distributed repository on your local machine?
DI: Exactly. But we’re not stopping there. Last week my team came to me with even more ideas for making Dimensions faster still in the next release!
KP: Don, this is great. Congratulations to you and your exceptional team. Thanks for taking the time to chat with me today.
Kevin Parker is a 30-year industry veteran, holder of three technology patents, and VP and Chief Evangelist at Serena Software. He speaks and writes on application development methodologies, business analysis, quality assurance techniques, governance, open source issues, and tool interoperability, from the mainframe to distributed platforms to the web, mobile, and embedded systems. Kevin was born and educated in the UK and currently lives on a boat on the San Francisco Bay.