DevOpsDays Silicon Valley was the biggest DevOpsDays event yet. The event had a more professional feel than previous events I have attended. This is not to say the other events were unprofessional, just that the layout and facilities were in a more traditional conference setting.
Attendees from “enterprise” companies were out in force to learn more about DevOps and there was an excellent open space session about DevOps in the enterprise, which was well attended.
The session that really stood out was Jeff Sussna’s (@jeffsussna) talk on Continuous Quality, which was fantastic. Over the past few years, great advances have been made in the Dev and Ops spaces, but QA has been somewhat overlooked. The shift in focus from Quality Assurance (which frequently means validation of fixes) to Quality Engineering sets the stage for Continuous Quality.
I agree with Jeff that QA engineers are customer advocates and should be brought into the design process as early as possible. However, as always, keep in mind that the way someone is measured influences their behavior. I have witnessed people trying to influence a design so that it is easier to test, not easier to use. Fortunately, the move from Quality Assurance to Quality Engineering will shift the focus from simply verifying that a fix has been made to genuinely advocating for the end user.
This leads me to something that is key to success in a DevOps initiative: incentives must be aligned. If QA is measured by the number of defects that have been verified as fixed, then they will prioritize that first. The same applies if Dev is tasked with making changes and Ops is tasked with ensuring stability; each will focus on meeting their own goals, which will frequently conflict. By ensuring that goals are aligned with business outcomes and shared across teams, conflict will be reduced and Dev, QA and Ops can all focus on delivering quality products to customers faster.
My expectations for Velocity 2013 were high, based on my experience at Velocity 2012. Once again, the folks from the Ops side of the house were out in force asking plenty of insightful questions. This year I signed up to host a “birds-of-a-feather” session (more commonly known as a BOF) on DevOps in the traditional enterprise. The session had good attendance; it was great to see interest from companies that have been around for a long time and don’t have their roots in high tech.
For the most part, agile adoption is ramping up and cultural changes that help companies be more responsive are already underway. A good example of this is developers sharing responsibility for systems once the systems are in production. It’s amazing what gets fixed when developers are on call. Having been part of a “SWAT” team in the past, I can attest that I wanted to make sure my areas of responsibility were as robust as possible before they got anywhere near production.
As the session went on, it became apparent that while there is plenty of information out there on DevOps for tech companies, there is little information to help traditional “brick and mortar” companies become more agile.
As Serena has a long history of working with the enterprise, we hope to help our customers deliver business value faster by supporting their transition to a more agile way of working: a process-based approach to release management.
You can learn more by joining us for our monthly DevOps Drive-in Webcast Series.
Last week, Dave Nielsen (@davenielsen) ran a series of five classes on Cloud Computing for Mid-Pacific ICT’s California Community College Faculty Development Week. As part of the series, Dave invited me to lead the Cloud App Deployment Workshop, which covered DevOps in the cloud.
The faculty members were really engaged and got to work on manual deployments to the cloud. They experienced their own version of a “million dollar meeting,” where a large group of people collaborated to deploy a release successfully. They then used the Amazon OpsWorks cloud platform and saw how easy it is to deploy an application to the cloud when application and environment modeling is provided.
It was great to work with a group of teachers who are passionate about giving their students real-world skills and an understanding of how those skills map to business requirements.
I’m extremely grateful that Serena was supportive of my participation as a volunteer at this event and I’m excited at the prospect of working with Dave again to present this class elsewhere.
Last week I presented a session at the Agile Development Conference West. A quick poll of the audience indicated that not one person in the room had heard of DevOps. This wasn’t a case of people simply not raising their hands; we engaged in some good discussions throughout the session, and the lack of DevOps knowledge was apparent.
There was one person in the room who works for a company that seems to be quite advanced in agile practices and supportive of automation. Feedback loops were tight. Communication between Dev and QA was adding value. But there were still problems once software was deployed from QA to production. The usual challenges were to blame: different deployment methods, environmental differences, incorrect assumptions and so on – pretty much the norm for many organizations.
I explained a bit about DevOps, breaking down silos and some of the supporting tools available that could solve some of his immediate problems (for example Puppet or Chef). He really got the value of how engaging Ops earlier in the process and embedding Ops into the cross-functional agile teams would go a long way towards tackling a major area of pain.
My lesson from this is that I made an incorrect assumption. I assumed that those in development organizations doing agile successfully had likely heard about DevOps. To encounter people working in an agile silo was a real surprise to me. I usually keep in mind that many organizations working in DevOps don’t realize that what they are doing is even called “DevOps.” So, I am used to that conversation, but this was a good reminder that there is still a long way to go with DevOps awareness.
A great place to start learning about DevOps is at the Serena DevOps Drive-In webcast series. Register to attend and get a free bag of popcorn!
This week I had the pleasure of hearing Gene Kim speak about DevOps and release management again. Although I’ve seen Gene present multiple times, I always leave his talks with more insight than the previous time.
During this presentation, it was Gene’s description of “the three ways” that caught my attention. For those of you who haven’t read Gene’s book, The Phoenix Project, the hero, Bill, is introduced to the three ways by a mysterious and sometimes frustrating mentor named Eric.
The first way: Flow
Flow is understanding how work moves through your process, how it flows from left to right. The message that really stood out to me was not letting local optimization cause global degradation. I’ll readily admit I have been guilty of this in the past, partly because I felt trapped in a silo and had specific MBOs to meet, which of course were silo-specific. While I had supportive management, the company culture did not always make it possible to work across silos.
The second way: Feedback
Frequently, when managing software releases I have seen information flowing from left to right, from dev to QA to production teams. There is minimal opportunity for feedback and, often, little time allocated to react to feedback when it is received. The second way stresses the importance of feedback, from right to left in a process. For this to be successful, the feedback loops should be short so that information is received in a timely manner. Continuous improvement must be integral to your process, and the feedback loops need to supply the information that drives it.
The third way: Experimentation and Learning
Gene also talked about failing fast, which is part of experimentation and learning. Develop a minimum viable product (MVP), get feedback, and if, for whatever reason, an idea isn’t working out, abandon it or change course quickly. In my experience, the longer you work on a project, the harder it is to convince people to change course, no matter how compelling the data showing that change is needed. Developing a minimum viable product is a great way to get feedback in a timely manner, while there is still a high chance it will be acted upon.
My only gripe with the MVP model is that, all too frequently, teams seem to deliver a minimum viable product and then move on to the next big thing. So what starts as a great MVP that is compelling to users ends up as a product that does not meet expectations. Revisiting MVPs to make sure they remain competitive is as important as adding new features.
No matter what vertical market you are in, mastering the three ways will help your company become a high-performing organization. For more about “the three ways,” check out Gene Kim’s The Phoenix Project. I highly recommend it.
We’re thrilled to announce Serena’s participation and sponsorship at three prominent industry events during the month of June.
Serena executives will be on hand at each event demonstrating release automation and release process control solutions that help solve the DevOps challenges faced by IT organizations today. We will also be running giveaways and launching surveys—so come on by for your chance to win and be heard!
Gartner IT Infrastructure & Operations Management Summit
June 18-20 in Orlando, FL
At Gartner IOM, we will be giving away the popular “Keep Calm and Release More” t-shirts, as well as running an extensive DevOps trends survey. Those of you who participate in the survey are automatically entered to win a Google Nexus 7 tablet.
For more information on this event or to register, click here.
O’Reilly Velocity Conference
June 18-20 in Santa Clara, CA
At Velocity, we will also be giving away cool t-shirts, as well as a Google Nexus 7 tablet at every break in the conference schedule.
For more information on this event or to register, click here. Use promo code DEVOPS20 to get a 20% discount.
DevOps Days Mountain View
June 21-22 in Santa Clara, CA
At DevOps Days Mountain View, please look for Mark Levy and me, or tweet us at @SerenaSoftware.
For more information on this event or to register, click here.
Hope to see you there!
Release automation is a hot topic. This is pretty exciting to witness since I have worked in the industry for a long time and crafted quite a few deployment solutions by hand. However, there may be too much focus on release automation and not enough on release management holistically. Pushing bits to servers with a small amount of process management around it is where most release automation tools stop.
Process management, visualization and traceability are all critical, especially as the number of releases increases. A release management solution, of which release automation is one component, must also have a concept of what a release is. A release is more than just a collection of builds. It includes change requests, scheduling, approvals and many other things.
Trying to layer the concept of a release on top of a release automation solution that has no concept of release, or only a simplistic one, is a lot of work. Furthermore, rolling your own frequently makes auditability, traceability and visualization more unwieldy, losing some of the value the solution provides.
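To make the distinction concrete, here is a minimal, hypothetical sketch in Python of what a release record might hold beyond a list of builds. The class and field names are invented for illustration and do not reflect any particular product’s data model:

```python
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    """A change request tied to a release (illustrative fields only)."""
    id: str
    summary: str
    approved: bool = False

@dataclass
class Release:
    """A release is more than a collection of builds: it also carries
    the change requests, schedule, and approvals that govern it."""
    name: str
    builds: list = field(default_factory=list)           # build identifiers
    change_requests: list = field(default_factory=list)  # ChangeRequest objects
    scheduled_for: str = ""                              # planned deployment date
    approvals: dict = field(default_factory=dict)        # approver -> signed off?

    def ready_to_deploy(self) -> bool:
        # A release should only ship when every change request is approved
        # and every required approver has signed off.
        return (all(cr.approved for cr in self.change_requests)
                and bool(self.approvals)
                and all(self.approvals.values()))

# Example: a release with one unapproved change request is not ready.
r = Release(name="2013.2", builds=["build-481"])
r.change_requests.append(ChangeRequest(id="CR-101", summary="Fix login bug"))
r.approvals["release-manager"] = True
print(r.ready_to_deploy())  # False until CR-101 is approved
```

The point of the sketch is simply that the checks and balances (approvals, change requests, scheduling) live alongside the builds, rather than being bolted onto a tool that only knows how to push bits.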
Without the right checks and balances, release automation provides an extremely effective vehicle for potentially releasing the wrong thing to production efficiently. You can talk to me more about this at the upcoming Velocity Conference from June 18-20 in Santa Clara, CA. I’ll be in the Serena booth. Register for the event, if you haven’t already!
Last week I was in Austin attending my second DevOpsDays event of the year, and it had a very different feel from DevOpsDays London. It was the biggest DevOpsDays event yet, with over 300 attendees and a reasonable number of sponsors, Serena being one of them.
DevOpsDays Austin focused very much on culture, much to the frustration of some, but I believe it was the right choice. Unless you are fortunate enough to be in an environment with a culture perfectly suited to DevOps, focusing on technology alone will probably end in disappointment.
My co-workers who attended DevOpsDays Austin last year tell me that more staff from larger enterprises attended this year, which tracks with the general uptake of DevOps in the enterprise. While DevOps is slowly gaining traction in the enterprise, I certainly see a lot of interest in tools to help bring it there. Tackling culture at enterprise scale is of particular interest to me and something I continue to think about. There is no easy answer to cultural change at that scale, but it is great to see discussions in the area at DevOpsDays events.
Patrick Debois attended and gave a presentation on the future of DevOps. He even included the definition of a meme in his session and the description was quite surprising. Patrick always delivers great sessions and didn’t disappoint with this one.
The Ignite talks and Open Spaces were thought-provoking and full of useful information. A particularly interesting session was on monitoring. Jenny Yang and Toufic Boubez of Metafor Software facilitated this discussion, which was lively and informative. You can read Toufic’s blog post on the subject.
On the second day I presented an Ignite session. Let me tell you, they are much harder to prepare for and do than a regular speaking slot. My session on “DevOps when you can’t hire the A-Team” focused on breaking out of what I call the DevOps bubble and how to leverage open source and commercial software to achieve DevOps success.
Patrick is posting videos of the event to Vimeo.
Finally, thanks to the folks at Puppet Labs for getting me to participate in their 3PM push-up session! No photos were taken that I know of, and in all honesty, I’m okay with that!
I’m looking forward to DevOpsDays Mountain View. Hope to see you there.
Last week I was able to get the most out of my trip to Austin by attending Puppet Camp and then DevOpsDays. Puppet Camp was a high quality event with great sessions. The content was quite varied and really resonated with me. I’ve noticed that the Puppet community is undergoing phenomenal growth and I’m impressed with the level of community engagement. Initiatives such as ask.puppetlabs.com and Puppet Forge make community involvement easier than ever. A couple of highlights from the event…
@GrandmaHenri, a technical writer at Puppet Labs, presented a session on how to document modules on the Puppet Forge. This should keep the quality of submissions to Puppet Forge high and help people find the appropriate modules to use.
Adrian Thebo did a session on “Writing and Sharing Great Modules on the Puppet Forge” and provided good programming advice for all of the Puppet users in the room, with or without a programming background. Acronyms like MDD (Mistake Driven Development) got a lot of sheepish grins, and people readily admitted to testing Puppet code in production! I think everyone came away with ideas on how to improve what they were doing.
There were many other sessions that were just as excellent and a cut above what I’m used to seeing at these types of events. I’m looking forward to PuppetConf in August!
A big thank you to Dawn Foster (@geekygirldawn) for helping to organize an amazing event and for helping to build an awesome community. I hope to see everyone I met at Puppet Camp at PuppetConf in August.
I just got back from attending a few back-to-back events, one of which was ChefConf 2013. It was my first ChefConf, and it was a really high-energy event with a wide variety of speakers and attendees. I chose to focus on the sessions covering culture, but there were many great sessions.
The first memorable session was “Scaling systems configuration at Facebook,” presented by Phil Dibowitz from Facebook. A larger-than-life rock and roll guy if ever I saw one, Phil did an awesome session on why Chef was the appropriate tool for use at Facebook. I think Phil probably made Chef support staff cringe when he discussed “tweaking” the Chef libraries to meet his needs, but he did add a disclaimer that tweaking probably shouldn’t be done.
Glenn O’Donnell of Forrester Research presented what I would call a “feel good” session, which described how important the skills of everyone at ChefConf are as the industry moves forward. Infrastructure as code is no longer a nice-to-have but a competitive advantage that we need to stay on top of.
Disney presented a highly polished, extremely interesting session on how Chef is used at Disney. One session highlighted a potential problem with enterprises using open source solutions: a company had built several Chef cookbooks and had not given any back to the community. Comments voicing frustration quickly erupted on Twitter.
The stats on Chef community growth were truly impressive; it has more than doubled in the past 12 months. You can read more about the momentum that Chef currently has on the Opscode Blog.
Finally, one thing was mentioned over and over and over again at most of the sessions I attended — The Phoenix Project by Gene Kim. Characters Eric, Brent, and Bill were referenced as if they were personal acquaintances of everyone. I suspect many people will also purchase and read The Goal, as it inspired Gene. I know I will.
I can’t wait for the next ChefConf or Chef meet-up. Hope to see you there!