What Is DevOps?

DevOps is a term for a group of concepts that, while not all new, have catalyzed into a movement and are rapidly spreading throughout the technical community.  Like any new and popular term, people have somewhat confused and sometimes contradictory impressions of what it is.  Here’s my take on how DevOps can be usefully defined; I propose this definition as a standard framework to more clearly discuss the various issues DevOps covers. Like “Quality” or “Agile,” DevOps is a large enough concept that it requires some nuance to fully understand.

Definition of DevOps

DevOps is a new term emerging from the collision of two major related trends. The first was also called “agile system administration” or “agile operations”; it sprang from applying newer Agile and Lean approaches to operations work.  The second is a much expanded understanding of the value of collaboration between development and operations staff throughout all stages of the development lifecycle when creating and operating a service, and how important operations has become in our increasingly service-oriented world (cf. Operations: The New Secret Sauce).

One definition Jez Humble explained to me is that DevOps is “a cross-disciplinary community of practice dedicated to the study of building, evolving and operating rapidly-changing resilient systems at scale.”

That’s good and meaty, but it may be a little too esoteric and specific to Internet startup types. I believe that you can define DevOps more practically as 

DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support.

A primary corollary to this is that part of the major change in practice from previous methods is

DevOps is also characterized by operations staff making use of many of the same techniques developers use for their systems work.

Those techniques can range from using source control to testing to participating in an Agile development process.
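As a concrete illustration of "systems work done with developer techniques," consider a load balancer configuration generated from structured data, kept in source control, and covered by a test. This is a minimal sketch; `render_upstream` and its parameters are hypothetical, not from any real tool.

```python
# Hypothetical example: treating a piece of server config as code that can
# be generated and tested, rather than hand-edited on a box.

def render_upstream(name, servers, max_conns=100):
    """Render an nginx-style upstream block from structured data."""
    lines = [f"upstream {name} {{"]
    for host in servers:
        lines.append(f"    server {host} max_conns={max_conns};")
    lines.append("}")
    return "\n".join(lines)

def test_render_upstream():
    config = render_upstream("app", ["10.0.0.1:8080", "10.0.0.2:8080"])
    assert config.startswith("upstream app {")
    assert config.count("server ") == 2  # one line per backend
    assert config.endswith("}")

test_render_upstream()
print("config checks passed")
```

The point is not the specific format but the workflow: the config lives in version control, changes go through review, and a test suite catches a malformed config before it reaches production.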

For this purpose, “DevOps” doesn’t differentiate between different sysadmin sub-disciplines – “Ops” is a blanket term for systems engineers, system administrators, operations staff, release engineers, DBAs, network engineers, security professionals, and various other subdisciplines and job titles. “Dev” is used as shorthand for developers in particular, but in practice it is even wider and means “all the people involved in developing the product,” which can include Product, QA, and other disciplines.

DevOps has strong affinities with Agile and Lean approaches. The old view of operations tended to cast the “Dev” side as the “makers” and the “Ops” side as the “people that deal with the creation after its birth” – and the realization of the harm done in the industry by treating those two as siloed concerns is the core driver behind DevOps. In this way, DevOps can be seen as an outgrowth of Agile. Agile software development prescribes close collaboration of customers, product management, developers, and (sometimes) QA to fill in the gaps and rapidly iterate towards a better product. DevOps says “yes, but service delivery and how the app and systems interact are a fundamental part of the value proposition to the client as well, so the product team needs to include those concerns as a top-level item.” From this perspective, DevOps is simply extending Agile principles beyond the boundaries of “the code” to the entire delivered service.

Definition In Depth

DevOps means a lot of different things to different people because the discussion around it covers a lot of ground.  People talk about DevOps being “developer and operations collaboration,” or it’s “treating your code as infrastructure,” or it’s “using automation,” or “using kanban,” or “a toolchain approach,” or “culture,” or a variety of seemingly loosely related items.  The best way to define it in depth is to use a parallel method to the definition of a similarly complex term, agile development.  Agile development, according to Wikipedia and the agile manifesto, consists of four different “levels” of things. I’ve added a fifth, the tooling level – talk about agile and devops can get way too obsessed with tools, but pretending tools don’t exist is also unhelpful.

  • Agile Values – Top level philosophy, usually agreed to be embodied in the Agile Manifesto. These are the core values that inform agile.
  • Agile Principles – Generally agreed upon strategic approaches that support these values.  The Agile Manifesto cites a dozen of these more specific principles. You don’t have to buy into all of them to be Agile, but if you don’t subscribe to many of them, you’re probably doing something else.
  • Agile Methods – More specific process implementations of the principles.  XP, Scrum, your own homebrew process – this is where the philosophy gives way to operational playbooks of “how we intend to do this in real life.” None of them are mandatory, just possible implementations.
  • Agile Practices – Highly specific tactical techniques that tend to be used in conjunction with agile implementations.  None are required to be agile but many agile implementations have seen value from adopting them. Standups, planning poker, backlogs, CI – all the specific artifacts a developer uses to perform their work.
  • Agile Tools – Specific technical implementations of these practices used by teams to facilitate doing their work according to these methods.  JIRA Agile (aka Greenhopper), planningpoker.com, et al.

Ideally the higher levels inform the lower levels – people or organizations that pick up specific tools and practices without understanding the fundamentals may or may not see benefits but this “cargo cult” approach is generally considered to have suboptimal results. I believe the different parts of DevOps that people are talking about map directly to these same levels.

  • DevOps Values – I believe the fundamental DevOps values are effectively captured in the Agile Manifesto – with perhaps one slight emendation to focus on the overall service instead of simply “working software.” Some previous definitions of DevOps, like Alex Honor’s “People over Process over Tools,” echo basic Agile Manifesto statements and urge dev+ops collaboration.
  • DevOps Principles – There is not a single agreed-upon list, but there are several widely accepted attempts – here’s John Willis coining “CAMS” and here’s James Turnbull giving his own definition at this level. “Infrastructure as code” is a commonly cited DevOps principle. I’ve made a cut at “DevOps’ing” the existing Agile manifesto and principles here. I personally believe that DevOps at the conceptual level is mainly just the widening of Agile’s principles to include systems and operations instead of stopping its concerns at code checkin.
  • DevOps Methods – Some of the methods here are the same; you can use Scrum with operations, Kanban with operations, etc. (although usually with more focus on integrating ops with dev, QA, and product in the product teams). There are some more distinct ones, like Visible Ops-style change control and using the Incident Command System for incident response. The set of these methodologies is growing; a more thoughtful approach to monitoring is a hot topic right now.
  • DevOps Practices – Specific techniques used as part of implementing the above concepts and processes. Continuous integration and continuous deployment, “Give your developers a pager and put them on call,” using configuration management, metrics and monitoring schemes, a toolchain approach to tooling… Even using virtualization and cloud computing is a common practice used to accelerate change in the modern infrastructure world.
  • DevOps Tools – Tools you’d use in the commission of these principles. In the DevOps world there’s been an explosion of tools in release (jenkins, travis, teamcity), configuration management (puppet, chef, ansible, cfengine), orchestration (zookeeper, noah, mesos), monitoring, virtualization and containerization (AWS, OpenStack, vagrant, docker) and many more. While, as with Agile, it’s incorrect to say a tool is “a DevOps tool” in the sense that it will magically bring you DevOps, there are certainly specific tools being developed with the express goal of facilitating the above principles, methods, and practices, and a holistic understanding of DevOps should incorporate this layer.
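The “infrastructure as code” principle and configuration management practice mentioned above share one key idea: describe the desired state and make applying it idempotent, so a run is safe to repeat. Here is an illustrative sketch of that style in plain Python; `ensure_line` is a made-up helper, not an API from Puppet, Chef, or Ansible.

```python
# Desired-state sketch: make sure a config line exists; rerunning is a no-op.
import os
import tempfile

def ensure_line(path, line):
    """Ensure `line` appears in the file at `path`; do nothing if it already does."""
    try:
        with open(path) as f:
            if line in f.read().splitlines():
                return "unchanged"  # desired state already true
    except FileNotFoundError:
        pass  # file missing: appending below will create it
    with open(path, "a") as f:
        f.write(line + "\n")
    return "changed"

path = os.path.join(tempfile.mkdtemp(), "sshd_config")
print(ensure_line(path, "PermitRootLogin no"))  # first run applies the change
print(ensure_line(path, "PermitRootLogin no"))  # second run reports "unchanged"
```

Real configuration management tools generalize this pattern to packages, services, users, and files, and report “changed” versus “unchanged” resources the same way.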

In the end, DevOps is a little tricky to define, just like its older brother Agile. But it’s worth doing. When left at the pure philosophy level, both can seem like empty mom-and-apple-pie statements, subject to the criticism “You’re just telling me ‘do my job better,’ duh…” But conversely, just the practices without the higher level guidance turn into a cargo cult. “I do what this Scrum book says so I’m doing Agile” is like “I’m using Chef so I’m DevOps right?” To be a successful Agile or DevOps practitioner is to understand all the layers that explain what it is, what it might be, and what a given implementation might contain or not contain. In the end, what DevOps hopes to bring to Agile is the understanding and practice that software isn’t done until it’s successfully delivered to a user and meets their expectations around availability, performance, and pace of change.

History of DevOps

The genesis of DevOps comes from an increasing need for innovation on the systems side of technology work.  The DevOps movement inherits from the Agile System Administration movement and the Enterprise Systems Management (ESM) movement.

ESM, which arose in the mid-2000s, provided the original impetus of “Hey, our methodology of running systems seems to still be in a pretty primitive state despite years of effort.  Let’s start talking about doing it better.”  John Willis, whurley, and Mark Hinkle from Zenoss were involved in that, and sponsored a BarCamp around the concept.  During this phase, initial enchantment with ITIL as a governance framework was largely overthrown in favor of the “ITIL Lite” Visible Ops approach, and the focus shifted away from the large vendors – it used to be that the enterprise frameworks from HP, IBM, and CA were the only meaningful solutions for end-to-end systems management, but more open source and smaller vendor offerings were coming out, including Spiceworks, Hyperic, Zenoss, and others.

In 2008, the first Velocity conference was held by O’Reilly, focusing on Web performance and operations, which provided a venue for information sharing around operations best practices. In 2009 there were some important presentations about developer/operations collaboration at large shops (most notably Flickr) and how that promoted safe, rapid change in Web environments.  Provisioning tools like Puppet and Chef had strong showings there. More people began to think about these newer concepts and wonder how they might implement them.

Somewhat in parallel, as agile development’s growth was reaching fever pitch and moving from niche to common practice, this turned into thinking about “Agile Systems Administration,” especially in Europe.  Gordon Banner of the UK talked about it early on with this presentation.  A lot of the focus of this movement was on process and the analogies from kanban and lean manufacturing processes to IT systems administration.  Then sometime in 2009, Patrick Debois from Belgium and Andrew “Clay” Shafer from the US met, started talking it up, and coined the term DevOps; Patrick then held the first DevOpsDays event in Ghent, which lit the fuse.  The concept, now that it had a name, started to be talked up more in other venues (I found out about it at OpsCamp Austin), including Velocity and DevOpsDays here in the US, and spread quickly.

In Patrick Debois’ view, DevOps arose as a reaction against the silos and inflexibility that were resulting from existing practices, which probably sounds familiar. Here’s a good piece by John Willis on the history of the DevOps movement that deconstructs the threads that came together to create it.

DevOps emerged from a “perfect storm” of these things coming together.  The growing automation and toolchain approach fed by more good monitoring and provisioning tools, the need for agile processes and dev/ops collaboration along with the failure of big/heavy implementations of ITSM/ITIL – they collided and unconsciously brought together all three layers of what you need for the agile movement (principles, process, and practices) and caught fire. Since then it has developed, most notably by the inclusion of Lean principles by many of the thought leaders.

What is DevOps Not?

It’s Not NoOps

It is not “they’re taking our jobs!”  Some folks think that DevOps means that developers are taking over operations and doing it themselves.  Part of that is true and part of it isn’t.

It’s a misconception that DevOps is coming from the development side of the house to wipe out operations – DevOps, and its antecedents in agile operations, are being initiated out of operations teams more often than not.  This is because operations folks (and I speak for myself here as well) have realized that our existing principles, processes, and practices have not kept pace with what’s needed for success.  As the business climate becomes more fast paced, businesses and development teams need more agility; we’ve often been providing less, and we need a fundamental reorientation to be able to provide systems infrastructure in an effective manner.

Now, as we realize some parts of operations need to be automated, that means that either we ops people do some automation development, or developers are writing “operations” code, or both.  That is scary to some but is part of the value of the overall collaborative approach. All the successful teams I’ve run using this approach have both people with deep dev skill sets and deep ops skill sets working together to create a better overall product. And I have yet to see anyone automate themselves out of a job – as lower level concerns become more automated, technically skilled staff start solving the higher value problems up one level.

It’s Not (Just) Tools

It’s also not “about the tools.”  One reason I want a more common definition of DevOps is that various confusing and poorly structured definitions increase the risk that people will pass by the “theory” and implement the processes or tools of DevOps without the principles in mind, which is definitely an antipattern.

Agile practitioners would tell you that just starting to work in iterations without initiating meaningful collaboration is likely not to work out well. Some teams at companies I’ve worked for adopted some of the methods and/or tools of agile but not its principles, and the results were suboptimal. Sure, a tool can be useful in Agile (or DevOps), but if you don’t know how to use it, it’s like handing an assault weapon to an untrained person.

But in the end, fretting about “tools shouldn’t be called DevOps” is misplaced. Is planning poker “agile” in the sense that doing it magically gets you Agile?  No.  But it is a common tool used in various agile methodologies, so calling it an “agile tool” is appropriate. Similarly, just because DevOps is not merely a sum of the tools doesn’t mean that tools specifically designed to run systems in accordance with a DevOps mindset aren’t valuable. (There are certainly a bunch of tools that seem specifically designed to prevent it!)

It’s Not (Just) Culture

Many people insist that DevOps “is just culture” and that you can’t apply the word to a given principle or practice, but I feel this is overblown and incorrect. Agile did not help thousands of dev shops by stopping at “culture” – by admonishing people to hug their coworkers while the lead practitioners who identified the best practices declared it all self-evident and refused to be any more prescriptive. (Though there is some of that.) DevOps consists of items at all the levels I list above, and it is largely useless without the tangible body of practice that has emerged around it.

It’s Not (Just) Devs and Ops

And in the end, it’s not exclusionary.  Some people have complained “What about security people!  And network admins!  Why leave us out!?!”  The point is that all the participants in creating a product or system should collaborate from the beginning – business folks of various stripes, developers of various stripes, and operations folks of various stripes, which includes security, network, and whoever else.  There are a lot of different kinds of business and developer stakeholders as well; just because everyone doesn’t get a specific call-out (“Don’t forget the icon designers!”) doesn’t mean that they aren’t included.  The original agile development guys were mostly thinking about “biz + dev” collaboration, and DevOps is pointing out “dev + ops” collaboration, but the mature result of all this is “everyone collaborating.” In that sense, DevOps is just a major step for one discipline to join in on the overall culture of agile collaboration that should involve all disciplines in an organization.

It’s Not (Just) A Job Title

Simply taking an existing ops team and calling them “The DevOps Team” doesn’t actually help anything by itself.  Nor does changing a job title to “DevOps Engineer.” If you don’t adopt the values and principles above, which require change at an overall system level not simply within a given team, you won’t get all the benefits.

However, I’m not in the camp that rails that you “can’t have DevOps in a job title.” It is often used in a job title as a way to distinguish “new style DevOps-thinking, automation-first, dev-collaborating, CI-running, etc. sysadmin” from “grouchy back room person who aggressively doesn’t care what your company does for a living.” Some people find value in that, others don’t, and that’s fine.

It’s Not Everything

Sometimes, DevOps people get carried away and make grandiose claims that DevOps is about “everything everywhere!” Since DevOps plugs into the overall structure of a lot of lean and agile thinking, and there are opportunities for that kind of collaboration throughout an organization, it’s nice to see all the parallels, but going and reengineering your business processes isn’t really DevOps per se.  It is part of an overall, hopefully collaborative and agile corporate culture, but DevOps is specifically about how operations plugs into that.  Some folks overreach and end up turning DevOps into a super watered down version of Lean, Agile, or just love for everyone. Which is great at the vision level, but as you march down the hierarchy of granularity, you end up mostly dealing with operational integration – other efforts are worrying about the other parts (you can personally too of course).

How Not to Get Overwhelmed as a Web Developer

In the past week, I’ve worked on projects that have required me to write HTML, CSS, JavaScript, and PHP. In working on those projects, I’ve had to employ various technologies, including responsive design, AJAX, WordPress theme development, API integration, and modular JavaScript. Let’s not forget that most (if not all) of these projects involved a preprocessor, build tool, or method of version control. Does that sound a lot like your week?

Truth be told, in today’s world of web design, development, and software engineering, you’re expected to know a variety of languages, tools, technologies, and coding methods. This field is fast paced, frequently changing, and incredibly complex. It’s no surprise that so many of us have felt the growing burdens of Information Overload.

How do you identify Information Overload?

For me, IO is the feeling of being overwhelmed with the large amount of information I need in order to stay useful as a web developer. Other times, it manifests itself as a feeling of panic when a new tool, language, or project is announced. IO can go on to cause fear when you feel that you’re failing to keep up with the industry, or even make you upset when a new tool leads you to consider changing your workflow. IO can lead to avoiding new technologies, not fully enjoying your career, and feeling inferior to those who have more experience than you in a certain area.

IO causes real problems

If you have struggled or are currently struggling with IO, then you probably understand the side effects it can cause. If you tend to overwork (as I sometimes do), IO can lead to more hours spent studying code, reading articles, and making demos. On its own, this isn’t necessarily a bad thing, but too much time spent working, combined with too little time spent eating or sleeping, can lead to burnout. If IO is leading you to resent your job, depression and anxiety can also be common side effects, perpetuating the general feeling of being overwhelmed with your work.

Solutions

Stacks

Although keeping up to date is an expected requirement in the field of web development, IO doesn’t have to be a consequence. For me, the most helpful solution to the problem of IO has been to limit the number of languages I aim to be proficient in. I call it a ‘stack’, and it currently consists of HTML, CSS, JavaScript, and PHP. Outside of my stack, I’m able to use other languages if a project requires it, but I won’t be looking to gain an expert knowledge of them.

After establishing what my stack languages are, I suddenly don’t need to pay attention to every popular tool that comes my way. If it doesn’t involve one of my stack languages, I don’t need to use it! It’s important to note here that, even if it does involve one of my stack languages, I still don’t need to use the tool. Tools are not mandatory, and should only be used if they help you be more productive, or become so popular that the industry expects you to be using them. For example, I work with PHP quite often, but I’ve never used Laravel, because I simply haven’t needed it yet.

Filters

Podcasts, video blogs and articles are a great source of information, but again, trying to read, watch, and listen to every single one will definitely leave you feeling pretty overwhelmed. My solution to this has been to set up a rather extensive feed, to which I add every educational resource that I find. The catch of course being that I only allow myself to spend a half hour a day looking through it, ensuring that I don’t try to read all 500+ unread items on my list at once. Worried about missing something? If it’s really important or groundbreaking, more than one source will cover it, and you’re bound to see it at some time. Frontend Feeds is a great place to get started. Don’t forget to take notes on what you learn. Putting pen to paper can help you retain more information, while also serving as a great way to quickly look up information when you need to remember something you previously learned.

After reading up on new and relevant information, I’ll likely come across a topic that requires further exploration, which is why I always set aside an hour or two each day to make a few demos, get better at using my stack languages, and talk to other developers in the community. Side projects are another great way to keep up to date, because they provide a space to experiment with new tools and techniques.

Breaks

In a typical five day work week, I make a conscious effort to set aside one day where I don’t spend my evenings working on a demo, side project, or reading articles. This isn’t always easy to do, but its importance cannot be overstated, especially when struggling with IO. Eliminating the impulse to work during the weekend is another important factor. Deadlines and difficult projects will always require extra attention, but those challenges will be much easier to solve when you’re well rested, and not feeling overwhelmed.

I work around 60 hours per week, while maintaining side projects and doing what I can to keep up to date with the industry. I’ve felt IO before, but thanks to organization, intentional rest, and great time management, I’ve been able to relax and enjoy what I do once again.

Conclusion

Being a web developer means long hours and hard work in a fast paced environment, and battling IO and the urge to overwork is a challenge that takes serious effort. If you’re currently struggling with IO, hopefully the system I’ve outlined above can help you get organized while moving you closer towards a stress-free balance between work and home life. Also, be sure to check out Burnout.io, which offers resources and advice to those who are feeling overwhelmed.

How to Cope with Changing Requirements on an Agile Team

Agile software development teams embrace change, accepting the idea that requirements will evolve throughout a project. Agilists understand that because requirements evolve over time, any early investment in detailed documentation will only be wasted. Instead, agilists do just enough initial requirements envisioning to identify the project scope and develop a high-level schedule and estimate; that’s all you really need early in a project, so that’s all you should do. During development they will model storm in a just-in-time manner to explore each requirement in the necessary detail.

1. The Agile Change Management Process

Because requirements change frequently you need a streamlined, flexible approach to requirements change management. Agilists want to develop software which is both high-quality and high-value, and the easiest way to develop high-value software is to implement the highest priority requirements first. This enables them to maximize stakeholder ROI. In short, agilists strive to truly manage change, not to prevent it.

Figure 1 gives an overview of the disciplined agile approach to managing the work items potentially needing to be accomplished by the team (you may not actually have sufficient time or resources to accomplish all of them). This approach reflects Disciplined Agile Delivery (DAD)’s approach to work management, which is an extension of the Scrum methodology’s approach to requirements management (read about other agile requirements prioritization strategies). Where Scrum treats requirements as a prioritized stack called a product backlog, DAD takes it one step further to recognize that not only do you implement requirements as part of your daily job, you also do non-requirement-related work: taking training, going on vacation, reviewing products of other teams, addressing defects (I believe that defects are simply another type of requirement), and so on. With this approach your software development team has a stack of prioritized and estimated work items, including requirements, which need to be addressed – Extreme Programmers (XPers) will literally have a stack of user stories written on index cards, whereas a DAD team might use a defect tracker such as ClearQuest to manage the stack. Stakeholders are responsible for prioritizing the requirements, whereas developers are responsible for estimating them. The priorities of non-requirement work items are either negotiated by the team with stakeholders or addressed as part of slack time within the schedule.

Figure 1. Disciplined agile requirements change management process.

The “lifecycle” of a typical development iteration:

  1. Start. At the start of an iteration the team takes the highest priority requirements from the top of the stack which they believe they can implement within that iteration. If you have not been modeling ahead (more on this below), you will need to discuss each of the requirements you pulled off the stack so that the team can plan how it will proceed during the iteration. In short, you will do some modeling at the beginning of each iteration as part of your overall iteration planning effort, often using inclusive modeling tools such as paper or whiteboards.
  2. Middle. The team then develops working software which meets the intent of the requirements, working closely with stakeholders throughout the iteration to ensure that they build software which meets their actual needs. This will likely include some model storming to explore the requirements in greater detail.
  3. End. The team will optionally demo the working software to a wider audience to show that they actually did what they promised to do. Although a demo is optional I highly recommend doing it: because working software is the primary measure of progress on a software development project, you want to communicate your team’s current status by regularly demoing your work.
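The start-of-iteration pull from the stack can be sketched in a few lines of code. This is a minimal illustration, not part of Scrum or DAD; the work items, priorities, and point estimates below are all made up.

```python
# Sketch of pulling the highest-priority work items that fit within the
# team's capacity for one iteration. Priority 1 is the top of the stack.

def pull_iteration(stack, capacity):
    """Take highest-priority items whose estimates fit within capacity."""
    stack = sorted(stack, key=lambda item: item["priority"])
    taken, remaining, used = [], [], 0
    for item in stack:
        if used + item["estimate"] <= capacity:
            taken.append(item)
            used += item["estimate"]
        else:
            remaining.append(item)  # stays on the stack for a later iteration
    return taken, remaining

backlog = [
    {"name": "fix login defect", "priority": 1, "estimate": 3},
    {"name": "export to CSV",    "priority": 2, "estimate": 5},
    {"name": "ops training",     "priority": 3, "estimate": 2},
    {"name": "redesign reports", "priority": 4, "estimate": 8},
]

taken, remaining = pull_iteration(backlog, capacity=10)
print([i["name"] for i in taken])
# → ['fix login defect', 'export to CSV', 'ops training']
```

Note how the non-requirement item (“ops training”) sits on the same stack as requirements and defects, exactly as described above, and how the oversized item simply waits for a later iteration.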

1.1 Should You Freeze The Requirements During an Iteration?

Scrum suggests that you freeze the requirements for the current iteration to provide a level of stability for the developers. If you do this, then any change to a requirement you’re currently implementing should be treated as just another new requirement. XP and DAD support changing requirements during the iteration if you wish to work that way, although doing so may sometimes force you to move some requirements to the next iteration to make room for new requirements introduced during the current one. Both approaches are perfectly fine; you just need to choose the one that makes the most sense for your situation.

1.2 How Much “Modeling Ahead” Should You Do?

Figure 1 indicates that the items towards the top of the stack are described in greater detail than those towards the bottom. There are a few important things to understand:

  1. It’s a significant risk to do detailed modeling up front. The article “Examining the Big Requirements Up Front (BRUF) Approach” addresses this problem in detail.
  2. The requirements in the current iteration must be understood in detail. You can’t implement them properly if you don’t understand them. This doesn’t imply, however, that you need mounds of comprehensive documentation. You can model storm the details on a just in time (JIT) basis.
  3. You may decide to model a bit ahead. For complex requirements which are approaching the top of the stack, you may choose to model them a few days or weeks in advance of implementing them so as to increase the speed of development. Note that any detailed modeling in advance of actually needing the information should be viewed as a risk, because the priorities could change and you may never need that information.
  4. You just need enough detail to estimate the later requirements. It’s reasonable to associate an order-of-magnitude estimate with requirements further down on the stack, so you’ll need just enough information about the requirement to do exactly that.

2. Why Requirements Change

People change their minds for many reasons, and do so on a regular basis. This happens because:

  1. They missed a requirement. A stakeholder will be working with an existing system and realize that it’s missing a feature.
  2. They identified a defect. A bug, or more importantly the need to address the bug, should also be considered a requirement.
  3. They realize they didn’t understand their actual need. It’s common to show a stakeholder your working system to date only to have them realize that what they asked for really isn’t what they want after all. This is one reason why active stakeholder participation and short iterations are important to your success.
  4. Politics. The political landscape within your organization is likely dynamic (yes, I’m being polite). When the balance of political power shifts amongst your stakeholders, and it always does, so do their priorities. These changing political priorities will often motivate changes to requirements.
  5. The marketplace changes. Perhaps a competitor will release a new product which implements features that your product doesn’t.
  6. Legislation changes. Perhaps new legislation requires new features, or changes to existing features, in your software.

The bottom line is that if you try to “freeze” the requirements early in the lifecycle, you pretty much guarantee that you won’t build what people actually need; instead, you’ll build what they initially thought they wanted. That’s not a great strategy for success.

3. Prioritizing Requirements

New requirements, including defects identified as part of your user testing activities, are prioritized by your project stakeholders and added to the stack in the appropriate place. Your project stakeholders have the right to define new requirements, change their minds about existing requirements, and even reprioritize requirements as they see fit. However, stakeholders must also be responsible for making decisions and providing information in a timely manner.

Fundamentally, a single person needs to be the final authority when it comes to requirement prioritization. In Scrum this person is called the product owner. Although there are often many project stakeholders – end users, managers, architects, operations staff, and so on – the product owner is responsible for representing them all. On some projects a business analyst may take on this responsibility. Whoever is in this role will need to work with the other stakeholders to ensure everyone is represented fairly, often a difficult task.
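The prioritization mechanics above can be sketched in a few lines: stakeholders propose, and the product owner places each item into the stack at the right spot or reorders it. A hypothetical sketch; the priority numbers and story names are illustrative:

```python
import bisect

# Each requirement is a (priority, name) pair; a lower number means nearer the top.
stack = [(1, "fix login defect"), (4, "CSV export"), (9, "dark mode")]

def add_requirement(stack, priority, name):
    """The product owner places a new item into the stack at the appropriate place."""
    bisect.insort(stack, (priority, name))

def reprioritize(stack, name, new_priority):
    """Stakeholders may change their minds; the product owner reorders the stack."""
    old = next(item for item in stack if item[1] == name)
    stack.remove(old)
    bisect.insort(stack, (new_priority, name))

add_requirement(stack, 2, "audit trail")   # a newly identified requirement
reprioritize(stack, "dark mode", 0)        # priorities shift; so does the stack
print([name for _, name in stack])
# → ['dark mode', 'fix login defect', 'audit trail', 'CSV export']
```

A single ordered structure with one writer is the whole idea: many voices feed in, one authority decides the order.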

4. Estimating Requirements

Developers are responsible for estimating the effort required to implement the requirements which they will work on. Although you may fear that developers don’t have the requisite estimating skills, and this is often true at first, the fact is that it doesn’t take long for people to get pretty good at estimating when they know that they’re going to have to live up to those estimates.

Smaller requirements are easier to estimate. “Shall” statements, such as “the system shall convert feet to meters”, are an example of very small requirements. User stories are a little larger but still relatively easy to estimate. Use cases, a staple of the Rational Unified Process (RUP) and the Agile Unified Process (AUP), can become too large to estimate effectively, although you can reorganize them into smaller and more manageable artifacts if you’re flexible. A good rule of thumb is that a requirement must be implementable within a single iteration. Scrum teams usually have month-long iterations, whereas XP teams often choose one or two weeks as an iteration length. Short iterations shorten the feedback cycle, making it easier to stay on track. Successful teams will deploy a working copy of their system at the end of each iteration into a demo environment where their stakeholders have access to it. This provides another opportunity for feedback, often generating new or improved requirements, and shows stakeholders that the team is making progress and that their money is being invested wisely.
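The rule of thumb above is easy to make concrete: compare each estimate against the iteration length and flag anything that doesn’t fit. A minimal sketch, with hypothetical requirements and estimates:

```python
def fits_in_iteration(estimate_days, iteration_length_days):
    """Rule of thumb: a requirement must be implementable within a single iteration."""
    return estimate_days <= iteration_length_days

# "Shall" statements are tiny; use cases may need to be split up.
requirements = {
    "the system shall convert feet to meters": 0.5,
    "user story: search orders by date": 3,
    "use case: full order fulfilment": 25,
}
iteration = 10  # a two-week XP-style iteration, in working days

too_big = [r for r, est in requirements.items()
           if not fits_in_iteration(est, iteration)]
print(too_big)   # the use case must be reorganized into smaller artifacts
```

Anything in `too_big` is a candidate for splitting into smaller, separately estimable requirements before it reaches the top of the stack.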

5. Why This is Desirable

This approach is desirable to IT professionals because it enables us to always be working on the highest-value functionality, as defined by our stakeholders, at all points in time. This is not only a good business decision, it is also very satisfying for developers because they know that their work is actually having a positive impact on the organization.

There are several reasons why this is incredibly attractive for stakeholders:

  1. They get concrete feedback on a regular basis. By developing working software on a regular basis stakeholders can actually see what they’re getting for their IT investment.
  2. They have control over the scope. The stakeholders can add new requirements, change priorities, or rework existing requirements whenever they want. To do so, they merely modify what is currently in the stack. If the team hasn’t gotten to the requirement yet, then it really doesn’t matter that the requirement has changed.
  3. They have control over the schedule. The stakeholders can fund the project for as long as they need to. The development team is always working on the highest-priority requirements currently identified, and it produces working software each iteration. The implication is that at various points in the project the stakeholders should be able to say “OK, this is good enough for now, let’s deploy this into production”, giving them control over the schedule. Yes, they will still need to go through a release iteration to actually get the system into production.
  4. They have control over the budget. At the beginning of each iteration the stakeholders can decide to fund the team for as much, or as little, as they see fit. If the team has been doing a good job then the stakeholders are likely to continue the same level of funding. If the team is doing a great job then they may decide to increase the funding, and similarly, if the team is doing a poor job then they should decrease or even cut funding. The implication is that not only do stakeholders have control over the budget, they can also treat their IT investment as a true investment portfolio and put their money into the project teams which provide the greatest ROI.

In short, with this sort of approach stakeholders are now in a position where they can govern their IT portfolio effectively.

6. Potential Challenges With This Approach

Traditionalists often struggle with the following issues:

  1. It isn’t clear how much the system will cost up front. As the requirements change the cost must also change. So what? The stakeholders have control over the budget, scope, and schedule and get concrete feedback on a regular basis. In this situation stakeholders don’t need an estimate up front because of the increased level of control which they have. Would you rather have a detailed, and very likely wrong, estimate up front or would you rather be in control and spend your money wisely? The approach I’m describing enables the latter, and according to the 2007 Project Success survey, that’s what the vast majority of people desire. Furthermore, with a bit of initial requirements envisioning you can easily gather sufficient information about the project scope to give a reasonable, ranged estimate early in the project.
  2. Stakeholders must be responsible for both making decisions and providing information in a timely manner. Without effective stakeholder involvement any software development effort is at risk, but agile teams are particularly at risk because they rely heavily on active stakeholder participation. As discussed above, a single person – the product owner in Scrum, or on some projects a business analyst – needs to be the final authority for requirement prioritization, representing all of the stakeholders fairly.
  3. Your stakeholders might prioritize the requirements in such a way as to push an architecturally significant (read devastating) requirement several months out. For example, the need to support several technical platforms or several cultures will often cause significant havoc for project teams which are unprepared for these changes. My experience is that the order of requirements really doesn’t matter as long as you do two things: First, keep your design modular and of the highest quality possible via code refactoring and database refactoring. Second, just as you do some initial requirements modeling up front, you should also do some initial architectural modeling up front. This modeling effort should still be agile; it’s surprising how quickly you can sketch a few whiteboard diagrams that capture a viable architectural strategy for your team.
  4. You still need to do some initial requirements modeling. The requirements stack doesn’t just appear out of nowhere; you’re still going to have to do some high-level initial requirements modeling up front. This is a lot less than what traditionalists will tell you you need to do, but it’s a bit more than what some of the extremists might like to claim. You need to do just barely enough for your situation.

Kanban vs Scrum: Kanban isn’t for Software Development, but Scrum is!

There are a number of software teams and organizations that think they should choose between Kanban and Scrum as their software development process.  This is a GIANT and RISKY mistake, in my professional opinion.

It’s not an either/or proposition.  Scrum is about software development.  Kanban is about change management.

There are several reasons why choosing Kanban as your team’s software development process is a mistake.

1.  You are applying Kanban to the incorrect context.

Would you use a hammer to insert a screw in a wall?  You can, but you’ll probably damage your wall in the process, and the same is true of Kanban as a software development approach.  David Anderson, the creator of The Kanban Method, has apparently said this over and over again since 2005, but no one seems to listen.

Don’t take my word for it, listen to David:

“Kanban is NOT a software development life cycle or project management methodology! It is not a way of making software or running projects that make software!” — David Anderson

“There is no kanban process for software development. At least I am not aware of one. I have never published one.”  — David Anderson

“It is actually not possible to develop with only Kanban.  The Kanban Method by itself does not contain practices sufficient to do product development.” — David Anderson*

(*The first two came from the “over and over..” link above.  The last quote was sent to me via email from someone at David’s company.  I think they just pasted in something David had already written)

I should also mention that others have mentioned to me that David talks out of both sides of his mouth about Kanban, Agile, and software development, perhaps trying to capitalize on the fame and success of Agile software development.  That may be true, but it may also be true that David has been saying all of these things for years and no one is paying attention to what he says, which is unfortunate.

2.  Kanban is modeled more after the assembly line and manufacturing.  Scrum is modeled more after creative product design.

Which do you think more closely resembles software development?  Laverne and Shirley on the assembly line at the Shotz Brewery? Or the group of NASA engineers on the ground who saved the lives of the Apollo 13 astronauts by coming up with a creative solution to a problem within a time-box?  If you think software rolls off an assembly line, then I think it is unfortunate that you’ve never worked in a creative software development environment — it’s AWESOME!

Maybe my Laverne and Shirley reference is oversimplified.  The reason to use Scrum instead of Kanban for software development delves down into process control theory, and the difference between a “defined process” and an “empirical process.”  In short, a defined process works better when the inputs and outputs to the process are well known and repeatable (like a manufacturing line).  An empirical process works better when the inputs and outputs to the process are less known and very difficult to repeat.  No two software features are alike.  This is why it’s darned near impossible to measure software productivity directly, even though some naive “bean counters” still try to.  Like the stock market, no one metric will predict it accurately, but a range of indicators can help predict it more accurately.  So, in summary, Scrum is based on empirical processes like product design.

One of the very key parts of an empirical process is the characteristic of inspecting and adapting the product.  Think of yourself making a pot of soup from scratch, without a recipe.  Think about all of the “taste, tweak ingredients, taste” experiments (feedback loops) you would need to get a pot of soup that tastes good.

Scrum has frequent feedback loops built in, for a variety of audiences (Dev Team, Product Owner, Stakeholders, Users) and a variety of topics (process: Sprint Retro; product: Ordered Product Backlog, Sprint Review, Valuable/Releasable Increments).  Kanban has no such built-in loops, but again, that’s because it wasn’t designed for software development!
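The soup analogy is really a loop, and the loop is worth writing down. A toy sketch of empirical process control, where the target, taste function, and tweak step are all illustrative assumptions:

```python
def inspect_and_adapt(product, taste, tweak, good_enough, max_loops=20):
    """Empirical process control: repeat taste -> tweak until the result is acceptable."""
    for _ in range(max_loops):
        feedback = taste(product)
        if good_enough(feedback):
            return product
        product = tweak(product, feedback)
    return product

# The soup example: a target saltiness of 5, adjusted one unit per feedback loop.
soup = {"salt": 1}
result = inspect_and_adapt(
    soup,
    taste=lambda s: 5 - s["salt"],                     # how far off is it?
    tweak=lambda s, fb: {"salt": s["salt"] + (1 if fb > 0 else -1)},
    good_enough=lambda fb: fb == 0,
)
print(result)   # → {'salt': 5}
```

Each pass through the loop is a Sprint in miniature: inspect the increment, adapt, and go again.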

3.  From a Complexity Science view, Kanban is for ‘complicated’ work while Scrum is for “complex” work.

I know the Kanban folks don’t like hearing this, but I think Ken Schwaber was right when he said this, and I think history will prove him right about Kanban as it was described in David Anderson’s book.  In short, the Cynefin model defines 5 domains, of which 2 of them are “complicated” and “complex” work.

‘Complicated’ work is best solved via ‘good practice’ and ‘experts’ who can find ‘cause and effect’ fairly easily. When I think of ‘complicated’ work, I think of the IT support person who sets up your computer or troubleshoots it.  Yes, you need an expert to solve these problems, and the vast majority of the time the steps to solve these kinds of problems are fairly consistent and repeatable.  They are not exactly repeatable, just mostly repeatable.  If the steps were exactly repeatable then they would fall into the ‘Simple’ domain of Cynefin.

‘Complex’ work is best solved via ‘safe to fail experiments’ and one can only ascertain cause and effect after the fact.  Each Sprint in Scrum is a ‘safe to fail’ experiment because, while the Sprint increment is always releasable, the business side of the house makes the decision on whether it is safe/valuable to release it or not.  In the case of an increment that is un-safe, the team course corrects and comes back with an increment the next sprint that is hopefully safe or more-safe.  These safe to fail experiments can be repeated over and over again until it’s time to release the increment.

Applying Kanban Correctly

Having said all of the above, there IS a time and place for Kanban — a correct context, if you will.  If you’ve been reading closely, that context is as a change management process, which is ‘complicated’ work, and it requires that there already be existing processes in place.  So, if your software team is doing XP, Scrum, Crystal, Waterfall, RUP, DSDM, FDD, etc., then you can layer Kanban on top of it to help find bottlenecks and waste.  Also, for all of those teams out there that don’t use a software development process (framework, approach, etc.) that is named in the industry: you’re probably doing cowboy coding, ad-hoc development, or command-and-control project management — none of which is a software development process either.  So, layering Kanban on top of crap will still yield crap.

For those who want to apply Kanban at the enterprise level to monitor the flow of work through their Scrum teams (or XP, Crystal, etc.), or who want to use it for IT support or DevOps, I say have at it and I hope it helps you.  I imagine just visualizing your workflow alone will help in those contexts.  I myself have recommended and coached Kanban for a couple of teams — but only because those teams exhibited the right context for Kanban to be successful.
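Layering Kanban over an existing process mostly means two things: visualize the flow, and limit work in progress so bottlenecks become visible. A toy sketch; the columns, WIP limits, and story names are all illustrative assumptions:

```python
# A minimal Kanban board layered over an existing process.
board = {
    "To Do":       {"wip_limit": None, "items": ["story-7", "story-8", "story-9"]},
    "In Progress": {"wip_limit": 3,    "items": ["story-4", "story-5", "story-6"]},
    "Review":      {"wip_limit": 2,    "items": ["story-2", "story-3"]},
    "Done":        {"wip_limit": None, "items": ["story-1"]},
}

def pull(board, item, src, dst):
    """Pull an item into the next column only if that column's WIP limit allows it."""
    col = board[dst]
    if col["wip_limit"] is not None and len(col["items"]) >= col["wip_limit"]:
        raise RuntimeError(f"{dst} is at its WIP limit: a bottleneck to investigate")
    board[src]["items"].remove(item)
    col["items"].append(item)

pull(board, "story-2", "Review", "Done")            # fine: Done has no limit
# pull(board, "story-7", "To Do", "In Progress")    # would raise: In Progress is full
```

Note what the sketch does not contain: no feedback loops, no notion of value or acceptance. It only manages flow, which is exactly the point of the argument above.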

Having said all of this, just visualizing your workflow and the other Kanban principles is not enough for software development.  Software development has things like business value, technical complexity, and user experience/acceptance/adoption — all of which are not addressed directly by Kanban.  Scrum does address these areas, as I have shown above.  But hey, let’s not forget, the Kanban Method is “not a way of making software or running projects that make software.”  Would you criticize a hammer for not doing a good job of being able to insert a screw into a wall?

Design Pattern: Adapter vs Facade

During my recent round of interviews for my next consulting gig I was exposed to a barrage of “off the shelf” technical interview questions. Fortunately for you I am not going to bore you with them in this post (though I may choose to regale you with these tales at a later time). A fair number of the questions I was asked glanced off of and dodged around the topic of design patterns. In one such interview the Adapter Pattern came to the forefront of the conversation. I say “conversation” because I prefer to conduct and participate in interviews in a conversational style rather than the oft-used interrogation style: a series of rapid-fire questions asked as the interviewer checks off the answers on his answer sheet. As I completed my eloquent explanation of the pattern one interviewer asked me, “What is the difference between the Adapter and the Facade pattern? They seem like they are doing the same thing to me.”

This question really caught me off guard because until that moment I had not given it much thought. In my mind they are distinct patterns that solve different problems. It is never good in an interview to appear that you have been caught off guard so I mustered a meek answer. After further thought and consideration I have developed a much stronger response to the interviewer’s question.

In this post I will clearly demonstrate the differences between these two patterns by crystallizing their intent and demonstrating when it would be appropriate to use each pattern.

Scenario 1: The Adapter Pattern 
Assume that we are faced with the following situation. We have an existing class in our system called “A”. We have decided that we need to change the “B” class to use the “A” class rather than the “C” class it is currently using. However, “B” is dependent on the “C” interface which is different from the one exposed by “A”. How do we make this work?

The Adapter Pattern is intended to help us solve just this problem by creating a class that “adapts” the exposed interface of a class to an interface that is expected by the new client. To use this pattern to solve the problem described earlier we create a third class called “A2C” that has the “C” interface that the “B” class is expecting. Inside our “A2C” class is the necessary logic to wire the “B” class to the “A” class without either of the existing classes having to change in any material way. Now we just change “B” to depend on “A2C” rather than depending on “C”. If we were smart enough to inject this dependency into “B” then “B” won’t have to change at all. This keeps us from having to change “A” and in doing so makes it impossible for us to break “A” and any of its other clients.
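The A/B/C scenario above can be shown in code. A short Python sketch; the class names mirror the text, while the method names (`fetch_record`, `get`, `show`) are illustrative assumptions:

```python
class A:                       # the existing class we want to reuse
    def fetch_record(self, key):
        return f"record-{key}"

class C:                       # the interface class "B" currently depends on
    def get(self, key):
        raise NotImplementedError

class A2C(C):                  # adapts A's interface to the C interface B expects
    def __init__(self, a: A):
        self._a = a
    def get(self, key):
        return self._a.fetch_record(key)   # translate the call

class B:                       # the client; its dependency is injected
    def __init__(self, c: C):
        self._c = c
    def show(self, key):
        return self._c.get(key)

b = B(A2C(A()))                # B is unchanged, and A is unchanged
print(b.show(42))              # → record-42
```

Because the dependency is injected, neither `A` nor `B` had to change; only the thin `A2C` translation layer is new.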


Scenario 2: The Facade Pattern 
For scenario 2, assume that we are dealing with the following circumstances. We need to integrate our new application with an existing legacy subsystem. This subsystem has a relatively complex API that consists of a number of classes, each with a number of public methods. To integrate our new application with this subsystem we need to instantiate only some of the classes and then use these objects via a few of the method calls on each object. How should we integrate with this subsystem?

Keep in mind that we are integrating with a subsystem here, so it’s possible that one day this entire subsystem could be replaced. If we integrate directly with each class and method, then we have created a tight coupling between our new application and the subsystem, and if the underlying subsystem changed we would have to go into our new application and do some “shotgun surgery” to fix all of our dependencies. A better solution is to create an abstraction layer that insulates us from future changes in the subsystem and simplifies the API, all at the same time. The Facade Pattern has the recipe to help us create this abstraction layer. I can always remember the Facade Pattern because I think of a facade in the construction/building/architecture sense. In those terms, a facade is a thin decorative front that is used to hide something ugly underneath it. The Facade design pattern has a similar definition, in that it is used to make a complex API simpler and (hopefully) prettier to the client.

So we have decided to implement the Facade pattern. We do this by creating our Facade class. This class exposes a minimal set of public methods that our new system will use in lieu of communicating directly with the subsystem. The Facade class will handle the object creation for any classes in the subsystem that are needed. Now if the underlying subsystem ever changes or is replaced the impact to our new system will be limited to the Facade class that we have created.
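The same scenario as a sketch. The legacy classes and their method names here are entirely hypothetical, standing in for whatever subsystem you actually face:

```python
# Hypothetical legacy subsystem with a wider API than the new app needs.
class LegacyAuth:
    def open_session(self, user):
        return f"session:{user}"

class LegacyBilling:
    def compute_invoice(self, session, amount):
        return f"{session}/invoice:{amount}"

class LegacyAudit:
    def log(self, event):
        pass   # one of many methods the new application never calls directly

class BillingFacade:
    """The one class the new application talks to. It handles object creation
    for the subsystem and exposes a minimal set of public methods."""
    def __init__(self):
        self._auth = LegacyAuth()
        self._billing = LegacyBilling()
        self._audit = LegacyAudit()

    def invoice(self, user, amount):
        session = self._auth.open_session(user)
        self._audit.log(f"invoice for {user}")
        return self._billing.compute_invoice(session, amount)

print(BillingFacade().invoice("alice", 100))   # → session:alice/invoice:100
```

If the subsystem is ever replaced, only `BillingFacade` has to change; the rest of the new application never knew the legacy classes existed.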

What We Have Learned 
The Adapter Pattern and the Facade Pattern solve different problems. As we have seen, each pattern has a different intent and a different implementation. The intent of the Adapter Pattern is to adapt one class’s interface into the interface expected by an existing client class or classes. The intent of the Facade Pattern is to simplify the API of a subsystem.

So to answer that interviewer’s question again, “No! These patterns are not doing the same thing. They are intended to do very different things and solve very different problems.”

Methodologies: Agile vs Scrum

Due to increasingly rapid rates of change in the corporate world, whether they occur within customer demands, project requirements, support issues or tasks, many companies are finding that their traditional business processes do not allow them to move fast enough and keep up with changes.

An increasing number of project management, product management and software development teams are transitioning from traditional Waterfall methodologies to Agile ones. Those who are new to Agile are often unaware of the fact that there are different types of Agile methodologies. One of the most popular Agile processes is the Scrum methodology. We hope that this post clarifies the idea behind both Scrum and Agile.

An overview of the Agile methodology

The Agile methodology was introduced in 2001, when 17 people got together at Snowbird Ski Resort in Utah and formalized the “Agile Manifesto.” The Agile Manifesto outlines 12 important principles, which include communication, collaboration, the importance of working software, and open-mindedness to change.

I previously wrote a post entitled Waterfall vs. Agile, in which I explain what differentiates Agile from Waterfall. The Agile methodology was basically put together as a solution to circumvent the pitfalls associated with Waterfall. Being a more flexible management framework, Agile allows teams to bypass traditional sequential paradigms and get more work done in a shorter time period.

What new Agile teams often don’t realize is that there are different types of Agile methodologies, the most popular one being Scrum.

The Scrum Methodology

Most teams that transition to Agile choose to start with Scrum because it is simple and allows for a lot of flexibility.

As explained on Scrummethodology.com, “Scrum is unique because it introduced the idea of ‘empirical process control.’ That is, Scrum uses the real-world progress of a project — not a best guess or uninformed forecast — to plan and schedule releases.”

What differentiates Scrum from other methodologies?

– Scrum has three roles: the product owner, the development team, and the Scrum Master.

– Projects are divided into sprints, which typically last one, two or three weeks.

– At the end of each sprint, all stakeholders meet to assess the progress and plan its next steps.

– The advantage of Scrum is that a project’s direction can be adjusted based on completed work, not on speculation or predictions.

The Scrum process includes the following steps:

Backlog refinement

This process allows all team members to share thoughts and concerns, and properly understand the workflow.

Sprint planning

Every iteration starts with a sprint planning meeting. The product owner holds a conversation with the team and decides which stories are highest in priority, and which ones they will tackle first. Stories are added to the sprint backlog, and the team then breaks down the stories and turns them into tasks.

Daily Scrum

The daily scrum is also known as the daily standup meeting. It serves to tighten communication and ensure that the entire team is on the same page. Each member goes through what they have done since the last standup, what they plan to work on before the next one, and any obstacles in their way.

Sprint review meeting

At the end of a sprint, the team presents their work to the product owner. The product owner goes through the sprint backlog and either accepts or rejects the work. All uncompleted stories are rejected by the product owner.

Sprint retrospective meeting

Finally, after a sprint, the Scrum Master meets with the team for a retrospective meeting. They go over what went well, what did not, and what can be improved in the next sprint. The product owner is also present and will listen to the team lay out the good and bad aspects of the sprint. This process allows the entire team to focus on its overall performance and identify strategies for improvement. It is crucial because the Scrum Master can observe common impediments and work to resolve them.
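The sprint review step described above has a simple shape: the product owner accepts completed stories, rejects uncompleted ones, and the rejected work returns to the backlog. A toy sketch with hypothetical story names:

```python
product_backlog = ["story-D", "story-E"]
sprint_backlog = {"story-A": "done", "story-B": "done", "story-C": "in progress"}

def sprint_review(sprint_backlog, product_backlog):
    """The product owner accepts 'done' stories and rejects the rest."""
    accepted, rejected = [], []
    for story, state in sprint_backlog.items():
        (accepted if state == "done" else rejected).append(story)
    product_backlog[:0] = rejected   # rejected work goes back on top of the backlog
    return accepted, rejected

accepted, rejected = sprint_review(sprint_backlog, product_backlog)
print(accepted)          # → ['story-A', 'story-B']
print(product_backlog)   # → ['story-C', 'story-D', 'story-E']
```

Nothing uncompleted sneaks into the increment; it simply competes for priority again in the next sprint planning meeting.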

HowTo Install Redmine on Ubuntu step by step

Prerequisite: check your Ubuntu version against the intended Redmine version

Before using or studying this guide you should check which Redmine version you are going for. Be aware that only the latest stable releases will be fully compatible with current releases of plugins.
To check Redmine versions against your Ubuntu version, have a look at http://www.ubuntuupdates.org/pm/redmine

Generally you could also consider installing only Ruby from the Ubuntu repositories and then heading for a release of Redmine from Redmine’s download page: http://www.redmine.org/projects/redmine/wiki/Download (this is the more common way of installing Redmine on Ubuntu). In that case this guide is not suited for you and you should look for an alternate guide; Google provides lots of resources for this alternate installation procedure.

If you are sure that you want to install from the Ubuntu repositories, keep on reading:

Introduction

This tutorial walks you step-by-step through installing Redmine on a clean/fresh Ubuntu 12.04 installation. This is intended to be a complete cookbook method for getting Redmine installed and running. It makes no assumptions about other things being installed or configured. Since I have had some issues when using the graphical package managers, we will be doing this from the command line prompt to keep things as clear and clean as possible.

I recommend that you install any Ubuntu updates prior to beginning this process. There are almost always some waiting to be applied after Ubuntu is first set up.

Prerequisites: Apache, mod-passenger, and MySQL

There are several support packages that we will install first. The apache installation is pretty simple if you just follow the prompts and accept the defaults.

$ sudo apt-get install apache2 libapache2-mod-passenger

Installing mysql takes a little more work, so the details are spelled out.

$ sudo apt-get install mysql-server mysql-client

The installation process for mysql will prompt you for a password for “root” access to the database server, then ask you to confirm the password in a follow-up screen. This sets the database administration password.

Package configuration                                                           

  ┌────────────────────┤ Configuring mysql-server-5.5 ├─────────────────────┐   
  │ While not mandatory, it is highly recommended that you set a password   │   
  │ for the MySQL administrative "root" user.                               │   
  │                                                                         │   
  │ If this field is left blank, the password will not be changed.          │   
  │                                                                         │   
  │ New password for the MySQL "root" user:                                 │   
  │                                                                         │   
  │ _______________________________________________________________________ │   
  │                                                                         │   
  │                                 <Ok>                                    │   
  │                                                                         │   
  └─────────────────────────────────────────────────────────────────────────┘

  ┌────┤ Configuring mysql-server-5.5 ├──────────┐
  │                                              │
  │ Repeat password for the MySQL "root" user.   │
  │                                              │
  │                                              │
  │ ____________________________________________ │
  │                                              │
  │                   <Ok>                       │
  │                                              │
  └──────────────────────────────────────────────┘

Installing and configuring the Ubuntu Redmine package

Now it is time to install redmine itself.

$ sudo apt-get install redmine redmine-mysql

You want to allow dbconfig-common to configure the database when prompted, so select Yes in the panel below.

Package configuration                                                           

 ┌──────────────────────────┤ Configuring redmine ├──────────────────────────┐  
 │                                                                           │  
 │ The redmine/instances/default package must have a database installed and  │  
 │ configured before it can be used.  This can be optionally handled with    │  
 │ dbconfig-common.                                                          │  
 │                                                                           │  
 │ If you are an advanced database administrator and know that you want to   │  
 │ perform this configuration manually, or if your database has already      │  
 │ been installed and configured, you should refuse this option.  Details    │  
 │ on what needs to be done should most likely be provided in                │  
 │ /usr/share/doc/redmine/instances/default.                                 │  
 │                                                                           │  
 │ Otherwise, you should probably choose this option.                        │  
 │                                                                           │  
 │ Configure database for redmine/instances/default with dbconfig-common?    │  
 │                                                                           │  
 │                    <Yes>                       <No>                       │  
 │                                                                           │  
 └───────────────────────────────────────────────────────────────────────────┘

Then you want to provide the “root” password for the database, so that the installer can create the redmine database. This is the password set when you installed mysql.

Package configuration                                                           

 ┌──────────────────────────┤ Configuring redmine ├──────────────────────────┐  
 │ Please provide the password for the administrative account with which     │  
 │ this package should create its MySQL database and user.                   │  
 │                                                                           │  
 │ Password of the database's administrative user:                           │  
 │                                                                           │  
 │ ******__________________________________________________________________  │  
 │                                                                           │  
 │                   <Ok>                       <Cancel>                     │  
 │                                                                           │  
 └───────────────────────────────────────────────────────────────────────────┘

Tell the redmine installer we are using mysql for this installation by highlighting “mysql” from the list of database choices:

Package configuration                                                           

 ┌──────────────────────────┤ Configuring redmine ├──────────────────────────┐  
 │ The redmine/instances/default package can be configured to use one of     │  
 │ several database types. Below, you will be presented with the available   │  
 │ choices.                                                                  │  
 │                                                                           │  
 │ Database type to be used by redmine/instances/default:                    │  
 │                                                                           │  
 │                                  sqlite3                                  │  
 │                                  pgsql                                    │  
 │                                  mysql                                    │  
 │                                                                           │  
 │                                                                           │  
 │                    <Ok>                        <Cancel>                   │  
 │                                                                           │  
 └───────────────────────────────────────────────────────────────────────────┘

Now you are asked to provide a password that will be used to protect the redmine database. Redmine itself will use this when it wants to access mysql.

Package configuration                                                           

 ┌──────────────────────────┤ Configuring redmine ├──────────────────────────┐  
 │ Please provide a password for redmine/instances/default to register with  │  
 │ the database server.  If left blank, a random password will be            │  
 │ generated.                                                                │  
 │                                                                           │  
 │ MySQL application password for redmine/instances/default:                 │  
 │                                                                           │  
 │ *******__________________________________________________________________ │  
 │                                                                           │  
 │                    <Ok>                        <Cancel>                   │  
 │                                                                           │  
 └───────────────────────────────────────────────────────────────────────────┘

Now confirm the redmine password.

Package configuration                                                           

   ┌────┤ Configuring redmine ├─────┐                       
   │                                │                       
   │                                │                       
   │ Password confirmation:         │                       
   │                                │                       
   │ *******_______________________ │                       
   │                                │                       
   │     <Ok>         <Cancel>      │                       
   │                                │                       
   └────────────────────────────────┘

Configuring Apache

You need to modify two files for apache. The first is /etc/apache2/mods-available/passenger.conf, which needs the line PassengerDefaultUser www-data added, as seen here:

<IfModule mod_passenger.c>
  PassengerDefaultUser www-data
  PassengerRoot /usr
  PassengerRuby /usr/bin/ruby
</IfModule>

Now create a symlink to connect Redmine into the web document space:

$ sudo ln -s /usr/share/redmine/public /var/www/redmine

And modify /etc/apache2/sites-available/default to insert the following alongside the other <Directory> sections, so that apache knows to follow the symlink into Rails:

<Directory /var/www/redmine>
    RailsBaseURI /redmine
    PassengerResolveSymlinksInDocumentRoot on
</Directory>

Now restart apache:

$ sudo service apache2 restart

You should now be able to access redmine from the local host:

$ firefox http://127.0.0.1/redmine

In the upper right corner of the browser window you should see the “Sign in” link. Click that and enter “admin” at both the Login: and Password: prompts. Note: this is not the password you set during the installation process. Click the Login button.

I recommend that the next thing you do is to click on My account in the upper right corner and change that password. In the page that is displayed there should be a Change password link in the upper right of the white area of the page. Click to change the password.

Backing up Redmine

You should arrange a regular backup of the Redmine database and the files that users upload/attach. The database can be dumped to a text file with:

/usr/bin/mysqldump -u root -p<password> redmine_default | gzip > /path/to/backups/redmine_db_`date +%y_%m_%d`.gz

where <password> is the one you set when installing mysql.

The attachments are stashed in /var/lib/redmine/default/files and can be backed up with something like:

rsync -a /var/lib/redmine/default/files /path/to/backups/files

You can have these commands run automatically by creating a script called /etc/cron.daily/redmine that contains:

#!/bin/sh
/usr/bin/mysqldump -u root -p<password> redmine_default | gzip > /path/to/backups/redmine_db_`date +%y_%m_%d`.gz
rsync -a /var/lib/redmine/default/files /path/to/backups/files

Again, be sure to substitute the mysql root password for <password> in the mysqldump command line. The file should be protected so that only root has read permission, because it stores the root password for your mysql installation. Note that the first line creates a new file every time the script is run, which can eventually leave you with a large number of database backups; you should have a script that purges old ones periodically.
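Such a purge can be a short script of its own. Here is a minimal sketch, assuming a hypothetical /path/to/backups location and a 30-day retention window; adjust both to your setup:

```shell
#!/bin/sh
# Purge database dumps older than 30 days.
# BACKUP_DIR is a placeholder -- point it at your real backup location.
BACKUP_DIR=${BACKUP_DIR:-/path/to/backups}
find "$BACKUP_DIR" -name 'redmine_db_*.gz' -mtime +30 -delete 2>/dev/null || true
```

This could live alongside the backup script, e.g. as /etc/cron.weekly/redmine-purge; set the retention to match how far back you want to be able to restore.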

Additional semi-optional packages

There are some services that Redmine can use that are not absolutely necessary, but are useful. These are email and software repository/revision control systems.

Email setup

At some point you will probably want Redmine to be able to send email, for which you will need to install and configure a mail system. This can be achieved by installing the postfix package. I do not recommend the exim4 package, as there have been some incompatibilities between Redmine and exim4 in the way the “sendmail” command line is handled. Unless everyone has an email account on the redmine server, you will want to configure it as a full internet host. Once email service is installed, you will have to restart apache for Redmine to know that it has access to email services.

$ sudo apt-get install postfix

Now that you can send email, you have to tell Redmine about it. You need to create/edit the file /etc/redmine/default/configuration.yml and add the following lines:

production:
  email_delivery:
    delivery_method: :sendmail
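The :sendmail method hands messages to the local MTA. If you would rather point Redmine directly at an SMTP server, configuration.yml also accepts an :smtp delivery method; a sketch with placeholder host and domain values:

production:
  email_delivery:
    delivery_method: :smtp
    smtp_settings:
      address: smtp.example.com   # placeholder: your SMTP server
      port: 25
      domain: example.com         # placeholder: your mail domain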

Then restart apache so that Redmine reloads the configuration file:

$ sudo service apache2 restart

Revision control repository setup

In order to host your software repositories on the same system, Redmine will need the corresponding version control software installed:

$ sudo apt-get install git subversion cvs mercurial
$ sudo service apache2 restart

That covers it as far as I have gotten in my use of Redmine to date.

What is Kanban?

Kanban is a technique for managing a software development process in a highly efficient way. Kanban underpins Toyota’s “just-in-time” (JIT) production system. Although producing software is a creative activity and therefore different from mass-producing cars, the underlying mechanism for managing the production line can still be applied.

A software development process can be thought of as a pipeline with feature requests entering one end and improved software emerging from the other end.

Inside the pipeline, there will be some kind of process which could range from an informal ad hoc process to a highly formal phased process. In this article, we’ll assume a simple phased process of: (1) analyse the requirements, (2) develop the code, and (3) test it works.

The Effect of Bottlenecks

A bottleneck in a pipeline restricts flow. The throughput of the pipeline as a whole is limited to the throughput of the bottleneck.

Using our development pipeline as an example: if the testers are only able to test 5 features per week whereas the developers and analysts have the capacity to produce 10 features per week, the throughput of the pipeline as a whole will only be 5 features per week because the testers are acting as a bottleneck.
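The arithmetic here is simply “throughput of the whole pipeline = minimum over the stages”. A throwaway sketch, using the hypothetical weekly capacities from the example:

```shell
#!/bin/sh
# Pipeline throughput is capped by the slowest stage.
# Hypothetical weekly capacities from the example above.
analysis=10
development=10
testing=5
throughput=$analysis
for c in $development $testing; do
  if [ "$c" -lt "$throughput" ]; then throughput=$c; fi
done
echo "Pipeline throughput: $throughput features/week"
```

Running this prints a throughput of 5 features per week: the testers’ capacity, not the developers’.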

If the analysts and developers aren’t aware that the testers are the bottleneck, then a backlog of work will begin to pile up in front of the testers.

The effect is that lead times go up. And, like warehouse stock, work sitting in the pipeline ties up investment, creates distance from the market, and drops in value as time goes by.

Inevitably, quality suffers. To keep up, the testers start to cut corners. The resulting bugs released into production cause problems for the users and waste future pipeline capacity.

If, on the other hand, we knew where the bottleneck was, we could redeploy resources to help relieve it. For example, the analysts could help with testing and the developers could work on test automation.

But how do we know where the bottleneck is in any given process? And what happens when it moves?

Kanban reveals bottlenecks dynamically

Kanban is incredibly simple, but at the same time incredibly powerful. In its simplest incarnation, a kanban system consists of a big board on the wall with cards or sticky notes placed in columns with numbers at the top.

Limiting work-in-progress reveals the bottlenecks so you can address them.

The cards represent work items as they flow through the development process represented by the columns. The numbers at the top of each column are limits on the number of cards allowed in each column.

The limits are the critical difference between a kanban board and any other visual storyboard. Limiting the amount of work-in-progress (WIP), at each step in the process, prevents overproduction and reveals bottlenecks dynamically so that you can address them before they get out of hand.

Worked Example

Consider a board showing a situation where the developers and analysts are being prevented from taking on any more work until the testers free up a slot and pull in the next work item. At this point the developers and analysts should be looking at ways they can help relieve the burden on the testers.

Notice that we’ve split some of the columns in two, to indicate items being worked on and those finished and ready to be pulled by the downstream process. There are several different ways you can lay out the board; this is a fairly simple one. The limits at the top of the split columns cover both the “doing” and “done” columns.

Once the testers have finished testing a feature, they move the card and free up a slot in the “Test” column.

Now the empty slot in the “Test” column can be filled by one of the cards in the development “done” column. That frees up a slot under “Development” and the next card can be pulled from the “Analysis” column and so on.
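The pull mechanics can be sketched in a few lines. A toy model, assuming a hypothetical Test column with a WIP limit of 2 that is currently full:

```shell
#!/bin/sh
# Toy kanban pull: a card moves downstream only if the next column
# is under its WIP limit. Limits and counts here are hypothetical.
test_limit=2
test_count=2   # Test column is full
dev_done=3     # cards finished in Development, waiting to be pulled
if [ "$test_count" -lt "$test_limit" ]; then
  dev_done=$((dev_done - 1))
  test_count=$((test_count + 1))
  echo "Pulled a card into Test"
else
  echo "Test is full: upstream waits (or helps relieve the bottleneck)"
fi
```

With the Test column at its limit, no card is pulled and the finished development work simply sits there, which is exactly the signal that tells the team where to help.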

(http://www.kanbanblog.com)

Pair Programming Is A Good Software Development Practice

Agile is not only about the ‘process’ we follow to build software products. Agility is not just achieved by following a method. Agile is also about the behavior we show while creating software products. But agile is certainly also about the software craftsmanship we show in building software products.

Good software engineering practices help development people build better software, faster. They make sure proportionally more time of a sprint can be spent on the complex, creative work of designing, creating, testing and documenting a great piece of working software, and less on manual, repetitive tasks.

Unfortunately, too often people and organizations adopt Scrum hoping it will magically solve all problems. Fortunately, Scrum shows them their problems very clearly, opening up the opportunity of addressing them. A lack of good engineering practices (and the absence of tools, platforms, organizational support and automation) is an often encountered revelation when adopting Scrum. Yet, although Scrum doesn’t prescribe specific engineering practices, Scrum does expect teams to have them. Scrum fully supports and promotes the 9th agile principle: “Continuous attention to technical excellence and good design enhances agility.” And eXtreme Programming is a perfect fit to meet Scrum’s demand for good engineering standards. In my experience the combination of Scrum and eXtreme Programming has proven to be an unbeatable one.

From the eXtreme Programming set of practices I still promote pair programming strongly, although its exotic feel does not seem to have faded over time.

Let me start by saying that pair programming is first and foremost about quality. It is certainly not about the opportunity to always work with one’s best friend, nor is it about 2 people writing the same code. The success of pair programming, in my experience, lies in the roles and the rotation of pairs.

Rotation and Roles

A Sprint Backlog is composed of the forecasted functionality, i.e. the selected Product Backlog items, and their decomposition into development work. Any person from the Development Team can select tasks, but in our application of pair programming we ask team members to take the ‘lead’ on a PBI, often expressed as a User Story, another technique from eXtreme Programming. The lead commits to taking care of the full story. At the start of the day the pairs are formed: each lead looks for the right ‘partner’ to work with on the development work for the story right ahead of them. After lunch the pairs are reformed, and the lead once again looks for the right ‘partner’ for the second half of the day. That is the first set of roles, lead and partner, and the rotation, every half day. The rotation has proven quite essential, because it gives team members the opportunity to get the best possible help for any given problem every half day.

During the half day that a pair works together, however, they take up a second set of roles. One person holds the keyboard and mouse: the ‘driver’. The second person, the ‘navigator’, looks over the driver’s shoulder as he or she writes the code. The driver can focus on the code being written, while the navigator minds the overall direction and design. Within their half day these roles are switched frequently; control over keyboard and mouse goes from the one to the other, and back. It depends on the specific code being written, whether either of them has done this before, whether someone has a great idea, etc.

We even encourage people with functional testing skills to join the pairs regularly. It helps people write better and more complete functional tests, up to the level of creating automated GUI tests, e.g. with tools like Selenium, and including those tests in a test-driven approach and in the continuous integration loops. After all, it is much more fun and respectful to maintain test sets for automated execution than to repeat such tests manually over and over again, at all levels: technical, functional, integration, regression, performance.

The separation of driver and navigator is quite essential. It allows activities that a single programmer would perform serially anyhow (write code, compile, sit back and check, read again, verify coding standards, check naming conventions, consider the design) to be performed in parallel. The driver, focused on writing code, gets immediate feedback, even while the writing is happening, from the navigator who is minding the overview and the overall direction.

And you might as well not complicate your way of working by having to decide when to pair program and when not to, in what areas, on what type of work, etc. Just do it all the time. If simpler work pops up that is too easy for 2 people to collaborate over, let the ‘partner’ do something else. That may be some research, a little spike, a break.

The cost of pair programming

The navigator checks the code being written in real time: standards, optimal solutions, duplication or re-use of code (has something like this already been created elsewhere in the system?), naming, etc. These are tasks that a single programmer must also do, on top of merely writing code. This is why pair programming has no higher cost than single programming: no more activities are performed than a single programmer would do, but instead of being done serially, they are performed in parallel.

And time-consuming code reviews can be dropped from the Definition of Done, preventing the nasty effects of such reviews: rework, delays, unpredictable effort. Waste is prevented.

A little example

Suppose a task estimated at 10 hours, performed by a single programmer, actually takes 10 hours (to the extent that this can be predicted). The same task performed by a pair takes 5 hours to finish, but as this time is consumed by 2 people the total cost remains the same. Early studies on pair programming, though, indicated an increase of the cost by 10%, meaning that in my example a task of 10 hours performed by a pair would cost 11 hours. To be clear, that is not my experience. In estimating projects or development work we have never raised the budget just because we did pair programming.

Be wary, however, of simplistic assumptions about the time gains. It is not as if a total volume of work is suddenly ready in half the time; on a given volume of work, the elapsed time tends to stay the same. If 4 tasks of 10 hours need to be performed by 4 single programmers, the elapsed time is 10 hours and the total cost 40 hours. Get the same tasks done by 2 pairs (still 4 people) and they will take the same time, each pair performing 2 tasks. Elapsed time is still 10 hours, and the total budget is 40 hours.
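The person-hour arithmetic of this example can be spelled out, using the figures in the text:

```shell
#!/bin/sh
# Cost comparison for a task estimated at 10 hours (figures from the text).
task_hours=10
pair_elapsed=$((task_hours / 2))            # 5 hours wall-clock
pair_cost=$((pair_elapsed * 2))             # 10 person-hours: same total cost
with_overhead=$((task_hours * 110 / 100))   # the early-study +10% figure: 11
echo "solo: ${task_hours}h elapsed, ${task_hours} person-hours"
echo "pair: ${pair_elapsed}h elapsed, ${pair_cost} person-hours"
echo "pair with the claimed +10% overhead: ${with_overhead} person-hours"
```

The wall-clock time halves while the person-hour cost stays at 10 (or 11 under the +10% claim), which is the whole point of the budget argument above.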

Quality (and other effects)

Obviously, pair programming results in a lot of communication and can make the room look like a madhouse when observed from the outside. This is good: it shows high energy levels, not chaos.

As an organization, you should promote pair programming for reasons of quality first. Early studies found a decrease in defects and rework of 60% and more. So even if the initial budget were +10%, which I dispute, the decrease in rework alone makes it more than worthwhile from a total cost perspective.

Pairs also produce fewer lines of code to get the same functionality working. They create lighter applications with lighter architectures that are easier to maintain, require less server capacity and thus lower the TCO.

A great additional advantage of pair programming is knowledge sharing. Through pair-based collaboration, people gain insight into other skills and work on other layers of the application and different modules of the system. It also means no time is wasted getting new team members up and running: they just join the pairs. We even encourage new team members to take the ‘lead’ on a story as soon as possible (and never later than one week after joining the team), because it is the best and fastest way to get introduced to all aspects of the software product while being assured of the assistance of the best placed partner, every half day.

In short, Pair Programming should be part of a software engineer’s normal daily practice.

(http://www.capgemini.com)

Is Agile Methodology a good fit for Mobile Application Development?

The Agile approach to software development refers to an iterative and incremental strategy in which self-organizing, cross-functional teams work collaboratively to create software and solutions.

The principles of Agile software development include:

  • The development team provides early and continuous delivery of software frequently, usually in one to four week intervals.
  • There is constant collaboration among business people and the software developers.
  • Changes are welcomed, even at late stages of software development, since these modifications often serve to give the customer a competitive advantage in the marketplace.
  • Projects are people driven as jobs are completed by teams of motivated individuals who have the environment, support, and trust they need to get work done.
  • The preferred mode of communication is face-to-face interaction since it is the most efficient and effective mode of conveying ideas and solving problems.
  • The primary means of measuring progress is working software.
  • Technical excellence is the driving force of Agile development.
  • Simplicity is essential to Agile software development.

Many approaches, such as eXtreme Programming, Scrum, Feature Driven Development, Test Driven Development and so on, are considered Agile methodologies, as they share a lot of common characteristics with the Agile manifesto. The decision to adopt one methodology over another depends on how well the team members know a particular methodology, how big the team is, and how the team is organized. Agile methodology can be particularly advantageous for developing mobile apps.

[Comic strip omitted: “Agile development for mobile”, a humorous summary of eXtreme Programming.]

The Agile Approach is a Good Fit for Mobile Application Development

One of the challenges mobile app developers face is that the hardware and infrastructure for mobile apps are constantly evolving, which results in an average lifespan of approximately 12 months for a mobile app. For a mobile app development team to bring relevant and functional mobile apps to market, they need to be able to work quickly to develop a software solution. The principles of Agile software development establish a framework a development team can use to develop and release mobile apps so they have the longest possible life span in the marketplace.

Since mobile apps are not expected to be perfect when they are first released, the expectation for the software product fits the iterative and incremental strategy that drives Agile software development. Most users of mobile apps have come to expect a beta version followed by 1.0 and 2.0 versions of newly released applications. Additionally, since the Agile approach is driven by change, it is highly responsive to the feedback of the businesses that contract for the development of the application, as well as the consumers that use it.

Some of the Drawbacks of Agile for Mobile Application Development

Many times mobile application development teams are located in different parts of the globe, making in-person, face-to-face communication impossible. Most developers overcome this hurdle by using video conferencing tools.

It is often assumed that Agile methodologies do not focus on documentation at all. Thus, some developers using Agile methods overlook documentation and focus only on development. Whichever methodology is chosen, documentation should never be ignored.

Overall, the Agile approach to mobile application development has significant benefits for both the end users and the developers.