Over my working life I have collected some points that are relevant when actually checking in code and marking a task as done. The definition of done (DoD) has to be defined separately for every single project; I'm just providing the set of criteria I try to fulfill when I have the chance (i.e. when enough time is provided). Apply it with common sense: the goal is to be productive, not merely busy.
- Code is source controlled – every part of it
- Commit only complete changes at once – ideally a solution that immediately delivers value to the customer. Every part of the change must be used somewhere in the code – at least by unit tests.
- The code builds on more than one system – ideally on a build server
- The code follows coding guidelines – in my case the Microsoft guidelines are normative
- Code quality conforms to the standard rule sets of the following static code analysis tools
- For .NET code: Microsoft Code Analysis
- For web code: Web Essentials – e.g. no unused CSS classes
- The code is placed according to a well-defined architecture
- The code is unit tested – with coverage above 50%
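A unit test in this sense is the usual xUnit-style test – here is a minimal NUnit sketch, where `PriceCalculator` is a hypothetical class under test, not from a real project:

```csharp
using NUnit.Framework;

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void Total_AddsVatToNetPrice()
    {
        // PriceCalculator is hypothetical – substitute your own class.
        var calculator = new PriceCalculator(vatRate: 0.08m);

        Assert.AreEqual(108m, calculator.Total(100m));
    }
}
```

Coverage can then be measured with Visual Studio's code coverage analysis or a tool such as OpenCover.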
- The code is implemented along a well-defined logging and exception handling strategy
- Exceptions contain all information that is relevant in their context
- Nothing throws exceptions that shouldn’t (Visual Studio – Exception Settings – Thrown and User-unhandled)
- Rethrow exceptions with throw; instead of resetting their stack trace with throw ex;
- Don’t “swallow” exceptions without a comment explaining why – and consider logging them
- Use try..catch at component boundaries
- Don’t solve application logic with exceptions
- Have last-resort error handlers
- Application_Error in global.asax.cs
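The rethrow and last-resort rules above can be sketched in C#; the repository, logger, and `Order` types are hypothetical placeholders:

```csharp
public void SaveOrder(Order order)
{
    try
    {
        _repository.Save(order);   // component boundary
    }
    catch (Exception ex)
    {
        // Add the context that is relevant here before passing it on.
        _logger.Error("Saving order " + order.Id + " failed", ex);
        throw;        // preserves the original stack trace
        // throw ex;  // would reset the stack trace – avoid
    }
}

// Last resort for anything that slips through (in global.asax.cs):
protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    _logger.Fatal("Unhandled exception", ex);
}
```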
- The code is checked against the top 3 causes of bugs (see my related post – Causes of Bugs)
- A last view of the code is done to check
- Is it really what I want to commit?
- Can that be solved better in general?
- Code is reviewed by another person if you think it’s necessary
- The feature is testable on every system, at least as far as the project assets allow – sometimes using Dependency Inversion to decouple productive functionality
- Locally (in the development environment)
- On a quality assurance environment (stage server)
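Dependency Inversion in this sense can be sketched as follows – the interface and class names are illustrative, not from a real project:

```csharp
using System.Collections.Generic;

// The feature depends on an abstraction instead of a concrete mail server,
// so tests (and the local environment) can substitute their own implementation.
public interface INotificationSender
{
    void Send(string recipient, string message);
}

public class OrderService
{
    private readonly INotificationSender _sender;

    public OrderService(INotificationSender sender)
    {
        _sender = sender;
    }

    public void CompleteOrder(string customerEmail)
    {
        // ...business logic...
        _sender.Send(customerEmail, "Your order is complete.");
    }
}

// Used in unit tests: records notifications instead of sending them.
public class FakeNotificationSender : INotificationSender
{
    public readonly List<string> Sent = new List<string>();

    public void Send(string recipient, string message)
    {
        Sent.Add(recipient + ": " + message);
    }
}
```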
- The feature works locally – rock solid; you shouldn’t be able to break it
- The feature is configurable
- The feature performs well – for web applications the server should respond within 1 second
- The feature is secure – you shouldn’t be able to hack it
- The customer is satisfied with the feature implemented.
- A last view on the feature is done
- Is it really what the customer needs?
- Can we suggest a better flow for the functionality?
Application level – Optimized for web applications
- The application is always easily deployable – one step deployment
- No extra steps (files to be copied or settings to be set)
- There is a Smoke-Test defined which can be run after deployment
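Such a smoke test can be as small as requesting a few key pages of the freshly deployed application and failing on anything but HTTP 200 – a minimal sketch with hypothetical URLs:

```csharp
using System;
using System.Net;

public static class SmokeTest
{
    // Hypothetical URLs of the freshly deployed application.
    private static readonly string[] Urls =
    {
        "https://staging.example.com/",
        "https://staging.example.com/login"
    };

    public static void Main()
    {
        foreach (var url in Urls)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                if (response.StatusCode != HttpStatusCode.OK)
                    throw new Exception("Smoke test failed: " + url);
            }
            Console.WriteLine("OK: " + url);
        }
    }
}
```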
- The application is search engine optimized
- A Responsive Web Design approach is followed
- The website design is evaluated e.g.: according to http://www.lib.umd.edu/tl/guides/evaluating-web or http://depts.washington.edu/trio/trioquest/resources/web/assess.php
- The dynaTrace rankings are mostly in the green area
The following book also contains many good points on the topics mentioned above:
Code Complete, Second Edition
By Steve McConnell