Bifrost – end of the line

Prior to 2008 I had only worked at ISVs (Independent Software Vendors) and game companies, going back to 1994. At that point I decided I needed a change of scenery and became a consultant – try something completely different. In the beginning, since I had no experience with being a consultant, my boss and I figured it would be best to take on a few small projects to get the gist of things. After a couple of these small projects I started feeling some pain; I was rinse-repeating a lot of things. I’m not entirely sure how other consultants do things, but I decided it was time to gather the things I was constantly redoing into an open-source library; a collection of tools that would speed up the process of onboarding onto projects. From this Bifrost was born.

It grew over a couple of years and at the time had no true vision other than making my everyday easier. Back in 2009 I got hired as a consultant @ the largest eCommerce company in Norway; Komplett. At Komplett I was at first tasked with working on some integration and establishing a new object model, or as we liked to call it back then; a domain model. It was pretty much an anemic domain model, something representing the data in C#. An approach I had taken a number of times over the course of my career at that point. In the middle of this work, someone discovered a missing feature in the system which was critical for a company being merged into the Komplett group. I was asked to head development of this feature and could bring 2 more devs. We decided to make this a learning experience in addition to concretely implementing the feature. In the learning we wanted to pick up on new ways of thinking and also some new tech, amongst it ASP.NET MVC. The team ended up being Michael Smith, Pavneet Singh Saund and myself, plus a dedicated product owner. We made quite a few discoveries in this project, by far the most important being the discovery of commands. We also discovered how domain models as we knew them at that point in time didn’t really work. Worth mentioning is also how we started seeing patterns of how we should not do things. This led to a larger project, a rewrite of the core web. Midway through this, I gave a talk @ NDC 2011 about our experience with applying CQRS, which was the conclusion we were closing in on. From this I was contacted by a representative of Statoil a few months later. They found the topic interesting and also the platform; Bifrost (yes, we got pretentious and dropped the library and framework label and called it a platform – it kinda grew :) ). We were onboarded in the beginning of 2012, and I mean we – Michael joined and later also Pavneet. We kept the team together, all as individual independent consultants.
This was a huge part of our progress in learning; the team. We knew each other well, had concrete focus areas and managed to move and learn fast.

At this stage we found ourselves having gone through a sort of knowledge transition. To me it felt like I had gone back quite a few years from 2008 and created a fork in my knowledge bank. At times it felt like this fork was diverging more and more from what was commonly being done on projects around us. I’m not claiming that our fork was better or anything, but it made more sense to us – with our history together and the shift in mindset we had decided on. From my experience, comparing before and after, I conclude that we managed to write better and more predictable software. But it came with a certain amount of friction. The stuff we were talking about was not as Google-friendly. There were fewer resources to refer to, which is a very important aspect when you have, for instance, a team of 20+ developers that you’re trying to convince that your thing is better; not to be underestimated at all! We were quite aware of this and it made us a bit hesitant to push through, but we had the confidence of key persons on the projects we were on to push forward. The huge disadvantage I’m seeing today: having pushed on in this fork, and having had to spend a lot of time maintaining it and convincing others, left us blind to what was going on outside, and we became an echo chamber for a lot of things, not realizing that some of the concepts we had discovered – and, if I dare, maybe even pioneered (without letting anyone know) – have surfaced elsewhere and are now becoming part of how we all do things. Which is just truly great.


The future


It’s now been 7-8 years since the project was started, so where is it today? Well, it still exists, and sits on GitHub. Thus far the project has moved forward because the maintainers needed it to. We were using it for the projects we were on. This has also been the key reason why we’ve implemented things; if something had a real-world scenario, we could identify it and develop it. We’ve always kept a close eye on the real world, not making anything for its own purpose – but to fill a need. Fast-forwarding to today, February 2016, our team is no longer a team; we’re scattered across different companies. We are no longer working together, and none of us are really using Bifrost for our day-to-day work – this represents a problem for moving things forward, given the history of how things have moved. Take me for instance; I’m no longer actively working on a real-world project – although I’m working on changing that somewhat. To keep my knowledge fresh in the type of role I have now, I’m dedicating time to building a project I’ve been thinking of for a couple of years and something I need myself. But it turns out, this is not enough of a reason for me. I have decided to go a slightly different route to build knowledge relevant to what I do.

One of the goals of Bifrost was to tackle things we believed would produce better-quality software, one of them being to decouple and not build monoliths. Today, I see Bifrost itself as a bit of a monolith. There are too many things inside the core framework. It should have been broken up. On top of this there are also some logical couplings we never intended. There are also things I’d like to do differently in how it’s been architected – basically learn from mistakes and correct them. Worth mentioning is also complexity that was put in there for imaginary problems that turned out not to be anything that would happen any time soon. For those using Bifrost, it would be too much work to move forward with the amount of refactoring that I’m looking for, plus too much work for me or anyone else to maintain it all. Although difficult, I see no other way around it but to just admit that I will not be able to set aside time to do this properly, and therefore I will not continue the maintenance of it, nor maintain any pull requests at this point. When I can’t guarantee a feedback loop, it only looks bad in the repository. I’m truly proud of what we have accomplished with the project. It is by far the cleanest code I’ve ever written, adhering to the principles that I believe in. With more than 2000 automated specifications (BDD style), it has proven to be easy to refactor and, as far as we know, very stable code – thoroughly tested. There are of course things I’d love to have done differently. Blogging more actively about the things we did in Bifrost, creating awareness – because as I said, I’m truly proud of what we have done. Actively maintaining documentation is a second thing – making it easier to figure out what things do. We did put XML documentation into all the C# code, but never really did anything proper with it. I’d also do webcasts to cover topics, create tutorials and more. Who knows, maybe Bifrost could have been a business on its own. The luxury of looking back :).


Harvesting

The last couple of months there has been a retrospective going on inside my head, with the purpose of figuring out what I think was good and what was not. In combination with this, Pavneet, Michael and myself have had a few discussions between ourselves about the same topic. Gaining some distance from everyday maintenance and implementation has really helped and has made things a lot clearer.

From this I want to start harvesting from the experience. Primarily I want to harvest in terms of learning; take the knowledge and bring it forward – modernize it – keep it fresh. Then I have a hope, not willing to call it a goal yet, to turn this knowledge into something more concrete. I’m not entirely sure what this concrete thing is yet. Whether it appears in the wild in the form of an open-source project or not, I’m not sure. I’d love it to, but only if I find it viable.

What I do concretely want to do is to start braindumping some of the things we learned, the things we really enjoyed and we found useful. The braindumping will be in the form of posts here.

Below is a list of topics, off the top of my head, that I see as key points for the kind of harvesting I’m talking about. I have no idea what format things will take and what will end up where, so let’s see.


  1. Core Principles – SOLID, SoC, DRY
  2. Productivity and true meaning
  3. Discovery mechanisms and Conventions
  4. Cross cutting concerns
  5. Domain Models
  6. Commands
  7. Events + EventSourcing
  8. Domain Driven Design
  9. Eventual Consistency
  10. Low Coupling, High Cohesion
  11. The importance of the clientside
  12. Validation != Business rules
  13. Operations
  14. MVVM – the building blocks
  15. Regions in the client
  16. Coding style
  17. BDD – Specifications by example style of testing
  18. Feedback loop
  19. Declarative
  20. Compositional software
  21. How inheritance creates coupled software
  22. Don’t be afraid of duplication in data – persistence polyglotism
  23. Bounded Contexts and their relationship to MicroServices


Conclusion

Being realistic can sometimes be hard. I must admit I have not been honest with myself nor with others. It’s hard to let go when it’s your own brainchild, but the place I am in my life right now has little room for maintaining a project the size of Bifrost. I have a job that is different and demanding in its own way, I have 2 kids that are growing up and I must prioritize being there more for them before it’s too late! I have neglected hobbies that I’m trying to get back in touch with; my IoT devices, writing more blog posts, attending user groups. I have a house that I want to do some work on and tons of tools to do it with that are screaming for my attention. I’m also active in the school, in something similar to the PTA here in Norway. The only correct and honest thing to do is to tell everyone: I’m not going to maintain Bifrost anymore – nor maintain pull requests. Thanks to everyone that believed in the project, the guys maintaining it and fellow journeymen; Michael, Pavneet. A big shoutout and thanks to Børge Nordli, who is on a project using Bifrost and has contributed quite a bit back over the last year.

PS: The project will still sit on GitHub.


6 months into it

Time flies when you have fun, they say. It’s been 6 months into my life @ Microsoft. Figured I’d summarize a bit.

The first indication I’ve been busy is the fact that it’s actually also been more than 6 months since I posted anything here. That is just crazy. I know I’ve kinda slowed down the last couple of years, but never to this level of NADA, ZIP, ZERO. And trust me, it’s been very busy.

Onboarding @ Microsoft has been a lot of fun – so much to learn, and I love learning! My role, Technical Evangelist, really does not explain what I do – at all. Basically I’m more of an advisor. My job is to help ISVs in Norway get the best experience in Azure. The thing is; there are an estimated 600+ ISVs in Norway. Some of these aren’t ones we would talk to, based on “natural selection” (basically, those not doing anything, or with no plans of doing anything, in any cloud). But even after this, there are enough left to talk to at least one ISV every day. I work in a team with 3 others representing more of the business aspect of working with the ISVs, with different focus areas ranging from startups to more established ISVs. Needless to say, these guys run a lot of meetings, talking and trying to figure out which ISVs are interested and need to dig deeper. This is where I enter; I try to understand their business and what technical needs they have – but also what needs they might have based on where they see their business going. On occasion I dive deep and get involved in creating proof of concepts that require me to deliver code – not something that will be put into production, though.

One of the true joys of the prototyping part is that I can do things in the open @ GitHub when I identify common things that would benefit more than one partner. Having done open-source development for a few years, I’m careful though. Anything put out there needs more attention if it’s a reusable component.

Then there is the fact that my role is called Technical Evangelist; part of the job actually involves doing presentations and writing blog posts. And I love both.


Different

I’ve worked a few places in my career. But hands down, Microsoft is different in all aspects – in a good way, IMO. Fair enough, I don’t have the type of role I’m used to having – but that aside, there is something to be said for the working environment, focus, structure and pace.


Email

So.. what are these team chats you talk about? After getting used to everything being available on Flowdock or Slack for a couple of years, email feels like a true step down. This is a massive difference. The amount of email I receive is ridiculous – but it’s needed. And I am coming to the realization that none of the tools I’ve been using could fix this in any way. So even with its shortcomings, email is king and I’m learning to live with it again. It’s probably been 10 years since I’ve had to use folders to organize my email – now, after only 6 months, I must have 100 or more folders.


How about that Kool-Aid?

Well.. before I started @ Microsoft there were a few things I was starting to dislike about what they were doing. I felt there was a need to revitalize some of the tech and, to be honest, to recognize that there are other platforms and really start targeting them. I’m a Mac guy – and have been since 2008. At that point in time I made a decision to step outside the Microsoft bubble I was in. It led to a lot of cool new experiences and my favorite part; learning. Most of this learning I brought back to the projects I was on, which were C#/.NET projects. But the change of platform meant I had to actually learn new things, which again led to me wanting to explore even more. I grew back my appetite for learning. My wish has come true; Microsoft over the last couple of years has done the exact same thing – exactly what I criticized them for not doing when I was on the outside. The amount of stuff that has been open sourced, for instance. Most of the things being done have cross-platform thinking going into them. I’m really loving this. And yes, I’m halfway down the Kool-Aid. Azure has been on my radar and in my toolbox ever since it was launched in 2008 @ PDC. With all the new stuff that’s going on top of this, I am truly smiling. I can now do all the things I’ve grown fond of over the years and, the best part, I can use the best tool for the job – mix and match – on almost any platform.

My Biggest Challenge

Having a role like mine requires me to pay attention to a lot of things. Being in a small country, we don’t have the luxury of a lot of people focusing on the different aspects of Azure – I basically have to know most of it, or at least know of it and go deep when called for. This is kinda fine; it’s knowing technology, and over the years one kinda absorbs new technology without too much hassle. The biggest challenge for me is not staying in touch with real projects and gaining real, true experience with actually deploying the technology. That is truly the most important aspect, in my opinion. It’s when you use things that you learn what works and what doesn’t. We’ve put actions in motion to actually gain the needed experience across our department. But in addition, I’m now starting a pet project that I’ve been planning for a couple of years and, as luck would have it, the project will touch on most aspects of Azure. Now that I’ve gotten past the initial hurdles and am starting to get a bit more comfortable with my role, I think I’ll have the extra energy at night to do this.

Conclusion

In conclusion there is no conclusion. This is all so new and different that I just have to go day by day and see what is behind every corner. I’m excited, willing to learn and highly motivated. These things help.. :)




New and interesting challenges

First of July I’m starting @ Microsoft Norway in the Developer Experience (DX) team. My official, on-paper title is Technical Evangelist – which might sound scary enough. But in reality it’s a multifaceted position, ranging from sure enough evangelism in the sense of talking about Microsoft products and promoting them, to advisory for clients, community work, blogging and more. My particular role will be geared towards the cloud, Azure, but I will be sure to keep in touch with the entire stack – as I really love keeping on top of things. I spend a lot of time in front of the computer, both work- and non-work-related, so I’ll be sure to keep up.

I’ve worked closely with Microsoft since 2001, since 2008 I’ve been a Microsoft MVP, and at times it’s felt like I’ve been an employee without having the privileges an employee has. So when my new boss said to me a couple of weeks ago; “Welcome home..” – that is the feeling I actually do have.

The timing for me is perfect, both on a personal and a professional level, and with all the cool things going on at Microsoft these days that really excite me. I’m bursting with joy over how Microsoft has turned around and really looking forward to engaging more with this in the coming years.

How does this impact other things? Well, I’ve had a company on and off since 1997 called Dolittle. At first as a way of picking up freelance work, and the last few years more focused. I am closing down the company. Keeping the brand though, with domains and all, but putting it to sleep as a company. As for other things, the open-source projects I’m involved in – Balder, Bifrost, Forseti and more – I’ll keep working on them when I have a chance. Maybe not as focused as before, as I’ve had the pleasure and luxury of being able to maintain the projects at work for the last few years. They’ve been with me for years and are babies to me, so it would be hard to let them go. Besides, they serve a great purpose for me in keeping up with what’s going on in the world of development. I am a developer after all, and if I am to do a good job talking about it, I need to maintain my knowledge.

I’m super excited and honored to become a member of the team. Really looking forward to it.

SignalR: Blueprints

When I wrote my first book, SignalR: Real-time Application Development, I was not entirely sure I’d ever write another anytime soon. Being a first-time author, a bunch of rookie mistakes during the initial draft of the book cost me a lot of blood and sweat getting to the second draft and then eventually the release. Even though it’s not a big book, I was exhausted trying to juggle the writing and my day job. But I got back on the bandwagon and eventually started on my second book, this time around actually learning from the mistakes, and the whole process was so much smoother.

The end product; SignalR Blueprints. 


You can get it as print or eBook from Packt or Amazon.


The elevator pitch

SignalR is a piece of tech that opens up great opportunities for your apps. It enables a client connected to your server to be persistently connected, allowing the server to push messages to any connected client. This opens a pathway for new ways of doing things, making our software better in technological terms but also from a user-experience perspective, which is the biggest reason to pick up on this. SignalR Blueprints takes you on a journey through the usage of the technology, paired with different environments ranging from ASP.NET MVC to CQRS and onto mobile devices such as Windows Phone, iOS and Android using Xamarin, with a high focus on code quality all the way through, but more importantly a high focus on the user.


Thanks are in order…

I’d like to thank the entire editorial staff at Packt for pointing me in the right direction and making it a smooth process. High five and great thanks to Sumeet Sawant and Shashank Desai for their excellent work as, respectively, Content Development Editor and Technical Editor; you and your teams have really helped raise the quality of the product. A big thanks to the technical reviewers as well; Dejan Caric, Anup Hariharan Nair and Michael Smith.

Concepts and more

With Bifrost we’re aligning ourselves more and more with being a platform for doing Domain-Driven Design, introducing more and more artefacts from the building blocks as we go along. When we set out to build Bifrost, we decided early on to stay true to not building anything into it that we didn’t need in a real-world scenario. This was after we had started falling into the what if pattern of software development. We started imagining problems and had to deal with them way before they had actually happened. At the risk of generalising; a fairly common scenario amongst dirty-minded tech people. It stems from experience, knowing that there will always be something that can go wrong. Sure, there always is. I digress, I think this could probably be a blog post on its own. The point being, we were heading down this path and for some reason got jolted back to reality, and we started focusing on implementing only the things we needed and actually going back and removing things that came out of the “what if game”. On this new path we also wanted to stay focused on implementing things that were aligned with DDD and keep a close eye on the user.

Concepts

With the philosophy of CQRS at heart, built with SOLID care, we keep a very close eye on being very specific in our modelling. Things that are used in one part of the system are not automatically reused somewhere else for the sake of DRYness. We don’t believe in DRYing up properties, and we favor composition over inheritance. Logic is still kept in only one place; on the command side of the fence. With all these principles at hand, we were missing something that would link it all back together and make things look and feel consistent.

Let’s look at a scenario; say I want to update the address of a person. A command could be something like the following:

using System;
using Bifrost.Commands;

public class UpdateAddressForPerson : Command
{
   public Guid PersonId { get; set; }
   public string Street { get; set; }
   public string City { get; set; }
   public string PostalCode { get; set; }
   public string Country { get; set; }
}

In Bifrost you’d then have a CommandHandler to deal with this and then an AggregateRoot that would probably look like the following:

using System;
using Bifrost.Domain;

public class Person : AggregateRoot
{
   public Person(Guid personId) : base(personId) {}
   public void UpdateAddress(string street, string city, string postalCode, string country)
   {
      // Apply an event
   }
}
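The CommandHandler sitting between the command and the aggregate is not shown above. A rough sketch of what one might look like follows – note that the interface name, the repository dependency and the convention of a Handle method per command type are assumptions from memory, not necessarily Bifrost’s exact API:

```csharp
using Bifrost.Commands;
using Bifrost.Domain;

// Hypothetical sketch – IHandleCommands and IAggregateRootRepositoryFor<T>
// are illustrative names, shown only to convey the flow from command to aggregate.
public class PersonCommandHandler : IHandleCommands
{
   readonly IAggregateRootRepositoryFor<Person> _persons;

   public PersonCommandHandler(IAggregateRootRepositoryFor<Person> persons)
   {
      _persons = persons;
   }

   public void Handle(UpdateAddressForPerson command)
   {
      // Rehydrate the aggregate and delegate to its behavior
      var person = _persons.Get(command.PersonId);
      person.UpdateAddress(command.Street, command.City, command.PostalCode, command.Country);
   }
}
```

The point being: the handler holds no business logic itself; it merely translates the command into a call on the aggregate.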

The aggregate would then apply an event that looks like the following:

using System;
using Bifrost.Events;

public class AddressUpdatedForPerson : Event
{
   public Guid PersonId { get; set; }
   public string Street { get; set; }
   public string City { get; set; }
   public string PostalCode { get; set; }
   public string Country { get; set; }
}

An event subscriber would pick this up and update a read model that might look like the following:

using System;
using Bifrost.Read;

public class AddressForPerson : IReadModel
{
   public Guid PersonId { get; set; }
   public string Street { get; set; }
   public string City { get; set; }
   public string PostalCode { get; set; }
   public string Country { get; set; }
}

Those are the artefacts we would typically be dealing with; command, aggregate root, event and read model. For simplicity, these look pretty much the same here – but they don’t have to, and in fact, most of the time they don’t. Let’s address something here: we’re losing out on some modelling potential. Take the Guid representing the unique identifier for the person. This is in fact part of the domain vocabulary, and we lose that by just making it a Guid directly.

In Bifrost we have something called ConceptAs that we can use to represent this domain concept. It is a base class that we recognize throughout the system and deal with properly during serialisation between the different out-of-process places it might go.

using System;
using Bifrost.Concepts;

public class Person : ConceptAs<Guid>
{
   public static implicit operator Person(Guid personId)
   {
      return new Person() { Value = personId };
   }
}

What this does is wrap up the primitive, giving us a type that represents the domain concept. One modelling technique we applied when doing this was to stop referring to it as an id, so we started calling it the noun it represented. For us, this actually became the abstract noun. It doesn’t hold any properties for what it represents, only the notion of it. But code-wise, this looks very good and readable.

In the ConceptAs base class we have an implicit operator capable of converting from the new type to the primitive; unfortunately C# does not allow the same implicit operator going the other way in the base class, so that one has to be explicitly implemented on the derived type. With these operators we can move back and forth between the primitive and the concept. This comes in very handy when dealing with events. We decided to drop the concepts in the events. The reason for this is that versioning becomes very hard when changing a concept, something you could decide to do. It could also make serialization more complex than you’d hope for with some serializers. Our conclusion is that we keep the events very simple and use primitives, but everywhere else the concept is used.
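To make the operator mechanics concrete, here is a small, self-contained illustration of the two conversions – a hand-rolled stand-in for Bifrost’s ConceptAs&lt;T&gt; (the real base class does more, such as participating in serialization):

```csharp
using System;

// Hand-rolled stand-in for ConceptAs<T>, just to show the two operators.
public class ConceptAs<T>
{
   public T Value { get; set; }

   // The base class can convert concept -> primitive implicitly...
   public static implicit operator T(ConceptAs<T> concept)
   {
      return concept.Value;
   }
}

// ...but primitive -> concept must be declared on the derived type,
// since the base class can't return the derived type implicitly.
public class Person : ConceptAs<Guid>
{
   public static implicit operator Person(Guid value)
   {
      return new Person { Value = value };
   }
}

public class Program
{
   public static void Main()
   {
      Guid raw = Guid.NewGuid();
      Person person = raw;        // primitive -> concept
      Guid roundTripped = person; // concept -> primitive
      Console.WriteLine(raw == roundTripped); // the value survives the round trip
   }
}
```

Both conversions being cheap and lossless is what makes it painless to use the concept everywhere in the model while still handing primitives to the events.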

The way we structure our source, we basically have a domain project with our commands, command handlers and aggregates. Then we have a project for our read side, and in between these two sits a project holding the domain events. With this model we don’t get a coupling between the domain and the read side, which is one of our primary objectives. The concepts, on the other hand, are going to be reused between the two. We therefore always have a concepts project where we keep our concepts.

Our typical project structure (screenshot omitted): a Concepts project, a Domain project holding commands, command handlers and aggregates, a project for the domain events, and a Read project.

So, now that we have our first concept, what did it do? It replaced the Guid reference throughout, introducing some clarity into our models. But the benefit we stumbled upon with this: we now have something to hang cross-cutting concerns on. With the type of pipelines we have in Bifrost, we can now start doing things based on the type being used in the different artefacts. Take the command, for instance; we can now introduce input validation or business rules for it that will be applied automatically whenever it is used. Our support for FluentValidation has a BusinessValidator type that can be used for this:

using Bifrost.FluentValidation;
using FluentValidation;

public class PersonBusinessValidator : BusinessValidator<Person>
{
   public PersonBusinessValidator()
   {
      RuleFor(p => p.Value)
         .Must(… a method/lambda for checking if a person exists …)
         .WithMessage("The person does not exist");
   }
}

As long as you don’t create a specific business validator for the command, this will be automatically picked up. But if you were to create a specific validator for the command, you could point it to this validator as a rule for the person property.

The exact same thing can also be done with an input validator, which then generates the proper metadata for the client and executes the validator on the client before the server.

It opens up for other cross cutting concerns as well, security for instance.

Value Objects

A second type of object, with the same importance in expressing the domain and the same opening for solving things in a cross-cutting manner, is the value object. This is a type of object that actually holds information; attributes that have value. They are useless on their own, but often used in the domain and also on the read side. Their uniqueness is based on all the fields in them. We find these in any domain all the time; typical examples are money, phone number or, in our case, address. Those are just the off-the-top-of-my-head value objects you’d have, but you’ll find them in many forms. Let’s tackle address:

using System;
using Bifrost.Concepts;

public class Address : Value
{
   public string Street { get; set; }
   public string City { get; set; }
   public string Postal { get; set; }
   public string Country { get; set; }
}


The Value base class implements IEquatable and deals with the property comparisons for uniqueness.
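The mechanics the base class takes care of can be illustrated with a hand-rolled, Bifrost-free version – purely a sketch, comparing every property and deriving the hash code from all of them:

```csharp
using System;

// Hand-rolled illustration of value-object equality; Bifrost's Value base
// class gives you this generically, this just shows the mechanics.
public class Address : IEquatable<Address>
{
   public string Street { get; set; }
   public string City { get; set; }
   public string Postal { get; set; }
   public string Country { get; set; }

   public bool Equals(Address other)
   {
      return other != null &&
             Street == other.Street &&
             City == other.City &&
             Postal == other.Postal &&
             Country == other.Country;
   }

   public override bool Equals(object obj)
   {
      return Equals(obj as Address);
   }

   public override int GetHashCode()
   {
      // Combine all properties – two addresses with identical values hash the same
      unchecked
      {
         var hash = 17;
         hash = hash * 31 + (Street ?? "").GetHashCode();
         hash = hash * 31 + (City ?? "").GetHashCode();
         hash = hash * 31 + (Postal ?? "").GetHashCode();
         hash = hash * 31 + (Country ?? "").GetHashCode();
         return hash;
      }
   }
}
```

With this in place, two Address instances with identical field values compare equal, which is exactly the property that makes value objects interchangeable.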

With the value object you do get the same opportunities as with the concept for input and business validation, and yet another opportunity for dealing with cross cutting concerns.

If we summarize the sample from before with these new building blocks, we get:

using System;
using Bifrost.Commands;

public class UpdateAddressForPerson : Command
{
   public Person Person { get; set; }
   public Address Address { get; set; }
}

Our event:

using System;
using Bifrost.Events;

public class AddressUpdatedForPerson : Event
{
   public Guid PersonId { get; set; }
   public string Street { get; set; }
   public string City { get; set; }
   public string PostalCode { get; set; }
   public string Country { get; set; }
}

As you can see, we keep it as it was, with the properties all in the event.

Our AggregateRoot:

using System;
using Bifrost.Domain;

public class Person : AggregateRoot
{
   public Person(Guid personId) : base(personId) {}

   public void UpdateAddress(Address address)
   {
      Apply(new AddressUpdatedForPerson {
         PersonId = Id,
         Street = address.Street,
         City = address.City,
         PostalCode = address.Postal,
         Country = address.Country
      });
   }
}

The readmodel then would be:

using System;
using Bifrost.Read;

public class AddressForPerson : IReadModel
{
   public Person Person { get; set; }
   public Address Address { get; set; }
}

Conclusion

For someone more familiar with traditional N-tier architecture and modelling your EDM, rather than separating things out like this, this probably raises a few eyebrows and questions. I can totally relate; before starting the Bifrost journey I would have done completely the same thing. It seems like a lot of artefacts hanging around here, but every one of them serves a specific purpose and is really focused. Our experience with this is that we model things more explicitly; we reflect what we want in our model much better. Besides, you stop having things in your domain that can be ambiguous, which is the primary objective of DDD. DDD is all about the modelling and how we reach a ubiquitous language; a language that represents the domain, a language we all speak. From this perspective we’ve found domain concepts, and value objects to go along with them, to be very useful. With them in place as types, we found it very easy to go back and retrofit cross-cutting concerns we wanted in our solution without having to change any business logic. When you look at what’s involved in doing it, it’s just worth it. The few lines of code representing it will pay back tenfold in clarity and opportunities.

Life vNext

Tomorrow, the 5th of January 2015, I’m starting a new job – not for a client, but an actual new job. The place is called dRofus. I’m very excited about starting. It marks the end of my soon-to-be 7-year career as a consultant and means I’m moving back to product development, which is what I did for most of my career from 1994 till 2008. This gets me truly excited. I’m much more a product developer than a consultant. I quite enjoy maintaining things over a long period. Don’t get me wrong, I love doing new things, but the idea of improving something over time is something I love.


Back in 1997 I started a company called Dolittle – basically a freelance thing, something I did in my spare time, developing products for different companies. While working for different companies during the day, I kept the company dormant and activated it when opportunities came along that I felt I had time for. In 2010 I took the final step: I swapped my daytime job for the spare-time one, really took the plunge and got long-term contracts with clients, leading me all the way to today. Along the way, Dolittle became three guys. The three of us hit it off on a project and decided to see what we could do together.


2014 has been a very hectic year; it's the year I didn't really have time to blog – even though it marked the 10th anniversary of my blog. The first half of 2014 was pretty much dedicated to getting a product for a client out the door and on its way to the users. Seeing that the contract for the three of us was coming to an end, we started looking around for new things to do. Our goal was a long-term project that the three of us could be on together. While looking, I got a 6 month extension to my contract and one of the others got a request for a long-term contract. On top of this we had a project we were all doing bits and pieces on for a client every so often. We had enough work, but still wanted to get the three of us onto the same project. At NDC in June we all went to try to create a bit of a buzz and got a few leads from it, but nothing concrete. The summer went by and we broadened our search, creating more material to show what we had done and what we represented. Still, the projects we came across were for one person, and in fact the third of us picked up one of these. We had started out looking in our own geographical area, Vestfold in Norway; we knew it was a tough market there, but we wanted to try it out before expanding. The minute we expanded beyond it after the summer, things started picking up. After a few meetings we got great feedback from a few clients impressed with what we had achieved. During this, we were headhunted individually by different companies looking for resources. This completely threw us off. We knew we weren't really the perfect fit for being consultants, as our focus has always been on the product in a longer perspective than consultants normally get to have. All of a sudden we found ourselves doing interviews, and at a couple of places we did interviews all together, as they wanted the entire crew. Long story short, we got opportunities we could not say no to.
Two of us (myself and Michael Smith) ended up accepting an offer from dRofus. Our third chap, Pavneet Singh Saund, accepted an offer from a more local client, which makes total sense for his situation.


What about Dolittle? Well, it will have to go back into a dormant state. I'm not a person to give up, and I don't see this move as giving up; it was just not meant to be right now. But who knows, that could change in the future. However, if it changes, I hope it changes into being a product company and not back to being consultants.

2015

In many ways, I think for my own sake that moving into a regular job right now is a really good move. I get to focus more on my kids than I have for the last couple of years. I also get to focus more on other things I love, such as blogging, local dev-community work, writing books and more. I will also try to be more visible on stage and talk about topics I feel passionate about.

Bifrost and other projects

So, what happens with the open-source projects I'm involved in? Well, having clients that push the development forward has been a luxury for the projects we maintain; they have had excellent velocity because of it. That being said, I've already mentioned to my new employer that I'm part of a couple of projects and that I would be surprised if they wouldn't fit in their products as well. My employer has said we will need to look into it in more depth, but it is probably the perfect timing to introduce Bifrost, as they are looking at rewriting a lot of the product.


However, I am expecting to have a lot more time on my hands to maintain these projects now. They are projects that have been with me for quite a few years, and I'm not about to give up on them. We've been very clear that we were never going to develop things in the projects without a real business need, and these needs would come from real projects. We do, however, know a lot more now than when we started, and I'm expecting to go back and refactor things, make a few things more focused and modernize a few things as well. In addition, there are needs we discovered on real projects that we have yet to implement – these will be put in as well. But the overall vision will be maintained: don't put things into the projects without a really well-founded argument.


Wish me luck..

Moving from WordPress.com to Azure

[UPDATE: Want a Norwegian translation of this, you can find it here]

I decided to move from a hosted WordPress solution on WordPress.com to an installation sitting in Microsoft Azure. The reason is that I want to be able to decide for myself what content to post. WordPress.com has limitations on what is allowed, and when running open-source projects I do get donations from companies every now and then, and I want to put up posts with a logo and a link back as a thank you to those donating. Reading the terms of service on WordPress.com, this could be considered ads, which is not allowed. I figured I wanted editorial control myself over what's OK and not. Besides, being a huge fan of Azure, I've always wanted to host my blog on it. You might be asking: why don't I write my own blog engine, seeing that I'm a dev? Well, I used to have my own, but I don't really have time to maintain it myself; I just want something that works. The choice of WordPress might seem odd as well, but again, it's just me picking what is most convenient: I can export from WordPress.com and import directly into this with the least amount of hassle – which brings me to the real purpose of this post: a walkthrough of how I moved to Azure. This tutorial is most likely applicable to any other hosted WordPress service you might be using, as it basically just uses WordPress features.


Exporting from WordPress.com

The first thing you want to start with is logging into your WordPress.com account and export your site. I use the classic menu @ WordPress.com – you’ll find it under Tools | Export.


If you want to completely migrate, choose “All content”.

Click the download button and you should get an XML file:


This is all we need from WordPress.com


Setting up a new site in Azure

I’m going to assume that you’ve signed up for Azure. From the management portal of Azure, you simply select the New button in the lower left corner.


Then Compute | Web Site | From Gallery.


You’ll then get a wizard popup. Find WordPress in the list at the bottom and select it.


Click the arrow in the lower right corner to move to the next step.

From here it's only the URL, DATABASE and WEBSCALEGROUP you need to care about. The URL is the public URL you want your blog to have; if you want a custom domain, we can add this later.


For the DATABASE you have the option of using an existing database or creating a new one. You can only add one blog using the “Create a new MySQL database” option; after that you will have to manually add a MySQL database from the marketplace and then use the “Use an existing MySQL database” option. The last option, WEBSCALEGROUP, lets you choose where to host your blog geographically. I've used North Europe, which is the Ireland-based datacenter. If you are unsure which one to choose, go to the Azure region details to find the one closest to you and the readers of your blog.


For the first blog I'm migrating I selected the “Create a new MySQL database” option, which gives you the opportunity to give the database a name and, again, a region. It is smart to have the region be the same as your blog's. In the lower left corner you will also have to agree to ClearDB's legal terms, and then you're set to go. When it is ready, go to the URL you gave and finish the WordPress installation by setting the administrator username and password.


Setting up a theme

You probably want to personalize your blog and find the appearance you want. The default installation comes with four themes; if you want something else, you will have to “Add New Theme” from the Appearance menu item in the dashboard.


If you're like me and want the same theme you had on WordPress.com, you can just search for it to see if it's available and add it.


Once you’ve found your theme, activate it.



WordPress Importer

Now it's time to get the content back by importing it. Go to Tools | Import in the dashboard menu.


Select WordPress.


Click the button to browse for the XML file on your computer.


Then click the button to start the import.


You will now have to map users from your WordPress.com authors to users on the new installation. You do get a chance to create new logins if you have multiple authors coming in. If you want the attachments to be available on your new site and not linked back to their original location, you need to tick the “Download and import file attachments” option.



Import External Images

After importing your content, images might still be sitting on the WordPress.com site; hovering any of the images you'll see the URL pointing to a WordPress.com address. To be sure they are all stored on the new site, there is a plugin we can use to import all external images. All you need to do is go to Plugins on your dashboard and then Add New, put “Import External Images” in the search field and hit enter. You should get the plugin; then click the install button.




Once this is installed you'll have a new option under the Media menu item in the dashboard. Set the options that are right for you and click the button.




SyntaxHighlighter Evolved

Since I blog about code for the most part, I need my code to look good on the page. This is called syntax highlighting; it colorizes the code appropriately and makes it stand out on the page. The plugin used on WordPress.com is SyntaxHighlighter Evolved. All you need to do is go to Plugins on your dashboard and then Add New, put “SyntaxHighlighter Evolved” in the search field and hit enter. You should get the plugin; then click the install button.


Statistics

To keep an eye on how many readers you get and details about them, there are a few plugins available. I basically went with the first one after a search for “statistics” in plugins.


This will give you a new menu item in the dashboard called Statistics with a lot of different filters in the sub menu.


Akismet for Anti Spam


I quickly realized after a couple of days that spam is a huge problem in the comment fields of blogs. I noticed this the hard way after some 10 days with 2946 new comments on my posts. I do get comments, just not that many.


I chose to go with the Akismet plugin that comes preinstalled with the WordPress installation. You can choose how much you want to donate, starting at $6. After activating it, you can have it go through the comments already on the site.


One of the things I don't want to do is go through spam on a regular basis to allow or deny comments. I'm basically going to trust the plugin and have it delete the worst spam. You do this in the plugin's Settings | Akismet in the dashboard.


After activating Akismet, if you happen to have spam already sitting there, like me, you can go to the comments in the dashboard, click the “Check for Spam” button and let it run through it all.



Since I had a lot of comments, I had to run this multiple times until I got down to 0, but it works really nicely.

Custom domain name

It's fairly simple to get your custom domain name up and running for Azure and your Web Site. Go to the Configure tab on your site.


Scroll down to the domain names section and click the manage button. You should get a popup with details on how to set up the DNS records correctly. Basically, all you need is a CNAME DNS record for the full DNS entry you want for your site, pointing to the URL you set up in the beginning, in my case: ingebrigtsen.azurewebsites.net. If you already had a CNAME in place pointing to WordPress.com, you will have to wait for its time to live to expire before the change is propagated.
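
For illustration, assuming a hypothetical domain blog.example.com, the record would look something like this in a BIND-style zone file (your DNS provider's control panel will have its own form for the same thing):

```
; Hypothetical zone fragment - blog.example.com pointing at the Azure site
blog.example.com.    3600    IN    CNAME    ingebrigtsen.azurewebsites.net.
```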

Setting it up in your favorite writer

With WordPress you have a pretty good experience writing posts in their online editor, but the keyword here is “online”. If you need offline writing you might have a favorite editor for doing so. Personally, I like having a desktop client for writing even if I'm connected 99% of the time. These days I've fallen in love with an editor called Desk; simple, it just helps me focus and gives me a certain joy in writing I thought I had lost. Anyway, having moved to Azure, you can still use your favorite writer; all you need to do is point it at the blog's URL. For the XMLRPC part, some writers will figure this out, others won't; if you need to be explicit about it, just add “xmlrpc.php” to the API URL. Then it's just the username and password.



The drawbacks

There are a few things we suddenly have to consider when moving away from a hosted service. It's always a balance of flexibility versus the amount of work to do. In a Software-as-a-Service model, the common denominator is the rule and everything is taken care of; hosting it yourself, you gain flexibility, but along come chores you didn't have to do in a SaaS world.

Backup

Things can happen, leading to loss of service or, worse, loss of data. So you want to consider setting up backup to avoid losing anything. Luckily, this is fairly easy to set up in Azure; have a look at the walkthrough of how this is done here.

Updates

Every time there is an update to WordPress, a plugin, or even the theme you selected, you are now responsible for applying it yourself.


Scaling

Another thing one does not have to think about in a SaaS model is scaling: keeping your site responsive and available even when a lot of users hit it. Again, Azure really comes to the rescue. There are a few options for dealing with this: you can increase the instance size or increase the number of instances. There is even an autoscale option that can do this dynamically.


Conclusion

Azure has really matured since I started with it back in 2008, right after PDC and the launch of Azure. The ease and consistency of the service's UX is really great. I've tried Amazon and I do feel it's quite messy. With the new portal coming out for Azure, things are getting even easier. I think the ease of getting a WordPress site up and running on Azure proves how mature it is. Putting backup and scaling in place makes the drawbacks of hosting things myself pretty much go away.


Today I use Azure for all my personal sites plus company sites. For every client we've had, our default answer is Azure. It comes from the fact that it is very mature – and not only for Microsoft technology, but also for things like PHP, NodeJS, Ruby on Rails or pretty much anything you want to throw at it, even Linux virtual machines.


Improving Angular experience with some convention magic

Disclaimer: I'm a line-of-business app developer, not a Web site developer. My view on things is colored by my experience with building large enterprise applications that have larger teams of developers working together and need to keep velocity and code quality at a very high standard through years of development.

I've been on a short assignment with a client who wanted to establish ways of working for their Web development. They're a .NET shop with little Web experience – only thick-client technologies like WPF. They had very few requirements, but they wanted to go for things that are fairly established, and they had a guy who had done some AngularJS. For me it was the first time I did Angular in any structured way; previously I had just dabbled with it, basically with the intention of supporting it in Bifrost. The first thing that struck me with Angular is the explicitness of just about everything, and how everything needs to be configured with code. Obviously, I'm not going to claim to be an expert in Angular, and I'd love to be corrected if I'm wrong. But from my little experience I started itching, and I had to scratch the itch.

Routes and conventions

In Angular, with the Angular Route extension, routing is one of the things you can do; a fairly simple and consistent API. One typically ends up with a pattern as follows:

var application = angular.module("MyApp", ["ngRoute"]);
application.config(["$routeProvider", function($routeProvider) {
    $routeProvider.when("/some/route", { templateUrl: "/some/path/to/view.html", controller: "MyController" });
}]);
application.controller("MyController", function() { /* Put your controller code here ... */ });

On one side you tell it what to do when a route occurs, pointing it to a view and a controller. Then you configure the controller by its name and point it to a function that represents the controller.
My claim is that, looking at your app, you either have formalized a pattern or you have a pattern by accident for how these things are put together. This is a great time to formalize it by creating something that represents it and can automate it as a convention.

For instance, you will probably see that for most parts of your app there is a relationship between the routes and where things are placed on disk. In my experience, routing is more often than not something the end users really don't care about. I have a feeling we put too much thought and effort into something like this, while the end users just copy, paste and click links and don't care how they look. With this in mind, and if in reality there is a correlation between routes and disk location, we can automate this whole thing.
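
A minimal sketch of what such a convention could look like, assuming the list of paths comes from a build step or the server (the helper name is hypothetical, not Bifrost's actual API):

```javascript
// Hypothetical convention: "/some/route" maps to "/some/route/view.html"
// and a controller named "SomeRouteController".
function routeFromPath(path) {
    var name = path
        .split("/")
        .filter(function (part) { return part.length > 0; })
        .map(function (part) { return part.charAt(0).toUpperCase() + part.slice(1); })
        .join("");

    return {
        route: path,
        templateUrl: path + "/view.html",
        controller: name + "Controller"
    };
}

// With a convention in place, registering all routes becomes a loop;
// the paths could be produced by listing files on disk at build time.
var paths = ["/employees/register", "/employees/list"];
var routes = paths.map(routeFromPath);

console.log(routes[0].controller);  // "EmployeesRegisterController"
console.log(routes[0].templateUrl); // "/employees/register/view.html"
```

Each entry could then be handed straight to $routeProvider.when() instead of being written out by hand.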

Directives
Another aspect of Angular that is really useful is directives, but again, as with routes and controllers, one has to set them up very explicitly. This is something that could easily be automated. For instance, you could have a folder in your frontend project called Directives where every folder within it represents a directive by its folder name; within it, a directive could then be represented by a View.html for the view, a Controller.js for the controller and a Link.js for the link part.
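
As a rough sketch of that convention (hypothetical names; assumes the folder names are available from a build step or a server endpoint):

```javascript
// Hypothetical sketch: each folder under Directives/ becomes a directive.
// A folder "DatePicker" would back a <date-picker> element, with View.html
// as its template.
function directiveFromFolder(folderName) {
    // Angular normalizes dash-cased markup to camelCase directive names
    var directiveName = folderName.charAt(0).toLowerCase() + folderName.slice(1);

    return {
        name: directiveName,
        definition: {
            restrict: "E",
            templateUrl: "/Directives/" + folderName + "/View.html"
            // controller and link would be wired up from Controller.js / Link.js
        }
    };
}

var folders = ["DatePicker", "AddressEditor"];
folders.map(directiveFromFolder).forEach(function (d) {
    console.log(d.name + " -> " + d.definition.templateUrl);
});
```

Each result could then be registered with application.directive(), so adding a new directive becomes a matter of adding a folder.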

Proxy generation FTW
Something we've had great success with from our Bifrost development is proxy generation. With backend code written in a different language than the frontend, it's just great to augment the frontend with generated code to make the transition between the two less painful. But regardless of the divide of having two languages in your system, generating code for automation can really be a lifesaver. With a fixed convention, developers on your team get fewer options. You might argue that is a bad thing, but I argue it's a very good thing. If you're the only one on your team, or you're two guys, you can probably cope with full flexibility, but applying a regime makes it easier to do things right – or according to the regime, at least. And that should be a good thing. Another benefit of defining regimes is that when automating things and generating code, you get the opportunity to address cross-cutting concerns. With a regime in place for routing, for instance, pointing it by convention to a controller that matches the view, you could inject a man-in-the-middle controller that could do a lot of interesting cross-cutting work, for instance logging, error handling, security or other more domain-specific concerns.

Code please
We are going to build support for Angular into Bifrost, and with it we will provide configurable conventions for routing and directives.

As part of doing a Visual Studio 2015 Deep Dive for Microsoft recently, I created a sample for Visual Studio 2015 showing how to do all this, but now just using JavaScript and the new Grunt and NodeJS support coming with Visual Studio 2015. You can find the entire sample, which uses SignalR, Karma and Jasmine, over at GitHub here.

Bifrost and Proxy generation

One of the things we consider to be among the most successful additions to Bifrost is the bridge between the client and the server in Web solutions. Earlier this year we realized that we wanted to be much more consistent between the code written in our “backend” and our “frontend”, bridging the gap between the two. Out of this realization came generation of proxy objects for artifacts written in C# that we want to have exposed in our JavaScript code. If you're a node.js developer you're probably asking yourself: WHY? Well, we don't have the luxury of writing it all in JavaScript right now, but it would be interesting to leverage what we know now and build a similar platform on top of node.js, or for the Ruby world for that matter – but that's for a different post. One aspect of our motivation was also that we find types to be very helpful; and yes, JavaScript is a dynamic language, but it's not typeless, so we wanted the same usefulness that types have been giving our backend code in the frontend as well. The types represent a certain level of metadata, and we leverage them all through our system.

Anywho, the principle was simple: use .NET reflection on the types we want represented in JavaScript and generate pretty much an exact copy of those types in corresponding namespaces in the client. Namespaces, although different between different aspects of the system, come together through a convention mechanism built into Bifrost – that being a post of its own that should be written :). Enough with the digressions.

Basically, in the Core library we ended up introducing a CodeGeneration namespace, which holds the JavaScript constructs we need to be able to generate the proxies.


There are two key elements in this structure; CodeWriter and LanguageElement – the latter looking like this:

public interface ILanguageElement
{
    ILanguageElement Parent { get; set; }
    void AddChild(ILanguageElement element);
    void Write(ICodeWriter writer);
}

Almost everything sitting inside the JavaScript namespace is a language element of some kind – some of them being a bit more than just a simple language element, such as the Observable type, which is a specialized element for KnockoutJS. Each element has the responsibility of writing itself out; elements know what they should look like, but they aren't responsible for things like ending an expression with semicolons or similar. They are focused on their little piece of the puzzle, and the generator does the rest, making sure to a certain level that the result is legal JavaScript.

The next part is, as mentioned, the CodeWriter:

public interface ICodeWriter
{
    void Indent();
    void Unindent();
    void WriteWithIndentation(string format, params object[] args);
    void Write(string format, params object[] args);
    void NewLine();
}

A very simple interface, basically just dealing with indentation, writing and adding new lines.
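
As a simplified analog (not the actual Bifrost types), the element/writer split can be illustrated in a few lines of JavaScript: elements write themselves, the writer only handles indentation and line breaks.

```javascript
// Simplified analog of ILanguageElement / ICodeWriter.
function CodeWriter() {
    this.level = 0;
    this.output = "";
}
CodeWriter.prototype.indent = function () { this.level++; };
CodeWriter.prototype.unindent = function () { this.level--; };
CodeWriter.prototype.writeLine = function (text) {
    this.output += new Array(this.level + 1).join("    ") + text + "\n";
};

// A composite element: writes its own syntax, delegates to children.
function FunctionElement(name) {
    this.name = name;
    this.children = [];
}
FunctionElement.prototype.addChild = function (child) { this.children.push(child); };
FunctionElement.prototype.write = function (writer) {
    writer.writeLine("function " + this.name + "() {");
    writer.indent();
    this.children.forEach(function (child) { child.write(writer); });
    writer.unindent();
    writer.writeLine("}");
};

function StatementElement(code) { this.code = code; }
StatementElement.prototype.write = function (writer) { writer.writeLine(this.code); };

var fn = new FunctionElement("hello");
fn.addChild(new StatementElement("var self = this;"));
var writer = new CodeWriter();
fn.write(writer);
console.log(writer.output);
// function hello() {
//     var self = this;
// }
```

The same shape scales up: the generator walks a tree of elements, and each one only knows its own piece of syntax.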

In addition to the core framework for building the basic structure, we've added quite a few helper methods in the form of extension methods to make it much easier to generate common scenarios – and at the same time provide a more fluent interface for putting it all together, without having to have .Add() methods all over the place.

So let's dissect the code for generating the proxies for our read models (read models are what queries return; queries run against a datasource, typically a database):

public string Generate()
{
    var typesByNamespace = _typeDiscoverer.FindMultiple<IReadModel>().GroupBy(t => t.Namespace);
    var result = new StringBuilder();

    Namespace currentNamespace;
    Namespace globalRead = _codeGenerator.Namespace(Namespaces.READ);

    foreach (var @namespace in typesByNamespace)
    {
        if (_configuration.NamespaceMapper.CanResolveToClient(@namespace.Key))
            currentNamespace = _codeGenerator.Namespace(_configuration.NamespaceMapper.GetClientNamespaceFrom(@namespace.Key));
        else
            currentNamespace = globalRead;

        foreach (var type in @namespace)
        {
            var name = type.Name.ToCamelCase();
            currentNamespace.Content.Assign(name)
                .WithType(t =>
                    t.WithSuper("Bifrost.read.ReadModel")
                        .Function
                            .Body
                                .Variant("self", v => v.WithThis())
                                .Property("generatedFrom", p => p.WithString(type.FullName))
                                .WithPropertiesFrom(type, typeof(IReadModel)));
            currentNamespace.Content.Assign("readModelOf" + name.ToPascalCase())
                .WithType(t =>
                    t.WithSuper("Bifrost.read.ReadModelOf")
                        .Function
                            .Body
                                .Variant("self", v => v.WithThis())
                                .Property("name", p => p.WithString(name))
                                .Property("generatedFrom", p => p.WithString(type.FullName))
                                .Property("readModelType", p => p.WithLiteral(currentNamespace.Name + "." + name))
                                .WithReadModelConvenienceFunctions(type));
        }

        if (currentNamespace != globalRead)
            result.Append(_codeGenerator.GenerateFrom(currentNamespace));
    }

    result.Append(_codeGenerator.GenerateFrom(globalRead));
    return result.ToString();
}

That's all the code needed to get the proxies for all implementations of the IReadModel interface; it uses a subsystem in Bifrost called TypeDiscoverer that deals with all the types in the running system.

Retrofitting behavior, after the fact..

Another discovery we've made is that we're demanding more and more from our proxies – after they showed up, we grew fond of them right away and just want more info in them. For instance, in Bifrost we have Commands representing the behavior of the system; commands are therefore the main source of interaction with the system for users, and we secure them and apply validation to them. Previously we instantiated a command in the client and asked the server for validation metadata for the command and got this applied. With the latest and greatest, all this information is now available on the proxy – which is a very natural place to have it. Validation and security are Knockout extensions that can extend observable properties, and our commands are full of observable properties. So we introduced a way to extend observable properties on commands, with an interface for anyone wanting to add an extension to these properties:

public interface ICanExtendCommandProperty
{
    void Extend(Type commandType, string propertyName, Observable observable);
}

These are automatically discovered, as with just about anything in Bifrost, and hooked up.

The end result for a command with the validation extension is something like this:

Bifrost.namespace("Bifrost.QuickStart.Features.Employees", {
    registerEmployee : Bifrost.commands.Command.extend(function() {
        var self = this;
        this.name = "registerEmployee";
        this.generatedFrom = "Bifrost.QuickStart.Domain.HumanResources.Employees.RegisterEmployee";
        this.socialSecurityNumber = ko.observable().extend({
            validation : {
                "required": {
                    "message": "'{PropertyName}' must not be empty."
                }
            }
        });
        this.firstName = ko.observable();
        this.lastName = ko.observable();
    })
});

Conclusion
As I said at the start of this post, this has proven to be one of the most helpful things we've put into Bifrost. It didn't come without controversy, though; we were met with some skepticism when we first started talking about it, even with claims such as “… it would not add any value …”. Our conclusion is very different: it really has added true value. It enables us to get from the backend into the frontend much faster, more precisely and with higher consistency than before. It has increased the quality of what we're doing when delivering business value. This, again, is just something that helps developers focus on delivering the most important thing: business value!

Sustainable Software Development

Software is hard to make: capturing the domain and getting it right with as few bugs as possible. Then comes release time and your users start using it, and things get even harder – now come all the changes for all the things that you as a developer got wrong, all the bugs, things you didn't think of, misunderstandings. All that, and the users have requirement changes or additional features they'd love to have on top. I've often heard people say things like “… if it wasn't for the users, creating software would be quite joyful …”. It says tons – amongst other things that creating software is not an easy task. It also says that we're not very good at talking to our users, or that we've taken the concept of having your team work in a black-box environment too far or too literally.

Even if we did it all right, there are tons of external events that could require changing what we already made. The market could change, new competitors could arrive at the table, or existing competitors could take a completely different approach to the same problem you just solved. Another source of change is that the people we develop the software for, whom we've included in the dialog, are learning throughout the process what is possible and what is not, giving them more knowledge and making them want change.

The purpose of this post is to shed some light on practices that are good on their own, but that put together represent a very good big picture of how you can create software that is easier to change, meets the requirements of your users better and possibly has fewer bugs.

Talk Talk Talk

One of the biggest mistakes I think we make is to think we are able to intuitively understand our users – what they need and want, and also what form it should take. Going ahead and just doing things our way does not come from an inherent arrogance in us developers, but rather from what I think is closer to what we consider an intelligent and qualified assumption about what we're making. At the core of this problem sits the lack of dialog with the real users. We as developers are not all to blame for this, of course. On unstructured projects, without any particular process applied, one very often finds that users, or representatives of the users such as support personnel, salespeople or similar, have a direct link to the developers. If you've been on a project like this, you'll probably also remember that there was one guy in particular in the office these people went to – the “Yes Man” who just jumped and fixed the “critical” bug that landed on his desk. This is, needless to say, a very counterproductive way to work; at times you're not able to get into the zone at all because of all the interruptions. Then, on more mature teams, they've applied something like Scrum or eXtreme Programming and been taught that we run demos for the end users or their representatives at the end of an iteration of X number of weeks, and then they can have their input into the process. It gave us the breathing room we were looking for, phew…

The only problem with this breathing room is that it is often misinterpreted and taken too literally. There is nothing saying you can’t speak to a user during an iteration. In fact, I would highly recommend doing it as often as possible, so that we’re certain that what we’re making really is what the users want. Because unless you’re making software that you’re using yourself – a dev tool or something else you use on a regular basis – we really haven’t got a clue. Opening up a communication channel like this sounds scary, and it should, if you’ve ever been on an unorganised team. Communication is key; make it so that you as a dev contact the users or their representatives, and they can’t come to you directly for anything – unless agreed to, because you’re working on a particular feature together. So in conclusion: users don’t speak to the developers about things they aren’t working on with the developer – and if they are working on something together, they should be the best of buddies.

One of the practices one could consider applying is Domain Driven Design (DDD), which comes with a vocabulary that is in itself very helpful for developers. But the best part of DDD is the establishment of a ubiquitous language: a language representing the elements of your particular domain that both the users and the developers speak. Often this means representing the vocabulary the end users already have in your own code. Getting to this language will help you communicate better with the end users, and you don’t have to do mental translations between what the users are talking about and “what it really is” in the code.
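To make the idea concrete, here is a small TypeScript sketch of what a ubiquitous language can look like in code; the “basket” and “checkout” vocabulary is a hypothetical e-commerce example, not from any particular project:

```typescript
// Hypothetical e-commerce domain: the types and method names mirror the
// words the users actually use ("basket", "checkout"), not generic data terms.
class Basket {
  private readonly lines: { sku: string; quantity: number }[] = [];

  addItem(sku: string, quantity: number): void {
    this.lines.push({ sku, quantity });
  }

  // "checkout" is the users' own word, so it is the method name too –
  // no mental translation between conversation and code.
  checkout(): string[] {
    return this.lines.map(l => `${l.quantity} x ${l.sku}`);
  }
}
```

When a user says “the customer checks out the basket”, that sentence maps one-to-one onto the code.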

The secret sauce of agile

We’ve all had our ears filled with how we have to become agile. You would be crazy not to, we’ve been told. I think it’s important to clarify what agile means: it’s all about feedback loops. Get the software into the hands of the users as fast and as often as possible, so that we can learn what works and what doesn’t. Find problems as early as possible, while they are still fresh in the minds of the developers – saving time in the long run and increasing the quality of the user experience and the code.

I’m all in on keeping the feedback loop as tight as possible. In fact, that is something I promote all the time – for just about everything, from the execution time of your automated tests to feedback from the real users. One of the promises of being agile is having software that is changeable and adapts to input from users. But I would argue there is a part of this story that seems to drown in the overall messaging: your software needs to be written in a way that makes it agile. What does that mean? I think there are a few basic techniques and principles we can apply to make this a reality.

Testing is one of the pieces of this puzzle. Writing code that verifies the production code and its promise is something some of us have been doing for quite a while – in fact I would argue everyone does it, just not necessarily consciously. Every time you add a little console app to try out a particular feature in the system you’re developing – just to keep from having to run the entire app every time – you’re putting in code that lets you debug what you’re working on in a more controlled way. These are in fact tests that could, with a little fine-tuning, become automated tests that run all the time to verify that you’re not breaking anything while moving forward. This gives you the confidence to change your software whenever change is needed, typically due to changing requirements or similar.
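As a sketch of that fine-tuning, here is a hypothetical TypeScript example; `formatPrice` stands in for whatever you used to poke at from a throwaway console app:

```typescript
// A hypothetical helper that used to be verified by eyeballing console output.
function formatPrice(cents: number): string {
  return `${(cents / 100).toFixed(2)} kr`;
}

// The same manual check, fine-tuned into an automated test that can run
// on every build instead of only when someone remembers to look.
function formatPrice_shows_two_decimals(): void {
  console.assert(formatPrice(19900) === "199.00 kr");
  console.assert(formatPrice(2550) === "25.50 kr");
}

formatPrice_shows_two_decimals();
```

The step from the console app to this is tiny, but the check now guards the behaviour forever.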

Also, your tests become the documentation representing the developer’s understanding of what is to be created. Applying techniques like specification by example and BDD, your code becomes readable and understandable, so other developers can jump in and understand what you originally intended without you having to explain it verbosely. By being clear in the naming of your specifications / tests, writing them out with the Gherkin language, you gain the ability to just look at test / spec signatures and have a dialog right there and then with a user.
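For instance, a specification written in Gherkin reads well enough that a user can take part in the dialog; this scenario is purely hypothetical:

```gherkin
Feature: Checkout
  Scenario: Customer checks out a basket with items
    Given a basket with 2 items
    When the customer checks out
    Then an order is created
    And the basket is empty
```

Nothing here requires a programmer to read it, yet it is precise enough to drive an automated test.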

But with testing comes a new responsibility: having code that is testable. All of a sudden it can feel painful to actually test your code, due to complex setup, and often one ends up testing a lot more than needed. The sweet spot lies in being able to mock things out – provide fake representations of the dependencies your system under test might have, and test the interaction. Testing in isolation is the key, which leads to a couple of things you want to consider when writing your code: the Single Responsibility Principle and the Interface Segregation Principle. Both of these help with testing, but they are good practices on their own and make your software ready for change. With SRP your code gets focused and specialised – have a type represent one aspect and a method do only one thing, and all of a sudden your code gets more readable as well. Applying ISP, you have contracts between systems that aren’t concrete and can be swapped out, giving you great opportunity for change but also making it a lot easier to test the interaction between systems, as you can now give them fake representations and all you need to test is that the interaction between them is correct. All of this leads to higher focus, and in the long run gives you as a developer greater velocity and confidence in changing your implementations.
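A minimal TypeScript sketch of this, with hypothetical names throughout, could look like the following – a narrow contract (ISP), a class doing one thing (SRP), and a fake that records the interaction:

```typescript
// ISP: a narrow contract that consumers depend on instead of a concrete class.
interface MessageSender {
  send(recipient: string, body: string): void;
}

// SRP: the notifier only decides what to notify about, not how to deliver it.
class OrderNotifier {
  constructor(private readonly sender: MessageSender) {}

  orderShipped(customerEmail: string, orderId: string): void {
    this.sender.send(customerEmail, `Order ${orderId} has shipped`);
  }
}

// The fake records the interaction instead of touching a real mail server,
// so a test can verify the collaboration in complete isolation.
class FakeSender implements MessageSender {
  sent: { recipient: string; body: string }[] = [];
  send(recipient: string, body: string): void {
    this.sent.push({ recipient, body });
  }
}
```

A test then hands `OrderNotifier` a `FakeSender` and simply asserts that the right message went to the right recipient.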

I strongly feel that testing is a necessity for agile thinking; it gives you the power of changeability, and it is an often forgotten but vital part of the big picture.

Rinse repeat; patterns

When you’ve been developing software for a while, you find things that work and things that don’t: patterns and anti-patterns. These often appear organically through plain and simple experience, and some of them have been promoted and published in literature. Patterns are helpful on many levels; they give you as a developer a vocabulary, making it easier to talk to other developers. They also provide guidance when you don’t necessarily know how to do something – knowing a few well-known patterns can then become very powerful.

Concerns

Software can be divided into logical parts that are concerned with different things. In its simplest form you can easily define what the user sees and uses as a separate concern: the UI, frontend or View. You can also pinpoint the place that performs all the business logic, the thing that represents the vocabulary of your domain. Going back to the UI we can be even more fine-grained: you have the concern of defining the elements that are available in the UI, then the concern of styling those elements to make them look the way you want, and thirdly the concern of making them come to life – the code that serves information into the UI and responds to the user’s actions. Right there in the UI we have at least three concerns. Translated into the world of the web this means HTML, CSS and JavaScript. These are three different things that you should treat very differently – not in the same file with <style/> and <script/> tags all over the place. Separate them out!

On all tiers of an application you have different concerns, some of them hard to identify and some that just pop up, but they are nevertheless just as important to find, identify and separate out. Sometimes, though, you run into something that applies to more than one thing: cross cutting concerns. These can be things you just want to be used automatically, or something your main application does not want to think too much about. The classic examples are transactions, threading and other non-functional requirements – things we can technically identify and “fix” once and not have to think about again; in fact they often have a nature of being something we don’t have to think about in advance either. But there are other cross cutting concerns as well. Take for instance a scenario where you need a particular piece of information all over, such as tenant information, user information or similar. Instead of having to pass it along on all method calls, and basically exposing your code to the possibility of being wrongly used, you can simply take these cross cutting concerns in as a dependency on the constructor of your system and define at the top level of your application what that means.
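A hypothetical TypeScript sketch of taking tenant information in through the constructor rather than on every call (`TenantContext` and `DocumentService` are made-up names for illustration):

```typescript
// The cross cutting concern is expressed as a contract...
interface TenantContext {
  currentTenant(): string;
}

// ...and comes in through the constructor. Only the composition root
// at the top of the application decides what it concretely is.
class DocumentService {
  constructor(private readonly tenant: TenantContext) {}

  // No tenantId parameter here – callers simply can't pass the wrong one.
  storagePathFor(documentName: string): string {
    return `${this.tenant.currentTenant()}/${documentName}`;
  }
}
```

Every method signature in the system stays clean, and the tenant concern is defined exactly once.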

Identifying your different concerns, and your cross cutting concerns, makes life so much simpler. Having cross cutting concerns identified makes your code more focused, easier to maintain and, I would argue, more understandable. You’re not getting the big picture when looking at a single system, but nor should you. Keep things focused and let the different systems care about their own thing and nothing else. In fact, you should be very careful not to bleed things between concerns. If your UI needs something special going on, you should think twice about introducing it in the business logic that often sits on the server. A classic example is a simple model of a person holding FirstName and LastName, to which a computed property holding FullName is then added. FullName in this scenario is only needed in the UI, and there is no value in polluting the model sitting on the server with this information. In fact, you could easily just do that in the View – not even having to do anything in the view logic to make it work.
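In TypeScript terms, the FirstName / LastName example might be sketched like this, with the presentation concern kept entirely on the view side:

```typescript
// The model as it comes from the server – free of presentation concerns.
interface Person {
  firstName: string;
  lastName: string;
}

// The full name is derived where it is actually needed: in the view.
function displayName(person: Person): string {
  return `${person.firstName} ${person.lastName}`;
}
```

If the UI later wants “LastName, FirstName” instead, only the view changes; the server model is untouched.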

Fixing bugs – acceptance style

There is probably no such thing as software completely without bugs – so embrace fixing bugs. You will have to fix problems that come in, but how you approach the bug fixing is important. If you jump in and just hack in the fix, disregarding all the tests / specifications you wrote, you’re really just shooting yourself in the foot. Instead, what you should do is make sure you can verify the problem, by either changing existing tests / specifications to accommodate the new desired behaviour or adding tests / specifications that expose the problem. Then, after you’ve verified the problem by running the automated test and seeing that it is “red”, you go back and fix the problem – run the automated test again and verify that it has become “green”. A generally good practice to follow is the pattern of “Red”, “Green”, “Refactor”. You can read more about it in a previous post here.
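As a hypothetical illustration of fixing a bug acceptance style: suppose discounts over 100% used to produce negative totals. The specification below would first run “red” against the buggy code; the clamping shown is the fix that turns it “green”:

```typescript
// Hypothetical bug fix: clamping the percentage is what makes the
// specification below pass; without it, a 150% discount went negative.
function applyDiscount(price: number, percent: number): number {
  const clamped = Math.min(Math.max(percent, 0), 100);
  return price * (1 - clamped / 100);
}

// The specification that reproduced the problem now guards against regression.
function discount_never_produces_negative_totals(): void {
  console.assert(applyDiscount(100, 150) === 0);
  console.assert(applyDiscount(100, 25) === 75);
}

discount_never_produces_negative_totals();
```

The specification stays in the suite, so this particular bug can never quietly come back.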

Compositional Software

Making software is in general a rather big task – so big that thinking of your solution as one big application is probably way too much to fit inside your head. So instead of thinking of it as one big application, think of it as a composition of many small applications. They basically represent features in your larger application composition. Make them focused and keep them isolated from the rest of the “applications” in your composition. Keeping them in isolation enables you to change all the artefacts of an “application” without running the risk of breaking something else. All you need to do then is compose it all together at the very top. Doing this, you need to consider things like low coupling and high cohesion. Don’t take dependencies on other “applications”, and keep all the artefacts of each tier close – don’t go separating things out based on artificial technical boundaries such as “.. in this folder sit all my interfaces ..” <- swap out interfaces with things like “controllers, views, viewModels, models”. If the artefact is relevant to your feature, keep it close, in the same folder. There is nothing wrong with having HTML, CSS and JavaScript files in the same folder for the frontend part of your solution – in fact, nothing wrong with the relevant frontend code that runs on the server also sitting in the same folder. Partition vertically by feature and horizontally by tier instead, and make your structure consistent throughout. All the tiers can have the same folders, named the same. If you have a concept and feature of “Documents”, that is the name of the folder in every tier, holding all the artefacts relevant to that feature for that tier.
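As a sketch, a hypothetical “Documents” feature partitioned this way might look like the following (the tier and file names are made up for illustration):

```
Frontend/
  Documents/
    Documents.html
    Documents.css
    Documents.js
Backend/
  Documents/
    DocumentsController.cs
    DocumentService.cs
```

Every tier repeats the same feature folder, so everything relevant to “Documents” is always one folder away, never scattered across technical buckets.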

File->New….

It is by far easier to create new things than to change, fix or add to existing code – ask anyone who is maintaining or has maintained a legacy system without the opportunity to do a rewrite. Software has a tendency to rot, which makes it harder and harder to maintain over time. I would argue that a lot of that has to do with how the software is written. Applying the principles mentioned in this post, like SRP and separation of concerns, and building compositions, are all part of making this easier, and actually let you for the most part create new things rather than having to fix too much of your old code. In fact, applying these principles you will find yourself mostly adding new things rather than amending existing ones. This leads to more decoupled software, which is less prone to regressions.

Conclusion

By Sustainable Software Development, I mean software that will sustain over time and meet the following requirements:

  • Maintainable – spaghetti is not what you’re looking for. Apply practices such as the SOLID principles and decouple things. Make sure you identify the concerns in your software and separate them out. Identify cross cutting concerns, keep them in their own little jar and don’t bleed them through your entire app.
  • Changeable – Single Responsibility does in fact mean single; make the contracts explicit. You should be constantly changing your software, improving it – not just for new features or added value, but also when dealing with bugs – improving the quality of the system through refactoring.
  • Debuggable – make your code debuggable, but don’t let the tool suck you in – use it wisely.
  • Testable – be sure to write code that is singled out on its responsibility, taking dependencies on contracts – making it possible to stay focused and keep your tests small and focused.

What I’m realising more and more from the work I do is that there is no single magic bullet – there are many things that fit together, and doing them together makes the bigger picture better. When some new practice is introduced, we’re often told it will be the magic bullet making all problems go away. What they often fail to say is that they were already doing a bunch of other practices that need to be applied as well for it to be successful.

It is very important that we as developers focus on the important parts; deliver business value and deliver it so that we can change it when needed without sacrificing quality.

Applying the things in this blog post has proved to us that we get measurably fewer bugs and less regression, and it all boils down to the simple fact that we’re for the most part just developing new things. After all, focusing on single responsibility, separating things out and putting things in isolation leads to creating new code all the time, rather than constantly adding yet another if/else here and there to accommodate change. Of course, there is much more to it than a post like this can ever capture, but these are part of the core values that we believe in.