Domain Concepts

Back in 2015, I wrote about concepts. The idea behind these is that you encapsulate types that have meaning to your domain as well-known types. Rather than relying on technical or even primitive types, you formalize these as types you use throughout your codebase. This gives you a few benefits, such as readability, and potentially also compile-time type checking and errors. It also provides a way to adhere to the principle of least surprise. It's also a great opportunity to use the encapsulation to deal with cross-cutting concerns, for instance values subject to compliance requirements such as GDPR, or security concerns where you want to encrypt values while in motion.

Throughout the years, at the different places I've been where we've used these, we've evolved this from a very simple implementation to a more involved one. Both implementations aim at making it easy to deal with equality, and the latter also with comparisons. That becomes very complex when having to support different types and scenarios.

Luckily, with C# 9 we got records, which let us truly simplify this:

public record ConceptAs<T>
{
    public ConceptAs(T value)
    {
        ArgumentNullException.ThrowIfNull(value, nameof(value));
        Value = value;
    }

    public T Value { get; init; }
}

With records we don't have to deal with equality nor comparison; it is dealt with automatically – at least for primitive types.

Using this is then pretty straightforward:

public record SocialSecurityNumber(string value) : ConceptAs<string>(value);

A full implementation can be found here – an implementation using it here.
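To illustrate the equality we get for free, here is a quick sketch using the SocialSecurityNumber concept from above (the number itself is made up):

```csharp
var left = new SocialSecurityNumber("01129955131");
var right = new SocialSecurityNumber("01129955131");

// Records give us value-based equality, so two distinct instances
// holding the same value are considered equal.
Console.WriteLine(left == right);                // True
Console.WriteLine(ReferenceEquals(left, right)); // False
```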

Implicit conversions

One of the things that can also be done in the base class is to provide an implicit operator for converting from the ConceptAs type to the underlying type (e.g. Guid). Within a concrete implementation you could also provide the other direction, going from the underlying type to the specific concept. This has some benefits, but also a downside: if you want the compiler to catch errors, implicit conversions from the underlying type would obviously make all your ConceptAs&lt;Guid&gt; implementations interchangeable.
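As a sketch of how this could look (the base class is the one from above, extended with the operator; the opt-in reverse conversion sits on the concrete SocialSecurityNumber example):

```csharp
public record ConceptAs<T>
{
    public ConceptAs(T value)
    {
        ArgumentNullException.ThrowIfNull(value, nameof(value));
        Value = value;
    }

    public T Value { get; init; }

    // Concept -> underlying type; safe to have on the base class.
    public static implicit operator T(ConceptAs<T> concept) => concept.Value;
}

public record SocialSecurityNumber(string value) : ConceptAs<string>(value)
{
    // Underlying type -> concept; opt-in per concrete type, since making
    // this implicit everywhere would weaken the compile-time checks.
    public static implicit operator SocialSecurityNumber(string value) => new(value);
}
```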

Serialization

When going across the wire with JSON, for instance, or when storing it in a database, you probably don't want the full construct of { value: &lt;actual value&gt; }. In C#, most serializers support the notion of converters to and from the target. For Newtonsoft.Json these are called JsonConverter – an example can be found here; for MongoDB, you can find an example of a serializer here.
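As a hedged sketch of what such a converter could look like with Newtonsoft.Json (using the SocialSecurityNumber example concept from earlier, not the linked implementation):

```csharp
using System;
using Newtonsoft.Json;

// Serializes the concept as its underlying string value rather than
// as an object like { "value": "..." }.
public class SocialSecurityNumberConverter : JsonConverter<SocialSecurityNumber>
{
    public override void WriteJson(JsonWriter writer, SocialSecurityNumber value, JsonSerializer serializer)
        => writer.WriteValue(value.Value);

    public override SocialSecurityNumber ReadJson(JsonReader reader, Type objectType, SocialSecurityNumber existingValue, bool hasExistingValue, JsonSerializer serializer)
        => new((string)reader.Value);
}
```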

Summary

I highly recommend using strong types for your domain concepts. It will make your APIs more obvious, as you avoid methods like:

Task Commit(Guid eventSourceId, Guid eventType, string content);

And instead get a clearer method like:

Task Commit(EventSourceId eventSourceId, EventType eventType, string content);
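The strongly typed identifiers in that signature could themselves be one-liners on top of the ConceptAs&lt;T&gt; base (a sketch; the real definitions may differ):

```csharp
public record EventSourceId(Guid value) : ConceptAs<Guid>(value);
public record EventType(Guid value) : ConceptAs<Guid>(value);
```

With these in place, accidentally swapping the two Guid arguments becomes a compile error rather than a runtime surprise.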


Legacy C# gRPC package + M1

I recently upgraded to a new MacBook with the M1 CPU in it. In one of the projects I'm working on at work, we have a third-party dependency that is still using the legacy gRPC package. Since we've started using .NET 6, which fully supports the M1 processor, you get a runtime error both when running natively on M1 and through Rosetta translation. This is because the package does not include the osx-arm64 version of the native .dylib needed for it to work. I decided to package up a NuGet package that includes only this binary, so that you can add the regular package and this new one on top and make it work on M1 CPUs. You can find the package here and the repository here.

Usage

In addition to your Grpc package reference, just add a reference to this package in your .csproj file:

<ItemGroup>
  <PackageReference Include="Grpc" Version="2.39.1" />
  <PackageReference Include="Contrib.Grpc.Core.M1" Version="2.39.1" />
</ItemGroup>

If you're leveraging another package that implicitly pulls this in, you might need to explicitly include a package reference to the Grpc package anyway – provided your library works with the version this package is built for.

Summary

Although this package now exists, the future of gRPC and C# lies with a new implementation that does not need a native library; read more here. Anyone building anything new should go for the new package and hopefully over time all existing solutions will be migrated as well.


Specifications in xUnit

TL;DR

You can find a full implementation with sample here.

Testing

I wrote my first unit test in 1996. Back then we didn't have much tooling and basically just had executables that ran automatic test batteries, but it wasn't until Dan North introduced the concept of Behavior-Driven Development in 2006 that it truly clicked into place for me. Writing tests – or specifications – that specify the behavior of a part of the system or a unit made much more sense to me. With Machine.Specifications (MSpec for short) it became easier and more concise to express your specifications, as you can see from this post comparing an NUnit approach with MSpec.

The biggest problem MSpec had, and still has IMO, is its lack of adoption and community. This results in a lack of contributors giving it the proper TLC it deserves, which ultimately leads to a lack of a good, consistent tooling experience. The latter has been a problem ever since it was introduced, and throughout the years the integrated experience in code editors or IDEs has been lacking or buggy at best. Sure, running it from the terminal has always worked – but to me that stops me a bit in my tracks, as I'm a sucker for feedback loops and love being in the flow.

xUnit FTW

This is where xUnit comes in. With a broader adoption and community, the tooling experience across platforms, editors and IDEs is much more consistent.

I set out to get the best of both and wanted to see if I could get close to the MSpec conciseness and get the tooling love. Before I got sucked into the not-invented-here syndrome, I researched whether there were already solutions out there. I found a few posts on the topic and found the Observation sample in the xUnit samples repo to be the most interesting one. But I couldn't get it to work with the tooling experience in my current setup (.NET 6 preview + VSCode on my Mac).

From this I set out to create a thin wrapper that you can find as a Gist here. The Gist contains a base class that enables the expressive features of MSpec, a similar wrapper for testing exceptions, and extension methods mimicking the Should*() extension methods that MSpec provides.
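The Should*() helpers can be thin shims over xUnit's own Assert class. Here is a sketch of what those extension methods might look like (an assumed shape; the actual Gist may differ):

```csharp
using Xunit;

public static class ShouldExtensions
{
    public static void ShouldEqual<T>(this T actual, T expected) => Assert.Equal(expected, actual);
    public static void ShouldNotEqual<T>(this T actual, T expected) => Assert.NotEqual(expected, actual);
    public static void ShouldBeNull(this object actual) => Assert.Null(actual);
    public static void ShouldNotBeNull(this object actual) => Assert.NotNull(actual);

    // Assert.IsType<T> checks the exact runtime type, not assignability,
    // which is exactly what ShouldBeOfExactType implies.
    public static void ShouldBeOfExactType<T>(this object actual) => Assert.IsType<T>(actual);
}
```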

By example

Let's take the example from the MSpec readme:

class When_authenticating_an_admin_user
{
    static SecurityService subject;
    static UserToken user_token;

    Establish context = () => 
        subject = new SecurityService();

    Because of = () =>
        user_token = subject.Authenticate("username", "password");

    It should_indicate_the_users_role = () =>
        user_token.Role.ShouldEqual(Roles.Admin);

    It should_have_a_unique_session_id = () =>
        user_token.SessionId.ShouldNotBeNull();
}

With my solution we can transform this quite easily, maintaining structure, flow and conciseness, taking full advantage of C# expression body definitions (lambdas):

class When_authenticating_an_admin_user : Specification
{
    SecurityService subject;
    UserToken user_token;

    void Establish() =>
             subject = new SecurityService();

    void Because() =>
             user_token = subject.Authenticate("username", "password");

    [Fact] void should_indicate_the_users_role() =>
        user_token.Role.ShouldEqual(Roles.Admin);

    [Fact] void should_have_a_unique_session_id() =>
        user_token.SessionId.ShouldNotBeNull();
}

Since this is pretty much just standard xUnit, you can leverage all the features and attributes.

Catching exceptions

With the Gist you'll find a type called Catch. Its purpose is to provide a way to capture exceptions from method calls, so you can assert whether an exception occurred or not. Below is an example of its usage, along with one of the extension methods provided in the Gist – ShouldBeOfExactType&lt;&gt;().
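Catch itself can be implemented in a few lines; the following is a sketch of an assumed implementation (the Gist may differ slightly):

```csharp
using System;

public static class Catch
{
    // Runs the given action and returns the exception it threw,
    // or null if it completed without throwing.
    public static Exception? Exception(Action action)
    {
        try
        {
            action();
            return null;
        }
        catch (Exception ex)
        {
            return ex;
        }
    }
}
```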

class When_authenticating_a_null_user : Specification
{
    SecurityService subject;
    Exception result;

    void Establish() =>
             subject = new SecurityService();

    void Because() =>
             result = Catch.Exception(() => subject.Authenticate(null, null));

    [Fact] void should_throw_user_must_be_specified_exception() =>
        result.ShouldBeOfExactType<UserMustBeSpecified>();
}

Contexts

With this approach one ends up being very specific about the behaviors of a system/unit. This leads to multiple classes specifying different aspects of the same behavior in different contexts, or different behaviors of the same system/unit. To avoid having to do the setup and teardown within each of these classes, I like to reuse contexts by leveraging inheritance. In addition, I tend to put the reused contexts in a folder/namespace called given, yielding a more readable result.

Following the previous examples, we now have two specifications, both requiring a context of the system being in a state with no user authenticated. By adding a file in the given folder of this unit, and adding a namespace segment of given as well, we can encapsulate the context as follows:

class no_user_authenticated
{
    protected SecurityService subject;

    void Establish() =>
             subject = new SecurityService();
}

From this we can simplify our specifications by removing the establish part:

class When_authenticating_a_null_user : given.no_user_authenticated
{
    Exception result;

    void Because() =>
             result = Catch.Exception(() => subject.Authenticate(null, null));

    [Fact] void should_throw_user_must_be_specified_exception() =>
        result.ShouldBeOfExactType<UserMustBeSpecified>();
}

The Gist supports multiple levels of inheritance recursively and will run all the lifecycle methods such as Establish from the lowest level in the hierarchy chain and up the hierarchy (e.g. no_user_authenticated -> when_authenticating_a_null_user).
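To make the lifecycle concrete, here is a sketch of how a Specification base class could walk the inheritance chain via reflection and invoke the lifecycle methods base-first; this is an assumed implementation, the actual Gist may differ:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

public abstract class Specification : IDisposable
{
    const BindingFlags Flags =
        BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.Public | BindingFlags.DeclaredOnly;

    protected Specification()
    {
        InvokeHierarchy("Establish"); // base-most context first
        InvokeHierarchy("Because");
    }

    // xUnit disposes each test-class instance after the test,
    // which gives us the Destroy teardown hook for free.
    public void Dispose() => InvokeHierarchy("Destroy");

    void InvokeHierarchy(string methodName)
    {
        var chain = new Stack<Type>();
        for (var type = GetType(); type is not null && type != typeof(Specification); type = type.BaseType)
            chain.Push(type); // derived pushed first, so the base-most type ends on top

        while (chain.Count > 0)
            chain.Pop().GetMethod(methodName, Flags)?.Invoke(this, null);
    }
}
```

Because xUnit creates a fresh instance per test, the constructor runs Establish and Because before each [Fact], and Dispose runs Destroy after it.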

Teardown

In addition to Establish, there is its counterpart: Destroy. This is where you'd typically clean up anything that needs it – typically global state that was mutated. Take our context for instance, and assume SecurityService implements IDisposable:

class no_user_authenticated
{
    protected SecurityService subject;

    void Establish() =>
             subject = new SecurityService();

    void Destroy() => subject.Dispose();
}

Added benefit

One of the problems with the MSpec approach is that it's all based on statics, since it is using delegates as “keywords”. Some of the runners have problems with this and async models, causing havoc and non-deterministic test results. Since xUnit is instance-based, this problem goes away and every instance of the specification runs in isolation.

Summary

This is probably just yet another solution to this, and I've probably overlooked implementations out there; if that's the case, please leave me a comment – I would love to not have to maintain this myself 🙂. It has helped me get to a tighter feedback loop, as I can now run or debug tests in the context of where my cursor is in VSCode with a keyboard shortcut and see the result for that specification only. My biggest hope for the future is that we get a tooling experience in VSCode similar to what Wallaby does for JS/TS testing. Windows devs using full Visual Studio also have the live unit testing feature. With .NET 6 and the hot reload feature, I'm very optimistic about tooling going in this direction so we can shave the feedback loop even more.


Orleans and C# 10 global usings

If you’re using Microsoft Orleans and have started using .NET 6 and specifically C# 10, you might have come across an error message similar to this from the code generator:

  fail: Orleans.CodeGenerator[0]
        Grain interface Cratis.Events.Store.IEventLog has method Cratis.Events.Store.IEventLog.Commit(Cratis.Events.EventSourceId, Cratis.Events.EventType, string) which returns a non-awaitable type Task. All grain interface methods must return awaitable types. Did you mean to return Task<Task>?
  -- Code Generation FAILED -- 
  
  Exc level 0: System.InvalidOperationException: Grain interface Cratis.Events.Store.IEventLog has method Cratis.Events.Store.IEventLog.Commit(Cratis.Events.EventSourceId, Cratis.Events.EventType, string) which returns a non-awaitable type Task. All grain interface methods must return awaitable types. Did you mean to return Task<Task>?
     at Orleans.CodeGenerator.Analysis.CompilationAnalyzer.InspectGrainInterface(INamedTypeSymbol type) in Orleans.CodeGenerator.dll:token 0x6000136+0x86
     at Orleans.CodeGenerator.Analysis.CompilationAnalyzer.InspectType(INamedTypeSymbol type) in Orleans.CodeGenerator.dll:token 0x6000138+0x23
     at Orleans.CodeGenerator.CodeGenerator.AnalyzeCompilation() in Orleans.CodeGenerator.dll:token 0x6000009+0x9f
     at Orleans.CodeGenerator.MSBuild.CodeGeneratorCommand.Execute(CancellationToken cancellationToken) in Orleans.CodeGenerator.MSBuild.dll:token 0x6000014+0x44f
     at Microsoft.Orleans.CodeGenerator.MSBuild.Program.SourceToSource(String[] args) in Orleans.CodeGenerator.MSBuild.dll:token 0x6000025+0x45b
     at Microsoft.Orleans.CodeGenerator.MSBuild.Program.Main(String[] args) in Orleans.CodeGenerator.MSBuild.dll:token 0x6000023+0x3d

The reason I got this was that I removed an explicit using statement, since I’m now “all in” on the global usings feature. By removing:

using System.Threading.Tasks;

… the code generator doesn't understand the return type properly and resolves it as an unknown Task type.
Putting it back in explicitly resolves the issue, and the code generator goes on and does its thing.


C# 10 – Reuse global usings in multiple projects

One of the great things coming in C# 10 is the concept of global using statements, taking away all those pesky repetitive using blocks at the top of your files – much like the _ViewImports.cshtml file in ASP.NET Core. Global usings are per project, meaning that if you have multiple projects in your solution and a set of global using statements that should be in all of them, you'd need to copy these around by default.

Luckily, with a bit of .csproj magic, we can have one file that gets included in all of these projects.

Let's say you have a file called GlobalUsings.cs at the root of your solution looking like the following:

global using System.Collections;
global using System.Reflection;

To leverage this in every project within your solution, you’d simply open the .csproj file of the project and add the following:

<ItemGroup>
   <Compile Include="../GlobalUsings.cs"/> <!-- Assuming your file sits one level up -->
</ItemGroup>

This will then include this reusable file for the compiler.


Red Cross Codeathon 2017

Some 6 months ago I found myself in a meeting where I had no clue what the topic was going to be, nor any prior knowledge of why I was there. Halfway through the meeting I found myself in complete awe at what was presented. The meeting was with the Norwegian Red Cross, and the topic was how they wanted to take advantage of technology to gain insight into potential epidemics. The Norwegian Red Cross team had already done a couple of iterations on software for dealing with this, trying out different technologies. They now had real-world experience from the versions they'd been running and wanted to take it to the next level: professionalize the software – making something maintainable and sustainable. Red Cross does not have a software development branch within their organization and reached out to Microsoft for assistance to see how we could help. With the Norwegian Red Cross being a nonprofit organization, they already get assistance from Microsoft through Microsoft Philanthropies. I was on Channel 9 talking about the project; you can watch it here.

Taking the lead

With software there aren't that many opportunities to really do good for mankind by applying the skillsets we already have. With what was presented and my position at Microsoft, I started thinking about how there could be synergies and how it could all be brought together. My day-to-day work is an advisory type of role where I engage with ISVs around Norway, helping them move to the cloud or get the most out of Azure in general. With this work I get to meet a lot of people, and I started thinking about the opportunities of combining it all. In addition, I felt that the most natural way for any of this software to be built would be to do it in the open with volunteers, since Red Cross does not have any in-house developers and the cost of hiring consultants is very high. Besides, having external resources do the work is not the most sustainable model for living software – ideally you'd want to do it in-house. With volunteers, however, one would apply one of the core principles of the Red Cross itself, that of volunteerism, as the Red Cross bases its work on more than 17 million volunteers worldwide.

Understanding and getting the word out

We've had the dialog going for the last 6 months on how it could be done – both from a process perspective, but also whether or not to base it on volunteer work and even go open source. I reached out to Richard Campbell to hear how they've been running the Humanitarian Toolbox for the American Red Cross in the U.S. He put me in contact with the product owner they've had for the allReady project. From this, I gained even more confidence that the choice to do this on a volunteer basis was right, as they've had 132 contributors pitch in on that particular project (at the time of writing this post). As large organizations go, the Red Cross also relies on a certain amount of red tape and in general internal processes to make decisions.

In the middle of June 2017 the NDC developer conference was held in Oslo. We were lucky to get a slot to talk about the project, what the Norwegian Red Cross had done, and our plans for the architecture of the new implementation. Richard Campbell joined Tonje Tingberg from the Norwegian Red Cross and myself on stage (you can watch it on Vimeo here). I was really nervous about whether anyone would come to the talk, as we didn't pitch it from a technology perspective – but I was super glad and proud of my colleagues in the development community who wanted to learn more. We nearly filled the room and the response from people was enormous; we got into good conversations right after, and good mail dialogs after NDC as well. This reinforced the belief that this could be done with volunteers. A couple of weeks ago I got a call from Tonje Tingberg bringing the happy news – Red Cross wants to move forward with the proposed model.

Codeathon – first call to action

The downside to basing everything on volunteer work is of course the fact that you have less control over when things get done. In order to get condensed and focused work done, one needs a mechanism where you gather people in the same room for a couple of days. Building on the experience from the Humanitarian Toolbox, Red Cross will be hosting a codeathon – not a hackathon or a hackfest – but more like a marathon for coding. The first of these will be held the weekend of 29 September – 1 October at the Norwegian Red Cross in Oslo. If you're interested in joining and helping out, please sign up here. We will establish a core team that will put in place the framework for how we're going to build it and make sure we get as much work done as possible during the codeathon. Once you're signed up, we will follow up with you to make sure you get to do what you want to do.

Learning Experience

One of the opportunities with this is to learn – not only from the project itself, but from working together with others. In a room filled with developers and architects, you're bound to pick up a thing or two that can be brought back to your daily work. The solution will be built using modern techniques, state-of-the-art architecture, and utilizing the cloud as much as we can. It is also a great way to learn more about how to work in the open source community if you don't already have experience with open source.

Wrapping up

Seeing the impact of the work that the Red Cross is doing really puts things in perspective. Bringing knowledge to the table is vital in helping others who don't have the resources we are accustomed to. With the type of technical know-how we have as software developers, we can really make a difference. In our line of work, we focus on being problem solvers – trying to make smarter, more efficient systems. Imagine transferring this and saving lives just by using the power of our brains; this is what we as a community can bring to the table. I'm so glad I was asked to join that meeting months ago – finally I can help in a way I know how to.

Red Cross Norway has also put out a post on this with all the details as well. You can find it here.

Bifrost roadmap first half 2017 (new name)

This type of post is the first of its kind, which is funny enough seeing that Bifrost has been in development since late 2008. Recently, development has moved forward quite a bit, and I figured it was time to jot down what's cooking and what the plan is for the next six months – or perhaps longer.

First of all, it's hard to commit to any real dates – so the roadmap is more a “this is the order in which we're going to develop”, rather than a budget of time.

We’ve also set up a community standup that we do every so often – not on a fixed schedule, but rather when we feel we have something new to talk about. You can find it here.

1.1.3

One of the things we never really had the need for was to scale Bifrost out. This release focuses on bringing that back. At the beginning of the project we had a naïve way of scaling out – basically supporting a 2-node scale-out, with no consideration for partitioning or actually checking whether events had been processed or not. With this release we're revisiting this whole thing and at the same time setting up for success moving forward.

One of the legacies we've been dragging behind us is that all events were identified by their CLR types, and maintaining the different event processors was linked to this – making it fragile if one were to move things around. This is being fixed by identifying application structure rather than the CLR structure in which the event exists. This will become convention-based and configurable. With this we will enable RabbitMQ as the first supported scale-out mechanism. The first implementation will not include all the partitioning, but will enable us to move forward and get that in place quite easily. It will also set up for a more successful way of storing events in an event store. All of this is in the middle of development right now.

In addition, there are minor details related to the build pipeline and automating everything. It's a sound investment getting all versioning and build details automated. This is also related to the automatic building and deployment of documentation, which is crucial for the future of the project. We'll also get an Azure Table Storage event store in place for this release, which should be fairly straightforward.

1.1.4

Code quality has been set as the focus for this release – re-enabling things like NDepend and static code analysis.

1.1.5

The theme of this version is to get the Web frontend sorted. Bifrost has a “legacy” ES5 implementation of all its JavaScript. In addition, it is very coupled to Knockout, making it hard to use things like Angular, Aurelia or React. The purpose of this release is to decouple the things that Bifrost brings to the table: proxy generation and frontend helpers such as regions, operations and more. We'll also start the work of modernizing the code to ES2015 and newer by using BabelJS, and move away from Forseti, our proprietary JavaScript test runner, to more commonly used runners.


In-between minor releases

From this point to the next major version, things are a bit fuzzy. In fact, we might prioritize pushing the 2.0.0 version rather than doing anything in between. We've defined versions 1.2.0 and 1.3.0 with issues we want to deal with, but might decide to move these to 2.0.0 instead. The sooner we get to 2.0.0, the better in many ways.


2.0.0

Version 2.0.0 brings, as indicated, breaking changes. First major breaking change: a new name. The project will transition over to being called Dolittle, matching the GitHub organization we already have for it. Besides this, the biggest breaking change is that it will be broken up into a bunch of smaller projects – all separated and decoupled. We will try to version them independently, meaning they will take on a life of their own. Of course, this is a very different strategy than before – so it might not be a good idea and we might need to change it. But for now, that's the idea, and we might keep major releases in sync.

The brand Dolittle is something I've had since 1997, and I own domains such as dolittle.com, dolittle.io and more related to it. These will be activated and become the landing page for the project.

Bifrost; Getting back to it…

It's been a while since I wrote anything about Bifrost. In fact, the last post I did was about me not maintaining it anymore. The thing is, it's been an empty year for me personally since February when I announced it. I didn't realize it until I was at a partner who wanted to dive deep on SOLID, DDD, CQRS, Event Sourcing and more, and we only had a couple of days to prototype something. We talked it over and decided that using Bifrost would get us there quicker… what a relief… I'm so glad we did that. All of a sudden it all became very clear to me: I need to continue the work – it's just too much fun. I had a hunch, but didn't see it all that clearly. A few months back I started pulling things from Bifrost into a new project called Cratis, making it more focused – never really thinking it would go back into Bifrost.

So, what am I doing about it? Well, first of all, I took down the post announcing the stop in maintenance. It didn't make sense to have it there after coming to the realization that I need to push on. The second thing I did – in order to get back into the mood and understand Bifrost again (even though I wrote most of it) – was to start writing the proper documentation it deserves. This now sits here. The next thing that will happen is that development will be picked up again.

From the top of my head, this is what needs to be done:

  1. Add support for running on Azure in a distributed manner – with a working sample
  2. Clean up. Remove platforms not being used.
  3. Simplify code. Make it more focused.
  4. Modernise it. Make it run on .NET Core
  5. Rewrite JavaScript to be ES2015+
  6. Break it apart into many small GitHub projects that can be maintained individually

In between, there might be some features that sneak in. But the majority of new development will have to happen after these things are done.

Alongside with it all; more documentation, more samples, more videos – just simply more. 🙂

Really looking forward to getting back into this and seeing what 2017 has in store for Bifrost work.

The Code Lab

I've been wanting to try out a new format, or at least a new format for me: a live, interactive webcast. I got inspired when I saw some ex-colleagues of mine start something called “Kodepanelet” and figured I had to do something similar. The goal is to have a one-hour show on a regular basis where anyone can chime in on social media and ask technical questions, and I'll try to be as agile as possible in answering – or preferably showing. Some things I can answer live; other things might need preparation for a later show, or a follow-up in a blog post or similar. This part is still a bit fuzzy. To be honest, since this is all new and I'm trying something out, the format will more than likely change over time.

The concept is called “The Code Lab”:

TheCodeLab.png

I will be using the Facebook live video streaming system for this and you’ll find the Facebook page for the concept here.

Details

First airing will be on Friday the 4th of November @ 13:00 CET (1PM), here.


What can you expect?

First of all, this is supposed to be for software developers, and I can't do more than what I know or can easily acquire knowledge on. My background stretches from games development – including graphics and low-level programming – to line-of-business applications and architecting highly available, durable and scalable systems. I also have a knack for process and how to build teams. I consider myself pretty open-minded, and as a result I've always been all over the place when it comes to platforms. These days I find that I am probably most experienced in macOS and Windows, but an apprentice in Linux – trying to gain more experience there. I try to be a polyglot in languages, but have most experience in C/C++, C# and JavaScript. I thrive in the backend as well as the frontend, love my patterns and eat my SOLIDs for breakfast. I focus a lot of my time on the cloud, and specifically Azure. Containers, microservices and in general decoupled software are things I am really passionate about.

How can you help?

Content is always king – I'm hoping people find this interesting and want to jump in with good questions. To get the topics right, there is a survey on Facebook.

Health warning

If you’re out to troll me, I’ll try to ignore you. 🙂

Renault Zoe: The Electric Car Lie

On the 1st of January 2015 I started working in Oslo, Norway – about 124 kilometres from where I live: Sandefjord.

I started off taking the bus. I did that for a couple of months and found it sub-optimal, trying to work on the bus on the way in and back, seeing that it was constantly crammed going back. I switched to the train to see if that would be better. Sure, the comfort was way better – but it had the downside of me having to take a bus to get to the train. It ended up being an end-to-end travel time of about 3 hours each way, compared to 2.5 hours with the bus (including walking to the bus and from the bus to the office), which was still a lot. We only had one car at the time and my wife needed it during the day.

In with the electric

We have a pretty well-incentivized deal in Norway for electric cars. There is no sales tax on them, making them a better deal than their carbon-fuel-based counterparts. In addition, we have other good deals, with free parking on public parking spaces and no toll on toll roads. I therefore started thinking about buying an electric car. I did the math and figured a car loan would in fact be cheaper than taking the bus or the train – by a whole 1000 Norwegian kroner (about 130 USD or 110 euros). And best of all, I could save a lot of time. Another benefit is that electrics can drive in the bus lane – meaning you don't have to get stuck in traffic. There are some local changes to this, but it's a question of timing – when you drive – and it's still largely in place. With all this upside, what could possibly go wrong?

Having absolutely no experience with electric cars, but having owned a bunch of regular gas- and diesel-based cars over the years, I jumped into it with the wrong goggles on. I didn't look around too much, as most cars really didn't have the range I needed, except for Tesla – but even in Norway those are very expensive. I know people argue they aren't, with all the benefits – but a car at half the price is still half the price, with the benefits on top. So I set out to find something non-Tesla with enough range to get me one way. I was counting on being able to charge while at work – giving me a full 8-hour charge. Seeing that the infrastructure was in place in Oslo, I was willing to take this gamble.

After reading a bit about a few cars, I found the Renault Zoe the most interesting from a range perspective. Unlike the competitors, it had gotten reviews claiming the range to be accurate to what was promised: 240 kilometers on one charge. I even stumbled upon this article, claiming it had almost unnoticeable drops in performance at -25 degrees Celsius. Everything was pointing towards it. They even had a deal where you could get it with a home super charger included in the price.

I had my test drive of the car on the 28th of April 2015. I found it a nice drive, but found it weird that I had to stop for a 45-minute charge halfway to Oslo – the battery had dropped to 20%. I charged, drove it in, and parked it for charging for the day. I called the dealer and explained my situation, and the sales guy told me to drive it in ECO mode, which would prevent the car from going faster than 96 km/h. This is where I go: OK, and then WTF. But willing to do what he said on the way back, I managed to make it – but the display said there were only 15 kilometers left on the battery. Sure, it was a bit confused, since it’s averaging – me driving non-ECO in and ECO back messed up the statistics, kind of. At that point I knew little about the car and was willing to accept that it might be displaying wrong information. I did point it out, and the sales guy was surprised.

I was skeptical, to say the least. But I had the car for a couple of days of test driving, this time only driving in ECO mode, and proved a couple of times that it would make it all the way in. So I decided to buy one.

Got the car…

13th of May 2015; super excited. I picked up the car. The installer of the super charger thingy had come the day before – all set. Since I was going to charge in different locations, I ordered the car with a transformer that could step up from regular 240 volts to the 400 volts the car needed, and also convert whatever else it needed to convert. This was not ready from the manufacturer, and I was promised it in August. That was perfectly fine for me, as I could charge on the way home if I couldn’t find a compatible charger in Oslo, and still save time compared to the bus and train options. The compatibility question is about the voltage, but also the fact that the car needs 3-phase charging and a Type 2 plug.

A week later

I don’t travel the distance to the office every day – some weeks 2 days, others 3, and occasionally just 1 day a week. So I didn’t get into trouble straight away. On the 19th of May I drove the car in, found a public charging pole, and plugged the car in. Shortly after, the display said “Battery charging impossible”.

A pretty uninformative message to get, and it didn’t help to drive to a charger I knew worked – same message. Anyway, I ended up having the car and me picked up by a dealer in Oslo. They plugged the car into their charger and the message went away. Apparently this happens from time to time; the car then needs to reset its software (turn off the car and wait for a minute or so).

First death

Yes, you read that right: first… The car died on the 29th of May, not a full two weeks after I got it. They had it for almost a full month, claiming to have swapped out the engine and had visitors from France and what not. That may very well be true, but a full month. Really. For a brand new product. Impressive. They hadn’t really found the problem, just swapped out everything. I used the car until I went on holiday on the 11th of June.

Second death

Yup, there was a second one. I got back from holiday and went on a work-related trip to the US. I drove the car to the airport, which is farther away than Oslo – so I had to charge on the way to get there. I parked it at the airport and got back from the trip on the 3rd of August. I drove the car to the nearest super charger. It didn’t work for some reason. I drove to a second one and got the famous message. I drove to a third – still the same message. I tried “resetting” the car. I googled for help – nothing. I had enough charge to drive to the Oslo dealer, so I did. No magic could bring the charging unit back to life.

This time it took 5 weeks in the garage. But now they had found the reason for the car dying: some capacitor internally not withstanding the type of power we have in Norway, or so I’m told.

Now what

At this point I was beginning to get really desperate and annoyed. While waiting for the car during its second death, I contacted the dealer on the 17th of August to hear about the transformer that was supposed to show up during August. I figured I had to have some dialogue with them and try to improve my own mood. The message back: it will be delayed another 2-3 months.

A year has passed…

… and then some, since I bought the car. It has not died on me since its second death, but it does occasionally say “Battery charging impossible”. And it’s just completely random.

I still have not gotten my transformer. But I have a massive brick with me that does the job – almost.

It weighs in at 27 kilograms, so not exactly a lightweight.

And it only supports 10-ampere charging, which made it take some 16 hours to charge one winter day when I had just a few percent of charge left in the battery.
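For the curious, that 16-hour figure roughly checks out on the back of an envelope. A minimal sketch, assuming a 230 V single-phase outlet, a roughly 22 kWh usable battery, and about 60% charger efficiency at low current (all three numbers are my assumptions, not measured values – the Zoe’s onboard charger is reputed to be lossy when fed only 10 A):

```python
# Back-of-the-envelope charging-time estimate.
# All inputs are assumptions, not measurements from the car.
voltage = 230.0      # volts, typical Norwegian household outlet
current = 10.0       # amps, the limit of the portable "brick"
efficiency = 0.60    # assumed charger efficiency at low current
battery_kwh = 22.0   # assumed usable capacity, charging from near empty

# Effective power actually reaching the battery, in kilowatts.
power_kw = voltage * current * efficiency / 1000.0

# Hours needed to refill a near-empty battery at that power.
hours = battery_kwh / power_kw
print(f"~{hours:.0f} hours for a near-empty battery")
```

With those assumptions the effective power is about 1.4 kW, which lands right around the 16 hours I experienced; at full efficiency the same outlet would still need roughly 10 hours.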

The range lie

Let’s bring the story back to the title. OK – I’ve been messed with quite a bit by Renault and the local car dealer, BilService here in Sandefjord. I’ve been served lies upon lies related to the performance of the car, the range, delivery dates and all. But the biggest of them all is related to the range, and that is actually what got me started on this. I got an email yesterday with an advertisement for the Zoe from the local car dealer, again promoting the awesome 240 kilometers of range – this time with a * next to the 240 kilometers. I hoped that meant they were going to be more honest, but no. They actually pointed out that it was 249 kilometers according to the NEDC (New European Driving Cycle). The ad is in Norwegian, but it shouldn’t be hard to get the gist of it.

In my experience the range is more in the line of 150 kilometers on a sunny day with high temperatures and next to no wind resistance, and on a winter’s day with sub-zero (Celsius) temperatures it’s more like 110-130 kilometers, depending on humidity and all. Add on top that I only get the summer range if I drive in ECO mode at a maximum of 96 km/h, and for the winter range I have to stay at a maximum of 90 km/h – there is something really fishy with how these cars are, at the very least, being marketed. When I buy a car, any car, I don’t expect to have to drive it in some magical way to get from A to B. I also don’t expect it to have imaginary performance numbers along the lines of “if you drove the car off a cliff and fell for 240 kilometers, you would make the range”. If I were ever to make that range, I think I would have to drive the car at 50 km/h. I think this is a big fat lie. Renault has yet to come with any excuse to me as a consumer of their products.

The best part, I think… I asked the dealer about the transformer, again, almost 3 weeks ago – with a more annoyed tone this time. The answer I got was that he could see I was on the list for the following week. Now, 2 weeks later, I’m just noting it down on the scorecard: another lie.

Fantastic.

And it’s a shame, because the car is a nice drive. But if I could turn it back in, I would. In fact, I was in dialogue with the authorities, the Norwegian Consumer Advice – they basically said it was hard to do anything about the situation, based on the fact that the manufacturer has responded and done “… what they can …” to help me out. I’m now stuck with a product I kind of hate. And the awesome part: the battery quality will deteriorate – lithium-based batteries have a tendency to drop to 70% capacity after 1000 charging cycles. So that is brilliant, when I was hoping to spend close to one full charging cycle per day of travel rather than 2. All my math was based on a lie, and I’m the one having to pay for it. Thank you Renault. Guess I won’t be your customer for long – nor the local dealer’s, BilService. It’s the worst consumer experience I’ve ever had – I hope you don’t run into this.
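That cycle-count detail matters more than it looks. A quick sketch, taking the commonly cited figure of roughly 70% capacity remaining after 1000 full charging cycles as an assumption (it is folklore, not a manufacturer spec):

```python
# Rough battery-lifetime estimate under the assumed rule of thumb
# that ~1000 full charging cycles leave ~70% of the capacity.
cycles_to_70_percent = 1000

for cycles_per_day in (1, 2):
    # Years of daily driving before the battery hits ~70% capacity.
    years = cycles_to_70_percent / (cycles_per_day * 365)
    print(f"{cycles_per_day} cycle(s)/day -> ~{years:.1f} years to 70% capacity")
```

At one full cycle per day that is under 3 years; if the real-world range forces two cycles per day, it is closer to 16 months – which is exactly why the gap between advertised and actual range changes the whole calculation.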


Edit: I don’t do these kinds of posts – and trust me, I would’ve preferred not to this time either. But I don’t really know any other way. I’m frustrated: I’ve got a product that is not delivering on its promise, a manufacturer that is not in dialogue mode at all, a car dealer who is just trying to push the problem away, and authorities basically saying I’m stuck. I know this is not the ideal way to deal with these things. I could “lawyer up”, but that can easily become expensive. So I don’t know.