A lot of embedded software projects are implemented without modern techniques of object orientation. While in PC and server software these are inevitable and state of the art, the embedded software industry is lagging behind. There is no justification for this on systems driven by high-speed ARM cores and hundreds of megabytes of memory.
Take your software to the next level by utilizing modern OO strategies such as:
– Application Wiring
– Onion Architecture
– Code Generation
All of this is taken from real-world embedded projects. Once you adopt these techniques, people will become more productive, mainly by writing and testing code mostly on the development PC and only seldom in the lab or on the target.
The Internet of Things (IoT) is on everybody’s lips these days. I’ve been confronted with IoT architectures for roughly ten years now, and the biggest topics are:
Companies must react very fast. Windows of opportunity are opening and closing faster than ever. Today some IoT standard might be hot, and tomorrow it is superseded by a bigger player.
Devices must support different and overlapping standards.
Testing becomes difficult when countless remote peer variants need to be compatible, and when thousands of simultaneous connections must be tested/simulated.
As a fan of SOLID, DDD and the Onion Architecture, I want to share with you how to overcome these challenges: how to adapt to new standards quickly – in the software itself and also in the test automation – and how to protect your core logic, your crown jewels, from an ever faster changing environment.
When using the Onion Architecture one automatically respects the dependency inversion principle, and with proper application wiring, one stays automagically SOLID. It is a good idea to use DDD within the inner layers, which means (among other things) using object instances that are named after, and reflect, the Ubiquitous Language of the problem domain. The advantage of the Onion Architecture is that the core logic is protected (loose coupling and strong cohesion) and can easily be transferred to other environments. Furthermore, all outer parts become software plugins that are easily exchangeable, e.g. by unit tests.
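As a minimal sketch of this wiring – all names here (the alarm port, the temperature handler) are hypothetical, not from a real project – the core layer owns the abstraction and the outer layer plugs into it:

```cpp
#include <iostream>
#include <string>

// Core layer owns the interface (dependency inversion):
struct IAlarmPort {
    virtual ~IAlarmPort() = default;
    virtual void RaiseAlarm(const std::string& reason) = 0;
};

class CoreLogic {
public:
    explicit CoreLogic(IAlarmPort& port) : port_(port) {}
    void OnTemperature(double celsius) {
        if (celsius > 90.0) port_.RaiseAlarm("overtemperature");
    }
private:
    IAlarmPort& port_;  // the core depends only on the abstraction
};

// Outer layer: a concrete plugin, trivially swappable for a test double.
struct ConsoleAlarm : IAlarmPort {
    std::string last;
    void RaiseAlarm(const std::string& reason) override {
        last = reason;
        std::cout << "ALARM: " << reason << "\n";
    }
};
```

The application wiring (e.g. in main()) instantiates ConsoleAlarm and hands it to CoreLogic; a unit test hands in a recording stub instead, without touching the core.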
For an Internet of Things device, I recommend using an abstract Protocol Translator inside the glue logic. This translator communicates with the outer layers by passing plain data buffers. With the core logic it communicates only by moving object instances with proper DDD Ubiquitous Language names and semantics. This way your core logic will not be polluted by the negligible details of the particular protocols, and it won’t be affected by the turmoil of IoT’s protocol wars. Such a translator can easily be extended with other protocols, or just with dialects between vendors that share the same protocol.
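A sketch of such a translator, with an invented two-byte wire format and invented domain names (ValveCommand is a stand-in for whatever your Ubiquitous Language calls it): bytes in, domain objects out.

```cpp
#include <cstdint>
#include <optional>
#include <vector>

// Hypothetical domain object, named in the Ubiquitous Language:
struct ValveCommand {
    enum class Action { Open, Close } action;
    uint8_t valveId;
};

// Glue-logic translator: plain data buffers on one side,
// domain objects on the other. The frame layout is illustrative:
// byte 0 = action (0 = close, 1 = open), byte 1 = valve id.
class ProtocolTranslator {
public:
    std::optional<ValveCommand> Translate(const std::vector<uint8_t>& frame) {
        if (frame.size() != 2 || frame[0] > 1)
            return std::nullopt;  // malformed frames never reach the core
        return ValveCommand{frame[0] == 1 ? ValveCommand::Action::Open
                                          : ValveCommand::Action::Close,
                            frame[1]};
    }
};
```

A second protocol, or a vendor dialect, becomes just another Translate() implementation producing the same ValveCommand; the core logic never notices the difference.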
This approach is ideal for test automation. For testing the core logic (e.g. under high and concurrent traffic), the Protocol Translator can easily be replaced by a mock simulator. (Because we use the Onion model, the Protocol Translator is a replaceable plugin.) And for testing the Protocol Translator itself, it can easily be surrounded by mock objects. (Again because we use the Onion model, which leads to SOLID, application wiring, replaceable plugins etc.)
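A sketch of the first case – driving the core with a mock simulator instead of the real translator. ConnectionCore and the event names are hypothetical; the point is that thousands of synthetic peers can be simulated without any real protocol or network:

```cpp
#include <cstddef>

// Hypothetical core object; in a real test this is the production class.
class ConnectionCore {
public:
    void OnPeerConnected() { ++active_; }
    void OnPeerDisconnected() { if (active_ > 0) --active_; }
    std::size_t ActiveConnections() const { return active_; }
private:
    std::size_t active_ = 0;
};

// Mock simulator in place of the Protocol Translator: fires domain
// events at the core directly, at any rate and volume we like.
std::size_t SimulateTraffic(ConnectionCore& core, std::size_t peers) {
    for (std::size_t i = 0; i < peers; ++i) core.OnPeerConnected();
    for (std::size_t i = 0; i < peers / 2; ++i) core.OnPeerDisconnected();
    return core.ActiveConnections();
}
```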
My Recommendation: Use a “Protocol Translator” in the middle layer of the Onion Model that speaks data buffers to the outside and DDD object instances to the core logic.
During the last decades a lot of technologies have been identified as harmful. Dijkstra began by identifying the goto statement as harmful in the 60s. Since then several other harmful technologies have been identified. People understood that if-statements should all be replaced by polymorphism (I remember a professor in the late 90s who failed every exam that contained an if-statement). The new-statement is also harmful in real-time systems, and I remember an old boss who told the whole team not to use the new-statement in a 100,000 LOC server application written in C#, which – as is the nature of C# – had new-operations splattered all over the place … challenging … 🙂
But that wasn’t all. Robert C. Martin interestingly mentioned the book ‘Structure and Interpretation of Computer Programs’ in one of his online lectures, and the paradigm of a functional programming language of avoiding assignment statements – which makes perfect sense in that context. So even an assignment can be considered harmful.
What else? Recently plain threads were considered harmful in a keynote by Hartmut Kaiser. Of course, no one likes threads in a context where they should be avoided, not to mention the thread-per-object anti-pattern (which I first read about in one of my favorite books, Utas’ “Robust Communications Software”). Once I saw an application that utilized three threads per instance of connected hardware. The original version was written for 30 remote devices, leading to 90 threads. A few years later the managers decided to use this software for a huge installation of 2000 devices, leading to 6000 threads, with 512 KB of stack space for every thread – close to 3 GB of RAM in total. That didn’t work out very well on a 32-bit processor 😉 OK, so plain threads are evil too.
Up to now several other technologies have been identified as ‘being harmful’: things like recursive make, static linking, csh programming … The identification of harmful technologies even has its own Wikipedia page today: http://en.wikipedia.org/wiki/Considered_harmful
Well, and today is the day I will identify another harmful technology and introduce it to the whole world: calling member functions is considered harmful!
We all know this situation. A perfectly functioning and bug-free piece of code is extended by a call to a member function of another object. It appears that this change introduces a big risk to the robustness of the function. In a significant number of cases, adding a function call will lead to a software failure. This is caused by the unpredictable nature of object-oriented code, which hides functionality in abstractions that are never able to fully reflect all underlying details (otherwise it wouldn’t be an abstraction, right?).
Calling member functions significantly widens – and this way worsens – the fan-in and fan-out of functionality. The amount of overall system state is greatly increased, because not only the local state (reflected by the local variables) is taken into account for the program’s execution at this point; the full state of the called function has to be regarded as well. This tremendously increases complexity, which easily rises beyond what a human brain is able to understand.
This technology becomes absolutely uncontrollable and high-risk when the called member function itself calls another member function. This increases the amount of state information even more, and the more state information is involved, the more software failures will appear, as popular studies easily show. Now imagine that this function also calls more member functions, and these functions call yet other ones … Such practice renders a software system totally unmanageable. Imagine what could happen if a function calls a function that was already part of the call stack before. Or if a function calls itself again, possibly leading to a vicious recursion. Furthermore, the same part of source code can be reached from virtually innumerable other functions, leading to a virtually random sequence of calling orders and a virtually unlimited number of possible different call stacks. This situation is practically impossible to test. Such software is out of control.
It is obvious that the only way to cope with this situation is to prohibit all kinds of member function calls, and – most importantly – today is April 1st, 2015, and in our culture it is tradition to write funny articles on this date. So don’t take this text seriously, and if you discover more technologies to be harmful (e.g. variables, braces, objects, executables, for-loops, method parameters, void …) just let me know 🙂
In high performance code? Or in time-critical device drivers? I mostly saw this kind of code in places that were not time-critical at all. Even on slow processors (like FPGA soft cores, for example) the runtime savings of this code style are negligible in most places of the code. Not negligible are the higher costs of software maintenance, which become obvious when we look at a more readable alternative:
The basic problem here is that more than a few embedded developers – especially those who for years programmed extremely resource-restricted systems – do up-front performance optimization. I found some to be very proud of this behavior; they feel it makes them better programmers.
Up-front performance optimization can also be found in other areas. Some embedded programmers don’t use object orientation because of the performance and memory impact of a v-table. Of course there are systems so resource-restricted that a v-table is not possible. (Object orientation still is, by the way …) But on today’s embedded systems this kind of restriction usually doesn’t exist anymore.
Your recent gigahertz ARM controller will also be brought to its limits in a few years, when the product manager has you add more and more features. It always was this way and it always will be. But then it will not be because of v-tables or bit hacks. Nor would bad programming style have saved you from it.
So, how can you ensure that your developers don’t write useless bit hacks for your brand new gigahertz ARM controller? I would suggest encouraging the development team not to do any up-front performance optimizations in the code at all – at least when it is not 100% clear that a performance optimization is necessary, and especially when the topic is not architecture-related and only the way the code is written is affected, as in the example above.
Well, will everyone stick to this rule? Of course not. You will still be arguing why clean code is sometimes worth more than a piece of code that is hard to read but might execute faster. Let me make a suggestion for this case:
Set up a performance profiler on every single workstation/target that can be started as easily as possible. Set up a wiki page that gives clear information, with screenshots, on how to profile. The cost of learning how to profile and of executing a profiling session must be as low as possible.
Then, when a developer has applied an unnecessary optimization, it can easily be shown whether it is beneficial for the runtime behavior at all. It might happen that the code will not be rewritten even when a developer notices that her/his performance optimization was useless. But the developer will learn from this profiling event, and the probability of the next useless up-front optimization will decrease …
In some circumstances, however – in an interrupt handler, in the nested loops of high performance code, or when the profiler proved that a particular place needs it – you will still have to, and want to, use bit hacks. In that case have a look at this awesome website, which lists lots of examples: Bit Twiddling Hacks
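To give a flavor, here is one classic of that genre (my transcription of the well-known round-up-to-power-of-two trick, with comments on what it does) – legitimate exactly in the profiler-approved hot spots just mentioned:

```cpp
#include <cstdint>

// Round a 32-bit value up to the next power of two, branch-free.
// Returns v itself if v is already a power of two; undefined-ish (0)
// for v == 0, so callers must handle that case.
uint32_t NextPowerOfTwo(uint32_t v) {
    v--;            // so that exact powers of two map onto themselves
    v |= v >> 1;    // smear the highest set bit downwards ...
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;   // ... until all lower bits are set
    return v + 1;   // then carry into the next bit position
}
```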
Have you ever used a state machine generator? Maybe you’ve heard of complicated products like Rational Rose Technical (formerly known as Rational Rose RealTime), where the whole software is turned into active objects passing messages between state machines. There you write code into the model that will be executed when a state transition occurs. Code and model form a unit. Using such a powerful approach is really beneficial.
But often this is overkill for a project. In that case it is not necessary to relinquish the concept of a generated state machine.
Let’s look at an example state machine. Most embedded devices that are safety-critical and have some actuators and operation modes can be modeled this way. The rectangular boxes are states, of course. The lines between the states are state transitions. The text near a state transition is the name of the event that moves the state machine from one state to another.
In a project some years ago, when I was a freelancer, I had good experience using a simple state machine generator without injecting state-transition code. Instead we used the state machine generator as a decoupled helper instance, and we relied on only two methods.
That’s all. We only call IsInState() and SendEvent() on an object instance called stateMachine – nothing more, that’s the whole trick.
In other words, we ask the state machine about its state before doing something that is allowed only in certain states. And furthermore we send events to trigger state transitions, i.e. to change the current state. (I left out the error handling of SendEvent here for better readability.)
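A tiny hand-written stand-in for the generated stateMachine object illustrates the two-method interface (in the project the body below came out of the generator; the states and events are taken loosely from the diagram discussed here, the exact names are my assumption):

```cpp
// Illustrative states and events; the real enum came from the generator.
enum class State { Waiting_For_Command, Moving_Something, Failure };
enum class Event { Start_Movement, Movement_Done, Failure_Detected };

class StateMachine {
public:
    bool IsInState(State s) const { return state_ == s; }

    // Returns false if the event is not allowed in the current state.
    bool SendEvent(Event e) {
        switch (state_) {
        case State::Waiting_For_Command:
            if (e == Event::Start_Movement) {
                state_ = State::Moving_Something;
                return true;
            }
            break;
        case State::Moving_Something:
            if (e == Event::Movement_Done) {
                state_ = State::Waiting_For_Command;
                return true;
            }
            break;
        default:
            break;
        }
        // Failure_Detected is accepted from every non-failed state.
        if (e == Event::Failure_Detected && !IsInState(State::Failure)) {
            state_ = State::Failure;
            return true;
        }
        return false;
    }

private:
    State state_ = State::Waiting_For_Command;
};
```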
Consequently, methods like SomeAction::Execute() are also secured by a state machine sanity check. If the check fails, this is a serious internal error that must be handled.
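A sketch of such a guarded action, with a minimal stub standing in for the generated state machine (names are illustrative, not the project’s originals):

```cpp
// Minimal stub so the guard compiles standalone; in the project this
// was the generated state machine object.
enum class State { Waiting_For_Command, Moving_Something, Failure };

struct StateMachineStub {
    State current = State::Waiting_For_Command;
    bool IsInState(State s) const { return current == s; }
};

struct SomeAction {
    StateMachineStub& stateMachine;

    bool Execute() {
        if (!stateMachine.IsInState(State::Waiting_For_Command)) {
            // this is a serious internal error, handle it ...
            return false;  // e.g. log it and refuse to act
        }
        // ... perform the action, then send the corresponding event ...
        return true;
    }
};
```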
As you see, this is a very straightforward and lean method that gives a tremendous amount of safety and robustness to your system. It can no longer happen that the system executes stuff it isn’t allowed to in the current state, because just typing if (stateMachine.IsInState(...)) is sufficient protection.
The good news is that the code of the stateMachine object offering the SendEvent() and IsInState() methods can easily be generated. We used SinelaboreRT to generate it directly from Enterprise Architect (it also has its own editor if you have no UML tool). It’s inexpensive (only $99 some years ago) and even has a state machine simulator: you can send events to the machine and watch what it does (which actually saved me from one conceptual bug). It also offers sanity checking of your state machine.
The concept of using this kind of decoupled, lean generated state machine code really saved our ass in an application where the user interaction came in over network (Soap) commands. The Soap commands were issued by a Windows PC, and the Windows programmers were unaware of embedded or safety-critical programming style and were continuously firing Soap commands at the wrong times. We just used code like the following to perfectly secure the safety-critical embedded system from the misplaced Soap calls of the Windows UI programmers:
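The original listing is lost, so this is my reconstruction of the pattern, with invented handler and state names: every Soap entry point first asks the state machine whether the command is legal right now, and mistimed commands are rejected safely instead of reaching the actuators.

```cpp
#include <string>

// Stand-in for the generated state machine (illustrative names).
enum class State { Waiting_For_Command, Moving_Something, Failure };

struct StateMachineStub {
    State current = State::Waiting_For_Command;
    bool IsInState(State s) const { return current == s; }
};

// Soap entry point: guard first, act second.
std::string OnSoapStartMovement(StateMachineStub& sm) {
    if (!sm.IsInState(State::Waiting_For_Command))
        return "rejected: command not allowed in current state";
    sm.current = State::Moving_Something;  // via SendEvent() in the real code
    return "ok";
}
```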
Soap commands can also be processed in states other than Waiting_For_Command. For example, a FailureStop command could be allowed in all states below the ‘Running’ state. You might notice that this is a hierarchical state machine – have a look at the state machine diagram to see where the transition named Failure_Detected is located …
(If the machine is already in the Failure state, the command is simply ignored.)
Another example could be a Soap command that is only allowed in the Moving_Something state, e.g. a command to abort the movement.
You get the point, right? This is really straightforward, easy and lightweight, and because the state machine itself is generated from UML you have to write very little code – initially and also whenever the state machine changes (and in an agile project it will change often 😉).
Have fun 😉
Embedded Software Architecture Blog