Some days ago one of the colleagues I admire most for his passion for software craftsmanship – Mr. Thilko Richter – needed to order his thoughts. He did it visually, and the results are so awesome that I want to share them here. Click to enlarge.
50 years ago products could be sold easily because there was little competition, so innovativeness and speed were less important than efficiency, and the return on investment (ROI) was influenced mostly by efficiency.
Today founding a company is easier than ever before, and the worldwide competition demands very high efficiency to maintain competitive pricing. But … only the most attractive value propositions and only the most innovative solutions have a chance to hold their ground against the ever-growing competition. Therefore focusing on efficiency alone leads to failure. A trade-off has to be made between efficiency, innovativeness and the speed of rolling out innovations. So success means that the magic triangle depicted above must be addressed strategically. (I never saw this triangle before; maybe it has been discovered already, maybe I'm the first one – please add a comment below if there's more to be found about this approach anywhere.)
And here’s why old companies fail on innovation and on launching innovations sooner than others: old companies are too efficient! After years and years of optimizing every bit of the company for efficiency, their place in the magic triangle above has become inferior for today’s fast-moving markets. And that’s why startup companies can beat bigger companies, why big companies found spin-offs, etc.
So, how do you solve this? Balance efficiency carefully!
In order to get a better place in the triangle above, you need to sacrifice some of what you achieved over the last decades. And this is difficult, because it can only be achieved by a cultural change. The most important dogma guiding all actions has to be changed, and people will not understand what’s going on and why. Cultural change is one of the hardest and longest-lasting challenges a company might ever face, and chances are high that companies will fail at it, keep their position in the triangle and go under.
So, don’t go under – follow the Boy Scout rule of innovation: reduce one bit of unnecessary efficiency every day.
Careful: efficiency is still very important, and the areas have to be selected very well.
- Plan 90% of the employees’ time, not 110%.
- Allow all people to shape the products, not just a few responsible ones.
- Allow new technology. Some will fail, some will turn out to be less efficient, but in the end you will have more options to move, and you will focus more on customer value than on technology (see the blog post “Why not simply use the latest greatest?”).
- Allow job rotation where useful.
- Let people decide which tasks they love; don’t assign tasks as if they were cattle.
- Encourage disruptive innovation. Some attempts will fail, which is less efficient, but one of them might eventually change the world and be the future foundation of your company.
I have been a professional programmer since 1997, and especially during my 12 years as a freelance software development consultant I saw many, many projects that all had one thing in common: every technical innovation had to be justified. Usually such method innovations are pushed upwards, from the engineers to the management – and almost never the other way round. Each innovation is a more or less intense struggle, a fight for trust and ultimately for budget.
We have all witnessed this many times, right? I often wonder how it will be in the future, when software development is as evolved as … let’s say the craftsmanship of a mason. When it is clear that (and which) unit-test framework with mock generators and which application-wiring framework for onion-architectured SOLID code is used; when the code is automatically generated from behavioral UML charts with lambda-based DDD behavior injection into microservice-based active objects that are 100% auto-tested, immediately deployed to the customers, and continuously refactored at every slightest change in the customers’ needs … and all that without any arguing, because it is so natural and everyone has done it that way for hundreds of years. Maybe by using just one tool that covers everything from the code repo over requirements derivation up to the compiler and the continuous-integration server, with a perfect IDE that types everything for you when you press Alt+Return …
But today, when all new methods are so young – and I suppose a lot of the future’s software methods haven’t even been discovered yet – every slightest advancement needs so much fighting. With every advancement you propose, you put your management’s reputation at risk, because if your proposal is right, the question arises why its predecessor had been chosen in the past. So the first management reaction is: “That proposal is not necessary. The status quo is an ideal environment. The accusation that our environment might not be appropriate is insulting and wrong.”
Well, my experience during my time as a consultant was that software departments using modern technologies are way faster. A project that uses C with printf debugging on the target can be several times slower than a project that uses C++14 with state machine code generation and host-based unit tests. In the old-fashioned setup, a feature might cost days and days that costs only a few hours in a modern environment. If you have ever worked in such different environments, you will never again doubt that better tools and methods pay off very soon. Unfortunately most good managers are promoted very soon, and therefore have actual software development working experience in only a few companies (sometimes even only one) – so they miss this experience. But they decide, because they are (usually) responsible for the budget.
And today I asked myself: why does one have to justify every advancement so much to managers? Why does every advancement, even the ones all competitors are already utilizing, have to be discussed? Why are there the usual amortization calculations, with numbers that are faked but turn out to be more than true in the end? Wouldn’t it make sense if top management simply said: “Folks, just use the latest and greatest, and keep up that level”? A company that brave would have a hell of a lot more productivity and efficiency. And sure as hell, this – as a principle – would amortize. I know that from my experience of changing between companies with different levels. I saw tasks lasting for months and months that I knew would have been finished in weeks at other companies. This definitely would work.
This sounds so simple, almost naive, but I truly believe it will work. My vision is that every developer team can just use the latest and greatest technology without fighting for it. Everyone gets permission to refactor, migrate, and buy licenses just by saying: we need this because it’s the latest and greatest, and our CTO set the strategy ‘folks, use the latest and greatest, because we claim to be the greatest’.
What would be so bad about having the most efficient software development in the world? What would be wrong with being the leader? What is wrong with creating more customer value in less time? At some point one has to be better than the competitors – and embedded software is where the USPs come from today!
Some blog posts ago I philosophized about the definition of software architecture and gave some examples I found on the internet that were truly excellent … but somehow didn’t quite fit for me. I thought about this topic for a while, and now I want to give you my definition. Not for software architecture itself, but for the role of the software architect:
A software architect observes quality attributes, questions functional requirements and acts as an advocate of the longevity of a software product. She achieves this by communicating with a variety of stakeholders, such as project, risk and product managers, and by being a technical leader for the software development team. She is responsible for documenting decisions and the high-level design.
And let me add for the role of the embedded software architect:
An embedded software architect does this in the context of an embedded system, which is a device that is not typically thought of as a computer, yet still has microprocessors inside that run embedded software. In this kind of project, computing resources are typically limited and special real-time or safety requirements must be met. The embedded software architect’s responsibility is to supervise the achievement of these special requirements. It is also typical for embedded software to run on special hardware, like a dedicated PCB (printed circuit board) with specially tailored chips. The embedded software architect is involved in the hardware/software co-design and acts as an interface to the electronics department.
A very comprehensive collection of definitions of the term software architecture can be found at http://www.sei.cmu.edu/architecture/start/glossary/index.cfm (scroll down to ‘software architecture’).
Recently I heard a keynote at the ESE Congress where a famous and experienced speaker claimed that Agile is not appropriate for embedded systems and that CMMI should be preferred instead. I remembered a very big company I once worked for. We successively raised our CMMI level – but nothing, really not one single thing, changed in the way we developers programmed, designed or wrote the specs. CMMI can be just a fake – like everything that’s based on paper.
I’m tired of the discussion of whether Agile should be used or not at all. This is too dogmatic; it is not a yes-or-no topic. For me the key concept of Agile was historically the existence of iterative development and the absence of an a-priori fixed feature set – no big-bang releases and no BDUF (big design up front). Robert C. Martin describes in his famous book ‘Agile Software Development’ why the formerly popular analogy – that creating the source code is comparable to building a bridge and would therefore need BDUF – does not hold. Instead he introduces the model of the compiler being the one that ‘builds the bridge’, which is more appropriate, and I totally agree with him. In order to understand where Agile comes from, one has to understand this change in the point of view.
Based on this model, where building software is different from building a bridge because of the varying effort of the build step, I want to define two opposing poles: at one pole are projects where the process of building is tremendously expensive and not even one additional iteration is possible, which leads to an a-priori fixed feature set. At the other pole, many iterations are possible, and an a-priori feature freeze, which would make the project clumsy and resistant to change, is counterproductive. Embedded projects utilize different components that sit between both poles. PCB, mechanics, software … every discipline has different cost/time/effort for the process of ‘building’, which makes it necessary to adjust the level of agility individually for each discipline.
- Bridge: Not iterative at all, features are fixed from the very beginning.
- Formed parts: A few changes are acceptable, some adjustments can be made from one prototype to another. Not all, but most ‘features’ have to be defined a-priori.
- Rapidly prototyped parts (3-D Printer): Several iterations are possible, features tend to be changeable, only the subset of the features that is related to the interaction with other parts has to be frozen at the project start.
- Software: Very iterative; the compiler can build the software in a few minutes’ time. Only the basic plot has to be known at the project’s beginning. Changing the feature set is easy at every project stage.
Therefore the discussion about using ‘Agile or traditional’ in the whole organization leads in the wrong direction. Instead: be as agile as possible and as traditional as necessary in every particular discipline. This doesn’t make things less complicated at the interfaces between the particular departments/disciplines, of course, and it demands more communication. E.g. some software features will be mandatory because of mandatory hardware features. I suggest that in this case the department that is more agile (usually software) accepts that it has to sacrifice some of its own agility in favor of the less agile department (usually hardware). And vice versa: the hardware people take into account that they have to split the software features they require into mandatory and optional ones.
I wish everyone a happy new year, and I wonder what might be the biggest change in embedded software development next year. Of course Linux will continue its triumphant march on recent ARM-driven systems. It’s really time to move to Linux if you haven’t done so yet. A key moment for me in realizing the impact of embedded Linux was when a Lauterbach senior sales representative looked at me with pity when I told him that we were not using Linux. He assumed that everyone uses Linux (on the ARM-based uC of this project), and for him it was out of the question that something else could be used. He underlined that every other customer in his area is using Linux on that sort of controller.
I also wonder about the trends in offshore software development. In the 90s I saw much development being shifted to Russia. But when Russian developers became about as expensive as European developers this stopped – at least that was my perception. Between 2000 and 2010 I saw much development in India, and for BSP and driver development this seems to be a perfect place.
I, however, would bet on China becoming the next center of offshore software – and also hardware – development. The high economic growth creates an ideal spirit for enthusiastic programmers. The spirit there reminds me a lot of the electrifying times during the dot-com bubble at the end of the 90s – but I don’t fear that this bubble will burst, not at all. And the clever import restrictions of China force companies to have facilities there anyway. Furthermore, Chinese programmers are highly trained, and I appreciate the pragmatic and fast approaches I saw in the projects I heard of there.
Another question I ask myself is whether a backwards movement away from TDD, MDD, SOLID etc. will happen. I don’t see these technologies disappearing. But maybe only the best parts of them will remain (e.g. TDD without test-first, MDD without class-diagram roundtrips – only state machines etc., SOLID without giving every class several interfaces, ‘if’ will no longer have a smell …). For a more efficient usage of SOLID, I hope a mock framework without the need for interfaces will come out (like voodoo-mock, which seems to be a somewhat abandoned tool to me). Dependency injection is a great technique – but injecting everything everywhere just for the unit tests makes the code heavier than necessary. Coupling and complexity are two poles of an optimization problem – decoupling too much is even worse than decoupling too little – but that’s a topic for another blog post …
So long, happy new year, have fun, and consider buying Chinese software company stocks this year 😉
Recently a colleague gave me an interesting definition of Software Architecture he heard at a conference. It was something like:
Software Architecture is the sum of all decisions made for a software product.
Maybe this is true in some way, but I’d prefer a definition that emphasizes the overview aspect of architecture. In the IEEE standard “1471-2000 – IEEE Recommended Practice for Architectural Description for Software-Intensive Systems” a definition can be found that emphasizes this aspect:
The fundamental organization of a system, embodied in its components, their relationship to each other and the environment, and the principles governing its design and evolution.
Sounds good; however, I miss the aspect that a software architecture doesn’t suddenly appear out of nowhere. It is more a process than an entity – a process that never ends: the architecture constantly changes and erodes during a product’s lifetime. It is the outcome of a tremendous amount of communication and is also used in communication with a lot of stakeholders. I found a definition with more regard to this aspect in the “Microsoft Application Architecture Guide, 2nd Edition”:
Software application architecture is the process of defining a structured solution that meets all of the technical and operational requirements, while optimizing common quality attributes such as performance, security, and manageability. It involves a series of decisions based on a wide range of factors, and each of these decisions can have considerable impact on the quality, performance, maintainability, and overall success of the application.
Well, after all I’m still not 100% confident in these definitions. What would be your definition?