1. Design
  2. Implementation
  3. Integration

When I started my career as a professional developer, I did not understand why open source was needed. I did not understand side projects, for that matter. After all, why give valuable work away for free? Over the years, having worked on open source projects, and also thanks to working with Apex.AI, ROS 2 and Autoware.Auto, I have come to some understanding of open source.

Engineers love to create. People want recognition and gratitude.

When you combine these factors, you get a path to open source. If I am building something to satisfy my creative needs, why not let everyone else evaluate my work and find practical benefit and value in it? After all, I am not doing it for the money.

As for side projects, I understood their charm only after I began to develop professionally and to approach the various aspects of my work more carefully. To create a reliable product that people will pay for, you often have to artificially constrain your work processes. Design analysis. Code review. Coding standards, style guides, test coverage metrics, and so on and so forth. Do not get me wrong - these are all good things that are most likely necessary for developing a quality product. It is just that sometimes a developer wants a little freedom: to create what they want, how they want, and when they want. No meetings, reviews, or business cases.

So how do you combine these aspects when it comes to developing safe or high-quality algorithms? A significant part of the open-source world's charm is freedom, and the practices that help produce more reliable code limit that freedom.

The answer I found is to follow an open and consistent discipline, and to make liberal use of the many great tools that come out of the open-source world.

Planning for open source projects

To solve such problems, engineers need a certain set of skills: focus, good problem-solving ability, the ability to break problems down, and the solid fundamentals needed to acquire all of the above.

This particular set of skills can lead us engineers to some overload.

For all the technical abilities engineers have, they are ultimately limited in their capacity. I would argue that most developers are unable to keep an entire project in mind while writing individual lines of code. Moreover, I would say that most developers cannot program while keeping the wider project in mind, all without losing sight of overall business goals.

This is where the black magic of project management comes into play.

Although we developers may have a somewhat contentious relationship with people managers, technical managers, or project managers, it must be recognized that all of these people do important work. The best representatives of the managerial profession make sure that developers do not lose sight of important tasks, and that annoying irritants do not keep us from throwing every tool we have at our problems.

And although we understand the importance of managers, people with such skills usually do not take part in a typical open-source project, whose purpose is to have fun.

So what do we do then?

Well, we developers can get our hands a little dirty and spend some time planning ahead.

I will not go into those discussions here, since in the previous post I covered the design and planning stages of development in detail. The bottom line is that, given a design and an architecture, both of which as a rule consist of many components that form dependencies, in your own project you shape the design yourself and assemble its components separately.

Going back to project planning, I like to start with the components that have the fewest dependencies (think about it for a minute!) and then continue by adding implementation stubs where necessary to keep up the pace of development. With this order of work, you can usually create many tickets (with dependencies between them that correspond to your architectural dependencies, if your task tracker has such functionality). These tickets may contain some general notes that are useful to keep in mind before you dive into any task in more detail. Tickets should be as small and specific as possible. Let's face it: our attention and ability to hold context are limited. The more granular the development tasks, the easier they are - so why not try to simplify difficult tasks as much as possible?

As your project develops, your job will be to take tickets in the order of their priority and solve the tasks assigned to them.

Obviously, this is a significantly simplified version of project management. Real project management has many other aspects, such as resources, scheduling, competing business cases, and so on. Project management in open source can be simpler and freer, although cases of full-fledged project management probably exist in the open-source world too.

Open Source

Having collected a stack of tickets, formed a work plan, and understood all the details, we can proceed to development.

However, many of the freedoms present in the wild west of open-source development have no place in the development of safe code. You can avoid many pitfalls by using open-source tools and some discipline (and by having friends).

I am a big supporter of discipline as a means of improving the quality of work (after all, discipline ranks 6th in my StrengthsFinder assessment). With enough discipline to use open-source tools, listen to others, act on results, and stick to workflows, we can overcome many of the shortcomings that creep into cowboy approaches in the open-source world.

I’ll briefly say that the use of the following tools and practices (which, with some caveats, can be easily applied in any project) helps to improve the quality of the code:
  1. Tests (or, even better, test-driven development)
  2. Static Analysis
  3. Continuous Integration (CI/CD)
  4. Code Review

I will also list a number of principles that I adhere to when actually writing code:

  1. DRY
  2. Full use of the language and libraries
  3. The code should be readable and dazzlingly obvious.

I will try to tie this text to the actual implementation of the NDT localization algorithm, which was completed in 10 merge requests by my good colleague Yunus. He is too busy with his day-to-day work, so I can pin a few imaginary medals on myself by writing about his work.

In addition, to illustrate some processes and practices, I will use the example of developing an open-source algorithm for the MPC controller. It was developed in a slightly looser (cowboy) style over some 30-odd merge requests, not counting the additional edits and improvements made after the main work ended.


Let's talk about testing.

I have had a long and complicated (by my standards) relationship with testing. When I got my first developer job and took on my first project, I absolutely did not believe that my code worked, and so I was the first on the team to start writing at least somewhat meaningful unit tests. And I was absolutely right that my code did not work.

Since then, my turbulent relationship with testing has gone through many twists and turns worthy of a late-night movie. Sometimes I loved it. Sometimes I hated all of it. I wrote too many tests: too much copy-paste, too many redundant tests. Then testing became routine, just another part of development. First I would write the code, then I would write tests for it; that was the order of things.

Now I have a normal relationship with testing. This is an integral part of my workflow, no matter what application I work on.

What changed things for me was test-driven development, which I started using in my mpc project.

I talked briefly about test-driven development in the previous post, but I will describe the process once more:

  1. Develop a specification (use cases, requirements, etc.).
  2. Implement the API/architecture.
  3. Write tests based on the API and design specifications; they must fail.
  4. Implement the logic; the tests must pass.

There is some iteration in this process (tests fail on stubs, the implementation fails the tests, the API can turn out to be inconvenient, etc.), but overall I think it can be extremely useful.

I have said a lot about the need for planning before implementation, and test-driven development gives you exactly that opportunity. That covers the first point. Next, you think about the architecture and the API and map them onto the use cases. This gives you a great opportunity to get close to the code while still thinking about the problem in general terms. That covers the second point.

Next, we move on to writing tests. There are several reasons for writing tests before implementation, and I think they are all important:

  1. Tests should be written as first-class objects, not as an afterthought.
  2. Tests should be written with only the specification in mind, without detailed knowledge of the implementation - that way the tests verify the implementation.
  3. Tests let you play with your API and see how convenient it is to use.
  4. You can make sure your tests are correct by checking that they fail when your code does not do what it should.

In general, I think the benefits of test-driven development are enormous, and once again I strongly recommend that everyone at least give it a try.

Back to Autoware.Auto. Although Yunus did not follow test-driven development, he wrote tests for each merge request during the development of NDT. At the same time, the volume of test code equaled (and sometimes exceeded) the volume of implementation code, which is good. For comparison, SQLite, which is probably the benchmark for testing (not only by open-source standards), has 662 times more test code than implementation code. At Autoware.Auto we are not quite at that stage yet, but if you look at the history of NDT-related merge requests, you can see that the volume of test code slowly crept upward until coverage reached 90% (although it has since fallen due to other changes and external code).

And that's cool.

Similarly, my mpc project has tests for everything, including the tests themselves. Moreover, I always carefully write regression tests to make sure that a fixed bug does not reappear.

Well done, me.

Static Analysis

Many concepts are curious in that their definitions can be stretched quite far. For example, testing goes far beyond hand-written functional tests. In fact, checking style compliance or searching for errors can also be considered a form of testing (strictly speaking, it is inspection, but if you stretch the definition, it can be called testing).

Such "tests" are somewhat painful and time-consuming to work with. In the end, checking, checking for the use of tabs/spaces when aligning? No thanks.

But one of the most enjoyable and valuable things about programming is the ability to automate painful and time-consuming processes, and a machine can get to the result faster and more accurately than any person. What if we could do the same with bugs and with problematic or error-prone constructs in code?

Well, we can - using static analysis tools.

I wrote about static analysis at length in the previous blog post, so I will not go into its advantages and the tools you can use.

At Autoware.Auto, we use a version of the ament_lint suite from our close friends at ROS 2. These tools bring us many benefits, but perhaps the most important is the autoformatting of our code, which eliminates disputes about style: impartial tools tell us what is right and what is not. If you are interested, I will note that clang-format is stricter than uncrustify.

In the mpc project, I went a little further. There, I used the Clang compiler's -Weverything flag on top of all the warnings from clang-tidy and the Clang static analyzer. Surprisingly, for commercial development it turned out to be necessary to disable several options (due to redundant warnings and conceptual disagreements). When interacting with external code, many checks had to be disabled because they produced unnecessary noise.
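To make this concrete, enabling such checks might look like the following commands. The warning flags and tools are real Clang options; the file name and the particular suppressions are only illustrative:

```shell
# -Weverything enables every Clang warning; a few noisy or conceptually
# disputed ones are then opted back out with -Wno-... flags.
clang++ -std=c++14 -Weverything -Wno-c++98-compat -Wno-padded -c mpc_controller.cpp

# clang-tidy checks come on top of the compiler warnings.
clang-tidy mpc_controller.cpp -- -std=c++14

# scan-build runs the Clang static analyzer over an ordinary build.
scan-build make
```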

In the end, I found that heavy use of static analysis does not greatly interfere with normal development (when writing new code, and once you pass a certain point on the learning curve).

It is difficult to quantify the value of static analysis, especially if you use it from the very beginning, because it is hard to guess whether a given error would have existed before static analysis was introduced.

However, I believe that warnings and static analysis are one of those things where, even when used correctly, you can never be sure they did anything at all. In other words, you cannot be sure of a static analyzer's value while it is turned on, but, damn it, you will immediately notice its absence.


Continuous Integration

No matter how much I love thorough testing and static/dynamic code analysis, all tests and checks are worthless if you do not run them. CI can solve this problem with minimal overhead.

I think everyone agrees that a CI/CD infrastructure is an essential part of modern development, along with the use of a version control system and the existence of development standards (at least style guides). However, the value of a good CI/CD pipeline lies in its operations being reproducible.

At a minimum, the CI/CD pipeline must build the code and run the tests before the code is merged into your repository. After all, no one wants to be the guy (or girl, or person) who broke the build or some test and has to fix everything quickly and shamefully. CI (and thus your dear DevOps engineers) protects you from this shame.

But CI can do much more for you.

With a robust CI pipeline, you can test any number of combinations of operating systems, compilers, and architectures (within certain limits, given combinatorial testing). You can also perform builds, run tests, and carry out other operations that would be too resource-intensive or cumbersome for a developer to do manually. There is only so much you can do by hand.

Returning to the initial statement, having a CI/CD pipeline (which we use in Autoware.Auto) in your open-source project will help rein in unmanageable development. Code cannot get into the project if it does not build or does not pass the tests. If you maintain a strict testing discipline, you can then always be sure that the code works.

At Autoware.Auto, CI:

  1. Builds the code
  2. Runs tests (style checks, linters, functional tests)
  3. Measures test coverage
  4. Verifies that the code is documented

In turn, my hastily assembled CI in the mpc project:

  1. Builds the code
  2. Runs scan-build (Clang's static analyzer)
  3. Runs tests (but does not stop CI on failed tests)
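As a sketch, a minimal pipeline of this kind could be described in a hypothetical GitLab CI config like the one below. The stage and job names and the commands are illustrative assumptions, not the actual Autoware.Auto or mpc setup (colcon is the standard ROS 2 build tool):

```yaml
stages: [build, test, analyze]

build:
  stage: build
  script:
    - colcon build            # build the code

test:
  stage: test
  script:
    - colcon test             # style checks, linters, functional tests
    - colcon test-result      # fail the pipeline if any test failed

coverage:
  stage: analyze
  script:
    - ./tools/coverage.sh     # hypothetical script measuring test coverage
```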

A CI pipeline assembled by an experienced DevOps engineer (such as our JP Samper or Hao Stump!) is capable of much more. So cherish your DevOps engineers; they make our lives as developers much easier.

Code Review

Tests, analyzers, and CI are great. You can run the tests, analyze everything, and use CI to make sure those checks actually run - so that is enough, right?
Unfortunately not.

I will say it again: all the tests in the world are worthless if they are bad tests. So how do you make sure your tests are good?

Sadly, I have no magic answer. In fact, I fall back on an old engineering practice: review. Specifically, code review.

It is generally believed that two heads are better than one. I would argue that this notion is supported not only by the literature but also by theory.

Ensemble methods in machine learning illustrate this theory. Using an ensemble of methods is considered a quick and easy way to improve the performance of statistical models (the well-known boosting technique, for example). Similarly, from a purely statistical point of view, variance is lower (under certain assumptions) the more samples you have. In other words, you are more likely to get closer to the truth if you bring in more colleagues.

You can try this out on a live example with a team-building exercise. A less fun version might involve guessing random statistics individually and then as a group.

Putting theory and team building aside, code review is an important and powerful tool. Not surprisingly, a code review is an integral part of any professional development process, and is even recommended by ISO 26262.

All this said, there is always the danger of too many cooks spoiling the broth. Moreover, code review can sometimes cause certain difficulties.

However, I think code reviews can be enjoyable and painless if both the reviewer and the author remember the following:

  1. You are not your code.
  2. You are talking to another person.
  3. Be polite.
  4. Everyone is working toward the same goal; code review is not a competition (although sometimes in programming that happens).

Many people smarter and kinder than me have written about how to conduct code reviews properly, and I suggest you take a look at their work. The last thing I can say is: do code reviews if you want your code to be more reliable.


I have talked in detail about the processes and tools you can use to create a development environment: the checks, and the tools that run those checks and make sure the code is good enough.

Next, I would like to move on to a short discussion of programming skill and share some thoughts on the processes and intentions behind writing individual lines of code.

There are a couple of concepts that have helped me greatly improve my code. One of them is the ability to keep intent, semantics, and readability in mind, which I will talk about later. Another is an understanding of OOP and the principle of separation of concerns. The last important idea is DRY (Don't Repeat Yourself).

DRY

DRY is something we are taught in school and, as with many other things, we put the idea away on a far shelf and attach no special significance to it outside of exams (at least I did). But, as with many other things from school, we do not learn anything for nothing. It is actually good practice.

Simply put, if you find yourself frequently copying and pasting code, or writing very similar code over and over, that is a very good sign that the repeated code should become a function or part of some abstraction.

But DRY goes beyond checking whether some code should be extracted into a function. The concept can also serve as the basis for architectural decisions.

Although this approach overlaps with some architectural concepts (such as composition, coupling, and separation of concerns), an example of applying DRY to architecture can be seen in my mpc project. While developing the mpc controller, I noticed that I would have to duplicate some code if I ever wrote another controller: boilerplate for state tracking, publishers, subscriptions, transforms, and the like. In other words, it looked like a task separate from the mpc controller itself.

That was a good sign that I should extract the common constructs and functionality into a separate class. The payoff was twofold: the mpc controller is 100% focused on mpc-related code, and the module associated with it is just a configuration template. In other words, thanks to the architectural abstractions, I will not have to rewrite everything when working on another controller.

The world consists of shades of gray, so you should approach such design decisions with caution and proper thought. Otherwise, you can go too far and start creating abstractions where they are not needed. However, if the developer stays mindful of the concepts these abstractions model, DRY is a powerful tool for shaping architectural decisions. In my opinion, DRY is the core concept for keeping your code clean and dense.

After all, one of the key strengths of code is its ability to perform repetitive tasks, so why not offload the repetition onto well-designed functions and classes?

Full use of the language and library

DRY, in my opinion, is such an important and all-encompassing concept that this item is actually just a continuation of the discussion about DRY.

If your language supports something, then you should generally use the built-in implementation, unless you have a very good reason not to. And C++ has a great many things built in.

Programming is a skill, and there is a big difference between levels of mastery. I have only caught a glimpse of how high the mountain of this skill rises, and in general I believe that the people who implement the standard implement common patterns better than I do.

A similar argument can be made for library functionality (though perhaps not as categorically). Someone else has already done the same thing, and probably done it well, so there is no reason to reinvent the wheel.

However, like much else here, this point is a recommendation, not a strict and urgent rule. Although you should not reinvent the wheel, and although standard implementations are usually very good, there is no sense in trying to squeeze a square peg into a round hole. Think for yourself.

Readable Code

The last concept that helped me improve my programming skill is this: programming is not so much writing code as it is communication. If not communication with other developers, then communication with your future self. Of course, you need to think about memory, mathematics, and big-O complexity, but once you have dealt with those, you need to start thinking about intent, semantics, and clarity.

There is a very famous and widely recommended book on this subject, "Clean Code", so there is not much I can add on the topic. Here is some general guidance I refer to when writing and reviewing code:

  1. Try to write classes that are clear and focused:
    • Minimize coupling
    • Maximize cohesion
    • Shrink public interfaces down to clearly defined and understandable concepts ( Example )

      • Do your interfaces and parameters carry the appropriate const and noexcept qualifiers? Are they references or pointers? ( Example , Example )
  2. Be mindful of object state

  3. Functions should do only one thing

    • If you use "and" in the documentation for a function, that is a sign the function can be split.
    • Functions should be concise and readable ( Example , Example ).
    • Avoid too many function parameters (usually a sign of a missing abstraction) ( Example ).
  4. Are you repeating yourself? ( Example )

    • Maybe the repetition should be extracted into a function ( Example ).
    • Or perhaps there are concepts inside a giant class that are worth extracting into smaller classes ( Example , Example ).

Another great resource that addresses issues of this kind is the ISO C++ Core Guidelines .

I will repeat once more that none of these principles is revolutionary, new, or unique, but if writing them down is valuable (or someone reading this says "aha"), then I did not waste the bits and bandwidth spent on this post.

Looking back

These were some of the tools, principles, and processes we used when developing and implementing the NDT localization algorithm, as well as when working on the MPC controller. A lot of work was done, it was fun, and recounting it is not that interesting.

Overall, we made good use of the tools and practices I have talked about, but we were not perfect.

For example, when working on NDT we did not follow test-driven development idioms (although everything is thoroughly tested!). In turn, in my work on MPC I did follow test-driven development, but that project did not benefit from the more powerful CI built into Autoware.Auto. Moreover, the MPC project was not public, and therefore did not receive the benefits of code review.

Both projects could have benefited from more static analysis, more detailed testing, and more feedback. However, these are projects created by a single person, so I think the testing done and the feedback received are sufficient. As for static analysis, its better and more advanced forms generally fall within the sphere of product development and away from the open-source community (although interesting ideas may appear on the horizon).

As for developing two algorithms at the same time, I have nothing special to say: we worked to the best of our abilities, adhering to the principles I set out above.

I think we did an excellent job of decomposing large tasks into smaller pieces (although the merge requests in the NDT work could have been smaller) and of testing rigorously. I think the preliminary results speak for themselves.


After implementation comes the time for integration: connecting to a larger system with its complex components, a system that will take your input data, digest it, and produce the results of your algorithm. Integration is perhaps the most difficult part of developing an algorithm, because you must keep the overall system design in view while also fixing low-level problems. Any error in any of your many lines of code can prevent your algorithm from being integrated.

I will talk about this in the third and last post of this series.

As a preview, I will say that during development not a single big error was found while using and integrating the mpc controller. True, there were some problems with writing scripts, with the build and the tests, an input check was skipped, and there were QoS settings incompatibilities, but there was nothing terrible in the code.

In fact, it was able to run (QoS incompatibilities and parameter tuning aside) almost out of the box.

The same goes for the NDT algorithm, which ran into a number of minor problems such as covariance instability, a string search error in existing code, and incorrectly aligned maps. Despite this, it too was able to work out of the box.

Not bad for products developed out in the open.
