Building Async and Cloud Native organizations - Issue #17

Detecting breaking changes in OpenAPI specifications, load testing with Microsoft Azure, Codespaces, and documentation search in GitHub

Welcome to my weekly newsletter! Every week, I bring you the latest news, updates, and resources from the world of coding and architecture. I'm so glad you've decided to join me, and I can't wait to share my insights and expertise with you.

I hope you'll find this newsletter to be a valuable resource, and I welcome your feedback and suggestions. If there's something you'd like to see more of, or if you have any questions or comments, please don't hesitate to contact me.

Thank you for joining me, and happy reading!


As APIs continue to grow in importance, maintaining high-standard API security is a paramount concern. As such, keeping APIs and the data flowing through them safe and only accessible to the intended user is a must. This is not something to wait on — if you haven’t already, you need to get on it now:

APIs are always evolving, which can be a good thing as this allows for new functionality. However, this can involve changes to the API definitions. If those changes are breaking changes, such as the removal of an endpoint or a parameter, or a change to a type, your API is no longer backward compatible.

You will need to either bump the version or make sure the change to the API is non-breaking. But how do you know whether you have introduced a breaking change and are no longer backward compatible?
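To make the idea concrete, here is a minimal sketch of what such a comparison has to do: walk the old specification and flag anything that disappeared in the new one. The function name and the example dictionaries are my own, purely for illustration; real diff tools check far more cases (type changes, required parameters, response schemas):

```python
def find_breaking_changes(old_spec: dict, new_spec: dict) -> list[str]:
    """Naive check for two common breaking changes between OpenAPI specs:
    a removed path, or a removed operation (HTTP method) on a path.
    Ignores subtler breaks such as parameter or schema changes."""
    problems = []
    old_paths = old_spec.get("paths", {})
    new_paths = new_spec.get("paths", {})
    for path, operations in old_paths.items():
        if path not in new_paths:
            problems.append(f"removed path: {path}")
            continue
        for method in operations:
            if method not in new_paths[path]:
                problems.append(f"removed operation: {method.upper()} {path}")
    return problems
```

Run it on two versions of a spec and an empty list means no breaks of these two kinds were found; anything else is a candidate for a version bump.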

You will need some sort of comparison between the old and new specifications of the API. There is an older tool from Microsoft, but unfortunately it no longer supports the latest OpenAPI spec. Some fine folks at Criteo updated the tool and created a new version. Read their story on how they did it:

There are more frameworks; one example is openapi-diff from OpenAPITools. As this is a Java app, I prefer to run the Docker container instead of running it directly.

First, I download the default Petstore OpenAPI definition and place it in a folder. I create two versions: one is the original, and in the other I change an operation.

I now want to run the comparison, so I start the openapi-diff container, specifying the two files. They need to be accessible from inside the Docker container, so a volume mapping is needed (the -v parameter maps the local folder).

docker run --rm -t \
  -v $(pwd):/specs:ro \
  openapitools/openapi-diff:latest /specs/petstore_old.json /specs/petstore_new.json

The outcome is a list of changed and breaking operations. You can output the differences as Markdown, HTML, JSON, etc.

One possible use case is to include this as a step in a pull request build. Compare the OpenAPI specification from the code in the pull request with the version in the main branch and attach the changes as Markdown to the pull request. You can even let the build fail when the changes are incompatible.

Not only can you validate the changes you make to your own API, but what about others? Download the OpenAPI specification from your dependencies and validate it against what you have stored as the known version. Do this in a nightly workflow, and you will be made aware when third parties change their API without telling you.
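A sketch of what the core of that nightly check could look like, assuming you fetch the dependency's specification as JSON and keep a known fingerprint around between runs (the function names are my own, not from any particular tool):

```python
import hashlib
import json

def spec_fingerprint(spec: dict) -> str:
    """Stable hash of a spec: canonical JSON (sorted keys, no whitespace)
    fed into SHA-256, so key order in the source file does not matter."""
    canonical = json.dumps(spec, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def has_changed(current_spec: dict, known_fingerprint: str) -> bool:
    """Compare a freshly downloaded spec against the stored fingerprint."""
    return spec_fingerprint(current_spec) != known_fingerprint
```

In the workflow you would download the spec, call has_changed against the fingerprint stored from the last run, and raise an alert (or run a full diff) when it returns True.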

Coding technicalities

There are various ways to perform a load test, with tools such as JMeter, ab, k6, NBomber, etc. However, running one from your own machine means that you are also testing the limits of your outbound connections and capabilities. A dedicated service in a data center with enough capacity allows you to test your system more reliably.
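For comparison, a hand-rolled load generator is only a few lines; this sketch (all names are my own) also illustrates the limitation above, since every request funnels through your own machine's connections and thread pool:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def timed_request(url: str) -> float:
    """Issue one GET request and return the elapsed time in seconds."""
    start = time.perf_counter()
    with urlopen(url) as response:
        response.read()
    return time.perf_counter() - start

def run_load(url: str, total: int = 100, concurrency: int = 10) -> list[float]:
    """Fire `total` requests with `concurrency` workers; return latencies."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(timed_request, [url] * total))

def percentile(latencies: list[float], p: float) -> float:
    """Nearest-rank percentile of the measured latencies."""
    ordered = sorted(latencies)
    index = min(len(ordered) - 1, int(p / 100 * len(ordered)))
    return ordered[index]
```

Something like percentile(run_load("https://example.org/"), 95) gives you a p95 latency, but the number says as much about your own uplink as about the target system, which is exactly why a dedicated service is preferable.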

One of those services is Azure Load Testing. It was in preview for a while but is now generally available.

You can test against URLs, or even import JMeter files to get started quickly. Insights into the results help you determine the capabilities of your system, and you can compare the results against earlier runs to see whether performance has changed. Ideal when you hook this into your CI/CD pipeline as well.

As this is an Azure service, it supports testing against private endpoints and can use managed identities. Read more on the Microsoft blog:

When you just want to run a background task at certain intervals, a system like Hangfire can be overkill. With a background service, you can create a cron-based job scheduler, but be aware that persistence, recovery, singleton execution, etc. are not provided out of the box. Steven Giesel wrote an interesting implementation of scheduled jobs using background services in .NET:
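The core of such a scheduler is surprisingly small. This Python sketch (a stand-in for the .NET implementation, with made-up names) runs a job at a fixed interval, and its brevity shows exactly what is missing: nothing survives a restart, nothing recovers a failed run, and nothing stops two instances from running at once:

```python
import threading
from collections.abc import Callable

def run_every(interval_seconds: float,
              job: Callable[[], None],
              stop_event: threading.Event) -> None:
    """Naive interval scheduler: run `job` until `stop_event` is set.
    Event.wait doubles as both the sleep and the cancellation check.
    No persistence, no crash recovery, no singleton guarantee."""
    while not stop_event.wait(interval_seconds):
        job()
```

You would host this in a background thread (or, in .NET terms, a BackgroundService) and set the event on shutdown; anything beyond that, such as cron expressions or missed-run handling, is what the linked article adds on top.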

GitHub related

I'm a fan of GitHub Codespaces, the virtualized environment with an IDE in the browser. It is excellent for training sessions and workshops, or for getting developers a dev machine instantly. There are more use cases and tips on the GitHub blog:

Did you know that Codespaces can also be ideal for interviews? If you want candidates to do certain exercises, they all get an equal start when they have the same machine and setup. No yak-shaving needed, and sometimes it is not even possible to install or run specific software on corporate, locked-down machines.

Learn more on the GitHub blog about how they use Codespaces in their interview process.

A simple username and password are no longer enough. Account takeovers happen every day, so adding another layer of authentication is needed. GitHub wants all developers using the platform to have a second factor enabled before the end of the year.

Adding a token or security key is a simple process, and you will get some time to enable it. Try to avoid the SMS option, as it can be spoofed easily.

I wrote about the new GitHub code search in a previous edition of this newsletter. However, there is another system that offers search functionality: the GitHub Docs.

An ever-growing library of documentation items, now in eight languages and five different versions per language. While it was possible to store it all in memory before, it ran into scaling issues, so another solution was needed.

With the Elasticsearch engine, all the data became searchable. To get the right results at the top, you need to boost certain fields. How that all works is explained on the GitHub blog:
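Field boosting in Elasticsearch looks roughly like this generic multi_match query body, here written as a Python dict; it is my own example, not GitHub's actual query. The caret suffix makes matches in title count three times as much as matches in content:

```python
# Generic Elasticsearch query body illustrating field boosting.
# "title^3" weighs title hits 3x, "headings^2" weighs them 2x,
# and "content" keeps the default weight of 1.
search_body = {
    "query": {
        "multi_match": {
            "query": "create a repository",
            "fields": ["title^3", "headings^2", "content"],
        }
    }
}
```

This body would be sent as the JSON payload of a search request; tuning those weights against real user queries is the hard part the blog post goes into.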

Computing in general

Optimizing a system is not always about improving the technical components; you might also get a better user experience by stepping back and seeing what is really needed:

Meier shares his best innovation books, drawing on more than 10 years of experience innovating at the highest levels at Microsoft, including as former head coach for Satya Nadella's innovation team:

Jeremy Daly wrote about how times have changed; the demands on developers keep increasing. We expect them to be T-shaped and cross-functional: knowing how to deploy the infrastructure for the software they write, using the cloud, making it secure and compliant, with some DevOps and Agile thrown in as well:

Helpers and utilities

Although tools like New Relic, Datadog, and Application Insights are great, they come with a steep price for all the data you throw at them. If you want to control this and run the tooling on your own metal, then SigNoz might be an alternative:

What can you do with a small dataset? In how many ways can you visualize it? A nice showcase of 100 possible variations:

Computer laws

It works better if you plug it in.

Sattinger's Law

Makes sense, does it not?

I hope you've enjoyed this week's issue of my newsletter. If you found it useful, I invite you to share it with your friends and colleagues. And if you're not already a subscriber, be sure to sign up to receive future issues.

Next week, I'll be back with more articles, tutorials, and resources to help you stay up-to-date on the latest developments in coding and architecture. In the meantime, keep learning and growing, and happy coding!

Best regards, Michiel
