Security of Normalized Systems

I had an interesting conversation with Prof. Dr. Jan Verelst of the University of Antwerp, who, together with Prof. Dr. Herwig Mannaert, created the theory of normalized systems.

A normalized system is created by following a set of rules that make the software as ‘atomic’ or ‘modular’ as possible: modules are split into the smallest units possible. This produces software without ‘ripple effects’, meaning that updating a part of it, adding functions and features, or changing underlying systems requires minimal effort. The software stays nimble and changeable.

After working on the theory for many years, they started NSX to bring the concept to market, and have since completed several successful projects. I am interested to see whether the promise of making systems nimble and easy to change holds up.

Our conversation was about how normalized systems relate to security. An NS is not secure by default, but it offers quite a few security benefits:

  • if a flaw is found in a code expander (the ‘blueprint’ of a piece of code), fixing the expander allows all software generated from it to be easily updated to the newer, safer version
  • if there is a problem with the underlying system (OS or hardware), the code can quickly be regenerated for a different system
  • if a developer introduces a problem in the (limited) human-written code, it is easily tracked down and replaced without affecting other modules, since the modularity is so extreme
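The first point above can be illustrated with a toy sketch in Python. This is only a conceptual illustration of the expander idea, not NSX's actual tooling: the expander function, the model format, and all names here are hypothetical. The key property is that the generated modules are a pure function of the expander plus a model, so a fix to the expander propagates to every module on regeneration.

```python
# Toy sketch of a "code expander": it turns a small model description
# (an entity name and its fields) into generated boilerplate code.
# Everything here is illustrative, not an actual NS/NSX API.

def expand_entity_module(entity: str, fields: list[str]) -> str:
    """Generate a minimal data-holder class from a model description."""
    lines = [f"class {entity}:"]
    lines.append(f"    def __init__(self, {', '.join(fields)}):")
    for field in fields:
        # If this template line contained a flaw, fixing it here and
        # re-running the expander would fix every generated module at once.
        lines.append(f"        self.{field} = {field}")
    return "\n".join(lines)

# Regenerating all modules after an expander fix is a single pass:
models = {"Customer": ["name", "email"], "Order": ["order_id", "total"]}
modules = {name: expand_entity_module(name, fs) for name, fs in models.items()}
```

The point of the sketch is the regeneration loop at the end: no generated module is ever patched by hand, so there is no drift between the fixed blueprint and the deployed code.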

All in all, I expect great things from NS. Making code modular doesn’t automatically make it more secure, but it limits the damage and makes problems easier to fix.
