Critical Review: Monitoring Security of Networked Control Systems: It’s the Physics

What I learned

This article was my first dive into the world of Industrial Control Systems and my first security class, so I am learning a lot. An Industrial Control System (ICS) can be considered in two parts: the digital state and the analogue state. The analogue state represents the continuous, current physical condition of the system, while the digital state represents the discrete components within the system, such as relays. If it is assumed that the physical devices have their own protection mechanisms to prevent destructive behaviour, then the challenge lies in protecting the system as a whole.

Moreover, the instrumentation used in ICS historically relied mainly on fixed serial communication links. The evolution from localized networks to the Internet increased the risk of failure in these complex systems, whether from an adversary or a simple human mistake. An Intrusion Detection System can be used to monitor changes in the system as a whole, with the intent to alert on physically harmful situations.

Analysis of Proposed IDS

A specification-based Intrusion Detection System (IDS) uses a model of the ICS to determine when an issued command could drive the system into a harmful state. The system model would be complex and would provide ways to verify or identify expected post-state conditions. The specification-based IDS incorporates plausibility as a way of verifying the reported physical state of the system after a command is issued. Building up that plausibility model would appear to be a whole other body of research, perhaps leveraging probabilistic models such as Bayesian networks.
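To make the plausibility idea concrete, here is a minimal sketch of how such a check might look. The toy model, the `open_valve`/`close_valve` commands, the flow-rate values, and the tolerance are all my own illustrative assumptions, not details from the reviewed paper:

```python
# Hypothetical sketch of a plausibility check in a specification-based IDS.
# The process model, commands, and tolerance below are invented for illustration.

def predicted_post_state(state: dict, command: str) -> dict:
    """Toy ICS model: the expected effect of a command on a valve process."""
    new_state = dict(state)
    if command == "open_valve":
        new_state["flow_rate"] = 5.0   # assumed nominal flow once the valve opens
    elif command == "close_valve":
        new_state["flow_rate"] = 0.0
    return new_state

def is_plausible(reported: dict, predicted: dict, tolerance: float = 0.5) -> bool:
    """Reported sensor readings are plausible if every modeled variable
    falls within a fixed tolerance of the model's prediction."""
    return all(abs(reported[k] - predicted[k]) <= tolerance for k in predicted)

# A command is captured, the model predicts the post-state, and the
# reported readings are then checked against that prediction.
pre = {"flow_rate": 0.0}
pred = predicted_post_state(pre, "open_valve")
print(is_plausible({"flow_rate": 4.8}, pred))  # within tolerance -> True
print(is_plausible({"flow_rate": 0.1}, pred))  # valve likely did not open -> False
```

A real model would of course be far richer (probabilistic, time-dependent, multi-variable), but the shape of the check — predict, observe, compare — stays the same.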

The IDS becomes part of the security infrastructure, validating the effect of a command on the system as a whole. The performance impact of the IDS is unclear from the document: the time between capture of a command and the determination of a good (or bad) system state needs to meet the real-time needs of the ICS, as does the IDS's ability to evaluate the physical post-state condition of the system so that it can alert on an attack condition.

Validation of the post-state condition is done with a “trust but verify” method. The principle of veracity states that individual sensors won’t knowingly send false information, yet their messages can be spoofed. With that being the case, additional sensors can be drawn upon to validate the condition of the system as a whole, and potentially of the individual sensor itself.

This leaves the door open for a coordinated attack that spoofs not only the targeted component but also the secondary components used to vet the first one’s result. To pull this off, the attacker would need to know the information the model uses to validate the post-state condition of the system. More and more components can be brought in to validate the results of the targeted component, and hopefully at some point in the chain the real ICS differs from the IDS’s model of the post-state. How large a delta there needs to be before declaring a difference between reality and model seems open for debate, and perhaps this is where the human enters.
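The cross-validation described above can be sketched as a simple consistency check: trust the targeted sensor only if it agrees with redundant sensors observing the same physical quantity. The sensor values and the `max_delta` threshold here are illustrative assumptions of mine, not values from the paper:

```python
# Hypothetical sketch of cross-sensor validation. A targeted sensor's reading
# is vetted against redundant secondary sensors; comparing against the median
# means a coordinated spoof must compromise a majority of them to go unnoticed.
from statistics import median

def cross_validate(target_reading: float,
                   secondary_readings: list[float],
                   max_delta: float = 1.0) -> bool:
    """Accept the targeted sensor's reading only if it is within max_delta
    of the median of the secondary sensors."""
    return abs(target_reading - median(secondary_readings)) <= max_delta

print(cross_validate(10.2, [10.0, 10.1, 9.9]))  # consistent reading -> True
print(cross_validate(3.0, [10.0, 10.1, 9.9]))   # implausible reading -> False
```

Using the median rather than the mean is a deliberate choice: a single spoofed secondary sensor shifts the mean but barely moves the median, which matches the intuition that an attacker must subvert the whole validation chain, not just one link.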

Humans are able to process more than what they see in front of them, adding their own knowledge and wisdom to the information being presented. They can recall that a location had a power outage or that a piece of equipment is being replaced, information that is overlaid on the IDS information to synthesize the true state of the system.

Another way that humans can contribute to the validation of the IDS outcome is through visual or hands-on inspection. If there is doubt over the plausibility of the IDS outcome, send a human to inspect – take readings, visually examine, monitor the traffic – whatever makes sense for the given situation. That would give weight to the outcome. The need to do this physical inspection as a way of validating individual measurements highlights that the specification-based IDS is a solid guess but not absolute.

One would think that the IDS could deny an issued command, thereby potentially violating non-repudiation, if that command would not result in a good state of the system as a whole. Said another way, the actions of an authorized user can be blocked if the result might be a bad state of the system. Perhaps the IDS alerts the control operator to this potentially bad situation. Or, perhaps the IDS allows the command to go through, keeping non-repudiation intact, and alerts only if a bad system condition is detected. As the IDS is based on a model, there is some probable error tied to each determined outcome. Therefore, it would be beneficial for users to tailor the IDS actions to require a human-in-the-loop or, alternatively, allow the human to accept the chance of going into a bad state. I’m not sure there is a best solution here, given the complexity of the ICS and knowing that the model is close but not exact. And the more instruments you put in to monitor a system, the more possible faults you put into place as well.
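The trade-off between blocking, alerting, and passing a command through could be expressed as a tiered response policy. This is purely my own sketch, assuming the model can attach a probability of reaching a bad state to each command; the threshold values are invented:

```python
# Illustrative sketch of tiered IDS responses, assuming the model assigns each
# command a probability p_bad of driving the ICS into a bad state. The
# thresholds are invented for illustration and would be operator-tunable.

def ids_response(p_bad: float,
                 alert_threshold: float = 0.2,
                 block_threshold: float = 0.8) -> str:
    """Low risk: pass the command through, keeping non-repudiation intact.
    Medium risk: allow the command but alert the control operator.
    High risk: hold the command for a human-in-the-loop decision."""
    if p_bad < alert_threshold:
        return "allow"
    if p_bad < block_threshold:
        return "allow_and_alert"
    return "hold_for_operator"

print(ids_response(0.05))  # allow
print(ids_response(0.50))  # allow_and_alert
print(ids_response(0.95))  # hold_for_operator
```

Making the thresholds tunable captures the point above: one site may accept the risk of a bad state to preserve availability, while another may insist on a human sign-off for anything the model flags.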
