Designing Ethical Cyber-Physical Industrial Systems

Morals

I didn’t understand why anyone would talk about ethics for robots: a robot is a machine, or a group of machines, controlled by 1s and 0s. It didn’t click until I read the Aeon article [3]. Smart Autonomous Robots (SARs), with the ability to learn, need to be guided toward desirable choices. In other words, SARs need morals to guide their decisions.

Morals are our personal beliefs about what is “right”; ethics are society’s beliefs about what is “right”. [4] When it comes to Smart Autonomous Robots, both apply. A SAR uses its morals to determine a correct outcome based on past experience. Ethics are programmed into the SAR through compliance standards and the personal expectations of its researchers and developers.

“Morality is all about guiding rational agents who are capable of making mistaken choices.” [3]

It seems plausible that a learning system, whether a SAR or a child, needs to be educated on “right” and “wrong” so that it can be productive and make positive contributions to society; otherwise it risks jail or worse. Morals are trained into a robot through repetition of situation, action, and consequence. Over time, the machine learns to arrive at the conclusion desired by its teacher.

From a performance standpoint, there would need to be storage to keep the data points, processing time to crunch the data and estimate the probability of achieving the desired result, and so on. This is where machine learning comes to bear. Fast storage and retrieval and quick reaction times lead to high performance requirements for an individual SAR. When a collection of SARs is considered, add fast data transfer as well as a way to resolve disputes between machines.

Storage requirements would be driven by the number of dimensions (random variables) the SAR considers worth evaluating. Say the machine is in a fixed group. It seems reasonable that the dimensionality of the problem, where “good” is defined by the fixed morals of the group, could be reduced [5]. This would allow the SAR to keep learning in a fixed environment but lose contextual information that may or may not be important. How much data is kept, and for how long, would need to be considered, as would how often and how quickly the algorithm is updated.
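As a rough sketch of what that reduction might look like in practice, here is a toy PCA example in Python. The scikit-learn stack, the synthetic observation matrix, and the 95% variance cutoff are all assumptions for illustration, not anything prescribed by [5]:

```python
# Hypothetical illustration: reducing the dimensionality of a SAR's
# observation history before it is stored and used for learning.
# Assumes NumPy and scikit-learn; the data below is synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(seed=42)
latent = rng.normal(size=(1000, 10))              # 10 underlying factors
mixing = rng.normal(size=(10, 50))
observations = latent @ mixing + 0.1 * rng.normal(size=(1000, 50))  # 50 raw features

# Keep only as many components as needed to explain 95% of the variance.
pca = PCA(n_components=0.95)
reduced = pca.fit_transform(observations)

print(f"kept {reduced.shape[1]} of 50 dimensions")
# Storage per record shrinks accordingly, at the cost of the contextual
# detail carried by the discarded components.
```

The trade-off in the prose above shows up directly here: the fewer components kept, the less storage and compute per decision, and the less context available later.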

Morals, however, are not permanent. They vary over time and with changes in the society (group). So two identical SARs, placed in two different groups, may develop different morals based on their teachers. Standards may be one way to enforce the ability to “play nicely on the playground” when a new SAR is brought into a Level 4 CPIS. These standards could be considered the ethics from which the group’s morality is developed. A period of time might be required for the new SAR to retrain itself to the new environment; or perhaps its memory should be wiped so it can start fresh.

Hacking the Systems

Understanding how an individual SAR makes a decision opens the door to techniques that pollute its data and produce an undesirable result. Polluting the training data of a reinforcement learning machine can cause the result to drift from expected to unexpected, for example, training an algorithm to identify a house as a car [6].
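As a hypothetical illustration of how polluted training data drifts a model toward the wrong answer, the sketch below flips a fraction of labels on a toy classifier. The dataset, model, and 30% poisoning rate are invented for the example and have nothing to do with the SDN setting of [6]:

```python
# Toy illustration of training-data poisoning: flip a fraction of labels
# and watch accuracy on clean test data degrade. Assumes scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

poisoned = y_train.copy()
rng = np.random.default_rng(0)
flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]          # an attacker flips 30% of labels

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```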

Expanding on that, if a corrupted SAR is introduced into a collection of SARs in a CPIS, it could affect the overall outcome. In the situation where the group-think is wrong, how does the group identify that it is wrong? Veracity and reliability are thrown out the window when wrong cannot be distinguished from right. There would need to be a way to identify a significant drift from a historical norm and alert a human, who can then make a decision. A requirement for the development of a SAR CPIS could be identification of a potential fault followed by the intervention of a human.
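One plausible way to meet that requirement is a simple statistical check of recent behaviour against a historical baseline. The z-score metric, the threshold of 3, and the synthetic data below are all assumptions, a minimal sketch rather than a prescribed monitoring design:

```python
# Sketch of a drift monitor: compare recent SAR decisions against a
# historical baseline and flag a human operator when drift is significant.
import numpy as np

def check_for_drift(history: np.ndarray, recent: np.ndarray,
                    threshold: float = 3.0) -> bool:
    """Return True (alert a human) if recent behaviour drifts from the norm."""
    baseline_mean = history.mean()
    baseline_std = history.std() or 1e-9          # avoid division by zero
    z = abs(recent.mean() - baseline_mean) / baseline_std
    return z > threshold

history = np.random.default_rng(1).normal(0.0, 1.0, size=10_000)
recent = np.random.default_rng(2).normal(4.0, 1.0, size=100)   # drifted batch

if check_for_drift(history, recent):
    print("ALERT: decision drift detected, request human review")
```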

Because the impacts are so great and the change from good to bad is so subtle, even more importance is placed upon the CIANA principles (confidentiality, integrity, availability, non-repudiation, authentication). Small changes to the underlying data set could produce drastic changes in the output. The SAR data set will need to be protected from humans and from influencing robots alike. Thus, authenticating humans and robots alike will be key.
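A minimal sketch of authenticating an update from a peer robot, assuming a pre-shared key and only Python’s standard library. The key, payload format, and function names are invented; a real deployment would use proper key management and likely asymmetric signatures:

```python
# Sketch: verify that a data update really came from an authorised peer
# robot before it is allowed to influence the shared data set.
import hmac
import hashlib

SHARED_KEY = b"replace-with-provisioned-secret"   # placeholder key

def sign_update(payload: bytes) -> bytes:
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def verify_update(payload: bytes, tag: bytes) -> bool:
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

update = b'{"sensor": "arm_3", "reading": 0.42}'
tag = sign_update(update)
assert verify_update(update, tag)                  # accepted
assert not verify_update(update + b"x", tag)       # tampered data rejected
```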

On the other hand, it might be more advantageous to toss out small changes, especially if they occur within a very small time frame. The sampling rate of the system could be adjusted and evaluated with respect to the desired output. Basically, you could cut out the jitter in the signal.
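As a rough sketch of “cutting out the jitter”, a moving average combined with a deadband can discard small, short-lived fluctuations before they reach the learner. The window size and deadband values below are arbitrary placeholders:

```python
# Sketch: smooth a noisy signal with a moving average and ignore changes
# smaller than a deadband, so tiny short-lived fluctuations never reach
# the learning algorithm.
from collections import deque

class JitterFilter:
    def __init__(self, window: int = 5, deadband: float = 0.05):
        self.samples = deque(maxlen=window)
        self.deadband = deadband
        self.last_output = None

    def update(self, sample: float) -> float:
        self.samples.append(sample)
        smoothed = sum(self.samples) / len(self.samples)
        if self.last_output is None or abs(smoothed - self.last_output) > self.deadband:
            self.last_output = smoothed        # pass a real change through
        return self.last_output                # otherwise hold the old value

f = JitterFilter()
for reading in [1.00, 1.01, 0.99, 1.02, 1.50, 1.51]:
    print(f.update(reading))
```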

Integrity and non-repudiation are required to confirm that an action requested by an authorized user was actually carried out to a standard. For example, if a learning system needed to be purged of biased data, is it good enough to reboot the system? To delete or free the memory? Or does the memory need to be overwritten or erased? The system designer would need to take the effects of each into consideration so that these old data, these memories, are not accessible to the new incarnation of the machine. (See Westworld on HBO [7].)
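A toy sketch of the difference: simply deleting a file drops the directory entry but may leave the bytes recoverable, whereas overwriting the contents before deletion does not. The paths and function names are illustrative, and a real purge would also have to consider copies, backups, and wear-levelled storage:

```python
# Sketch contrasting "delete" with "overwrite then delete" when purging
# biased training data from disk.
import os

def purge_by_delete(path: str) -> None:
    os.remove(path)                      # directory entry gone; bytes may remain on disk

def purge_by_overwrite(path: str, passes: int = 1) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))    # overwrite contents with random bytes
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)
```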

Ramifications

[1] proposes that an insurance-like system be established for collections of learning systems. How that is funded is unclear (is it the user, the owner, the manufacturer, or the developer?), but if car insurance is the model, it would be the user who holds the insurance. Others, such as the owner or the manufacturer, could also hold insurance on the same property.

A risk analysis could be based upon the severity and likelihood of the potential outcomes, coupled with the amount of autonomy the system possesses. For example, with autonomous cars, if the user can still control the vehicle then perhaps they get a discount for being in control, whereas with a completely autonomous vehicle (one the user cannot hope to control), they pay more for the potential risk [2].
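A back-of-the-envelope sketch of such a premium calculation, with made-up weights for severity, likelihood, and autonomy; nothing in [1] or [2] prescribes these numbers:

```python
# Toy premium model: risk = severity x likelihood, scaled by how much
# control the human retains. All constants are invented for illustration.
def annual_premium(severity: float, likelihood: float, autonomy: float,
                   base_rate: float = 500.0) -> float:
    """
    severity, likelihood: 0..1 scores from a risk analysis
    autonomy: 0 = fully human-controlled, 1 = fully autonomous
    """
    risk = severity * likelihood
    control_discount = 1.0 - 0.4 * (1.0 - autonomy)   # up to 40% off if the human can intervene
    return base_rate * (1.0 + 4.0 * risk) * control_discount

print(annual_premium(severity=0.8, likelihood=0.3, autonomy=0.2))  # driver mostly in control
print(annual_premium(severity=0.8, likelihood=0.3, autonomy=1.0))  # fully autonomous, higher premium
```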

Conclusion

Machines can and will be reused in industry. Standards can improve the reusability of systems through agreed-upon criteria for regulating, maintaining, and protecting them. Understanding how the resulting algorithm is created helps shape both the CPS security methods to pay attention to and the system requirements.

References

[1] Trentesaux, Damien, and Raphael Rault. “Designing Ethical Cyber-Physical Industrial Systems.” International Federation of Automatic Control, 2017.

[2] Dreier, Thomas, and Indra Spiecker genannt Döhmann. “Legal Aspects of Service Robotics.” Poiesis & Praxis 9, no. 3 (December 1, 2012): 201–17. https://doi.org/10.1007/s10202-012-0115-4.

[3] Aeon. “Creating Robots Capable of Moral Reasoning Is like Parenting – Regina Rini | Aeon Essays.” Accessed May 18, 2020. https://aeon.co/essays/creating-robots-capable-of-moral-reasoning-is-like-parenting.

[4] 7ESL. “ETHICS Vs MORALS: Difference Between Morals Vs Ethics In English,” August 20, 2019. https://7esl.com/ethics-vs-morals/.

[5] Raj, Judy T. “Dimensionality Reduction for Machine Learning.” Medium, March 14, 2019. https://towardsdatascience.com/dimensionality-reduction-for-machine-learning-80a46c2ebb7e.

[6] Han, Yi, Benjamin I. P. Rubinstein, Tamas Abraham, Tansu Alpcan, Olivier De Vel, Sarah Erfani, David Hubczenko, Christopher Leckie, and Paul Montague. “Reinforcement Learning for Autonomous Defence in Software-Defined Networking.” ArXiv:1808.05770 [Cs, Stat], August 17, 2018. http://arxiv.org/abs/1808.05770.

[7] Hearn, Mike. “The Science of Westworld.” Medium, January 6, 2017. https://blog.plan99.net/the-science-of-westworld-ec624585e47.
