Category: safety

Machine Learning Security

Seven security concerns in Machine Learning (ML) –

  1. Data privacy and security: Training ML models requires large amounts of data, and this data may contain sensitive or personal information. Appropriate measures need to be put in place to prevent this data from being accessed by unauthorized parties.
  2. Notebook security: ML work typically requires Jupyter or similar notebooks to be served for data scientists to work on data, code, and models, both individually and collaboratively. These notebooks need to be access controlled and protected from unauthorized access. The same goes for the code and the git repos that host it, and for the model artifacts that the notebooks use or create.
  3. Model serving and inference security: ML models in production are commonly served and accessed over inference endpoints, and such endpoints need authentication, authorization, and encryption to protect against misuse. During model upgrades or changes to an endpoint and its configuration, a number of attacks typical of a devops/devsecops pipeline are possible and need to be protected against.
  4. Model security: Models can be vulnerable to adversarial inputs, where an attacker intentionally manipulates the input to a model in order to cause it to make incorrect predictions (a minimal sketch follows this list). Another example is when the model makes an egregiously bad decision on an input, for example a self-driving car hitting an obstacle instead of avoiding it. It is important to harden the model and bound the decisions that come from its use.
  5. Misuse: Even if a model works as designed, it can be misused, for example by generating fake or misleading content. It is important to consider the potential unintended consequences of using models and to put safeguards in place to prevent their misuse.
  6. Bias: ML models can sometimes exhibit biases due to the data they are trained on. There should be a plan to identify biases in a model and take steps to mitigate them.
  7. Intellectual property: ML models may be protected by intellectual property laws, and it is important to respect these laws and obtain the appropriate licenses when using models developed by others.
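To make the adversarial-input concern (item 4) concrete, here is a minimal sketch of crafting an adversarial example with the fast gradient sign method (FGSM), assuming a PyTorch image classifier; the model, labels, and epsilon value are placeholders for illustration, not a prescribed defense.

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial(model, x, y, epsilon=0.03):
    """Perturb a batch x so the classifier is more likely to misclassify it (FGSM).

    model:   a differentiable classifier returning logits
    x:       input batch with values assumed to lie in [0, 1]
    y:       true labels
    epsilon: perturbation budget (hypothetical value, for illustration only)
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon per pixel.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

One common hardening step is to fold such perturbed examples back into training (adversarial training), alongside bounding the downstream decisions so that a single bad prediction cannot cause harm.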

Safety Concepts – Functional safety, SOTIF

I want to capture Functional Safety concepts. This is an important topic given the number of vehicles and their increasing automation. It was a major topic at the recent ArmTechCon.

Airbags, seat-belts, ABS, and tire-pressure-monitoring-systems are some of the safety features in a car.

Safety Function or Safety Instrumented Function (SIF): A function that takes a system to a safe outcome when certain prerequisites on system inputs are not met. For example, turn on a warning indicator when the seat-belt is not fastened or the tire pressure is below a safe level, or deploy airbags when a collision is detected.
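As a toy illustration of a safety function, here is a sketch in Python; the threshold and signal names are hypothetical, chosen only to mirror the examples above.

```python
MIN_SAFE_PRESSURE_KPA = 220  # hypothetical safe-pressure threshold, for illustration

def warning_indicator(seat_belt_fastened: bool, tire_pressures_kpa: list[float]) -> bool:
    """Safety function: drive the warning indicator when the prerequisites are not met."""
    low_pressure = any(p < MIN_SAFE_PRESSURE_KPA for p in tire_pressures_kpa)
    return (not seat_belt_fastened) or low_pressure
```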

Safety Related Control Function (SRCF): This is the control mechanism by which the safety function is achieved. For example, collision above a certain impact threshold leads to airbag deployment by an airbag control module. Quote: “The airbag control module is installed inside the center console and contains a safety sensor, G sensor, ignition judgment circuit, and a backup power supply.”
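A rough sketch of that control path, with a hypothetical impact threshold and igniter callback standing in for the real safety sensor, G sensor, and ignition judgment circuit:

```python
IMPACT_THRESHOLD_G = 40.0  # hypothetical deployment threshold, for illustration only

def airbag_control_step(safety_sensor_tripped: bool, g_sensor_reading: float, fire_igniter) -> None:
    """One SRCF cycle: deploy only when both sensing channels agree the impact is real."""
    if safety_sensor_tripped and g_sensor_reading >= IMPACT_THRESHOLD_G:
        fire_igniter()
```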

Safety Integrity Level (SIL): The reliability of a safety related control function is captured with a Safety Integrity Level, or SIL. SIL levels go from 1 to 4 (highest). SILs are derived from a risk estimation process and are used in estimating the risk of a system built using pre-built components.

One safety standard is ISO 26262. The V-shaped functional safety process diagram describes the design steps flowing from OEM to supplier and the verification steps flowing back from supplier to OEM (here). This process is used to achieve a Safety Integrity Level (SIL), where SIL1, SIL2, SIL3 and SIL4 reduce risk by progressive factors of 10, i.e. by 10x, 100x, 1000x and 10000x. A hazard and operability study (HAZOP) is undertaken to understand the risks of the mechanism behaving incorrectly.

A good reference from SIMATIC is here. Software aspects of safety functions are discussed in this whitepaper.

Safety systems for robotics are discussed here – this has a table of typical safety issues when a person enters a robot safeguarded area. Industrial robot security was briefly discussed in this blog here.

Another concept is SOTIF, or Safety Of The Intended Functionality, which comes up in functional safety discussions of AI-controlled vehicles. More links on it here.

An Nvidia safe driving report is here.

Industrial Robot Safety Systems

Robot safety systems must be concerned with graceful degradation in the face of component failures, bad inputs and extreme operating conditions. With the increasing complexity and prevalence of robots, one can expect these requirements to grow.

This document “Robot System Safety” describes the following safety features of Kuka robots:
— Restricted envelope
— EMERGENCY STOP
— Enabling switches
— Guard interlock

An interlock system, described here for general control systems, is a mechanism for guaranteeing that an undesired combination of states does not occur – e.g. a robot moving when the cell door is open, or an elevator moving when the door is open. The combination of the controlled stop with the guard interlock is described as follows:

“The robot controller features a two–channel safety input, to which the guard interlock can be connected. In the automatic modes, the opening of the guard connected to this input causes a controlled stop, with power to the drives being maintained in order to ensure this controlled stop. The power is only disconnected once the robot has come to a standstill. Motion in Automatic mode is prevented until the guard connected to this input is closed. This input has no effect in Test mode. The guard must be designed in such a way that it is only possible to acknowledge the stop from outside the safeguarded space.”
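The quoted behavior can be modeled as a small state machine. The Python below is a simplified sketch for illustration (class and signal names are mine, and the acknowledge-from-outside step is omitted), not the actual controller logic.

```python
class GuardInterlock:
    """Simplified model of the two-channel guard interlock described above."""

    def __init__(self):
        self.drives_powered = True
        self.motion_allowed = True

    def update(self, mode: str, guard_closed: bool, robot_at_standstill: bool) -> None:
        if mode != "automatic":
            return  # the guard input has no effect in Test mode
        if not guard_closed:
            # Controlled stop: block motion, keep power to the drives until
            # standstill, then disconnect the power.
            self.motion_allowed = False
            if robot_at_standstill:
                self.drives_powered = False
        else:
            # Guard closed: restore power and allow motion again (the real system
            # also requires acknowledgment from outside the safeguarded space).
            self.drives_powered = True
            self.motion_allowed = True
```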

This Stanford Linear Accelerator (SLAC) paper describes the advantages of PLCs for interlock design as:

— flexible system configuration due to modular hardware and software
— regularly scheduled background tests of PLC system and sensitive I/O
— comprehensive system self-tests
— intelligent fault diagnostics simplify trouble-shooting
— easy reconfiguration of the interlock logic
— no mechanical wear and tear
— improved security due to logic encapsulation in firmware

The SLAC use case is to lock the doors unless (a) the power has been shut off or (b) the power is on but there is an explicit bypass for hot maintenance. They implement it on a Siemens PLC using statement lists (vs. ladder logic or control system flowchart programming).
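The door-interlock condition itself is a one-line boolean expression; a hedged Python rendering (function and argument names are mine, and the real PLC implementation also includes the self-tests listed above):

```python
def doors_may_unlock(power_on: bool, maintenance_bypass_active: bool) -> bool:
    """Unlock only if the power is shut off, or it is on with an explicit hot-maintenance bypass."""
    return (not power_on) or (power_on and maintenance_bypass_active)
```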
GE has a number of industrial interlocks for various safety functions – http://www.clrwtr.com/PDF/GE-Security/Sentrol-Catalog.pdf

A very useful article on interlocking devices – http://machinerysafety101.com/2012/06/01/interlocking-devices-the-good-the-bad-and-the-ugly/

TechCrunch Disrupt Hackathon – Safety for Connected Cars

The Ford Hackathon at TechCrunch Disrupt (San Francisco, 2015) encouraged use of the Ford SmartDeviceLink (SDL) iOS SDK to talk to their in-car head units. The apps can be submitted to Ford for cars supporting Ford Sync. Toyota was present to lend support to this open source effort. With a joint open source effort, the number of cars targeted by such apps could be higher.

The SDL SDK can be useful for insurance applications for measuring ride and driver quality. Many applications were built at the hackathon around this idea.

My team built an iOS application to synchronize brakes between two cars in real time to prevent vehicle pileups in low-visibility conditions. It acted as a virtual brake light: if a connected car ahead was braking, the driver behind was alerted. Our goal was distraction-free safe driving, so the app used voice alerts and automatic brake detection from the SmartDeviceLink SDK instead of manual alert generation.
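The core logic was simple. The sketch below is a from-memory approximation in Python rather than the actual iOS/SmartDeviceLink code; the vehicle-data field name, messaging hook (publish_brake_event) and voice hook (speak) are illustrative placeholders.

```python
def on_vehicle_data(vehicle_data: dict, publish_brake_event) -> None:
    """Leading car: when the head unit reports braking, broadcast it to followers."""
    # 'driverBraking' is used here as an illustrative field name for the brake signal.
    if vehicle_data.get("driverBraking"):
        publish_brake_event({"event": "braking"})

def on_brake_event(event: dict, speak) -> None:
    """Following car: act as a virtual brake light, using a voice alert to stay hands-free."""
    if event.get("event") == "braking":
        speak("Car ahead is braking")
```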

A previous SDK supported by Ford was OpenXC, an open API for connected cars. Another popular SDK at the hackathon was the Vin.li SDK.

There was discussion of a Waze-like app built into cars. Talking to people, I learnt about the role the Department of Transportation is playing in bringing Intelligent Transportation to reality.

DSRC, or Dedicated Short Range Communications, is a communication standard for such use cases.