
Containers and Privileges

Cgroups limit how much you can do. Namespaces limit how much you can see.

Linux containers are built on cgroups and namespaces, and can be privileged or unprivileged. A privileged container is one that does not use the user namespace, meaning it has direct visibility into all users of the underlying host. The user namespace remaps user identities inside the container, so even if a process believes it is running as root, it is not root on the host.
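As a rough illustration of the remapping idea (my own sketch, not from the linked material – the helper name and logic are made up), a process can check whether its user namespace is remapped by reading /proc/self/uid_map; the identity mapping “0 0 4294967295” means container root is host root:

```python
# Hypothetical helper (name and logic are my own): detect whether the
# current process sits in a remapped user namespace. On the host, or in
# a privileged container sharing the host's user namespace, the file
# holds the identity mapping "0 0 4294967295".
def uid_map_is_remapped():
    try:
        with open("/proc/self/uid_map") as f:
            entries = [line.split() for line in f if line.strip()]
    except FileNotFoundError:
        return None  # not Linux, or /proc is unavailable
    return entries != [["0", "0", "4294967295"]]

print("remapped user namespace:", uid_map_is_remapped())
```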

Using cgroups to limit fork bombs, from Jessie’s talk:

$ sudo su
# echo 2 > /sys/fs/cgroup/pids/parent/pids.max
# echo $$ > /sys/fs/cgroup/pids/parent/cgroup.procs   # put the current shell's pid in the cgroup
# cat /sys/fs/cgroup/pids/parent/pids.current
# (echo "foobar" | cat)
bash: fork: retry: No child processes

A link to the 122-page paper on Linux container security is here. It includes this quote on Linux kernel attacks.

“Kernel vulnerabilities can take various forms, from information leaks and Denial of Service (DoS) risks to privilege escalation and arbitrary code execution. Of the roughly 400 Linux system calls, a number have contained privilege escalation vulnerabilities, as recently as of 2016 with keyctl(2). Over the years this included, but is not limited to: futex(2), vmsplice(2), mremap(2), unmap(2), do_brk(2), splice(2), and modify_ldt(2). In addition to system calls, old or obscure networking code including but not limited
to SCTP, IPX, ATM, AppleTalk, X.25, DECNet, CANBUS, Econet and NETLINK has contributed to a great number of privilege escalation vulnerabilities through various use cases or socket options. Finally, the “perf” subsystem, used for performance monitoring, has historically contained a number of issues, such as perf_swevent_init (CVE-2013-2094).”

This makes the case for seccomp, as containers, both privileged and unprivileged, can lead to bad things –

“Containers will always (by design) share the same kernel as the host. Therefore, any vulnerabilities in the kernel interface, unless the container is forbidden the use of that interface (i.e. using seccomp)” – LXC Security Documentation by Serge Hallyn, Canonical

The paper has several links on restricting access, including grsecurity, SELinux, AppArmor and Firejail. A brief comparison of the first three is here. SELinux has a powerful access-control mechanism: it attaches labels to all files, processes and objects; however, it is complex, and people often end up making things too permissive instead of taking advantage of the available controls. AppArmor works by labeling files by pathname and applying policies to the pathname; it is recommended with SUSE/openSUSE, not CentOS. Grsecurity policies are described here; its ACLs support process-based resource restrictions, including memory, CPU, open files, etc.

Blockchain ideas

I think of bitcoin as a self-securing system. Value is created by solving the security problem of verifying the latest block of bitcoin transactions. This verification serves as a decentralized stamp of approval, and the verification step consists of hashing a nonce together with a block of transactions through a one-way hash function and arriving at a digest with a certain structure – which is hard, because the hash output is effectively random and meeting the structure requirement is a low-probability event.
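A toy sketch of that verification step (illustrative only – real bitcoin hashes an 80-byte block header with double SHA-256 at a far higher difficulty): brute-force a nonce until the digest of nonce-plus-block falls below a target, i.e. starts with a required number of zero bits.

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int = 16):
    """Search for a nonce so that SHA-256(nonce || block) starts with
    `difficulty_bits` zero bits -- a low-probability event found only by
    brute force, which is what makes the stamp costly to forge."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(nonce.to_bytes(8, "big") + block_data).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1

nonce, digest = mine(b"alice pays bob 1 BTC", difficulty_bits=16)
print(nonce, digest)   # digest begins with at least four hex zeros
```

Verification is asymmetric: finding the nonce takes many hash attempts, but anyone can check the result with a single hash.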

What happens if parties collude to gain greater hashing power and increase their share of mining? This is what happened with GPU mining farms on bitcoin. It was one of the motivations behind Ethereum, which enables code to run as part of transactions and whose hashing algorithm is not easily parallelized on a GPU. Even so, it has not proven economical to mine on desktops, as that motivation would suggest.

The important aspect I think is the self-securing idea – how can a set of computational systems be designed so that they are incentivized to cooperate and become more secure as a result of that cooperation.

At a recent blockchain conference, some interesting topics of discussion were zero-knowledge proofs, consensus algorithms, greater network-member ownership in a network with network effects instead of centralized rent collection, game-theoretic system designs and various Ethereum blockchain applications.

Update: this self-securing, self-sustaining view is explored further here.

Ethical considerations in Autonomous Vehicles

A recent talk discussed ethics for autonomous vehicles as an optimization problem. There can be several imperatives for an AV which are all “correct”, yet in conflict with one another for a vehicle that relies on hard-coded logic.

For example: Follow Traffic safety rules. Stick to the lane. Avoid obstacles. Save most human lives. Save passengers.

How can a vehicle prioritize these? Instead of a case-by-case design, the proposal is to cast the problem in an ethics framework based on optimization of various ideals and constraints with weighted coefficients, then test the outcomes.

The optimization looks to minimize (path_tracking + steering + traffic_laws) subject to the constraint (avoid_obstacles). The equations produce different behaviour when the coefficients are changed.
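A minimal sketch of the idea (all names and cost numbers here are made up for illustration, not from the talk): score candidate trajectories with a weighted cost, treat obstacle avoidance as a hard constraint, and observe that re-weighting the coefficients changes the chosen behaviour.

```python
# Illustrative weighted-cost chooser: obstacle avoidance is a hard
# constraint; the remaining terms trade off via coefficients.
def cost(traj, w_path=1.0, w_steer=1.0, w_law=1.0):
    return (w_path * traj["path_tracking"]
            + w_steer * traj["steering"]
            + w_law * traj["traffic_laws"])

def choose(trajectories, **weights):
    feasible = [t for t in trajectories if not t["hits_obstacle"]]
    return min(feasible, key=lambda t: cost(t, **weights))["name"]

trajectories = [
    {"name": "stay_in_lane", "path_tracking": 0.0, "steering": 0.0,
     "traffic_laws": 0.0, "hits_obstacle": True},   # infeasible: hits obstacle
    {"name": "swerve", "path_tracking": 5.0, "steering": 4.0,
     "traffic_laws": 1.0, "hits_obstacle": False},
    {"name": "cross_double_line", "path_tracking": 2.0, "steering": 1.0,
     "traffic_laws": 8.0, "hits_obstacle": False},
]

print(choose(trajectories))              # default weights favour swerving
print(choose(trajectories, w_law=0.1))   # discounting traffic laws flips the choice
```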

Another consideration is the vehicle’s intent: is it fully in control, or can the human override it? This affects the software assumptions and the system design.

The talk was presented by Sarah Thornton, PhD, Stanford. A related discussion on safety is here: Who gets the blame when driverless cars crash?

Somewhat related is the question of whether computer vision itself operates correctly. There can be adversarial inputs, as discussed in the paper Intriguing properties of neural networks, which discusses blind spots. Generative Adversarial Models are a way to improve the generalization capability of a network by pitting generative against discriminative models. The European Conference on Computer Vision starts today.


Neural Network Training and Inferencing on Nvidia

Nvidia just announced the Tesla P40 and P4 cards for neural-network inferencing applications. Compared to the Tesla P100 released earlier this year, the P40 is targeted at inferencing applications, whereas the P100 was targeted at the more demanding training phase of neural networks. The P40 comes with the TensorRT (real-time) library for fast inferencing (e.g. real-time detection of objects).

Some of the best solutions of hard problems in machine learning come from neural networks, whether in computer vision, voice recognition, games such as Go and other domains. Nvidia and other hardware kits are accelerating AI applications with these releases.

What happens if a neural network draws a bad inference in a critical AI application? Bad inferences have been discussed in the literature, for example in the paper Intriguing properties of neural networks.

There are ways to minimize bad inferences in the training phase, but they are not foolproof – in fact the paper above mentions that bad inferences are low-probability yet dense.

Level 5 autonomous driving is where the vehicle can handle unknown terrain. Most current systems are targeting Level 2 or 3 autonomy. The Tesla Model S’ Autopilot is Level 2.

One answer is to pair the network with a conventional program that checks certain safety constraints. This would make it safer, but this alone is likely insufficient either for achieving Level 5 operation or for providing safety for it.

Automotive and Process Safety Standards

ISO 26262 is a standard for the safety of automotive electric/electronic systems that is adopted by car manufacturers. Its V shape consists of two legs: the first comprises definition, analysis, design, architectural design, development and implementation; the second consists of verification and validation of the software, from unit tests to functional tests, safety tests and system-wide tests. Model-based design is used to reduce the complexity, and these models are now fairly complex. Help with achieving compliance with this standard is one of the value-adds that the Mentor Graphics automotive kit provides.

ISO 26262 is derived from its parent, the IEC 61508 standard, which is titled Functional Safety of Electrical/Electronic/Programmable Electronic Safety-related Systems. This parent standard has variants for safety of automotive, railway, nuclear, manufacturing processes (refineries, petrochemical, chemical, pharmaceutical, pulp and paper, and power) and machinery related electrical control systems. An associated, upcoming standard is the SAE J2980.

An excellent talk today by MIT fellow Ricardo Dematos discussed more comprehensive approaches to automotive safety. It builds on his safety research at MIT, AWS IoT, and our SyncBrake entry for V2V safety at TechCrunch Disrupt 2015.

MICROS Point-of-Sale (POS) attacks

Oracle MICROS Point of Sale systems are again reported to have been attacked. The support site of MICROS was infected by malware that was able to access usernames and passwords and send them to a remote server. This remote server was identified as one previously known to be used by ‘Carbanak’, a cybercrime group.

ZDNet reports that in the last year, dozens of machines at Starwood and Hilton hotels were impacted by malware, with the aim of poaching payment and card data, which can be used or sold on to the highest bidder. The attack on MICROS systems may be behind these.

An advisory by VISA here, discusses two previous malware threats attacking POS systems, Carbanak and MalumPOS.

This report by Symantec describes POS attacks as multi-stage attacks, with threats and mitigation strategies.

In August 2014, the Department of Homeland Security issued an advisory for ‘Backoff’, a POS malware which affected a large number of systems at retailers including Target, PF Chang’s, Neiman Marcus, Michaels, Sally Beauty Supply and Goodwill Industries International. PF Chang’s had a cyber insurance policy which covered direct damages, but not claims filed by Mastercard/Bank of America for credit card fraud reimbursements and reissuance via a “PCI DSS assessment”.

These trends are interesting for a few reasons. First, the recurrent attacks are a reason to accelerate the move to EMV. Second, they give rise to new architectures for payments. Third, they draw attention to blockchain technologies.

Verifone has a Secure Commerce Architecture which sends the payment data directly from the terminal system which receives the card, to the merchant (acquirer) bank, without touching the POS system (the windows computer handling invoicing). This reduces payment fraud and also makes certification for EMVs much easier.


EMV stands for Europay, Mastercard, Visa. After the EMV deadline of Oct 1, 2015, the liability for credit card fraud shifted to whichever party is the least EMV-compliant in a fraudulent transaction. Automated fuel dispensers have until 2017 to make the shift to EMV. ATMs face two fraud liability shift dates: MasterCard’s in October 2016 and Visa’s in October 2017.

With EMV securing the terminals, a rise in online payment fraud is predicted. While the counterfeit opportunity dwindles, the next top three types of credit card fraud – account takeover, card-not-present fraud, and (fake new) application fraud – are rising. The recent MICROS attack fits the pattern of attackers probing for, and finding, the weakest links in the payments chain.


ASN.1 buffer overflow puts networks at risk

An ASN.1 vulnerability was disclosed in a security advisory of 7/18 here. It has to do with length bounds checking in the TLV (tag-length-value) triplet. A fix is provided, but updating a large number of GSM devices is not practical. “It could be triggered remotely without any authentication in scenarios where the vulnerable code receives and processes ASN.1 encoded data from untrusted sources, these may include communications between mobile devices and telecommunication network infrastructure nodes, communications between nodes in a carrier’s network or across carrier boundaries, or communication between mutually untrusted endpoints in a data network.”
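To make the vulnerability class concrete, here is a hypothetical TLV parser sketch (my own illustration, not the affected code) showing the bounds check that must accompany every attacker-supplied length field:

```python
# Illustrative TLV (tag-length-value) parser. The vulnerability class is
# exactly the missing check below: trusting a length field from
# untrusted input and reading (or copying) past the buffer.
def parse_tlv(buf: bytes):
    items, i = [], 0
    while i < len(buf):
        if i + 2 > len(buf):
            raise ValueError("truncated TLV header")
        tag, length = buf[i], buf[i + 1]
        i += 2
        if i + length > len(buf):          # the bounds check that must exist
            raise ValueError("length field exceeds buffer")
        items.append((tag, buf[i:i + length]))
        i += length
    return items

print(parse_tlv(bytes([0x04, 0x03]) + b"abc"))   # [(4, b'abc')]

# A claimed length larger than the remaining buffer is rejected:
try:
    parse_tlv(bytes([0x04, 0xFF, 0x01]))
except ValueError as e:
    print(e)
```

Note that real ASN.1 BER/DER length encoding is more involved (long-form lengths, nested structures); the single-byte length here is a simplification.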

A discussion at Ars Technica here brings up a real exploit against GSM base-station software; it operates below the application layer and so can be exploited against a large number of devices.

A quote from the paper – “When GSM radio stacks were implemented, attacks against end devices were not much of a concern. Hence checks on messages arriving over the air interface were lax as long as the stack passed interoperability tests and certifications. Open-source solutions such as OpenBTS (Base Transceiver Station) allow anyone to run their own GSM network at a fraction of the cost of carrier-grade equipment, using a simple and cheap software-defined radio. This development has made GSM security explorations possible for a significantly larger set of security researchers. Indeed, as the reader will see in the following, insufficient verification of input parameters transmitted over the air interface can lead to remotely exploitable memory corruptions in the baseband stack.”

The cellular baseband stack of most smartphones runs on a separate processor and is significantly less hardened, if at all. GSM does not provide mutual authentication, so there is no protection against fake BTSs.

Lexus bad-update details

A bad update on Lexus cars crashed their in-car entertainment systems, affecting cars from California to Massachusetts. Details were reported by SecurityLedger this week.

Users need to bring their car to a Toyota/Lexus dealer, which resolves the problem via a forced reset that clears the bad data from the system. The bad behavior is due to incorrect handling of error data returned from third-party web services.

“As more automakers embrace over-the-air software updates as a way to push out necessary fixes to vehicle owners, the prospect of unreliable and malicious updates causing real world disruptions has grown. In a March report to Congress (PDF), the U.S. Government Accountability Office (GAO) noted that modern vehicles feature many communications interfaces that are vulnerable to attack, but that measures to address those threats are likely years away, as automakers work to design more secure in-vehicle systems.” – SecurityLedger quote.

Updates, and secure updates in particular, have not been a well-solved problem in the software world. A backup is usually recommended, but not always possible. Bringing the same solutions to cars and IoT seems like a bad idea. The need for secure OTA auto updates has been noted, e.g. in the “Five Star Automotive Cyber Safety Program” here and here. Yet it has not been a prominent part of the automotive manufacturers’ lexicon.

As we move towards autonomous cars and a corresponding increase in complexity, these problems will need to be solved in a more elegant way.

ICSA Internet of Things Security Certification Requirements

ICSA recently announced an Internet of Things testing and certification program. It has six components (highlights in brackets) –

  1. cryptography (FIPS 140-2 crypto algos by default, secure PRNGs)
  2. communications (PKI auth, all traffic must be authorized)
  3. authentication (secure auth, protect auth data, no privilege escalation)
  4. physical security (tamper detection, defense, disable)
  5. platform security (secure boot, secure remote upgrade, DoS defense)
  6. alert/logging (log upgrades, attacks, tampering, admin access)

Their IoT security requirements framework is found here.

This is a great list. Another dimension to think about is the usability of the security: many products ship with security options buried so deep in documentation or UI that a regular user may not configure the device securely, leaving it more open than intended. This has historically been true of a variety of webcams, SCADA systems, wifi routers and other devices.

Security Competition Open Sourced

Facebook made its Capture The Flag (CTF) cybersecurity competition platform open source and available this week.

There are several other CTF projects on github. I like that this approach to cybersecurity gets one thinking like an attacker. The problem is that the attack surface in highly connected systems is not obvious or easily modeled.

How about CTF competitions for IoT Security? There was one in March –



Reactive Microservices

I came across CQRS and attended a talk by James Roper on Lagom for microservices, called “Rethinking REST”, this week. The idea put forth in the talk was that REST services, being synchronous, are not ideal for microservices. Microservices should not block, so something that emphasizes this async aspect would be preferable to REST.

How is this notion of async services different from a pub-sub model? One way it goes beyond pub-sub is by proposing polyglot persistence: different microservices should each use the persistence layer that is optimal for them – relational, NoSQL, time series, event log.

The Lagom architecture is based on the book Reactive Services Architecture, which also suggests CQRS. The book proposes service isolation, and that composition of systems from microservices should be done asynchronously via message passing.

A quote from the book on why bulkheads failed to save the Titanic:

“The Titanic did use bulkheads, but the walls that were supposed to isolate the compartments did not reach all the way up to the ceiling. So when 6 out of its 16 compartments were ripped open by the iceberg, the ship started to tilt and water spilled over from one compartment to the next, until all of the compartments were filled with water and the Titanic sank, killing 1500 people.”

The suggestion is that for higher availability there should be stricter isolation: the individual (micro)services may fail, but the overall system should not be affected. Looking at it this way requires one to examine the system design and its invariants more closely.

Take CQRS as an example. In a query system, data is read, not modified; a query is a search operation over accumulated data – potentially a very large set, with speed (availability) demands. In a command system, there is a more real-time, perhaps collaborative aspect, which leaves most of the data untouched but creates some new data that needs to be recorded (upload this image, send this message). Why should these two very different operations be served by the same backend?
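A minimal CQRS sketch (illustrative only – not Lagom’s API) makes the separation concrete: commands append immutable events to a log, while queries read from a projection built from that log. In a real system each side could use its own optimal store, and the projection would be updated asynchronously.

```python
# Minimal CQRS illustration: the command side appends events to an
# append-only log; the query side serves reads from a projection.
class MessageService:
    def __init__(self):
        self.event_log = []        # write model: append-only event log
        self.by_recipient = {}     # read model: projection optimized for queries

    def send_message(self, sender, recipient, text):   # command
        event = {"type": "MessageSent", "from": sender,
                 "to": recipient, "text": text}
        self.event_log.append(event)
        self._project(event)       # in a real system this runs asynchronously

    def _project(self, event):
        self.by_recipient.setdefault(event["to"], []).append(event["text"])

    def inbox(self, recipient):    # query: reads the projection, never writes
        return self.by_recipient.get(recipient, [])

svc = MessageService()
svc.send_message("alice", "bob", "hi")
svc.send_message("carol", "bob", "lunch?")
print(svc.inbox("bob"))   # ['hi', 'lunch?']
```

Because the query side only ever reads its projection, it can be scaled, cached, or rebuilt from the event log independently of the command side.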

One of the systems that has done separation of concerns remarkably well is AWS. Take the separation of S3 and the database (say DynamoDB). Neither filesystem vendors nor database vendors came up with simple services to solve the problem of an app trying to upload an object and then update a database to record that the object has been uploaded. The failure modes of each service are exposed to the client, and the client is not forced to upload files to S3 via a server (a bottleneck). S3 offers read-after-write consistency for new uploads and eventual consistency for updates. Here the storage object, its universally addressable name, and its properties (backup, encryption, access, versioning, retention policies) are an invariant – a client does not need to fiddle with them, apart from being assured of a certain level of service. One can call it an ingestion system. More on that here.

The takeaway is that microservices should do one thing and do it very well, in a highly available, non-blocking manner. REST services can certainly be non-blocking, but they can also be blocking as described here, which is a problem.

A cornerstone of asynchronous services is messaging. Here’s a talk on Riak, a masterless database and messaging system, running on Cloud Foundry at GE. It is followed by talks on microservices.


S-boxes

In several crypto systems there are P-boxes and S-boxes. P-boxes are permutations of inputs to outputs. What are S-boxes? I’ve known that they are static maps, used for “confusion” of input to output, that they have something to do with Galois fields, and that small changes in them affect the cryptographic properties of a cryptosystem; so they are analyzed by cryptographers in great detail. Their design is a bit of a black box. Even in the cryptol documentation, they are described as a kind of gift from above.

S-box stands for substitution box. The goal of the S-box is that for a small change of input there should be a large change in the output – an avalanche. This property clearly does not hold for pure transposition, substitution, or Vigenère ciphers, where a change of one input letter changes exactly one output letter. Because the property fails, those simple crypto systems can be broken easily by analyzing output/input pairs.

A P-box can be described as multiplication of the input (symbols, not a sequence) by a matrix: the identity matrix with its rows permuted (scrambled) according to the permutation order. So the P-box transform is an invertible linear transform. Can the S-box be another linear transform? Suppose it were linear. One could feed it unit input vectors and determine the rows of the transform, or use Gaussian elimination with known input/output pairs.
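To see why a linear S-box would fail, here is a small sketch (my own example, with a 4-bit P-box): treat the P-box as a linear map over GF(2), recover its matrix by querying it with unit vectors, and confirm the recovered matrix reproduces the box – exactly the attack described above.

```python
# A bit permutation is a linear map over GF(2), so an attacker who can
# query it with unit vectors recovers the whole matrix. Bits are
# modelled as lists of 0/1.
perm = [2, 0, 3, 1]        # example P-box: output bit i takes input bit perm[i]

def pbox(bits):
    return [bits[perm[i]] for i in range(len(perm))]

# "Attack": query with unit vectors to read off the columns of the matrix.
n = len(perm)
columns = [pbox([1 if j == i else 0 for j in range(n)]) for i in range(n)]
matrix = [[columns[j][i] for j in range(n)] for i in range(n)]  # rows of the map

def apply_map(matrix, bits):             # matrix-vector multiply over GF(2)
    return [sum(m * b for m, b in zip(row, bits)) % 2 for row in matrix]

x = [1, 1, 0, 1]
assert apply_map(matrix, x) == pbox(x)   # recovered map reproduces the P-box
print(matrix)
```

Four queries fully expose a 4-bit linear box; this is why the S-box must not be expressible as any such matrix.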

So one would want the S-box to be non-linear. In fact, the AES S-box is a sophisticated permutation map: each byte is mapped to another unique byte in a lookup table. Isn’t a permutation linear? The key is that the bits of an output byte are not linearly related to the bits of the input byte. Looking at the Substitution-Permutation Network design makes it clear that this property breaks Gaussian-elimination techniques on input/output pairs.

With that, here’s a reference which discusses the finite-field math and the transform f_affine( g_finite_field_inverse( input_x ) ) which comprises the S-box design of AES. The design goal is explicitly to maximize non-linearity and beat differential attacks, which analyze differences of outputs against inputs.

The design considers bits as coefficients of a polynomial (1111 = x^3+x^2+x+1), bytes as polynomials of degree at most 7, and defines an invertible multiplication of two bytes as a map to another byte. Irreducible polynomials over a field are those that cannot be factored over the field. A polynomial presupposes multiplication and addition of elements (multiplication is distinct from addition, and specifically is not repeated addition: adding an element to itself here gives zero). For a modulo-prime finite group, multiplication can be defined using the same modulo operator that defines the group (in G7, 3*5 = 15 % 7 = 1). The multiplication that maps (byte × byte → byte) can be defined by taking the product polynomial (of degree at most 14) and then reducing it modulo an irreducible polynomial of degree 8 (in analogy with the modulo group above), giving back a polynomial of degree at most 7, i.e. a byte. Note the modulus used in AES is a specific polynomial of degree 8, m(x), which is not itself a byte. The reduction is done with polynomial long division. For more on GF(2^8), see here.

The irreducibility of the degree-8 m(x) implies that the GCD of m(x) and the product p(x) = a(x)·b(x) of any two non-zero bytes is 1, so m(x) does not cleanly divide any such p(x); the division p(x)/m(x) therefore always leaves a non-zero remainder, hence a non-zero product under the above definition of multiplication. If m(x) had degree 7 it would itself be a byte, and that byte would multiply to give m(x) mod m(x) = 0, so non-zero bytes could multiply to zero, which is not good. GCD = 1 also implies that each element has a multiplicative inverse. The m(x) chosen is a pentanomial with non-zero terms (8, 4, 3, 1, 0), i.e. 100011011 or 0x11B. This design yields modular arithmetic on bytes: you can raise a byte to the power of n and get back another (well-scrambled) byte.
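Here is that multiplication written out (a direct transcription of the definition above, in the shift-and-XOR “peasant” form); the check values {57}·{83} = {c1} and {53}·{ca} = {01} are the worked examples from FIPS-197:

```python
# GF(2^8) multiplication: multiply the polynomials (carry-less), reducing
# modulo m(x) = x^8 + x^4 + x^3 + x + 1 (0x11B) whenever degree 8 is reached.
def gf_mul(a: int, b: int) -> int:
    product = 0
    while b:
        if b & 1:
            product ^= a          # "add" (XOR) a shifted copy of the polynomial
        a <<= 1
        if a & 0x100:             # degree reached 8: reduce by m(x)
            a ^= 0x11B
        b >>= 1
    return product

print(hex(gf_mul(0x57, 0x83)))    # 0xc1, the FIPS-197 worked example
print(hex(gf_mul(0x53, 0xCA)))    # 0x1: {53} and {ca} are inverses
```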

What a cool secret language. Raising to a power is non-linear. But why choose n = -1?

The choice of n = -1 is described in the Rijndael paper here, with a reference to the Nyberg ’94 paper here, which carries the line “The author’s attention to the mapping x ->x^-1 was drawn by C. Carlet. He observed that the high nonlinearity property (i) was actually proven in the work of Carlitz and Uchiyama.” The non-linearity is aimed at defeating linear cryptanalysis, where a few input/output pairs can be used to guess the key. The design of the scheme in the Nyberg paper is aimed at defeating differential cryptanalysis by flattening out the variation in the output so it is close to noise.
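Putting the pieces together, here is a sketch of the S-box construction f_affine(g_inverse(x)) described above: the inverse computed as x^254 (every non-zero element satisfies x^255 = 1), followed by the affine map with constant 0x63. The sample outputs match the published AES S-box table.

```python
def gf_mul(a, b):                       # multiply in GF(2^8) mod 0x11B
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return p

def gf_inv(x):                          # x^254 = x^-1 (multiplicative order 255)
    r = 1
    for _ in range(254):
        r = gf_mul(r, x)
    return r

def affine(b):                          # AES affine map with constant 0x63
    out = 0
    for i in range(8):
        bit = ((b >> i) ^ (b >> ((i + 4) % 8)) ^ (b >> ((i + 5) % 8))
               ^ (b >> ((i + 6) % 8)) ^ (b >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
        out |= bit << i
    return out

def sbox(x):
    return affine(gf_inv(x) if x else 0)   # 0 has no inverse; mapped via 0

print(hex(sbox(0x53)))   # 0xed, matching the published AES S-box
print(hex(sbox(0x00)))   # 0x63, the affine constant itself
```

(Computing the inverse by 254 repeated multiplications is deliberately naive; real implementations use a lookup table or square-and-multiply.)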

Yet, if the letter “s” is common, the corresponding S-box letter (from the lookup table) will also be common. This S-box must be only part of an encryption scheme.

The Nyberg paper has the following line: “However, the high nonlinear order of the inversion mapping and .. comes into effect if these mappings are combined with appropriately chosen linear or affine permutations which may vary from round to round and depend on the secret key.” A counter paper is here – On Exact Algebraic [Non-]Immunity of S-boxes Based on Power Function.

A clear explanation of the nonlinearity of a function F, as the minimum Hamming distance between vectors reachable by linear functions and vectors reachable by the non-linear function F, is found in The Design of S-Boxes by Jennifer Miuling Cheung (2010), p. 13. For two input bits there are 8 affine functions (0, 1, x1, x2, x1+1, x2+1, x1+x2, x1+x2+1) and 4 input sequences 00, 01, 10, 11; the output of each of the 8 functions on the 4 input sequences gives 8 output vectors of length 4. But there are 16 vectors of length 4; of these, the affine functions reach only 8, and the remaining 8, unreachable by affine functions, are termed nonlinear.
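That count can be checked by brute force – enumerate the 8 affine functions a·x1 + b·x2 + c over GF(2), evaluate each on the 4 inputs, and confirm that only 8 of the 16 possible length-4 output vectors are reached:

```python
from itertools import product

inputs = list(product([0, 1], repeat=2))       # the 4 input pairs 00, 01, 10, 11

# All affine functions f(x1, x2) = a*x1 + b*x2 + c over GF(2): 8 of them.
reachable = set()
for a, b, c in product([0, 1], repeat=3):
    reachable.add(tuple((a * x1 + b * x2 + c) % 2 for x1, x2 in inputs))

nonlinear = set(product([0, 1], repeat=4)) - reachable
print(len(reachable), len(nonlinear))          # 8 8
```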

This paper discusses constant-time implementations of AES S-boxes. Instead of a lookup table, they rely on doing the actual multiplication in the finite field: “During the offline phase, we precompute values H, X·H, X^2·H, …, X^127·H. Based on this precomputation, multiplication of an element D with H can be computed using a series of xors conditioned on the bits of D”

AES Pipeline

AES divides the input into chunks of 128 bits. 128 bits is 16 bytes. 16 bytes can be arranged in a 4×4 matrix and are represented in this two dimensional form by AES. This matrix is called the “state”.

Using cryptol, the AES algorithm is described with the following functions

AESRound : (RoundKey, State) -> State

AESRound (rk, s) = AddRoundKey (rk, MixColumns (ShiftRows (SubBytes s)))

The final round leaves out the MixColumns transform.

AESFinalRound : (RoundKey, State) -> State
AESFinalRound (rk, s) = AddRoundKey (rk, ShiftRows (SubBytes s))

The inverse transform is as follows

AESInvRound : (RoundKey, State) -> State

AESInvRound (rk, s) =
InvMixColumns (AddRoundKey (rk, InvSubBytes (InvShiftRows s)))

AESFinalInvRound : (RoundKey, State) -> State

AESFinalInvRound (rk, s) = AddRoundKey (rk, InvSubBytes (InvShiftRows s))
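As a companion to the cryptol above, here is a small Python sketch of one of these transforms, ShiftRows (row r of the state rotates left by r) and its inverse, on a 4×4 state of byte values:

```python
# ShiftRows on the 4x4 AES state: row r rotates left by r positions;
# the inverse rotates right by the same amount.
def shift_rows(state):
    return [row[r:] + row[:r] for r, row in enumerate(state)]

def inv_shift_rows(state):
    return [row[-r:] + row[:-r] if r else row for r, row in enumerate(state)]

state = [[0, 1, 2, 3],
         [4, 5, 6, 7],
         [8, 9, 10, 11],
         [12, 13, 14, 15]]

shifted = shift_rows(state)
print(shifted[1])                          # [5, 6, 7, 4]: row 1 rotated left by 1
assert inv_shift_rows(shifted) == state    # the two transforms are inverses
```

(Note AES actually stores the state column-major, one column per 4 input bytes; the row values here are just for illustration.)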

The ‘DROWN’ attack on SSL and its extension to QUIC

The DROWN attack was recently shown to make HTTPS sites that support SSLv2 vulnerable to MITM attacks. An attack extension to QUIC is discussed in section 7 of their paper, along the same lines referenced earlier, which makes it relevant to current discussions of TLS.

Some background. Consider a simple encryption scheme that encrypts a message M to a ciphertext C. Should it always encrypt M to the same C? One might think yes. But then an attacker could build extensive tables of encryptions of messages (Mi → Ci), look up an intercepted ciphertext Ci, and infer what Mi was. Second, even without a lookup hit, the collected maps could be used to deterministically modify Ci to match another valid Mi. This latter property is called malleability of encryption. Malleability may be desirable in some cases (homomorphic encryption), but in general it weakens encryption.

So determinism is bad, and we want to make the encryption non-deterministic. We can do so by adding some random bytes to the beginning of the message every time we encrypt. After decryption, we remove those random bytes and get back our message. These random bytes are called an initialization vector, or, in a related role, ‘padding’.
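A toy illustration of the difference (not a real cipher – the keystream construction here is made up purely to show determinism versus randomization): encrypting the same message twice yields identical ciphertexts in the deterministic scheme, enabling the table attack above, while prefixing a random IV makes every encryption distinct.

```python
import os

KEY = bytes(range(32))   # toy key material, for illustration only

def keystream(iv: bytes, n: int) -> bytes:
    # Toy keystream mixing the key with the IV; NOT cryptographically sound.
    return bytes(KEY[i % len(KEY)] ^ iv[i % len(iv)] for i in range(n))

def encrypt_deterministic(msg: bytes) -> bytes:
    # Fixed "IV": the same message always yields the same ciphertext.
    return bytes(m ^ k for m, k in zip(msg, keystream(b"\x00", len(msg))))

def encrypt_randomized(msg: bytes) -> bytes:
    iv = os.urandom(16)
    body = bytes(m ^ k for m, k in zip(msg, keystream(iv, len(msg))))
    return iv + body                      # receiver strips the IV to decrypt

msg = b"attack at dawn"
print(encrypt_deterministic(msg) == encrypt_deterministic(msg))  # True: lookup tables work
print(encrypt_randomized(msg) == encrypt_randomized(msg))        # False (with overwhelming probability)
```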

The way padding is added, is important. If the attacker is able to infer that certain messages do not decrypt to the expected padding format (via error messages returned from the server), then he can narrow the range of valid messages. A ‘padding oracle’ is a server that freely answers whether an encrypted message is correctly padded or not. This is the basis of the Bleichenbacher attack.

The SSLv2 servers use an old form of RSA that is unpadded, deterministic, and malleable. This property is the basis of many attacks that involve a protocol downgrade (e.g. from a PFS cipher to an RSA cipher, or from TLS to SSL) or that attack a known weak protocol. The BREACH attack used the fact that compression changes the size of a message in a predictable way to infer bytes of a key. Compression is disabled in TLS 1.2.

But why should this attack be possible with newer protocols like QUIC? QUIC does not even support PKCS#1 v1.5 encryption.

QUIC achieves low latency by caching TLS state on the client. If any part of the protocol uses deterministic behaviour, that can be exploited. The Jager paper makes two observations: 1. server authentication is performed only in the connect phase, not in the repeat phase; and 2. the signed SCFG message is independent of the client request, which makes it possible for the server, or an attacker, to precompute it. So there is exploitable determinism.

Certain fixes have been proposed in QUIC v31: switching the server signature from a static signature of the server config to a signature of the server config together with a hash of the client hello message. This is claimed to eliminate the DROWN vulnerability.