Zero Trust Networks

In contrast to the "inside" and "outside" notion of traditional firewalls and perimeter defenses, the Zero Trust Network idea has its origin in the cloud-first, mobile-first world, where a person carrying a mobile device can be anywhere (inside or outside the enterprise) and needs to be seamlessly and securely connected to online services.

The essential idea appears to be device authentication coupled with a second factor in the shape of an easy-to-remember password, with backend security smarts to identify the accessing device. More importantly, every service access is authenticated as if it came from outside, instead of some services being treated as internal and therefore less protected.

Some properties of zero trust networks:

  • Network locality based access control is insufficient
  • Every device, user and service is authenticated
  • Policies are dynamic – they gather and utilize data inputs for making access control decisions
  • Attacks from trusted insiders are mitigated

This is a big change for the many networks that have network-based defense at their core (for good reason, as it was cost effective). To create a zero-trust network, a starting point is to identify, enumerate and sequence all network flows.

I attended a talk by Centrify on this topic, which resonated with experiences in cloud, mobile and fog systems.

Related efforts in Kubernetes – Progress Toward Zero Trust Kubernetes Networks, Istio Service Mesh, API Gateway to Service Mesh. One can contrast the API gateway, which is present only at the ingress point of a cloud, with the Zero-trust/Service-mesh/Sidecar approach, where every microservice building block gets its own external proxy and management 'API'. The latter raises latency concerns for real-time applications, since the new sidecar proxies sit in the data path. One benefit of the service mesh is that it provides a mechanism to apply service-to-service security in a uniform manner.

The key original motivation behind Istio, per the second presentation by Lyft above, was greater observability and reliability across a complex cluster of microservices. This strikes me as a stronger motivating use-case for the technology than the added security. From the security point of view, there is a parallel between the Istio approach and the SDN problem statement of a horizontal, ubiquitous security layer. Greater visibility is also a motivation behind the P4 programming language, presented in the disaggregated storage talk on the protocol-independent switch architecture (PISA) here – one of the things it enables is in-band telemetry.

SCRAM: Salted Challenge Response Authentication Mechanism

SCRAM is an interesting proposal (RFC 5802) that aims to avoid sending passwords across the wire. It does not appear to create additional requirements for certificates or shared secrets, so let's see how it works.

The server is required to know the username in advance, but not the password; instead it stores a hash derived from the password, along with a per-user salt and an iteration count, which are used to create a challenge.

The client sends the username and a nonce. The server retrieves the salt and the iteration count and sends these back to the client as a challenge. The client hashes the password with the agreed-upon hash function, using the salt and the iteration count in the calculation, and sends the result back to the server. The server is able to validate the correctness of the hashed password with the information it has. The server then sends back a hash which the client can check to validate the server.
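As a rough sketch of the mechanics (not the full message exchange), the client-side computation from RFC 5802 can be expressed with Python's hashlib and hmac modules; the password, salt, iteration count and auth message below are illustrative placeholders:

import base64
import hashlib
import hmac
import os

def hmac_sha1(key, msg):
    return hmac.new(key, msg, hashlib.sha1).digest()

# Illustrative inputs; in the real protocol the salt and iteration count come
# from the server's challenge, and the auth message is built from the
# exchanged client/server messages and nonces.
password = b"correct horse"
salt = os.urandom(16)
iterations = 4096
auth_message = b"client-first-bare,server-first,client-final-without-proof"

# SaltedPassword := Hi(password, salt, i), i.e. PBKDF2 with HMAC-SHA-1
salted_password = hashlib.pbkdf2_hmac("sha1", password, salt, iterations)

client_key = hmac_sha1(salted_password, b"Client Key")
stored_key = hashlib.sha1(client_key).digest()          # what the server stores
client_signature = hmac_sha1(stored_key, auth_message)
client_proof = bytes(a ^ b for a, b in zip(client_key, client_signature))

server_key = hmac_sha1(salted_password, b"Server Key")
server_signature = hmac_sha1(server_key, auth_message)  # proves the server knows ServerKey

print(base64.b64encode(client_proof).decode())

The server, holding only StoredKey and ServerKey, recomputes ClientSignature, XORs it with the received proof to recover ClientKey, hashes it and compares against StoredKey; the password itself never crosses the wire.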

There are several issues with it – the initial registration flow is left out; the requirements on the client and server to issue good nonces and maintain unique salts and iteration counts are high; and the server database itself must be kept secure – an exfiltration could enable brute-force attacks. It also uses SHA-1, which is weak. The password is fixed, and an update method would need to be designed for a full system.

Still it is interesting as a way to remove passwords being sent over the wire.

The protocol is used in XMPP as a standard mechanism for authentication.

 

Traffic limits with HAProxy stick-table

A traffic rate-limiting feature is required to keep an HTTP website backend safe from abusive or malfunctioning clients. This requires the ability to track user sessions of a particular type and/or from a given IP address. HAProxy is an HTTP proxy which, when configured as a reverse proxy to protect a website, receives client requests in its frontend and sends those requests to servers in its backend. The config file has corresponding frontend and backend sections. HAProxy also has an in-memory table to store state related to incoming HTTP connections, indexed by a key such as the client IP address. This table is called a stick table – it is enabled using the 'stick-table' directive in the HAProxy config file.

The stick-table directive allows specifying the key, the size of the table, the duration in seconds that an entry (key) is kept, and various counters such as currently active connections, connection rate, HTTP request rate, HTTP error rate, etc.

Stick tables are very useful for rate-limiting traffic, and for tagging traffic that meets certain criteria (such as a high connection or error rate) with a header that the backend can use to log the traffic.

The origin of this rate-limiting feature request, along with an example, is at https://blog.serverfault.com/2010/08/26/1016491873/ . Server Fault is a high-traffic website, so the feature working for them is a good indication.

frontend http
    bind *:2550

    stick-table type ip size 200k expire 10m store gpc0

    # check the source before tracking counters, that will allow it to
    # expire the entry even if there is still activity.
    acl whitelist src 192.168.1.154
    acl source_is_abuser src_get_gpc0(http) gt 0
    use_backend ease-up-y0 if source_is_abuser
    tcp-request connection track-sc1 src if ! source_is_abuser

    acl is_test1 hdr_sub(host) -i test1.com
    acl is_test2 hdr_sub(host) -i test2.com

    use_backend test1 if is_test1
    use_backend test2 if is_test2

backend test1
    stick-table type ip size 200k expire 30s store conn_rate(100s),bytes_out_rate(60s)
    acl whitelist src 192.168.1.154

    # values below are specific to the backend
    tcp-request content  track-sc2 src
    acl conn_rate_abuse  sc2_conn_rate gt 3
    acl data_rate_abuse  sc2_bytes_out_rate  gt 20000000

    # abuse is marked in the frontend so that it's shared between all sites
    acl mark_as_abuser   sc1_inc_gpc0 gt 0
    tcp-request content  reject if conn_rate_abuse !whitelist mark_as_abuser
    tcp-request content  reject if data_rate_abuse mark_as_abuser

    server local_apache localhost:80

Note that the frontend and backend sections each have their own stick-table declarations.

A general strategy would be to allow enough buffer for legitimate traffic to pass, drop abnormally high traffic, and flag intermediate-risk traffic to the backend so it can either drop it or log the request for appropriate action, including potentially adding the IP to an abusers list for correlation, reverse lookup and other analysis. These objectives are achievable with stick tables.
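As a small illustration of the monitoring side, the table contents can be inspected at runtime with the 'show table' command on HAProxy's stats socket. The sketch below assumes the socket has been enabled with a 'stats socket /var/run/haproxy.sock' line in the global section; the socket path and table name are placeholders:

import socket

def show_stick_table(sock_path="/var/run/haproxy.sock", table="http"):
    # Send a runtime API command over the UNIX stats socket and return the reply.
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(sock_path)
    s.sendall(("show table %s\n" % table).encode())
    data = b""
    while True:
        chunk = s.recv(4096)
        if not chunk:
            break
        data += chunk
    s.close()
    return data.decode()

if __name__ == "__main__":
    print(show_stick_table())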

An overview of the HAProxy config file with the sections global, defaults, frontend, backend is here.

Stick tables use elastic binary trees (ebtree):

https://github.com/haproxy/haproxy/blob/master/include/types/stick_table.h

https://github.com/haproxy/haproxy/blob/master/src/stick_table.c

https://wtarreau.blogspot.com/2011/12/elastic-binary-trees-ebtree.html

Related, for analysis of packet captures in DDoS context, a useful tool is python dpkt – https://mmishou.wordpress.com/2010/04/13/passive-dns-mining-from-pcap-with-dpkt-python .


Hatman, Triton ICS Malware Analysis

A Triconex industrial controller provides triple modular redundancy and 2-out-of-3 consensus-vote based control. The design has its origins in the 1980s industrial need for safety in industrial controllers. The product was acquired by Schneider via Invensys in 2014. The Hatman/Triton malware framework targeting this specific controller came to light in late 2017. The Triconex is programmed with TriStation, a Windows application which integrates with the Windows directory and allows programming in FBD, LD, ST and CEM.

From the Schneider Electric, Accenture and Mandiant analyses of the malware, more technical details appeared recently. A previous paper appeared in IEEE, Jan 2017. A brief summary is below.

Access to the controller network is necessary, and the Triconex controller needs to be in Program mode. A malware agent program, TriLogger, running on Windows in the same network talks over the Tricon protocol to program the Triconex controller and install/deploy the control payload program. The malware payload then runs like a regular program on the controller, on every scan cycle – running in parallel in three versions.

Once on the controller, the malware looks for a way to elevate its privilege level. It starts observing the runtime, including memory inspections. A memory backdoor is attempted, but a probable error-handling mistake prevents it from working. Then, to be able to access the firmware, it takes advantage of a zero-day vulnerability in the firmware. It is able to install itself in the firmware, overwriting a network function call. In the end it installs a remote access trojan (RAT) to allow remote access to the controller. This could have been a vector to download further payloads, but no evidence was found that this RAT was actually used. It attempts to remove traces of itself after installation.

Source code of the program is at  https://github.com/ICSrepo/TRISIS-TRITON-HATMAN .

Zero-day attacks are a continuing challenge, since by definition they are not widely known before they are used in an attack. However, a secure-by-design approach reduces the attack surface for exploits. There were opportunities to detect the malware on the network and on the Windows host.

Update: an ICS-CERT advisory for Triton appears at https://ics-cert.us-cert.gov/advisories/ICSA-18-107-02 and "Targeted Cyber Intrusion Detection and Mitigation Strategies" at https://ics-cert.us-cert.gov/tips/ICS-TIP-12-146-01B

Javascript Timing and Meltdown

In response to the Meltdown/Spectre side-channel vulnerabilities, which are based on fine-grained observation of the CPU to infer the cache state of an adjacent process or VM, one mitigation response by browsers was to reduce the time resolution of various time APIs, especially in JavaScript.

The authors responded with alternative sources of fine-grained timing available to browsers. An interpolation method allows obtaining a resolution of 15 μs from a timer that is rounded down to multiples of 100 ms.
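A minimal sketch of the interpolation idea, written in Python purely to illustrate the principle (the paper's version runs as JavaScript in the browser): busy-count between two edges of a coarse clock, and the count recovered per tick gives the sub-tick resolution.

import time

TICK = 0.001  # pretend the only clock available has 1 ms resolution

def coarse_now():
    # simulated degraded timer: round the real clock down to the tick size
    return int(time.perf_counter() / TICK) * TICK

def counts_per_tick():
    # wait for an edge of the coarse clock, then count loop iterations
    # until the next edge; that count calibrates the interpolation
    start = coarse_now()
    while coarse_now() == start:
        pass
    edge, count = coarse_now(), 0
    while coarse_now() == edge:
        count += 1
    return count

cpt = counts_per_tick()
print("interpolated resolution ~ %.2e s" % (TICK / cpt))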

The JavaScript high-resolution time API is still widely available and described at https://www.w3.org/TR/hr-time/ with a reference to previous work on cache attacks in "Practical cache attacks in JS".

A meltdown PoC is at https://github.com/gkaindl/meltdown-poc, to test the timing attack in its own process. The instruction RDTSC returns the Time Stamp Counter (TSC), a 64-bit register that counts the number of cycles since reset, and so has a resolution of 0.5ns on a 2GHz CPU.

#include <stdio.h>
#include <x86intrin.h>   /* provides __rdtsc() on gcc/clang */

int main(void) {
    /* read the 64-bit Time Stamp Counter */
    unsigned long long tsc = __rdtsc();
    printf("%llu\n", tsc);
    return 0;
}

One Time Passwords for Authentication

A MAC or Message Authentication Code protects the integrity and authenticity of a message by allowing verifiers to detect changes to the message content. It requires a key-generation algorithm that produces a random key K, a signing algorithm which takes K and message M as input and produces signature S, and a verifying algorithm which takes K, M and S as input and produces a binary decision to accept or reject the message. Unlike a digital signature, a MAC typically does not provide non-repudiation. It is also called a protected checksum. The sender and the recipient share the secret key K.

HMAC: so-called hashed MAC because it uses a cryptographic hash function, such as MD5 or SHA, to create a MAC. The computed value is something only someone with the secret key can compute (sign) and check (verify). HMAC uses an inner key and an outer key to protect against length-extension and collision attacks on simple MAC implementations (RFC 2104). It is a type of nested MAC (NMAC) where both inner and outer keys are derived from the same key, in a way that keeps the derived keys independent.
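A sketch of that nested construction, HMAC(K, M) = H((K ⊕ opad) || H((K ⊕ ipad) || M)), checked against Python's hmac module (SHA-256 here, with its 64-byte block size):

import hashlib
import hmac

def my_hmac_sha256(key, msg):
    block = 64                                  # SHA-256 block size in bytes
    if len(key) > block:
        key = hashlib.sha256(key).digest()      # overly long keys are hashed first
    key = key.ljust(block, b"\x00")             # then zero-padded to the block size
    ipad = bytes(b ^ 0x36 for b in key)         # inner key
    opad = bytes(b ^ 0x5c for b in key)         # outer key
    inner = hashlib.sha256(ipad + msg).digest()
    return hashlib.sha256(opad + inner).digest()

key, msg = b"secret", b"message"
assert my_hmac_sha256(key, msg) == hmac.new(key, msg, hashlib.sha256).digest()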

HOTP: HMAC-based One-Time Password. HOTP is based on an incrementing counter. The counter serves as the message M, and when run through the HMAC it produces a random set of bytes, which can be verified by the receiving party. The receiving party keeps a synchronized counter, so the message M = C does not need to be sent on the wire (RFC 4226).

TOTP: Time-based One-Time Password. TOTP combines a secret key with the current timestamp using a cryptographic hash function to generate a one-time password. Because network latency and out-of-sync clocks can result in the recipient having to try a range of possible times to authenticate against, the timestamp typically increases in 30-second steps. Here the requirement to keep the counter synchronized is replaced with time synchronization (RFC 6238).
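A sketch of HOTP, and of TOTP built on top of it, following the HMAC-SHA-1 and dynamic-truncation steps of RFC 4226 (the shared secret below is the RFC's published test key, used here only as a placeholder):

import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    # HMAC-SHA-1 over the 8-byte big-endian counter, then dynamic truncation
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def totp(secret, step=30, digits=6):
    # TOTP is HOTP with the counter replaced by the number of 30-second steps since the epoch
    return hotp(secret, int(time.time()) // step, digits)

secret = b"12345678901234567890"
print(hotp(secret, 0))   # 755224, per the RFC 4226 test vectors
print(totp(secret))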

 

ROCA, TPM chips and Fast Primes

This issue affects IoT devices relying on TPMs for security, especially those that are remote and hard to update with a firmware update.

ROCA: Return Of the Coppersmith Attack.

A ‘Fast Primes’ algorithm is used to generate primes, but since they are not uniformly distributed, one can use knowledge of the public keys to guess the private keys. Primality testing was discussed in this blog here.

From the full paper here:  Practical Factorization of Widely Used RSA Moduli

  • All RSA primes (as well as the moduli) generated by the RSALib have the following form: p = k*M + (65537^a mod M). Because of this, the public modulus N = p*q satisfies N ≡ 65537^c (mod M) for some c.
  • The integers k, a are unknown, and RSA primes differ only in their values of a and k for keys of the same size. The integer M is known and equal to some primorial M = Pn# (the product of the first n successive primes), and is related to the size of the required key. The attack replaces this with a smaller M' which still satisfies the prime-generation equation above, enabling guessing of a. A toy sketch of this key structure follows this list.
  • Coppersmith-algorithm based RSA attacks are typically used in scenarios where we know partial information about the private key (or message) and want to compute the rest. The given problem is solved in three steps:

    problem → f(x) ≡ 0 (mod p) → g(x) = 0 → x0

    p is an unknown prime and x0 is a small root of the equation. Code based on SageMath (python) – https://github.com/mimoo/RSA-and-LLL-attacks/blob/master/coppersmith.sage

  • The LLL is a fascinating algorithm which ‘reduces’ a lattice basis b0, · · · , bn−1. It computes an alternative basis b0′, · · · , bn−1′ of the lattice which is smaller than the original basis. Coppersmith’s algorithm utilizes the LLL algorithm.
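A toy sketch (with deliberately tiny, insecure parameters) of why keys of this form are fingerprintable: both primes lie in the subgroup generated by 65537 modulo the primorial M, so N mod M must itself be a power of 65537, which can be tested without factoring. The prime-generation shape mirrors the paper's equation; everything else here is illustrative.

import random
from math import prod

def is_prime(n):
    # Miller-Rabin, adequate for a toy example
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % small == 0:
            return n == small
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(20):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

M = prod([2, 3, 5, 7, 11, 13])    # toy primorial; real keys use a much larger M

def roca_style_prime(bits=48):
    # p = k*M + (65537^a mod M), retried until p is prime
    while True:
        a = random.randrange(1, M)
        k = random.randrange(2 ** (bits - M.bit_length()), 2 ** (bits - M.bit_length() + 1))
        p = k * M + pow(65537, a, M)
        if is_prime(p):
            return p

p, q = roca_style_prime(), roca_style_prime()
N = p * q

# Fingerprint: does some c exist with 65537^c == N (mod M)?  Brute force suffices for the toy M.
print(any(pow(65537, c, M) == N % M for c in range(M)))    # True for ROCA-style keys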

 

Discussion of Coppersmith attack in “Twenty Years of Attacks on the RSA Cryptosystem”.

A statement by Infineon is here. A discussion on this appeared on crypto.stackexchange.com at https://crypto.stackexchange.com/questions/52292/what-is-fast-prime .

The vulnerability affects HP, Lenovo and Fujitsu which OEM Infineon TPMs. It also affected Gemalto IDPrime smartcards which used the Infineon library.

Unrelated to this but a wifi attack appeared around the same time.