Category: Uncategorized

AES Pipeline

AES divides the input into chunks of 128 bits, i.e. 16 bytes. The 16 bytes are arranged in a 4×4 matrix, and AES represents them in this two-dimensional form. This matrix is called the “state”.

Using Cryptol, the AES round transformations can be described with the following functions:

AESRound : (RoundKey, State) -> State

AESRound (rk, s) = AddRoundKey (rk, MixColumns (ShiftRows (SubBytes s)))

The final round leaves out the MixColumns transform.

AESFinalRound : (RoundKey, State) -> State
AESFinalRound (rk, s) = AddRoundKey (rk, ShiftRows (SubBytes s))

The inverse transforms are as follows:

AESInvRound : (RoundKey, State) -> State

AESInvRound (rk, s) =
InvMixColumns (AddRoundKey (rk, InvSubBytes (InvShiftRows s)))

AESFinalInvRound : (RoundKey, State) -> State

AESFinalInvRound (rk, s) = AddRoundKey (rk, InvSubBytes (InvShiftRows s))
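The composition of these rounds can be mirrored in a Python sketch. Only ShiftRows and AddRoundKey below are the real transforms; the S-box and the column mix are simplified invertible placeholders (assumptions for illustration), so this shows the pipeline structure above, not real AES.

```python
def sub_bytes(rows):        # placeholder S-box: add 1 mod 256 (NOT the real AES S-box)
    return [[(b + 1) % 256 for b in row] for row in rows]

def inv_sub_bytes(rows):
    return [[(b - 1) % 256 for b in row] for row in rows]

def shift_rows(rows):       # real ShiftRows: rotate row r left by r positions
    return [row[r:] + row[:r] for r, row in enumerate(rows)]

def inv_shift_rows(rows):
    return [row[-r:] + row[:-r] if r else row for r, row in enumerate(rows)]

def mix_columns(rows):      # placeholder mix: rotate each column down by one (NOT GF(2^8) math)
    return [rows[(r - 1) % 4] for r in range(4)]

def inv_mix_columns(rows):
    return [rows[(r + 1) % 4] for r in range(4)]

def add_round_key(rk, rows):  # XOR the round key into the state (its own inverse)
    return [[b ^ k for b, k in zip(row, krow)] for row, krow in zip(rows, rk)]

def aes_round(rk, s):         # mirrors AESRound above
    return add_round_key(rk, mix_columns(shift_rows(sub_bytes(s))))

def aes_final_round(rk, s):   # mirrors AESFinalRound (MixColumns left out)
    return add_round_key(rk, shift_rows(sub_bytes(s)))
```

Each placeholder is a bijection on the state, so every step is individually invertible, which is all the round structure requires.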

The ‘DROWN’ attack on SSL and its extension to QUIC

The DROWN attack recently showed that HTTPS sites supporting SSLv2 are vulnerable to MITM attacks. An extension of the attack to QUIC is discussed in section 7 of the paper, which makes it relevant to current discussions of TLS.

Some background. Consider a simple encryption scheme that encrypts a message M to a ciphertext C. Should it always encrypt M to the same C? One might think yes. But then an attacker could build extensive tables of encryptions (Mi → Ci), look up an intercepted ciphertext Ci, and infer what Mi was. Second, even without a lookup hit, the collected mappings could be used to deterministically modify Ci so that it matches another valid Mi. This latter property is called malleability of encryption. Malleability may be desirable in some cases (homomorphic encryption), but in general it weakens encryption.
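The table attack can be sketched in a few lines. Here SHA-256 under a known public key stands in for a deterministic public-key encryption (an assumption for illustration): since anyone holding the public key can reproduce the exact ciphertext for a guessed message, the attacker precomputes Mi → Ci and inverts by lookup.

```python
import hashlib

def det_encrypt(pub_key, msg):
    # stand-in for deterministic public-key encryption:
    # the same (key, message) always yields the same ciphertext
    return hashlib.sha256(pub_key + msg).hexdigest()

pub_key = b"recipient-public-key"   # public, so the attacker has it too

# attacker precomputes a table Mi -> Ci for likely messages...
guesses = [b"BUY", b"SELL", b"HOLD"]
table = {det_encrypt(pub_key, m): m for m in guesses}

# ...then recovers an intercepted ciphertext's plaintext by lookup
intercepted = det_encrypt(pub_key, b"SELL")
recovered = table.get(intercepted)
```

No key material is broken here; determinism alone is what leaks the message.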

So determinism is bad and we want to make the encryption non-deterministic. We can do so by adding some random bytes to the beginning of the message every time we encrypt. After decryption, we remove those random bytes and get back our message. Depending on the scheme, these random bytes are called an initialization vector or (in RSA schemes such as PKCS#1) random padding.

The way padding is added is important. If the attacker is able to infer that certain ciphertexts do not decrypt to the expected padding format (via error messages returned from the server), then he can narrow the range of valid messages. A ‘padding oracle’ is a server that freely answers whether an encrypted message is correctly padded or not. This is the basis of the Bleichenbacher attack.
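A minimal padding-oracle sketch: the scheme below uses a toy XOR “block cipher” with PKCS#7-style padding (both assumptions for illustration; real attacks like Bleichenbacher's target RSA PKCS#1 v1.5 and are more involved). The attacker never sees the key, only yes/no padding answers, yet recovers the last plaintext byte.

```python
import os

BLOCK = 8
KEY = os.urandom(BLOCK)          # server-side secret

def toy_block(x):                # stand-in block cipher: XOR with the key
    return bytes(a ^ b for a, b in zip(x, KEY))

def pad(p):                      # PKCS#7-style padding
    n = BLOCK - len(p) % BLOCK
    return p + bytes([n]) * n

def unpad(p):
    n = p[-1]
    if not 1 <= n <= BLOCK or p[-n:] != bytes([n]) * n:
        raise ValueError("bad padding")
    return p[:-n]

def encrypt(p):                  # CBC-style single block: C = E(P xor IV)
    iv = os.urandom(BLOCK)
    return iv, toy_block(bytes(a ^ b for a, b in zip(pad(p), iv)))

def padding_oracle(iv, c):       # the only thing the server reveals
    try:
        unpad(bytes(a ^ b for a, b in zip(toy_block(c), iv)))
        return True
    except ValueError:
        return False

def recover_last_byte(iv, c):    # attacker tweaks the IV's last byte
    for g in range(256):
        if padding_oracle(iv[:-1] + bytes([g]), c):
            return g ^ 0x01 ^ iv[-1]   # decrypted last byte was 0x01
```

The 256 queries turn the yes/no answers into one plaintext byte; iterating the idea peels off the whole block.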

The SSLv2 servers use an old form of RSA with PKCS#1 v1.5 padding that is deterministic and malleable. This property is the basis of many attacks that involve a protocol downgrade (e.g. a PFS cipher to an RSA cipher, or TLS to SSL), or that target a known weak protocol. The BREACH attack used the fact that compression changes the size of a message in a predictable way to infer secret bytes. TLS-level compression is disabled in TLS 1.2 deployments for this reason.

But why should this attack be possible with newer protocols like QUIC? QUIC does not even support PKCS#1 v1.5 encryption.

QUIC achieves low latency by caching TLS state on the client. If any part of the protocol behaves deterministically, it can be exploited. The Jager paper makes two observations: 1. server authentication is performed only in the connect phase, not in the repeat phase, and 2. the signed SCFG message is independent of the client request, which makes it possible for the server (or an attacker) to precompute it. So there is exploitable determinism.

Certain fixes have been proposed in QUIC v31: switch the server signature from a static signature over the server config to a signature over the server config together with a hash of the client hello message. This is claimed to eliminate the DROWN vulnerability.
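The proposed fix can be sketched as follows, with an HMAC standing in for the server's real signature scheme (an assumption for illustration). Binding the signature to a hash of the client hello means it can no longer be precomputed independently of the client.

```python
import hashlib
import hmac

SERVER_KEY = b"server-private-key"   # stand-in: HMAC key in place of a signing key

def sign_scfg_static(scfg):
    # old behaviour: the signature depends only on the server config,
    # so it can be precomputed once and replayed to any client
    return hmac.new(SERVER_KEY, scfg, hashlib.sha256).hexdigest()

def sign_scfg_bound(scfg, client_hello):
    # proposed fix: the signature also covers a hash of the client hello,
    # tying it to this particular client's request
    msg = scfg + hashlib.sha256(client_hello).digest()
    return hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()
```

With the bound form, two different client hellos yield two different signatures, removing the exploitable determinism.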

Bytecode VMs

I came to be looking at different language VMs recently – the Erlang BEAM and the Smalltalk VM. The Smalltalk VM was a bytecode-based stack VM (just like the Java VM) and likely influenced the VM of Erlang.

Java bytecode verification checks (and vulnerabilities in bytecode loaders) are well known, but the security characteristics of other VMs, and even of languages running on the JVM (Clojure, Scala?), are less well known.

BEAM is the Erlang VM (Bogdan/Björn’s Erlang Abstract Machine). It is the successor of JAM (Joe’s Abstract Machine), which was inspired by the Prolog WAM. Details can be found in the Hitchhiker’s Tour of BEAM, including ways to crash the BEAM VM, such as creating too many atoms (which never get deleted). BEAM can run on bare metal on Xen – http://erlangonxen.org. The file format is described as based on EA IFF 85 – Standard for Interchange Format Files – with a FOR1 starting 4 bytes. Here’s the full 6k lines of beam_load.c.

The code-to-BEAM-bytecode processing pipeline is described here as including a preprocessing step (see the -E, -S, -P options to erlc). An interesting problem of peeking at and pattern-matching in-flight messages is discussed here. I find it interesting to think what would happen if one froze an Erlang VM to see all in-flight messages – like putting a breakpoint in the kernel. The way to get a stack trace is erlang:get_stacktrace().

The hot-swapping capability is worrying, coming from the Objective-C world that had code signing to lock down changes. Erlang does, however, have a strong process isolation model.

This article on Dart makes the case for non-bytecode VMs with the following line: “When your VM’s input is program source code, it can rely on the grammar of the language itself to enforce important invariants”. That sounds relevant to bytecode VMs as well.

Brief history of cloud time

Amazon Web Services (AWS) started with elastic and scalable compute and storage offerings – the Elastic Compute Cloud (EC2) and the Simple Storage Service (S3). EC2 was based on virtual machines; S3 was a simple key-value store with an object store backend.

Heroku came up with a deploy-an-app-with-a-single-command model – heroku deploy. It automated the deployment of Rails apps and was an early example of a platform as a service (PaaS). The app ran in a virtualized Linux container called a dyno.

NASA engineers wanted to store and process big data and tasked an internal team, together with Rackspace, with building a framework for it. When they tried to use the existing AWS offerings, they found that uploading the data to AWS would take too long. They came up with their own implementation to manage virtual machines and handle storage: OpenStack. It provided EC2- and S3-like APIs and worked with certain hypervisors.

Greenplum, a big data company acquired by EMC, wanted a framework to make use of big data easily. EMC had bought VMware, which brought Cloudfoundry. Greenplum worked with VMware and a dev company, Pivotal Labs, to build up Cloudfoundry as a mechanism to deploy apps so they could be scaled up easily. Pivotal was acquired by VMware and later spun out as Pivotal Cloudfoundry to provide enterprise-level support for the open source Cloudfoundry offering. This had participation from EMC, VMware (the two were one already) and a third participant – GE, with an interest in building an industrial cloud. Cloudfoundry forms the basis of the GE Predix cloud.

Cloudfoundry is now a 3rd-gen PaaS which works on top of an IaaS layer such as OpenStack or VMware vSphere, or even Docker containers running locally, via a Cloud Provider Interface (CPI).

Another interesting project is Apache Geode, an in-memory database along the lines of HANA.

Meanwhile, Amazon and Azure are rapidly increasing the number of web services available while reducing their costs.

There was a meetup on Ranger recently at Pivotal Labs which discussed authorization for various components in the Hadoop ecosystem, including Kafka.

OAuth Threat Model, Mobile and Oracle Mobile Security Suite

Researchers in Germany recently found two previously unknown vulnerabilities in OAuth 2.0, in the publication A Comprehensive Formal Security Analysis of OAuth 2.0.

An extensive OAuth threat model exists as RFC 6819. A SAML threat model is discussed in On Breaking SAML: Be Whoever You Want to Be. The OAuth and SAML protocols serve as prototypes for the OpenID Connect protocol, which implements an authentication layer as an extension of the OAuth2 authorization layer and has a similar threat model. The above two vulnerabilities occur in both OAuth2 and OpenID Connect.

In the Microsoft paper OAuth Demystified for Mobile Application Developers, the authors discuss why OAuth is so easy to get wrong for mobile apps: it was originally designed for websites, not mobile. The browser redirection step that OAuth relies on heavily cannot be performed securely on iOS in the general case. Moreover, the complexity of the protocol means that of 149 apps tested, 60% were faulty and vulnerable.

This is interesting because in the Oracle Mobile Security Suite we developed at BitzerMobile, these problems were addressed in a bulletproof manner for enterprise applications, by wrapping web access protocols into a simpler challenge-response protocol that was injected securely into mobile apps. The app and the device are authenticated independently of the user, which considerably reduces the attack surface for the common use case.

A useful summary of attacks and countermeasures from OAuth and OpenID Connect Risks and Vulnerabilities follows:

Attack Category – Countermeasure
Extracting credentials or tokens in captured traffic – TLS encryption
Impersonating authorization server or resource server – TLS server authentication
Manufacturing or modifying tokens – Issue tokens as signed JWTs
Redirect manipulation – Require clients to declare redirect URIs during registration
Guessing or interception of client credentials – Use signed JWTs for client authentication
Client session hijacking or fixation – Use the state parameter to ensure continuity of the client session throughout the OAuth flow

The Relying Party (RP), aka Client (in unfortunately overloaded OAuth parlance), aka ‘Relying’ Service Provider (in Kerberos parlance), is a third-party service which needs access. It may not have an incentive to follow the protocol’s intent (e.g. enforce adequate controls), or it may be willing to accept more data than the protocol intended.

The latter is the case in the first attack, where the IDP sends a 307 Temporary Redirect, resulting in the password form data being re-sent to the RP. The RP thus receives the user’s credentials. The fix is to use a 302 Found instead.
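The difference between the two status codes can be modelled in a few lines (a simplified model of standard browser redirect handling, not any particular implementation): a 307 repeats the original method and body at the new location, while browsers treat a 302 on a POST as a GET without the body.

```python
def follow_redirect(status, method, body):
    # simplified browser model: what gets re-sent to the redirect target
    if status == 307:
        return method, body      # 307: repeat the same method and body
    if status == 302:
        return "GET", None       # 302 after POST: switch to GET, drop the body
    raise ValueError("unhandled status")

creds = {"user": "alice", "password": "hunter2"}

# with a 307, the password form data follows the redirect to the RP
leaked_method, leaked_body = follow_redirect(307, "POST", creds)

# with a 302, the body is dropped and the RP never sees the credentials
safe_method, safe_body = follow_redirect(302, "POST", creds)
```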

The second attack has to do with HTTPS not being used on the path between the user and the RP, which allows a MITM to manipulate the request and send it to an attacker’s IDP instead of the honest IDP. The fix is for the RP to use HTTPS, which is immune to MITM (but who enforces this? the redirect is too quick for the user to inspect the site certs).

Note: OAuth terminology becomes clearer to an enterprise mindset by expanding the terms auth server, resource owner, resource server and client. The “client” is a service that is untrusted with the user’s credentials but needs temporary access to HTTP resources. Instead of using the resource owner’s username/password, it must acquire an access_token for this access. In correspondence with Kerberos/SAML, the functional roles are:

1. Identity Provider (=IDP =auth server = KDC. e.g. twitter, fb as a pure login service),

2. Trusted Server (=resource server, that is in same trust zone as IDP. e.g. twitter tweets/profile, fb photos/profile),

3. Relying Service (a delegate service that wants to provide additional functionality using data from the Trusted Service and user identity from the IDP, e.g. filter tweets, mash photos, a partner HR app; called the client in OAuth). The client_id/client_secret are created for this entity – a bit like an independent KDC principal name. The client is a Relying Service in a different trust domain from the IDP/Trusted Service and needs an access token. This Relying Service may need to authenticate itself to the Trusted Service, which is done using the client_secret in cases where it can keep a secret (i.e. in web apps, not mobile and browser-based apps). One would need as many client_ids as the number of clients that lie in independently revocable auth domains.

4. the User (=resource owner, a person or organization whose credentials/data are stored in the IDP/TS and who desires use of the Relying Service with existing IDP credentials). The “authorization grant” is a credential that identifies the user and corresponds to the Kerberos Ticket Granting Ticket, aka TGT.

There are four OAuth authorization grant types: authorization_code, implicit, resource_owner and client_credentials. Authorization_code authenticates both the user and the client. Implicit authenticates only the user, not the client (no client_secret, no intermediate authorization code). In resource_owner the client does authenticate itself, but gets the access token directly. Client_credentials is the simplest one and is used for M2M communication; here the client acts on its own behalf, not the user’s. The type used depends on the method used by the client to request authorization and the types supported by the authorization server.
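The authorization_code grant, which authenticates both parties, can be sketched with a toy authorization server (names and flow are simplified assumptions; no scopes, expiry, redirect URIs or PKCE): the user's consent yields a one-time code, which the client swaps for a token while proving its own identity with the client_secret.

```python
import secrets

class ToyAuthServer:
    """Toy model of the authorization_code grant (illustration only)."""

    def __init__(self):
        self.clients = {"filter-tweets": "client-secret-123"}  # registered clients
        self.codes = {}  # one-time authorization codes -> (client_id, user)

    def authorize(self, client_id, user):
        # the user consents; the server issues a short-lived code for the client
        code = secrets.token_hex(8)
        self.codes[code] = (client_id, user)
        return code

    def token(self, client_id, client_secret, code):
        # the client authenticates itself, then swaps the code for a token
        if self.clients.get(client_id) != client_secret:
            raise PermissionError("bad client credentials")
        issued_to, user = self.codes.pop(code)   # pop: the code is single-use
        if issued_to != client_id:
            raise PermissionError("code issued to another client")
        return {"access_token": secrets.token_hex(16), "user": user}

srv = ToyAuthServer()
code = srv.authorize("filter-tweets", "alice")
tok = srv.token("filter-tweets", "client-secret-123", code)
```

The implicit grant is this flow with the token() step removed: the token comes back directly, so nothing ever authenticates the client.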

The difference from the NFS/KDC case is that in NFS the additional (server) principal is created for the NFS server, not the NFS client. The focus in NFS is on not sending the password over the wire; in OAuth the focus is on not giving the password to the client.

Let’s Encrypt. Less Green ?

Letsencrypt.com is a service conceived to reduce the friction of enabling HTTPS on a website, by automating SSL certificate creation, validation, signing, installation and renewal. A server certificate setup which used to take hours can be done in a minute. Encryption will reduce the incidence of man-in-the-middle (MITM) attacks, which can easily insert or modify JavaScript in transit.

Some of this is driven by Mozilla and its large public backers, with perhaps an interest in showing the green bar and lock for more websites. A self-signed cert would also provide free encryption and be easy to set up, but it cannot by itself authenticate the server against MITM, and it would throw an untrusted-connection alert to the user.

So is LetsEncrypt encryption enough to show a green bar for a website? Because regular certification schemes require a purchase, one has to go through a credit-card verification step before being issued a cert. Certs with Extended Validation have more steps to go through. There are three types of certs based on the level of validation – DV, OV, EV. Domain Validation (DV) does not try to check the identity of the requester and is what LetsEncrypt automates using a challenge-response scheme. Clicking on websites which use LetsEncrypt DV confirms that they display a green lock/bar (using Firefox).

The problem with a widely accepted CA that has a zero-cost barrier for setting up HTTPS is similar to that of the free precursor to OpenDNS. Any number of less-than-trustworthy websites can set themselves up as mirror images of trustworthy websites and send phishing attacks by email or SMS, and an end user has no way of telling the difference. Here’s a link on how to do just such a phishing attack with LetsEncrypt. So is LetsEncrypt making the web less secure?

It’s true that the large number of CAs with their diverse validation mechanisms makes the existing scheme not so great – especially when CAs are compromised and/or issue bad certs (e.g. Superfish, Comodo, NIC). However, one could inspect the CA trust chain and, if there were reason to believe it is not trustworthy – e.g. see this pic (Chris Palmer) – avoid clicking the link.

I think the average user should receive a better visual indication of the level of trust provided by a LetsEncrypt cert, which by design has undergone a lower level of validation. Use a less green color?

End users should be more aware of the certification process and get into the habit of explicitly checking cert chains for HTTPS by clicking on the green lock displayed next to the URL.

Update: The owner field is not defined in a Domain Validated cert like ones issued by LetsEncrypt. https://superuser.com/questions/1042383/how-to-set-the-owner-of-certificate-generated-by-lets-encrypt

Cloud Access Security Brokers

Skyhigh Networks, which was mentioned in my previous post, is one example of a Cloud Access Security Broker. There are several other companies in this field – NetSkope, Bitglass, Palerra, CipherCloud, Elastica, Adallom, Zscaler – some more familiar than others.

Gartner says the penetration will be 25% in the enterprise in 2016. Here’s a comparison table.

There is considerable scope for innovation in this space based on different user requirements – for instance, the amount and type of sharing of the data affects the architecture. Homomorphic encryption, while theoretically possible, is not yet practical. I expect to see more differentiation before consolidation of the different technologies in this domain.

Real World Crypto 2016

SSL/TLS dominated the conference, with talks on its use at FB, Google and Amazon, and on modifications in the direction of TLS 1.3, including presentations on QUIC, OPTLS, s2n and more, covering things like lower latency, forward secrecy and better ciphers. Some of the optimizations can have an impact on data center operations efficiency.

The timing attack on s2n was interesting: the KL divergence measures the difference between two probability distributions and can be used to quantify information leaking from an SSL stream.
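The KL divergence itself is a one-liner. The sketch below uses hypothetical response-time histograms (illustrative numbers, not from the talk): if timing behaviour differs between two secret-dependent inputs, the divergence between their distributions is positive, which is exactly the leak a timing attack exploits.

```python
import math

def kl_divergence(p, q):
    # D_KL(P || Q) = sum_i p_i * log(p_i / q_i); zero iff P == Q
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# hypothetical response-time histograms for two secret-dependent inputs
timing_a = [0.70, 0.20, 0.10]
timing_b = [0.50, 0.30, 0.20]

leak = kl_divergence(timing_a, timing_b)   # > 0: the timings are distinguishable
```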

The privacy-preserving operations on encrypted data (Skyhigh) talk was also interesting. Paul Grubbs discussed searchable symmetric encryption (SSE) tradeoffs and open questions around stateless SSE. In the case of an encryption proxy, who maintains it? The client would find it cumbersome, so an encrypted index is maintained by Skyhigh. This is not easy to manage. Also, if one wants *both* security and privacy, the search times and/or the number of round trips go up.

Cryptol is software from Galois for simulating ciphers and is useful to model, verify and even implement them. It is written in Haskell and is open source. It comes with several examples, including the Enigma cipher. I tried it and will blog about it later.

I expected some presentations on DNS security – e.g. DNSSEC and DANE – and talked to attendees from Verisign about their offerings (DDoS monitoring, threat intelligence graph). DNS operates in-band over IP (vs an out-of-band method for updates/insertions); with DNSSEC the DNS server needs to trust the same CA as the origin server which feeds it the DNS record. The general feeling, I think, is that the trust problem has to be solved at the application layer, and attacks like the Kaminsky attack have been mitigated.

Here’s a diagram of the QUIC protocol. The claim is zero RTT for a repeat (secure) connection to the server (75% of the time), achieved by combining the TCP and SSL handshakes into one and caching state on the client. A practical attack on QUIC, a type of adaptive chosen-ciphertext attack, is discussed in this paper, which references this paper by Zhang, Reiter et al. discussing more general PaaS attacks, including a SAML SSO attack.

Perfect forward secrecy definition: a public-key system has the property of forward secrecy if it generates one random secret key per session to complete a key agreement, without using a deterministic algorithm.

Without forward secrecy, an attacker who records SSL sessions and later obtains the compromised server private key can decrypt those sessions. With TLS 1.2, forward secrecy is optional, and with the session-resumption optimization it is effectively disabled. TLS 1.3 mandates forward secrecy with DH key exchanges.
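The DH mechanism behind this can be sketched with toy parameters (an illustrative Mersenne prime, not a standardized group, and no authentication): each session uses a fresh ephemeral key pair, so the secrets of past sessions are independent of any long-term key an attacker might later compromise.

```python
import secrets

P = 2**127 - 1   # toy prime modulus; real TLS uses standardized DH groups
G = 3            # toy generator

def dh_keypair():
    # fresh ephemeral key pair, generated per session and then discarded
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def session_secret():
    a_priv, a_pub = dh_keypair()    # client ephemeral
    b_priv, b_pub = dh_keypair()    # server ephemeral
    # both sides derive the same shared secret: g^(ab) mod p
    shared = pow(b_pub, a_priv, P)
    assert shared == pow(a_pub, b_priv, P)
    return shared
```

Because every call to session_secret() draws new exponents, recording today's traffic and stealing a key tomorrow recovers nothing about yesterday's sessions.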

Spark and Scala

Spark is a general-purpose distributed data processing engine used for a variety of big data use cases – e.g. analysis of logs and event data for security, fraud detection and intrusion detection. It has the notion of Resilient Distributed Datasets (RDDs). The “resilience” has to do with the lineage of a data structure, not with replication. Lineage means the set of operators applied to the original data structure. Lineage and metadata are used to recover lost data, in case of node failures, by recomputation.
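The lineage idea can be illustrated with a toy class (not Spark's API): a dataset records only its source and the chain of operators applied, so a lost result is recomputed by replaying the lineage rather than restored from a replica.

```python
class ToyRDD:
    """Toy lineage-tracking dataset (illustration only, not Spark)."""

    def __init__(self, source, lineage=()):
        self.source = source
        self.lineage = lineage          # the operators applied, in order

    def map(self, f):
        return ToyRDD(self.source, self.lineage + (("map", f),))

    def filter(self, pred):
        return ToyRDD(self.source, self.lineage + (("filter", pred),))

    def compute(self):
        # "recovery": replay the lineage from the original source
        data = list(self.source)
        for kind, f in self.lineage:
            data = [f(x) for x in data] if kind == "map" else [x for x in data if f(x)]
        return data

rdd = ToyRDD([1, 2, 3, 4]).map(lambda x: x * 2).filter(lambda x: x > 4)
```

Calling compute() twice gives the same result, which is the point: after a node failure, replaying the recorded operators regenerates the lost partition.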

A Spark word count example discussed in today’s meetup:

val textFile = sc.textFile("obama.txt")
val counts = textFile.flatMap(line => line.split(" ")).filter(_.length > 4).map(word => (word, 1)).reduceByKey(_ + _)
val sortedCounts = counts.map(_.swap).sortByKey(false)
sortedCounts.take(10)

Scala is a functional programming language which is used in Spark. It prefers immutable data structures. Sounds great! How are state changes done then? Through function calls. Recursion has a bigger role to play because it is a way for state changes to happen via function calls: the stack is utilized for the writes, rather than the heap. I recalled seeing a spiral Scala program earlier and found one here on the web, then modified it to find the reverse spiral. Here’s the resulting code. The takeaway is that functional programs are structured differently – one can do some things more naturally, and it feels closer to how the mind works. As long as one gets the base cases right, one can build a large amount of complexity trivially. On the other hand, if one has to start top-down and must debug a large call stack, it can be challenging.

// rt annotated spiral program.
// source http://www.kaiyin.co.vu/2015/10/draw-plain-text-spiral-in-scala.html
// reference: http://www.cis.upenn.edu/~matuszek/Concise%20Guides/Concise%20Scala.html
// syntax highlight: http://bsnyderblog.blogspot.com/2012/12/vim-syntax-highlighting-for-scala-bash.html
import java.io.{File, PrintWriter}
import scala.io.Source

object SpiralObj {   // object keyword => a singleton object of a class defined implicitly by the same name
  object Element {   // subclass. how is Element a singleton? there are several elements. has 3 subclasses which are not singletons
    private class ArrayElement(  // subsubclass, not a singleton
                                val contents: Array[String]  // "primary constructor" is defined in class declaration, must be called
                                ) extends Element
    private class LineElement(s: String) extends Element {
      val contents = Array(s)
    }
    private class UniformElement(  // height and width of a line segment. what if we raise width to 2. works.
                                  ch: Char,
                                  override val width: Int,   // override keyword is required to override an inherited method
                                  override val height: Int
                                  ) extends Element {
      private val line = ch.toString * width  // fills the characters in a line
      def contents = Array.fill(height)(line) // duplicates line n(=height) times, to create a width*height rectangle
    }
    // three constructor like methods
    def elem(contents: Array[String]): Element = {
      new ArrayElement(contents)
    }
    def elem(s: String): Element = {
      new ArrayElement(Array(s))
    }
    def elem(chr: Char, width: Int, height: Int): Element = {
      new UniformElement(chr, width, height)
    }
  }


  abstract class Element {
    import Element.elem
    // contents to be implemented
    def contents: Array[String]

    def width: Int = contents(0).length

    def height: Int = contents.length

    // prepend this to that, so it appears above
    def above(that: Element): Element = {      // above uses widen
      val this1 = this widen that.width
      val that1 = that widen this.width
      elem(this1.contents ++ that1.contents)
    }

    // prefix new bar line by line
    def beside(that: Element): Element = {     // beside uses heighten
      val this1 = this heighten that.height
      val that1 = that heighten this.height
      elem(
        for ((line1, line2) <- this1.contents zip that1.contents)
          yield line1 + line2
      )
    }

    // add padding above and below
    def heighten(h: Int): Element = {          // heighten uses above
      if (h <= height) this
      else {
        val top = elem(' ', width, (h - height) / 2)
        val bottom = elem(' ', width, h - height - top.height)
        top above this above bottom
      }
    }

    // add padding left and right
    def widen(w: Int): Element = {             // widen uses beside
      if (w <= width) this
      else {
        val left = elem(' ', (w - width) / 2, height)
        val right = elem(' ', w - width - left.width, height)
        left beside this beside right
      }
    }

    override def toString = contents mkString "\n"
  }


  object Spiral {
    import Element._
    val space = elem("*")
    val corner1 = elem("/")
    val corner2 = elem("\\")
    def spiral(nEdges: Int, direction: Int): Element = { // clockwise spiral
      if(nEdges == 0) elem("+")
      else {
        //val sp = spiral(nEdges - 1, (direction + 1) % 4) // or (direction - 1) % 4, but we don't want negative numbers
        val sp = spiral(nEdges - 1, (direction + 3) % 4) // or (direction - 1) % 4, but we don't want negative numbers
        var verticalBar = elem('|', 1, sp.height)        // vertBar and horizBar have last two params order switched
        var horizontalBar = elem('-', sp.width, 1)
        val thick = 1
        // at this stage, assume the n-1th spiral exists and you are adding another "line" to it (not a whole round)
        // use "above" and "beside" operators to attach the line to the spiral
        if(direction == 0) {
          horizontalBar = elem('r', sp.width, thick)
          (corner1 beside horizontalBar) above (sp beside space) //  order is left to right
        }else if(direction == 1) {
          verticalBar = elem('d',thick, sp.height)
          (sp above space) beside (corner2 above verticalBar)
        } else if(direction == 2) {
          horizontalBar = elem('l', sp.width, thick)
          (space beside sp) above (horizontalBar beside corner1)
        } else {
          verticalBar = elem('u',thick, sp.height)
          (verticalBar above corner2) beside (space above sp)
        }
      }
    }

    def revspiral(nEdges: Int, direction: Int): Element = { // try counterclockwise
      if(nEdges == 0) elem("+")
      else {
        //val sp = spiral(nEdges - 1, (direction + 1) % 4) // or (direction - 1) % 4, but we don't want negative numbers
        val sp = revspiral(nEdges - 1, (direction + 3) % 4) // or (direction - 1) % 4, but we don't want negative numbers
        var verticalBar = elem('|', 1, sp.height)        // vertBar and horizBar have last two params order switched
        var horizontalBar = elem('-', sp.width, 1)
        val thick = 1
        // at this stage, assume the n-1th spiral exists and you are adding another "line" to it (not a whole round)
        if(direction == 0) { // right
          horizontalBar = elem('r', sp.width, thick)
          (sp beside space) above (corner2 beside horizontalBar)
        }else if(direction == 1) { // up
          verticalBar = elem('u',thick, sp.height)
          (space above sp) beside (verticalBar above corner1)
        } else if(direction == 2) { // left
          horizontalBar = elem('l', sp.width, thick)
          (horizontalBar beside corner2 ) above (space beside sp) 
        } else { // down
          verticalBar = elem('d',thick, sp.height)
          (corner1 above verticalBar) beside (sp above space)
        }
      }
    }
    def draw(n: Int): Unit = {
      println()
      println(spiral(n, n % 4))  // %4 returns 0,1,2,3 .    right, down, left, up
      println()
      println(revspiral(n, n % 4))  // %4 returns 0,1,2,3   
    }
  }
}

object Main {
  def usage() {
      print("usage: scala Main szInt");
  }

  def main(args: Array[String]) {
    
    import SpiralObj._
    if(args.length > 0) {
        val spsize = args(0)
        Spiral.draw(spsize.toInt)
    } else {
        usage()
        println()
    }
  }
}


A note on tail-call recursion. If the last statement of a function is a call to another function, then the return position of the called function is the same as that of the calling function, so the current stack frame can be reused for the called function. Such a call is in tail position, and a function that recurses only in tail position is tail recursive; the effect is that of a loop – a series of function calls can be made without consuming stack space.
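The loop equivalence can be shown concretely. A sketch in Python (noting, as an assumption worth flagging, that CPython does not actually optimize tail calls, so the transformation is written out by hand here; Scala's @tailrec does it in the compiler):

```python
def sum_tailrec(xs, acc=0):
    # tail-recursive: the recursive call is the last action, and all
    # state travels in the accumulator instead of on the stack
    if not xs:
        return acc
    return sum_tailrec(xs[1:], acc + xs[0])

def sum_loop(xs):
    # what a tail-call-optimizing compiler effectively turns it into:
    # the accumulator variable replaces the chain of stack frames
    acc = 0
    for x in xs:
        acc += x
    return acc
```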

Uber Security (Keys on Github)

As information-driven physical-world services like Uber, AirBnB and Square become more common, they bring up some unique security issues for the interacting parties. To make the service effective they collect and store a large amount of user data. This data can be compromised, as it needs to be shared not only with users but also with third-party apps. Then there is the threat of physical assault, physical damage and stolen card data.

At minimum, it is imperative to have a comprehensive information security program that protects the core data collection/processing pipeline and extends outwards to a) services built on top of the data and b) physical identities of the parties involved to assure them of trust in a brief interaction enabled by the information.

This article discusses how information on 50,000 drivers was compromised at Uber. The driver database keys were found on GitHub? How is that possible?

If it is possible, then it is a security incident that needs visibility not just into the information within an enterprise but also outside it. The security incident and event monitoring products that exist (e.g. ArcSight, Bit9, CrowdStrike, Tanium) barely scratch the surface of this requirement – the haystack is bigger than we think it is, and we don’t know the needle in advance.

Physical security is harder to deal with. One thing that becomes apparent is that the supply of hotels, cabs, even credit card issuers was constrained by legislation and regulations designed to create a high bar for an offering and build a high level of trust between the interacting parties.

Those lines are being redrawn with technology. The people impacted by the technology should be part of the conversation in coming up with appropriate ways to regulate the offerings to maintain security and safety.

Biometric User Identification for IOT

Two-factor authentication solutions are based on the premise that the combined verification of (i) a thing possessed (a card) and (ii) a piece of information known to the user (a PIN or password) provides a high degree of assurance when authenticating users. For financial and enterprise transactions it gives a high level of security. But 2FA is not a seamless solution: as the number and variety of services and devices for a user increases, it requires the user to carry a number of cards/tokens/devices and remember several passwords (that are unique, complex and regularly updated). Nor is it foolproof, as identity theft continues to be a problem.

With the large number of IOT applications and devices appearing, the problem will become worse. Consider a health monitoring device that needs to periodically share information of a patient with her family members and doctor, while keeping the information safe from cloud attacks. Or consider keyless entry to vehicles or homes. For such common use cases entering complex passwords would be cumbersome.

With biometric authentication methods, as present with fingerprint-based authentication on Apple and Samsung phones, there is a more direct identification of the user. But the way this is commonly used is not to eliminate passwords completely – it is typically used to:

  1. store existing passwords securely,
  2. reduce repeated password entry by extending a session created with an existing password,
  3. supplement a user identifier such as a phone number or email address,
  4. supplement a password (e.g. for BYOD deployments where multiple users can register fingerprints).

One can imagine a two-factor auth where both factors are biometric, such as multiple fingerprints, or fingerprint and iris authentication. Such a two-factor biometric approach could eliminate the need to remember passwords and reduce friction in accessing services securely. An example is the combination of facial recognition and fingerprint recognition.

Biometric methods under development include gait recognition and voice biometrics; these can feed into continuous authentication schemes.

SecureAuth and BehavioSec Auth Presentation, Palo Alto

IDC gave a good security landscape overview at the SecureAuth executive luncheon today in Palo Alto.

SecureAuth provides a flexible adaptive authentication system that balances security with user experience.

BehavioSec does biometric authentication based on user behavior, such as the pattern of keystrokes when entering a password. It builds a statistical profile and then determines whether the password is being entered anomalously. It provides collector SDKs to gather this information from mobile apps and websites.

When there is a large difference between the expected pattern and the current one, the SecureAuth integration forces step-up authentication to a second factor.
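The kind of check described above can be sketched roughly as follows. This is an illustrative model, not BehavioSec's actual algorithm: assume the profile stores the mean and standard deviation of each inter-key timing gap across past password entries, and an entry is flagged for step-up when any gap deviates too far from its profile.

```python
import statistics

# Illustrative keystroke-dynamics anomaly check (assumed model, not
# a real vendor algorithm). Each password entry is represented as a
# list of inter-key timing gaps in milliseconds.

def build_profile(past_entries: list[list[float]]) -> tuple[list[float], list[float]]:
    """Per-gap mean and standard deviation across past entries."""
    gaps = list(zip(*past_entries))          # column per inter-key gap
    means = [statistics.mean(g) for g in gaps]
    stdevs = [statistics.stdev(g) for g in gaps]
    return means, stdevs

def needs_step_up(entry: list[float], means: list[float],
                  stdevs: list[float], z_limit: float = 3.0) -> bool:
    """True if any gap lies more than z_limit stdevs from its mean."""
    return any(abs(t - m) > z_limit * s
               for t, m, s in zip(entry, means, stdevs))
```

A production system would use a richer statistical model and update the profile continuously, but the decision shape is the same: a distance between current and expected behavior, with a threshold that triggers the second factor.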

There is adoption of this kind of technology in banking, retail and other verticals.

Security Acquisitions Oct 2015

Lancope, Viewfinity, Vormetric, LogEntries, Boxer, Secure Islands, Silanis

http://www.infoworld.com/article/3000479/security/security-acquisitions-reach-a-fever-pitch.html

Lancope – StealthWatch provides a visual representation of the network to detect anomalies that could signify an attack. In the event of an infection, StealthWatch analyzes traffic between servers to determine which hosts were affected. Acquired by Cisco, $453m.

Viewfinity – Endpoint security for Windows: application control and administrative-privilege management to protect against zero-day attacks and malware.

Vormetric – Filesystem encryption that keeps metadata in the clear, plus enterprise key management for third-party encryption keys. Acquired by Thales Security, $400m.

LogEntries – Machine-data search technology to help security teams investigate security incidents in depth. Spun out of University College Dublin (UCD). Acquired by Rapid7, $68m. 3k customers.

Boxer – Android email app, acquired by VMware.

Secure Islands – IQProtector inspects content and automatically wraps and protects it with policy-based DRM. “Secure Islands’ Data Immunization uniquely embeds protection within information itself at the moment of creation or initial organizational access. This process is automatic and accompanies sensitive information throughout its lifecycle from creation, through usage and collaboration to storage.” Acquired by Microsoft.

Silanis – e-Signatures backed by strong cryptographic algorithms and keys.

Cloud Security and Compliance Standards

Cloud processing of information affects existing information-processing flows, controls, and compliance standards. Cloud service providers document the extent to which they support the diverse compliance standards specific to verticals such as payments, health, finance, and the enterprise. A reference is https://aws.amazon.com/compliance/ .

CSA-CCM Cloud Security Alliance Cloud Controls Matrix. The CCM is built on its mapped relationships to other industry-accepted security standards, regulations, and control frameworks such as ISO 27001/27002, ISACA COBIT, PCI, NIST, the Jericho Forum, and NERC CIP. Part of the Governance, Risk Management and Compliance (GRC) stack – CloudAudit, CCM, CAIQ, CTP.

PCI-DSS Payment Card Industry Data Security Standard. The PCI council released guidelines for storing credit card data in the cloud. See https://www.pcisecuritystandards.org/pdfs/PCI_DSS_v2_Cloud_Guidelines.pdf

SSAE16 Statement on Standards for Attestation Engagements (SSAE) No. 16. SSAE 16 reporting can help service organizations comply with Sarbanes-Oxley’s requirements. It is not limited to financial reporting; it can also be applied to other sectors and is useful for data centers. SSAE 16 is one of the most widely known tools for providing assurances to data center customers. See the discussion at http://www.datacenterknowledge.com/archives/2012/01/19/aicpa-fumbles-audit-standards-at-the-5-yard-line/ and the related SOC 1 and SOC 2 reports.

ISO27001 Specification for an information security management system, released 2013.

FedRAMP U.S. federal agencies have been directed to use a process called FedRAMP (Federal Risk and Authorization Management Program) to assess and authorize federal cloud computing products and services. See https://aws.amazon.com/compliance/fedramp/

UK G-Cloud Framework for faster procurement of IT services over the cloud. See also CESG, the Communications-Electronics Security Group.

IRAP Information Security Registered Assessors Program is an Australian Signals Directorate initiative to provide high-quality information and communications technology services to government in support of Australia’s security. A list of certified clouds – http://www.asd.gov.au/infosec/irap/certified_clouds.htm

HIPAA Health Insurance Portability and Accountability Act. The privacy rule ensures patients’ access to their health information and requires de-identification of Protected Health Information (PHI) before health data is shared publicly. The security rule covers physical and technical safeguards such as control and monitoring of information against intrusions, encryption over networks, etc.

DIACAP DoD Information Assurance Certification and Accreditation Process. United States Department of Defense (DoD) process to ensure that companies and organizations apply risk management to information systems. Aligns with the NIST Risk Management Framework (RMF).

GLBA Financial Services Modernization Act of 1999. Removed market barriers among banking, securities, and insurance companies, allowing them to consolidate. GLBA compliance is mandatory; whether or not a financial institution discloses nonpublic information, there must be a policy in place to protect that information from foreseeable threats to security and data integrity.

NIST SP 800-30 NIST Risk Management Guide for Information Technology Systems.

FISMA Federal Information Security Management Act of 2002. The act requires each federal agency to develop, document, and implement an agency-wide program to provide information security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source.

UK Data Protection Act Governs the protection of personal data in the UK.

EU Data Privacy Directive Officially Directive 95/46/EC, a European Union directive adopted in 1995 that regulates the processing of personal data within the European Union. It is an important component of EU privacy and human-rights law. On 25 January 2012 the European Commission unveiled a draft European General Data Protection Regulation that will supersede the Data Protection Directive.

FIPS 140-2 Federal Information Processing Standard Publication 140-2, a U.S. government computer security standard used to accredit cryptographic modules.