Weeping Angel Remote Camera Monitor

With the Weeping Angel disclosure, sales of hardware camera blockers and similar devices should increase. Wikileaks detailed how the CIA and MI5 hacked Samsung smart TVs to silently record conversations in the room. It is worth reading for the level of technical detail: https://wikileaks.org/vault7/document/EXTENDING_User_Guide/EXTENDING_User_Guide.pdf . The attack was called ‘Weeping Angel’, a term borrowed from Doctor Who.

Other such schemes are described at https://wikileaks.org/vault7/releases/#Weeping%20Angel , including an iPhone implant to exfiltrate data from your phone – https://wikileaks.org/vault7/document/NightSkies_v1_2_User_Guide/NightSkies_v1_2_User_Guide.pdf .

Golang interface ducktype and type assertion

The interface{} type in Golang is like the duck type in Python. If it walks like a duck, it’s a duck – the type is determined by the behavior and attributes of the value. This duck typing support in Python often leaves one searching for the actual type of the object that a function takes or returns, though richer names or a naming convention get past this drawback. Golang implements a more limited and stricter duck typing – the programmer can define the type of a variable as an interface{} – which might as well have been called a ducktype{}. But when it comes time to determine the concrete type of the duck, one must assert it explicitly – and in that process can receive an error indicating a type mismatch.

explicit ducktype creation

var myVariableOfDuckType interface{} = "thisStringSetsMyVarToTypeString"

var mySecondVarOfDuckType interface{} = 123  // sets the dynamic type of mySecondVarOfDuckType to int

ducktype assertion

getMystring, isOk := myVariableOfDuckType.(string) // isOk is true, assertion to string passed

getMystring, isOk = mySecondVarOfDuckType.(string) // isOk is false, assertion to string failed

In Python, int(123) and int("123") both return 123.

In Golang, int(mySecondVarOfDuckType) will not return 123, even though the underlying value happens to be an int. It will not even compile; the compiler asks for a type assertion instead:

cannot convert val (type interface {}) to type int: need type assertion
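
To make this concrete, here is a small runnable sketch using the same variable names as above: asserting to the correct concrete type (or using a type switch) recovers the value, while asserting to the wrong type simply sets the ok flag to false.

package main

import "fmt"

func main() {
    var myVariableOfDuckType interface{} = "thisStringSetsMyVarToTypeString"
    var mySecondVarOfDuckType interface{} = 123

    // Assertion to string: succeeds for the first variable, fails for the second.
    if s, isOk := myVariableOfDuckType.(string); isOk {
        fmt.Println("got string:", s)
    }
    if _, isOk := mySecondVarOfDuckType.(string); !isOk {
        fmt.Println("mySecondVarOfDuckType is not a string")
    }

    // Asserting to the actual concrete type recovers the value.
    if n, isOk := mySecondVarOfDuckType.(int); isOk {
        fmt.Println("got int:", n) // prints: got int: 123
    }

    // A type switch handles several possible concrete types in one place.
    switch v := mySecondVarOfDuckType.(type) {
    case string:
        fmt.Println("string:", v)
    case int:
        fmt.Println("int:", v)
    default:
        fmt.Println("some other type")
    }
}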

ICS Threat Landscape

Kaspersky Lab released a report on Industrial Control System (ICS) security trends. The data was reported to be gathered using the Kaspersky Security Network (KSN), a distributed antivirus network.

https://ics-cert.kaspersky.com/reports/2017/03/28/threat-landscape-for-industrial-automation-systems-in-the-second-half-of-2016/

The report is here – https://ics-cert.kaspersky.com/wp-content/uploads/sites/6/2017/03/KL-ICS-CERT_H2-2016_report_FINAL_EN.pdf


The rising trend is partly due to the isolation ("air gap") strategy traditionally followed for ICS network security no longer being effective in protecting industrial networks.

Git Merge. You are in the middle of a merge. Cannot Amend.

Let’s say I made changes to branch “abc”, committed, and pushed them. This fired a build and a code review, after which the code is ready to be merged. But before these changes are merged, another party merged changes from a branch “xyz” into “abc”.

How do I replay my changes on top of the changes from “xyz” now in “abc”? Here’s a seemingly obvious command to try:

$ git pull origin abc   # tl;dr: don't do this – try git pull --rebase instead
Auto-merging file1
CONFLICT (content): Merge conflict in file1
Automatic merge failed; fix conflicts and then commit the result.

At this point, I resolve the conflicts and attempt a git commit --amend. But this gives an error.

$ git commit --amend
fatal: You are in the middle of a merge -- cannot amend.
$ git rebase 
First, rewinding head to replay your work on top of it...
Applying: change name
Using index info to reconstruct a base tree...
M file1
Falling back to patching base and 3-way merge...
Auto-merging file1
CONFLICT (content):

The problem is the pull step, which implicitly does a fetch + merge, and it is the merge that fails (see git pull --help).

To recover from this, back out the unfinished merge and do a rebase instead.

$ git reset --merge 
$ git rebase
First, rewinding head to replay your work on top of it..

This shows a list of conflicting files. Find the conflicting files and edit them to resolve conflicts.

$ git rebase --continue
file: needs merge
You must edit all merge conflicts and then
mark them as resolved using git add

$ git add file1 file2
$ git rebase --continue
Applying: change name
$ git commit --amend
$ git push origin HEAD:refs/for/abc

Here’s the git rebase doc for reference. A rebase is essentially a series of cherry-picks. Cherry-pick is a change-parent operation: since commit-id = hash(parent-commit-id + changes), giving a commit a new parent produces a new commit-id. Here it re-chains my commit on top of a different commit than the one I started from. A git commit --amend, by contrast, keeps the same parent and only rewrites the tip commit, so it cannot do this re-parenting – and in the middle of a merge it fails outright.

Git merge is an operation to merge two different branches. It is the opposite of git branch, which creates a branch. In the above example we are applying our local changes to the same branch, which has undergone some changes since our last fetch.

Another error sometimes seen during a cherry-pick is “fatal: You are in the middle of a cherry-pick -- cannot amend”. This happens on a git commit --amend; git expects you to do a plain git commit to first complete the cherry-pick operation.

A regular git commit references a snapshot of the tree and its parent commit. A merge commit references two parent commits. A git merge creates such a new commit when the two parents have diverged (and any conflicts have been resolved); otherwise it can simply fast-forward.

Some idempotent (safe) and useful git commands.

$ git reflog [--date=iso]  # show history of updates to HEAD and other local refs
$ git show  # or git show commit-hash, git show HEAD^^
$ git diff [ --word-diff | --color-words ]
$ git status # shows files staged for commit (added via git add to the index). different from git diff
$ git branch [-a]
$ git tag -l   # list all tags 
$ git describe # most recent tag reachable from current commit
$ git config --list
$ git remote -v # git remote is a repo that contains the same proj 
$ git fetch --dry-run 
$ git log --decorate --graph --abbrev-commit --date=relative --author=username
$ git log --decorate --pretty="format:%C(yellow)%h%C(green)%d%Creset %s -> %C(green)%an%C(blue), %C(red)%ar%Creset"
$ git log -g --abbrev-commit --pretty=oneline 'HEAD@{now}'
$ git grep login -- "*yml"
$ tig    # use d, t, g to change views; use / to search
$ git fsck [--unreachable]
$ git verify-pack -v .git/objects/pack/pack-*.idx | grep -v chain | sort -k3nr | head  # list of largest files
$ git cat-file -p <hash> # view object - tree/commit/blob
$ git clean -xdn

The difference between the ‘origin’ and ‘upstream’ terms is explained here. An interactive git cheat sheet is here. There is only one origin – the repo you cloned from – but there can be many remotes, which are copies of your project.

Update sequence: working_directory -> index -> commit -> remote

Some terms defined:

origin = main remote repository

master = default (master/main) branch on the remote repository

HEAD = current revision

git uses the SHA-1 cryptographic hash function for content-addressing individual files (blobs) and directories (trees), as well as commits – which track changes to the entire tree. Since each tree’s hash is computed over the hashes of the blobs and subtrees it contains, a change in a child that is n levels deep propagates up to the root of the tree.
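
As a small illustration of this content-addressing, the hash git assigns to a file can be reproduced outside git: for a blob it is the SHA-1 of a short header ("blob <size>\0") followed by the raw contents. A minimal Go sketch (tree and commit objects are hashed the same way over their own serialized contents):

package main

import (
    "crypto/sha1"
    "fmt"
)

// gitBlobHash reproduces how git content-addresses a file (a "blob"):
// SHA-1 over the header "blob <size>\0" followed by the raw file contents.
func gitBlobHash(content []byte) string {
    h := sha1.New()
    fmt.Fprintf(h, "blob %d\x00", len(content))
    h.Write(content)
    return fmt.Sprintf("%x", h.Sum(nil))
}

func main() {
    // Should match the output of: echo 'hello' | git hash-object --stdin
    fmt.Println(gitBlobHash([]byte("hello\n")))
}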

The index used in ‘git add’ is under .git/index . It can be viewed with

$ git ls-files --stage --abbrev --full-name --debug

This does not store the file contents, just their paths and the hashes of the staged content. The content for a new commit is stored under .git/objects/<xy>/<z>, where xy is the first two hex characters of the hash and z is the remaining 38 characters of the 40-character hash. On a fresh checkout, .git/objects is empty except for the .git/objects/pack directory, which stores the entire commit history in compressed form in a .pack file plus a .idx index file. These files are mmap’d for fast searching.

RSA World 2017

I was struck by the large number of vendors offering visibility – into various types of network traffic – as a key selling point: network monitors, industrial network gateways, SSL inspection, traffic in VM farms, between containers, and on mobile.

More expected are the vendors offering reduced visibility – via encryption, TPMs, Secure Enclaves, mobile remote desktop, one way links, encrypted storage in the cloud etc. The variety of these solutions is also remarkable.

Neil deGrasse Tyson’s closing talk was so different, yet strangely related to these topics: the universe slowly making itself visible to us through light from distant places, with insights from physicists and mathematicians building up to the experiments that recently confirmed gravitational waves – making the invisible, visible. This tenuous connection between security and physics left me misty-eyed. What an amazing time to be living in.

Industrial Robot Safety Systems

Robot safety systems must be concerned with graceful degradation in the face of component failures, bad inputs and extreme operating conditions. With the increasing complexity and prevalence of robots, one can expect these requirements to grow.

This document, “Robot System Safety”, describes the following safety features of Kuka robots:
— Restricted envelope
— EMERGENCY STOP
— Enabling switches
— Guard interlock

An interlock system, described here for general control systems, is a mechanism for guaranteeing that an undesired combination of states does not occur – e.g. a robot moving while the cell door is open, or an elevator moving while its door is open. The combination of the controlled stop with the guard interlock is described as follows:

“The robot controller features a two–channel safety input, to which the guard interlock can be connected. In the automatic modes, the opening of the guard connected to this input causes a controlled stop, with power to the drives being maintained in order to ensure this controlled stop. The power is only disconnected once the robot has come to a standstill. Motion in Automatic mode is prevented until the guard connected to this input is closed. This input has no effect in Test mode. The guard must be designed in such a way that it is only possible to acknowledge the stop from outside the safeguarded space.”
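
To make the interlock idea concrete, here is a minimal, purely illustrative Go sketch (not the actual Kuka logic): motion is only permitted when no undesired combination of states exists, and opening the guard in automatic mode triggers a controlled stop before power is removed.

package main

import "fmt"

// CellState is a simplified model of the robot cell inputs.
type CellState struct {
    GuardClosed   bool // guard interlock input (two-channel in the real system)
    EmergencyStop bool // E-STOP latched
    Automatic     bool // automatic vs. test mode
}

// motionPermitted encodes the interlock: the undesired combination
// (robot moving while the guard is open, or while an E-STOP is latched) is excluded.
func motionPermitted(s CellState) bool {
    return s.GuardClosed && !s.EmergencyStop
}

// onGuardOpened models the controlled stop described above: in automatic mode the
// drives stay powered until standstill, then power is cut; in test mode the input is ignored.
func onGuardOpened(s CellState) string {
    if s.Automatic {
        return "controlled stop: hold drive power until standstill, then disconnect"
    }
    return "test mode: guard input has no effect"
}

func main() {
    s := CellState{GuardClosed: false, EmergencyStop: false, Automatic: true}
    fmt.Println("motion permitted:", motionPermitted(s)) // false – the guard is open
    fmt.Println(onGuardOpened(s))
}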

This Stanford Linear Accelerator (SLAC) paper describes the advantages of PLCs for interlock design as:

. flexible system configuration due to modular hardware and software
. regularly scheduled background tests of PLC system and sensitive I/O
. comprehensive system self-tests
. intelligent fault diagnostics simplify trouble-shooting
. easy reconfiguration of the interlock logic
. no mechanical wear and tear
. improved security due to logic encapsulation in firmware

The SLAC use case is to keep the doors locked unless (a) the power has been shut off, or (b) the power is on but there is an explicit bypass for hot maintenance. They implement it on a Siemens PLC using statement lists (vs. ladder logic or control system flowchart programming).
GE has a number of industrial interlocks for various safety functions – http://www.clrwtr.com/PDF/GE-Security/Sentrol-Catalog.pdf

A very useful article on interlocking devices – http://machinerysafety101.com/2012/06/01/interlocking-devices-the-good-the-bad-and-the-ugly/

Neural Network learning

Notes from Autonomous Driving Udacity course.

Weights

IOT security vulnerabilities and hacks

Mirror of https://github.com/nebgnahz/awesome-iot-hacks :

A curated list of hacks in the IoT space, so that researchers and industrial vendors can (hopefully) address the security vulnerabilities.

Thingbots

RFID

Home Automation

Connected Doorbell

Hub

Smart Coffee

Wearable

Smart Plug

Cameras

Traffic Lights

Automobiles

Airplanes

Light Bulbs

Locks

Smart Scale

Smart Meters

Pacemaker

Thermostats

Fridge

Media Player & TV

Toilet

Toys

Software Defined Networking Security

Software Defined Networking seeks to centralize control of a large network. The abstractions around computer networking evolved from connecting nodes via switches, to the applications that run on top via the OSI model, to the controllers that manage the network. The controller abstraction was relatively weak – it had been the domain of telcos and ISPs – and as the networks of software-intensive companies like Google approached the size of telco networks, they moved to reinvent the controller stack. Traffic engineering and security, previously handled in disparate parts of the network, were centralized in order to better achieve economies of scale. Google adopted OpenFlow for this, developed by Nicira’s founders; Nicira was soon after acquired by VMware. Cisco’s internal discussions concluded that such a centralization wave would cut Cisco revenues in half, so it spun out Insieme Networks for SDN capabilities and quickly acquired it back. This has morphed into the APIC offering.

The centralization wave is somewhat at odds with the security and resilience of networks, which come in part from their inherently distributed and heterogeneous nature. Distributed systems provide availability, part of the security CIA triad, and for many systems availability trumps the other security properties. Centralized controllers would become attractive targets for compromise. This is despite the intention of SDN, as envisioned by Nicira founder M. Casado, to have security as its cornerstone, as described here. Casado’s problem statement is interesting: “We don’t have a ubiquitous and horizontal security layer that provides both context and isolation. Where do we normally put security controls? We put it in one of two places. We might put it in the physical infrastructure, which is great because you have isolation. If I have ACLs [access control lists] or a firewall or an IDS [intrusion detection system], I put it in a separate box and I put it away from the applications so that the attack surface is pretty small and it’s totally isolated… you have isolation, but you have no context. ..  Another place we put security is in the end host, an application or operating system. This suffers from the opposite problem. You have all the context that you need — applications, users, the data being accessed — but you have absolutely no isolation.” The centralization imperative comes from the need to isolate and minimize the trusted computing base.

In the short term, there may be some advantage to be gained from complexity reduction through centralized administration. But the recommendation of dumb switches that respond to a tightly controlled central brain goes against the tenet of compartmentalizing risk, and if such networks are put into practice widely, they can result in failures that are catastrophic instead of isolated.

The goal should instead be a distributed system that is also responsive.

Lessons from the SF Muni Ransomware

On Nov 25, a hacker going by “andy saolis” infected the San Francisco Municipal Transportation Agency’s (SFMTA) network with ransomware that encrypted data on 900 office computers, spreading through the agency’s Windows systems. Saolis threatened to publish 30 gigabytes of data, including contracts, employee data, and customer information. SFMTA’s ticketing system was shut down to prevent the malware from spreading. The attacker demanded a 100 Bitcoin ransom, around $73,000, to unlock the affected files. Salted Hash reported the malware is likely a variant of HDDCryptor, which uses commercial tools to encrypt hard drives and network shares.

The service was restored from backups. Now consider the same systems in an ICS scenario: the resulting unexpected downtime would be unacceptable.

Containers and Privileges

Cgroups limit how much you can do. Namespaces limit how much you can see.

Linux containers are based on cgroups and namespaces and can be privileged or unprivileged. A privileged container is one without a user namespace, implying it has direct visibility into all users of the underlying host. The user namespace allows remapping the user identities in the container, so even if a process thinks it is running as root, it is not root on the host.
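
A minimal sketch of that remapping, using only the Go standard library (Linux-only, and assuming unprivileged user namespaces are enabled on the host): the child process runs id and sees itself as uid 0, while outside the namespace it is still the unprivileged invoking user.

package main

import (
    "os"
    "os/exec"
    "syscall"
)

func main() {
    // Run "id" inside a new user namespace.
    cmd := exec.Command("id")
    cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
    cmd.SysProcAttr = &syscall.SysProcAttr{
        Cloneflags: syscall.CLONE_NEWUSER,
        // Map uid/gid 0 inside the namespace to the current (unprivileged) user outside,
        // so the process "thinks it is root" without being root on the host.
        UidMappings: []syscall.SysProcIDMap{{ContainerID: 0, HostID: os.Getuid(), Size: 1}},
        GidMappings: []syscall.SysProcIDMap{{ContainerID: 0, HostID: os.Getgid(), Size: 1}},
    }
    if err := cmd.Run(); err != nil {
        panic(err)
    }
}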

Using cgroups to limit fork bombs from Jessie’s talk:

$ sudo su
# mkdir -p /sys/fs/cgroup/pids/parent                 # create the cgroup first (pids controller assumed mounted here)
# echo 2 > /sys/fs/cgroup/pids/parent/pids.max
# echo $$ > /sys/fs/cgroup/pids/parent/cgroup.procs   # put the current shell's pid into the cgroup
# cat /sys/fs/cgroup/pids/parent/pids.current
2
# (echo "foobar" | cat )
bash: fork: retry: No child processes

Link to the 122-page paper on Linux container security here. It includes this quote on Linux kernel attacks:

“Kernel vulnerabilities can take various forms, from information leaks and Denial of Service (DoS) risks to privilege escalation and arbitrary code execution. Of the roughly 400 Linux system calls, a number have contained privilege escalation vulnerabilities, as recently as of 2016 with keyctl(2). Over the years this included, but is not limited to: futex(2), vmsplice(2), mremap(2), unmap(2), do_brk(2), splice(2), and modify_ldt(2). In addition to system calls, old or obscure networking code including but not limited to SCTP, IPX, ATM, AppleTalk, X.25, DECNet, CANBUS, Econet and NETLINK has contributed to a great number of privilege escalation vulnerabilities through various use cases or socket options. Finally, the “perf” subsystem, used for performance monitoring, has historically contained a number of issues, such as perf_swevent_init (CVE-2013-2094).”

This makes the case for seccomp, as containers, both privileged and unprivileged, can lead to bad outcomes:

“Containers will always (by design) share the same kernel as the host. Therefore, any vulnerabilities in the kernel interface, unless the container is forbidden the use of that interface (i.e. using seccomp)” – LXC Security Documentation by Serge Hallyn, Canonical

The paper has several links on restricting access, including grsecurity, SELinux, AppArmor and Firejail. A brief comparison of the first three is here. SELinux has a powerful access control mechanism – it attaches labels to all files, processes and objects; however, it is complex, and people often end up making things too permissive instead of taking advantage of the available controls. AppArmor works by labeling files by pathname and applying policies to the pathname – it is the recommended choice on SUSE/openSUSE, not CentOS. Grsecurity policies are described here; its ACLs support process-based resource restrictions, including memory, CPU, open files, etc.

Blockchain ideas

I think of bitcoin as a self-securing system. Value is created by solving the security problem of verifying the last block of bitcoin transactions. This verification serves as a decentralized stamp of approval; the verification step consists of hashing a nonce together with a block of transactions through a one-way hash function and arriving at a digest with a certain structure (which is hard because the hash output is effectively random, so meeting the structure requirement is a low-probability event).
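
A toy sketch of that structure requirement (illustrative only – real bitcoin double-SHA-256-hashes a binary block header and compares it against a numeric target rather than a hex prefix):

package main

import (
    "crypto/sha256"
    "fmt"
    "strconv"
    "strings"
)

// mine searches for a nonce such that sha256(blockData + nonce) starts with
// `difficulty` zero hex digits. Each extra digit makes success ~16x less likely.
func mine(blockData string, difficulty int) (int, string) {
    prefix := strings.Repeat("0", difficulty)
    for nonce := 0; ; nonce++ {
        digest := fmt.Sprintf("%x", sha256.Sum256([]byte(blockData+strconv.Itoa(nonce))))
        if strings.HasPrefix(digest, prefix) {
            return nonce, digest
        }
    }
}

func main() {
    nonce, digest := mine("block of transactions", 5)
    fmt.Println("nonce:", nonce, "hash:", digest)
}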

What happens if parties collude to gain greater hashing power and increase their share of mining? This is what happened with GPU (and later ASIC) mining farms on bitcoin. It was one of the motivations behind Ethereum, which enables code to run as part of transactions and whose hashing algorithm was designed to be memory-hard so that it is not easily accelerated by specialized mining hardware. Even so, it is not economical to mine on desktops, as the motivation seems to suggest.

The important aspect I think is the self-securing idea – how can a set of computational systems be designed so that they are incentivized to cooperate and become more secure as a result of that cooperation.

At a recent blockchain conference, some interesting topics of discussion were zero-knowledge proofs, consensus algorithms, greater member ownership in a network with network effects (instead of centralized rent collection), game-theoretic system designs, and various ethereum blockchain applications.

Update: this self-securing, self-sustaining view is explored further here: https://medium.com/@jordanmmck/cryptocurrencies-as-abstract-lifeforms-9e35138d63ed

The Dyn DNS DDOS Attack Oct 21

Dyn is an internet infrastructure company and DNS provider. It’s the name behind the widely used DynDNS. It provides DNS for Twitter, Visa, GitHub, MongoDB, Netflix and several other big tech sites.

Doug Madory, a researcher at Dyn, presented a talk on DDoS attacks in Dallas at the North American Network Operators Group (NANOG) 68 meeting – his was the last talk on Wednesday, Oct 19. The talk discussed the attack on Krebs on Security last month and detailed other such attacks.

On Friday, several sites serviced by Dyn were hit by a distributed denial-of-service (DDoS) attack.

The attack involved malicious DNS lookup requests from tens of millions of IP addresses, including a botnet of a large number of IoT devices infected with the Mirai malware, which brute-forces the default credentials of IoT devices. Some of the cameras involved have fixed passwords burned into the firmware that cannot be changed.

The implications of IoT devices that are 1) insecure, 2) impossible to secure, 3) infected by malware, and 4) enlisted in a botnet under malicious control are made clearer by this attack.

Update on DDoS mitigation: RFC 3882, Configuring BGP to Block Denial-of-Service Attacks, discusses the Remote Triggered Black Hole (RTBH) method, in which certain routers are configured to selectively drop high-volume malicious traffic targeting a particular IP. The target site is made inaccessible, but the rest of the network service stays active. An example configuration is at https://networkengineering.stackexchange.com/questions/10857/use-bgp-to-defend-against-a-ddos-attack-originating-from-remote-as . The needed improvement is to reduce the side effects of such blackholing, to enable faster recovery when the attack is over.