Author: Ruchir Tewari

Reinforcement learning

An Agent is in an Environment. a) The Agent reads Input (State) from the Environment. b) The Agent produces Output (Action) that affects its State relative to the Environment. c) The Agent receives a reward (or feedback) for the Output produced. With the reward/feedback it receives, it learns to produce better Output for a given Input.
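A minimal sketch of this loop in Python; ToyEnvironment and ToyAgent are hypothetical stand-ins, not a particular library:

import random

class ToyEnvironment:
    # The state is a bit the agent should match with its action.
    def reset(self):
        self.state = random.choice([0, 1])
        return self.state
    def step(self, action):
        reward = 1.0 if action == self.state else -1.0  # feedback for the Output
        self.state = random.choice([0, 1])              # next Input (State)
        return self.state, reward

class ToyAgent:
    def __init__(self):
        self.value = {0: [0.0, 0.0], 1: [0.0, 0.0]}     # estimated value per (state, action)
    def act(self, state):
        v = self.value[state]
        return 0 if v[0] >= v[1] else 1                 # b) produce Output (Action)
    def learn(self, state, action, reward):
        self.value[state][action] += 0.1 * reward       # c) learn from the reward

env, agent = ToyEnvironment(), ToyAgent()
state = env.reset()                                     # a) read Input (State)
for _ in range(100):
    action = agent.act(state)
    next_state, reward = env.step(action)
    agent.learn(state, action, reward)
    state = next_state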

Where do neural networks come in?

Optimal control theory considers control of a dynamical system such that an objective function is optimized (with applications including stability of rockets and helicopters). In optimal control theory, Pontryagin’s maximum principle gives a necessary condition for solving the optimal control problem: the control input must be chosen to optimize the control Hamiltonian (stated as maximization or minimization, depending on sign convention). This “control Hamiltonian” is inspired by the classical Hamiltonian and the principle of least action. The goal is to find an optimal control policy function u∗(t) and, with it, an optimal trajectory of the state variable x∗(t), which by Pontryagin’s maximum principle are the arguments that optimize the Hamiltonian.

Derivatives are needed for these continuous optimizations. Deep learning models can perform continuous linear and non-linear transformations, which in turn can approximate derivatives and integrals. They can be trained automatically using real-world inputs, outputs and feedback. So a neural network can provide a system for sophisticated feedback-based non-linear optimization of the map from Input space to Output space.

The above could be accomplished by a feedforward neural network trained with a feedback signal (reward). Additionally, a recurrent neural network could encode a memory into the system by referencing previous states (likely with higher training and convergence costs).

Model-free reinforcement learning does not explicitly learn a model of the environment.

Manifestations of RL: the Udacity self-driving course (lane detection), and Karpathy’s RL blog post, which explains a network structure whose output parameterizes a policy that can be improved directly by gradient ascent on expected reward – policy gradients.
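A minimal policy-gradient (REINFORCE-style) update in PyTorch, as a hedged sketch – the 4-dimensional state and 2 discrete actions are hypothetical:

import torch
import torch.nn as nn

# Hypothetical problem: 4-dimensional state, 2 discrete actions.
policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reinforce_update(states, actions, returns):
    # Increase the log-probability of each action in proportion
    # to the return that followed it (the policy-gradient estimator).
    log_probs = torch.log_softmax(policy(states), dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = -(chosen * returns).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()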

Practical issues in Reinforcement Learning –

Raw inputs vs model inputs: There is the problem of mapping inputs from real-world to the actual inputs to a computer algorithm. Volume/quality of information – high vs low requirement.

Exploitation vs exploration dilemma: https://en.wikipedia.org/wiki/Multi-armed_bandit. Simple exploration methods are the most practical. With probability ε, exploration is chosen, and the action is chosen uniformly at random. With probability 1 − ε, exploitation is chosen, and the agent chooses the action that it believes has the best long-term effect (ties between actions are broken uniformly at random). ε is usually a fixed parameter but can be adjusted either according to a schedule (making the agent explore progressively less), or adaptively based on heuristics.
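A sketch of this ε-greedy rule in Python:

import random

def epsilon_greedy(q_values, epsilon=0.1):
    # With probability epsilon: explore (uniformly random action).
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    # Otherwise: exploit, breaking ties uniformly at random.
    best = max(q_values)
    ties = [i for i, q in enumerate(q_values) if q == best]
    return random.choice(ties)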

AWS DeepRacer allows exploration of RL. It simplifies the mapping of camera input to computer input, so one can focus more on the reward function and deep learning aspects. The car has a set of possible actions (change heading, change speed); the RL task is to predict the actions based on the inputs.

What are some of the strategies applied to winning DeepRacer?

Reward function input parameters – https://docs.aws.amazon.com/deepracer/latest/developerguide/deepracer-reward-function-input.html
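As an illustration, a minimal reward function in the documented interface (a params dict in, a float out), along the lines of the well-known follow-the-center-line example; the band thresholds are arbitrary choices:

def reward_function(params):
    # Two of the documented input parameters (see the link above).
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']

    # Reward staying close to the center line, in three bands.
    if distance_from_center <= 0.1 * track_width:
        return 1.0
    elif distance_from_center <= 0.25 * track_width:
        return 0.5
    elif distance_from_center <= 0.5 * track_width:
        return 0.1
    return 1e-3  # likely off track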

“DeepRacer: Educational Autonomous Racing Platform for Experimentation with Sim2Real Reinforcement Learning” – https://arxiv.org/pdf/1911.01562.pdf

RL is not a fit for every problem. Alternative approaches with better explainability and determinism include behavior trees, vectorization/VectorNet, …

DeepMind says reinforcement learning is ‘enough’ to reach general AI – https://news.ycombinator.com/item?id=27456315

Richard Sutton and Andrew Barto’s book, Reinforcement Learning: An Introduction.

This paper explores incorporating the Attention mechanism with Reinforcement learning – “Reinforcement Learning with Attention that Works: A Self-Supervised Approach”. A video review of ‘Attention is all you need’ covers the idea of replacing an RNN with a mechanism to selectively track a few relevant things.

Multi-agent Deep Deterministic Policy Gradients – cooperation between agents. https://www.youtube.com/watch?v=tZTQ6S9PfkE. Agents learn a centralized critic based on the observations and actions of all agents. https://arxiv.org/pdf/1706.02275.pdf

Multi-vehicle RL for multi-lane driving. https://arxiv.org/pdf/1911.11699v1.pdf

Reinforcement learning in chip design

Deep learning is being applied to combinatorial optimization problems. A very intriguing talk by Anna Goldie discussed an application of RL to chip design that cuts down the time for layout optimization, which in turn enables optimizing the chip design for a target software stack in simulation before the chip goes to production. Here’s the paper – “A graph placement methodology for fast chip design”.

A snippet on how the research direction evolved into a learning problem:

Chip floorplanning as a learning problem

“The underlying problem is a high-dimensional contextual bandits problem but, as in prior work, we have chosen to reformulate it as a sequential Markov decision process (MDP), because this allows us to more easily incorporate the problem constraints as described below. Our MDP consists of four key elements:
(1) States encode information about the partial placement, including the netlist (adjacency matrix), node features (width, height, type), edge features (number of connections), current node (macro) to be placed, and metadata of the netlist graph (routing allocations, total number of wires, macros and standard cell clusters).
(2) Actions are all possible locations (grid cells of the chip canvas) onto which the current macro can be placed without violating any hard constraints on density or blockages.
(3) State transitions define the probability distribution over next states, given a state and an action.
(4) Rewards are 0 for all actions except the last action, where the reward is a negative weighted sum of proxy wirelength, congestion and density, as described below.

We train a policy (an RL agent) modelled by a neural network that, through repeated episodes (sequences of states, actions and rewards), learns to take actions that will maximize cumulative reward (see Fig. 1).
We use proximal policy optimization (PPO) to update the parameters of the policy network, given the cumulative reward for each placement.”
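As a toy illustration of this reward shape – zero for intermediate actions, a negative weighted sum of the proxies for the final action; the weights below are hypothetical, not the paper’s:

def placement_reward(step, final_step, wirelength, congestion, density,
                     w_wl=1.0, w_cong=0.5, w_dens=0.5):
    # Intermediate actions earn nothing; only the finished placement is scored.
    if step < final_step:
        return 0.0
    return -(w_wl * wirelength + w_cong * congestion + w_dens * density)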

Their diagram:

“An embedding layer encodes information about the netlist adjacency, node features and the current macro to be placed. The policy and value networks then output a probability distribution over available grid cells and an estimate of the expected reward for the current placement, respectively. id: identification number; fc: fully connected layer; de-conv: deconvolution layer”

A graph placement methodology for fast chip design | Nature

“Fig. 1 | Overview of our method and training regimen. In each training iteration, the RL agent places macros one at a time (actions, states and rewards are denoted by ai, si and ri, respectively). Once all macros are placed, the standard cells are placed using a force-directed method. The intermediate rewards are zero. The reward at the end of each iteration is calculated as a linear combination of the approximate wirelength, congestion and density, and is provided as feedback to the agent to optimize its parameters for the next iteration.”

The references mention a number of applications of ML to chip design. A project exploring these is The OpenROAD Project at https://github.com/The-OpenROAD-Project, with an overview deck at https://theopenroadproject.org/wp-content/uploads/2021/11/demo-lounge-slides.pdf

ML for Forecasting

In this paper – “DeepAR: Probabilistic Forecasting with Autoregressive Recurrent Networks”, the authors discuss a method for learning a global model from several individual time series.

Let’s break down some aspects of the approach and design.

“In probabilistic forecasting one is interested in the full predictive distribution, not just a single best realization, to be used in downstream decision making systems.”

The autoregressive model specifies that the output variable depends linearly on its own previous values and on a stochastic term (an imperfectly predictable term).
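In symbols, an AR(p) model is

x(t) = c + φ1·x(t−1) + … + φp·x(t−p) + ε(t)

where φ1…φp are learned coefficients and ε(t) is the stochastic (white-noise) term.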

“Recurrent Neural Network” refers to NNs with an infinite impulse response; they are used for speech recognition, handwriting recognition and similar tasks involving sequences. https://en.wikipedia.org/wiki/Recurrent_neural_network

The Long Short-Term Memory (LSTM) is a type of RNN that came about to solve the vanishing-gradient problem of earlier RNN designs. An LSTM cell can process data sequentially and keep its hidden state through time.
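A minimal PyTorch sketch of an LSTM consuming a batch of sequences (the shapes are illustrative):

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
x = torch.randn(4, 20, 8)     # 4 sequences, 20 time steps, 8 features each
out, (h, c) = lstm(x)         # out: (4, 20, 16); (h, c) carry the state through time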

A covariate is an independent random variable with which the target random variable is assumed to have some covariance.

The approach has distinct features, described in this snippet –

“In addition to providing better forecast accuracy than previous methods, our approach has a number of key advantages compared to classical approaches and other global methods: (i) As the model learns seasonal behavior and dependencies on given covariates across time series, minimal manual feature engineering is needed to capture complex, group-dependent behavior; (ii) DeepAR makes probabilistic forecasts in the form of Monte Carlo samples that can be used to compute consistent quantile estimates for all sub-ranges in the prediction horizon; (iii) By learning from similar items, our method is able to provide forecasts for items with little or no history at all, a case where traditional single-item forecasting methods fail; (iv) Our approach does not assume Gaussian noise, but can incorporate a wide range of likelihood functions, allowing the user to choose one that is appropriate for the statistical properties of the data.
Points (i) and (iii) are what set DeepAR apart from classical forecasting approaches, while (ii) and (iv) pertain to producing accurate, calibrated forecast distributions learned from the historical behavior of all of the time series jointly, which is not addressed by other global methods (see Sec. 2). Such probabilistic forecasts are of crucial importance in many applications, as they—in contrast to point forecasts—enable optimal decision making under uncertainty by minimizing risk functions, i.e. expectations of some loss function under the forecast distribution.”
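Point (ii) can be made concrete in a few lines of NumPy: given Monte Carlo sample paths (random walks below as placeholders standing in for DeepAR output), quantile estimates for any sub-range of the horizon are just quantiles over the samples:

import numpy as np

samples = np.random.randn(200, 24).cumsum(axis=1)   # 200 paths, 24-step horizon

p10, p50, p90 = np.quantile(samples, [0.1, 0.5, 0.9], axis=0)  # per-step quantiles
week_total = samples[:, :7].sum(axis=1)     # a derived quantity over a sub-range
p90_week = np.quantile(week_total, 0.9)     # still just a quantile of the samples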

Facebook Prophet is an open-source library for forecasting – https://facebook.github.io/prophet/

ARMA – the AutoRegressive Moving Average model.

ARIMA estimator – AutoRegressive Integrated Moving Average is a generalization of ARMA and can better handle non-stationarity in a time series.
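A hedged minimal example with statsmodels (assuming it is installed); order=(p, d, q) sets the AR order, the degree of differencing, and the MA order:

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

y = np.random.randn(200).cumsum()        # a toy non-stationary series
model = ARIMA(y, order=(2, 1, 1)).fit()  # ARIMA(2,1,1): difference once
forecast = model.forecast(steps=10)      # point forecasts, 10 steps ahead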

DevSecOps – Securing the Software Supply Chain

A position paper from CNCF on securing the software supply chain talks about hardening the software construction process by hardening each of the links in the software production chain –

https://github.com/cncf/tag-security/blob/main/supply-chain-security/supply-chain-security-paper/CNCF_SSCP_v1.pdf

Quote – “To operationalize these principles in a secure software factory several stages are needed. The software factory must ensure that internal, first party source code repositories and the entities associated with them are protected and secured through commit signing, vulnerability scanning, contribution rules, and policy enforcement. Then it must critically examine all ingested second and third party materials, verify their contents, scan them for security issues, evaluate material trustworthiness, and material immutability. The validated materials should then be stored in a secure, internal repository from which all dependencies in the build process will be drawn. To further harden these materials for high assurance systems it is suggested they should be built directly from source.

Additionally, the build pipeline itself must be secured, requiring the “separation of concerns” between individual build steps and workers, each of which are concerned with a separate stage in the build process. Build Workers should consider hardened inputs, validation, and reproducibility at each build. Finally, the artifacts produced by the supply chain must be accompanied by signed metadata which attests to their contents and can be verified independently, as well as revalidated at consumption and deployment.”

The issue is that software development is a highly collaborative process. Walking down the chain and ensuring the ingested software packages are bug-free is where it gets challenging.

The Department of Defense Enterprise DevSecOps Reference Design speaks to the aspect of securing the build pipeline –

https://dodcio.defense.gov/Portals/0/Documents/DoD%20Enterprise%20DevSecOps%20Reference%20Design%20v1.0_Public%20Release.pdf?ver=2019-09-26-115824-583

The DoD Container Hardening Guide referenced in the CNCF doc is at –

https://software.af.mil/wp-content/uploads/2020/10/Final-DevSecOps-Enterprise-Container-Hardening-Guide-1.1-Public-Release.pdf

which has a visual Iron Bank flow diagram on p.20

Distributed Training

Distributed training aims to reduce the training time of a model in machine learning by splitting the training workload across multiple nodes. As both data and model sizes have grown, distributed training has become an area of focus in ML. Training consists of iteratively minimizing an objective function by running the data through a model and determining a) the error and the gradients with which to adjust the model parameters (the forward and backward passes) and b) the updated model parameters using the calculated gradients (the update step). The latter step always requires synchronization between the nodes; in some cases the former also requires communication.

There are three approaches to distributed training – data parallelism, model parallelism and data-model parallelism. Data parallelism is more common, and preferred if the model fits in GPU memory.

In data parallelism, we partition the data onto different GPUs and run the same model on each data partition. The same model is present on all GPU nodes, and no communication between nodes is needed on the forward path. The calculated gradients are sent to a parameter server, which averages them, and the updated parameters are retrieved by all the nodes to update their replicas to the same incrementally updated model.
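As a hedged sketch, the same idea with PyTorch’s DistributedDataParallel, which averages gradients with allreduce rather than a parameter server (assumes one process per GPU, launched e.g. with torchrun):

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group("nccl")             # one process per GPU
local_rank = int(os.environ["LOCAL_RANK"])  # set by the torchrun launcher
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(128, 10).cuda()
model = DDP(model, device_ids=[local_rank])
# Each process reads a different shard of the data; on loss.backward(),
# DDP averages the gradients across processes so all replicas stay in sync.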

In model parallelism, we partition the model itself into parts and run these on different GPUs.

To communicate intermediate results between nodes, MPI primitives such as AllReduce are leveraged.
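The core collective, sketched with torch.distributed (which wraps NCCL/Gloo/MPI-style backends; assumes a process group is already initialized):

import torch
import torch.distributed as dist

def average_gradient(grad: torch.Tensor) -> torch.Tensor:
    # Allreduce: every rank contributes its tensor and receives the sum...
    dist.all_reduce(grad, op=dist.ReduceOp.SUM)
    # ...then dividing by the world size yields the average on every rank.
    grad /= dist.get_world_size()
    return grad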

The amount of training data for BERT is ~600GB. The BERT-Tiny model is 17MB; the BERT-Base model is ~400MB. During training, a GPU with 16GB of memory can see an OOM error.

Some links to resources –

https://andrew.gibiansky.com/blog/machine-learning/baidu-allreduce/

https://github.com/horovod/horovod/blob/master/docs/concepts.rst

https://medium.com/pytorch/how-lyft-uses-pytorch-to-power-machine-learning-for-their-self-driving-cars-80642bc2d0ae

https://docs.aws.amazon.com/sagemaker/latest/dg/distributed-training.html

https://aws.amazon.com/blogs/machine-learning/launching-tensorflow-distributed-training-easily-with-horovod-or-parameter-servers-in-amazon-sagemaker/

https://openai.com/blog/scaling-kubernetes-to-2500-nodes/

https://towardsdatascience.com/distributed-deep-learning-training-with-horovod-on-kubernetes-6b28ac1d6b5d

https://mccormickml.com/2019/11/05/GLUE/ Origin of General Language Understanding Evaluation.

https://github.com/google-research/bert

Horovod core principles are based on the MPI concepts size, rank, local rank, allreduce, allgather, and broadcast. These are best explained by example. Say we launched a training script on 4 servers, each having 4 GPUs. If we launched one copy of the script per GPU:

  • Size would be the number of processes, in this case, 16.
  • Rank would be the unique process ID from 0 to 15 (size – 1).
  • Local rank would be the unique process ID within the server from 0 to 3.
  • Allreduce is an operation that aggregates data among multiple processes and distributes results back to them. Allreduce is used to average dense tensors. (See the MPI Tutorial for an illustration.)
  • Allgather is an operation that gathers data from all processes in a group and then sends the data back to every process. Allgather is used to collect values of sparse tensors. (See the MPI Tutorial for an illustration.)
  • Broadcast is an operation that broadcasts data from one process, identified by root rank, onto every other process. (See the MPI Tutorial for an illustration.)
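A minimal Horovod/PyTorch skeleton tying these concepts together (a sketch, not a complete training script):

import horovod.torch as hvd
import torch

hvd.init()
torch.cuda.set_device(hvd.local_rank())   # pin each process to one local GPU

model = torch.nn.Linear(128, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Wrap the optimizer so gradients are averaged across workers via allreduce,
# and broadcast the initial state from rank 0 so all workers start identical.
optimizer = hvd.DistributedOptimizer(optimizer,
                                     named_parameters=model.named_parameters())
hvd.broadcast_parameters(model.state_dict(), root_rank=0)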

Multimodal neurons and typographic attacks

https://openai.com/blog/multimodal-neurons/

ML training on images and text together leads to certain neurons holding information about both images and text – multimodal neurons.

When the type of the detected object can be changed by tricking the model into recognizing a textual description instead of the visual content (e.g. an apple with a paper label reading “iPod”), that can be called a typographic attack.

Intriguing concepts indicating that a fluid crossover from text to images and back is almost here.

Bitcoin market cap reaches $1T

Bitcoin reached a $1T market cap last month. https://www.msn.com/en-us/news/technology/bitcoin-reaches-dollar1-trillion-valuation-twice-as-fast-as-amazon/ar-BB1fF3Bl

A Bitcoin halving event is scheduled to take place every 210,000 blocks. This reduces the payoff of securing a block by half. Three Bitcoin halvings have taken place so far in 2012, 2016, 2020. The next halving is predicted to occur in 2024. The corresponding block reward went from 50btc in 2009 to 25 in ‘12, 12.5 in ‘16, 6.25 in ‘20 and 3.125 in ‘24. https://www.coinwarz.com/mining/bitcoin/halving
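The schedule is simple to compute; a sketch (Bitcoin’s consensus code does an integer right-shift on satoshi amounts, so this float version is only illustrative):

def block_reward_btc(height: int) -> float:
    # The block subsidy halves every 210,000 blocks, starting at 50 BTC.
    return 50.0 / (2 ** (height // 210_000))

block_reward_btc(0)        # 50.0  (2009)
block_reward_btc(210_000)  # 25.0  (first halving, 2012)
block_reward_btc(840_000)  # 3.125 (fourth halving, 2024)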

The rate of production of bitcoin decreases over time according to this schedule. Mining will continue until the cap of 21 million BTC is reached.

VeChain is a blockchain proposal/implementation for supply chain tracking.

https://cdn.vechain.com/vechainthor_development_plan_and_whitepaper_en_v1.0.pdf

EdgeChain is an architecture for placing applications on the edge among multiple resources from multiple providers. It is built on VeChain for Mobile and Edge Computing (MEC) use cases.

https://vechainofficial.medium.com/vechain-introduces-michigan-state-university-as-first-veresearch-participant-and-pioneering-the-mec-c8dec3015914

Disaster Recovery: Understanding and designing for RPO and RTO

Let’s take a disaster scenario where a system loses its data-in-transit (i.e. not yet persisted) at a certain point in time, and some time after this point a recovery process kicks in, which restores the system back to normal functioning.

Recovery Point Objective (RPO) refers to the amount of tolerable data loss, measured in time. It can be measured in time because the in-transit data has a certain maximum velocity, so bounding the time bounds the amount of data that can be lost. This time figure, the RPO, determines how frequently the data must be persisted and replicated. An RPO of 10 minutes implies the data must be backed up every 10 minutes; if there is a crash, the system can be restored to a point not more than 10 minutes before the time of the crash. RPO determines the frequency of backups, snapshots or transaction logs.

Recovery Time Objective (RTO) refers to the amount of time required to restore a system to normal behavior after a disaster has happened. This includes the restoration of all infrastructure components that provide a service, not just the restoration of data.

Lower RPO/RTO implies higher cost.

A matrix of RPO (high/low) vs RTO (high/low) can be used to categorize applications.

Low RPO, Low RTO: a critical online application, like a storefront.

Low RPO, High RTO: a data-sensitive application that is not online, like analytics.

High RPO, Low RTO: redundantly available data or no data, e.g. highly available compute clusters.

High RPO, High RTO: non-prod systems – dev/test/qa?

The amount of acceptable data loss is determined by the criticality of the application (or rather of its data).
One can expect a pyramid of apps – a large number with low criticality, a small number with high criticality.

Repeatability: backup and recovery procedures must be written down, tested, and automated.

HA/DR spectrum of solutions:

  • Backups, save transaction logs
  • Snapshots
  • Replication – synchronous, asynchronous
  • Storage only vs in-memory as well. Application level crash consistency of backups.
  • Multiple AZs
  • Hybrid

Tech: S3 versioning and DDB streams, Global tables.

Rules of thumb:

Related terms: RPA (Recovery Point Actual) and RTA (Recovery Time Actual) – the values actually achieved, as opposed to the objectives.

Three types of disasters:

  • Natural disaster – e.g. floods, earthquakes, fire
  • Technical failure – e.g. loss of power, cable pulled
  • Human error – e.g. delete all files as admin

Replication works for the first two; continuous snapshots/backups/versioning for the last one, since replication will just propagate the deletion to both sides. One needs the ability to go back in time and restore the data.

Cost – how to optimize cost of infrastructure and its maintenance.

Which region to choose? Key considerations: what types of disasters are the concern (risk); how much proximity is needed to end-customers and to the primary region (performance); and what is the cost of the region (cost)?

Processors for Deep Learning: Nvidia Ampere GPU, Tesla Dojo, AWS Inferentia, Cerebras

The NVIDIA Volta V100 GPU released in Dec 2017 was the first microprocessor with dedicated cores purely for matrix computations, called Tensor Cores. The Ampere A100 GPU released in May 2020 is its successor; the A100 has 108 Streaming Multiprocessors (SMs) with 4 Tensor Cores (TCs) each, for a total of 432 TCs. Tensor Cores reduce the cycle time for matrix multiplications, operating on 4×4 matrices of 16-bit floating point numbers. These GPUs are aimed at Deep Learning use cases, which consist of a pipeline of matrix operations.

Here’s an article on choosing the right EC2 instance type for DL – https://towardsdatascience.com/choosing-the-right-gpu-for-deep-learning-on-aws-d69c157d8c86 (G4 for inferencing, P4 for training).

How did the need for specialized DL chips arise, and why are Tensors important in DL? In math we have Scalars and Vectors: Scalars express magnitude, while Vectors encode magnitude and direction. To transform Vectors, one applies Linear Transformations in the form of Matrices. Matrices for Linear Transformations have Eigenvectors and Eigenvalues, which describe the invariants of the transformation. A Tensor in math and physics is a concept that exhibits certain types of invariance during transformations. In 3 dimensions, a Stress Tensor has 9 components, which can be represented as a 3×3 matrix; under a change of basis the components of the tensor change, but the tensor itself does not.
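For example, under a rotation R of the basis, the 3×3 component matrix σ transforms as

σ′ = R σ Rᵀ

while invariants such as the trace and the eigenvalues of σ are preserved – the physical stress state itself is unchanged.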

In Deep Learning applications a Tensor is basically a multi-dimensional array, i.e. a generalized matrix. The General Matrix Multiplication (GEMM) operation, D = A×B + C, is at the heart of Deep Learning, and Tensor Cores are designed to speed it up.
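In PyTorch the same GEMM is a single call; with FP16 inputs on a Volta/Ampere GPU this is the shape of work Tensor Cores accelerate (a sketch; requires a CUDA device):

import torch

A = torch.randn(64, 32, dtype=torch.float16, device="cuda")
B = torch.randn(32, 48, dtype=torch.float16, device="cuda")
C = torch.randn(64, 48, dtype=torch.float16, device="cuda")
D = torch.addmm(C, A, B)   # D = A x B + C, one fused GEMM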

In Deep Learning, multilinear maps are interleaved with non-linear transforms to model arbitrary transformations of input to output, and a specific model is arrived at by a process of error reduction while training on actual data. This PyTorch Deep Learning page is an excellent resource for transitioning from traditional linear algebra to deep learning software – https://pytorch.org/tutorials/beginner/nlp/deep_learning_tutorial.html

Tesla Dojo is a planned processor/computer dedicated to Deep Learning, to train on vast amounts of video data. It was presented at Tesla AI Day in August 2021; a video is at https://www.youtube.com/watch?v=DSw3IwsgNnc

AWS Inferentia is a chip for deep learning inference, with four NeuronCores per chip.

AWS Trainium is an ML chip for training.

Generally speaking, the desire in the deep learning community is to have simpler processing units in larger numbers.

Updates: Cerebras announced a chip with 850,000 AI-optimized cores that can handle neural networks with 120 trillion parameters.

SambaNova, Anton, Cerebras and Graphcore presentations are at https://www.anandtech.com/show/16908/hot-chips-2021-live-blog-machine-learning-graphcore-cerebras-sambanova-anton

SambaNova is building 400,000 AI cores per chip.

Delta Lake and Spark for threat detection and response at scale

Notes on a talk about a data platform for threat detection and response at scale, given at Spark+AI Summit 2018.

The threat detection problem, use-cases and scale.

  • It’s important to focus on and build the data platform first, else one can get siloed into a narrow set of addressable use-cases.
  • We want to detect attacks,
  • contextualize the attacks,
  • determine the root cause of an attack,
  • determine what the scope of the incident might be,
  • and determine what we need to contain it.
  • Diverse threats require diverse data sets.
  • The threat signal can be concentrated or spread in time.
  • The Keylines visualization library is used to build a visualization of detection, contextualization and containment.

Streaming is a simple pattern that takes us very far for detection

  • Streams are left-joined with context and filtered or inner-joined with indicators
  • Can do a lot with this but not everything
  • Graphs are key. Graphs at scale are super hard.

Enabling triage and containment with search and query

  • To triage the detection, it comes down to search and query.
  • ETM does 3.5 million records/sec, 100TB of data a day, 300B events a day.
  • 11 trillion rows, 0.5PB of data.

Ingestion architecture – tries to solve all these problems and balance issues.

  • Data comes into S3 in a consistent JSON wrapper.
  • There is a single ETL job that takes all the data and writes it into a single staging table, partitioned by date and event-type, with a long retention.
  • The table is optimized to stream new data in and stream data out of, but can be queried as well, using SQL.
  • For the highest-value data we write parsers: discrete parsing streams that put the data into a common schema in separate Delta tables – well parsed, well structured.
  • We use optimizations from Delta, like z-ordering, to index over columns that are common predicates; we search by IP address and domain names, so those are what we order by.
  • Indexing and z-ordering take advantage of data skipping.
  • Sometimes the parser code gets messed up. The single staging table is great here: we just let the fixed parser run forward, and all the data is back-corrected. We don’t have to repackage code and run it as a batch job – we literally just fix the code and run it in the same model.
  • Off of these refined tables, or parsed data sets, is where the detection comes in.
  • We have a number of detection streams and batches that do the logic and analysis, rule-based or statistical.
  • Alerts that come out of this go to their own alert table, in Delta again, with long retention and a consistent schema. Another streaming job then does de-duplication and whitelisting and writes the alerts out to our alert management system. We durably store all the alerts, whether or not they are de-duped/whitelisted.
  • This allows us to go back and fix things if something is accidentally not quite correct.
  • All this gives us operational sanity, and a nice feedback loop.

Thanks to z-ordering, it can go from scanning 500TB to 36TB.

  • The average case is searching over weeks or months; this makes it usable for ad-hoc refinements.
  • Simple, unified platform.

Michael: Demo on interactive queries over months of data

  • The first attempt is a SQL SELECT on the raw data: it takes too long and is cancelled. A second attempt uses the Hive metastore (HMS): still too long, cancelled. Why is this so hard?
  • Observation: every data problem is actually two problems, 1) data engineering and 2) data science. Most projects fail on step 1.
  • Doing it with Delta, the following command takes 17s and fires off a Spark job to put the data in a common schema:

CREATE TABLE connections USING delta AS SELECT * FROM json.`/data/connections`

then

SELECT * FROM connections WHERE dest_port = 666

This is great for querying the historical data quickly. However, batch alone is not going to cut it, as we may have attacks going on right now – but Delta plugs into streaming as well:

INSERT INTO connections SELECT * FROM kafkaStream

Now we’ve indexed both batch and streaming data.
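A hedged PySpark sketch of the same pattern (the broker, topic and paths are placeholders, and spark is the session Databricks predefines; a real job would first parse the Kafka value column into the table’s schema):

stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "connections")
          .load())

(stream.writeStream
       .format("delta")
       .option("checkpointLocation", "/checkpoints/connections")
       .toTable("connections"))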

We can run a python netviz command to visualize the connections.

Here’s a paper on the Databricks Delta Lake approach – https://databricks.com/wp-content/uploads/2020/08/p975-armbrust.pdf

Supply Chain Logistics and SAP TM

SAP Transportation Management or SAP TM is a module used for Supply Chain Optimization.

SAP TM has four different optimizer engines –

VSR Optimizer: plans shipments in the best possible way on available vehicles via available routes. The TVSR (vehicle scheduling and routing), TVSS and TVRG applications come under this.

Load Optimizer: arranges pallets or packages on the vehicle, considering rules like stackability. (TVSO application)

Carrier Selection: ranks carriers for each shipment, considering costs, business shares and allocations. (TSPS application)

Strategic Freight Management: ranks bids by carriers for long-term contracts based on cost, capacity and risk. (TSFM application)

The need for Transportation Management as a service is justified by several use cases.

Many leading car manufacturers and other companies whose business models are susceptible to disruption have recently announced that they are adopting TaaS platforms (through in-house development efforts, partnerships, or acquisitions) to provide services.

The role of APIs in modernizing supply chain systems from legacy EDI based designs – https://www.coupa.com/blog/supply-chain/tech-forward-apis-emerging-player-supply-chain

A comparison of API vs EDI systems – https://arcb.com/blog/edi-vs-api-which-is-right-for-my-business

Some definitions from Wikipedia to clarify concepts –

Logistics is generally the detailed organization and implementation of a complex operation. In a general business sense, logistics is the management of the flow of things between the point of origin and the point of consumption to meet the requirements of customers or corporations.

The resources managed in logistics may include tangible goods such as materials, equipment, and supplies, as well as food and other consumable items.

Logistics management is the part of supply chain management and supply chain engineering that plans, implements, and controls the efficient, effective forward and reverse flow and storage of goods, services, and related information between the point of origin and point of consumption to meet customers’ requirements. The complexity of logistics can be modeled, analyzed, visualized, and optimized by dedicated simulation software.

The minimization of the use of resources is a common motivation in all logistics fields.

A supply chain is the connected network of individuals, organizations, resources, activities, and technologies involved in the manufacture and sale of a product or service.

How can we be better prepared for a future crisis relative to supply chains?

Private companies have playbooks for supply chain disruptions in their network. In supply chain management, it is crucial to diversify your source of supplies so that when one supplier is impacted, you can turn to the other.

Kubernetes security

Kubernetes is a Platform-as-a-Service (PaaS) built on top of (Docker) containers, but with an additional unit of abstraction called a pod, which a) is its smallest unit of execution, b) has a single external IP address, c) is a group of one or more containers, where d) the group of containers is connected over a network namespace and e) each pod is isolated from other pods by its network namespace. Within a pod, different containers see each other over different ports on a loopback interface; within an instance, different pods see each other as different IP addresses. Kubernetes has a control plane built on top of etcd, a consistent, distributed, highly available key-value store, which is an independent open-source CNCF project.

It is conceptually similar to Cloud Foundry, Mesos, OpenStack, Mirantis and similar abstraction layers.

A threat matrix for Kubernetes by MS – https://www.microsoft.com/security/blog/2020/04/02/attack-matrix-kubernetes/

From RSA’20, here’s a talk on ‘The future of Kubernetes attacks’ – https://youtu.be/CH7S5rE3j8w

From Coinbase, a blog on ‘Why Kubernetes is not part of our stack’ – https://blog.coinbase.com/container-technologies-at-coinbase-d4ae118dcb6c – makes these points:

  • it needs a full-time compute team to maintain
  • securing it is neither trivial nor well understood. SPIFFE, SPIRE, Envoy, Istio, OPA, service mesh are a few of the technologies.

This blog links to – https://k8s.af/

Another viewpoint – https://pythonspeed.com/articles/dont-need-kubernetes/

A counterpoint to the Coinbase blog – https://blog.kumina.nl/2020/07/in-response-to-container-technologies-at-coinbase/

Scratch notes:

K8S is based on a Controller pattern:

  • Resources capture the desired state
  • Current state is kept centralized in etcd, a distributed key-value store, similar to Consul
  • Controllers reconcile current state with desired state

A Pod is a top-level resource, the smallest deployment unit, and a group of one or more containers described by a YAML file, similar to docker-compose.yml.

A K8S Operator is a kind of resource manager for custom resources.

https://blog.frankel.ch/your-own-kubernetes-controller/1/

https://pushbuildtestdeploy.com/when-do-kubernetes-operators-make-sense

Spinnaker is a Continuous Delivery platform that itself runs on k8s as a set of pods, which can be scaled up.

A kubectl cheat sheet:

https://kubernetes.io/docs/reference/kubectl/cheatsheet

An article on cloud security https://medium.com/xm-cyber/having-fun-with-cloud-services-e281f8a7fe60 , which I think makes the point of why things are relatively complex to begin with.

One comes across the terms helm and helm charts. Helm is a way to package a complex k8s application. This adds a layer of indirection to an app – https://stepan.wtf/to-helm-or-not/ .

A repo to list failing pods – https://github.com/edrevo/suspicious-pods

Exploring networking in k8s – https://dustinspecker.com/posts/how-do-kubernetes-and-docker-create-ip-addresses/

Plugin for Pod networking on EKS using ENIs – https://github.com/aws/amazon-vpc-cni-k8s

RSA World 2020 – QRNG based crypto keys

Creating my first Quantum Crypto keys. The QC hardware provides the entropy source for the generation of a key, which is encapsulated using one of several available mechanisms. The key encapsulation mechanism chosen was Classic McEliece. Keys were emailed to me after generation at the Cambridge Quantum Computing booth at RSA World 2020.

Quantum-Proof Cryptography with IronBridge, TKET and Amazon Braket

Cambridge Quantum Computing has developed the first provable QRNG, known as IronBridge, which uses quantum computers to generate unbiased private entropy. IronBridge generates cryptographic keys using this entropy, resulting in quantum-proof keys (for both classical and post-quantum algorithms).

Their paper on “Practical randomness and privacy amplification” – https://arxiv.org/pdf/2009.06551.pdf

Links on McEliece, Goppa codes, FrodoKEM and PQC –

https://en.wikipedia.org/wiki/McEliece_cryptosystem

https://en.wikipedia.org/wiki/Binary_Goppa_code

FrodoKEM: https://eprint.iacr.org/2018/686.pdf – In 2016, Bos et al. proposed the key exchange scheme FrodoCCS, which was also a submission to the NIST post-quantum standardization process, modified as a key encapsulation mechanism (FrodoKEM). The security of the scheme is based on standard lattices and the learning-with-errors problem.

PQ Crypto Catalog: https://github.com/kriskwiatkowski/pqc has implementations of quantum-safe signature and KEM schemes submitted to the NIST PQC standardization process.

AWS Builders Library

A few interesting ideas from the AWS Builders Library –

Toyota’s Five Whys approach to root-causing a problem is good, but not enough to find all the other root causes that might also contribute to the problem.

A couple of great talks on serverless.

Smithy is an Apache-2.0 licensed, protocol-agnostic IDL for defining APIs, generating clients, servers and documentation. https://github.com/awslabs/smithy