LLM evolution – Anthropic, AI21, Cohere, GPT-4

https://github.com/Mooler0410/LLMsPracticalGuide

Source paper – Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond

Pink branch is encoder only. Green branch is encoder-decoder. Blue branch is decoder-only.

This is consistent with the Generative aspect of the blue branch. But it does not explain the emergent properties at the top of the blue tree.

LLM leaderboard – https://chat.lmsys.org/?leaderboard

Stanford HELM (holistic evaluation of LMs) – https://crfm.stanford.edu/helm/latest/?models=1

More on emergent properties in links below.

https://yaofu.notion.site/How-does-GPT-Obtain-its-Ability-Tracing-Emergent-Abilities-of-Language-Models-to-their-Sources-b9a57ac0fcf74f30a1ab9e3e36fa1dc1

https://openai.com/research/solving-math-word-problems : Autoregressive models, which generate each solution token by token, have no mechanism to correct their own errors. Solutions that veer off-course quickly become unrecoverable, as can be seen in the examples provided. We address this problem by training verifiers to evaluate the correctness of model-generated solutions. Verifiers are given many possible solutions, all written by the model itself, and they are trained to decide which ones, if any, are correct.

Language Models are Few-Shot Learners – https://openai.com/research/language-models-are-few-shot-learners

LLM inferencing tools/techniques were discussed here.

LLM Inferencing is hard – tools and techniques

Large Language Models are big, with the larger ones far exceeding the memory of a single GPU, and model parallelism is hard.

Let’s say the foundation models are available, so no training is needed and one wants to run inference against them. This is no small challenge, and a number of techniques have been explored:

https://lilianweng.github.io/posts/2023-01-10-inference-optimization/

  • student-teacher knowledge distillation training, leading to DistilBERT
  • quantization, quantization-aware training, post-training quantization
  • pruning
  • architectural optimization, efficient transformers
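
To make the quantization item above concrete, here is a minimal sketch of post-training dynamic quantization in PyTorch; the model name is only an example:

import os
import torch
from transformers import AutoModelForSequenceClassification

# Example model; any torch.nn.Module with Linear layers works the same way.
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

# Post-training dynamic quantization: Linear weights are stored as int8,
# activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)

# Compare serialized sizes as a rough proxy for the weight memory footprint.
torch.save(model.state_dict(), "fp32.pt")
torch.save(quantized.state_dict(), "int8.pt")
print(os.path.getsize("fp32.pt"), os.path.getsize("int8.pt"))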

https://blog.gopenai.com/how-to-speed-up-llms-and-use-100k-context-window-all-tricks-in-one-place-ffd40577b4c

High-throughput Generative Inference of Large Language Models with a Single GPU https://arxiv.org/pdf/2303.06865.pdf discusses 3 strategies, with a focus on the third (offloading) on a single GPU.

  • model compression
  • collaborative inference
  • offloading to utilize memory from CPU and disk

They then present three contributions:

  • definition of the optimization search space for offloading, including weights, activations, KV cache, and an algorithm to get an optimal offloading strategy within the search space
  • quantization of the parameters to 4 bits with small loss of accuracy
  • running the OPT-175B model on a single T4 GPU with 16 GB of memory (!)
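
As a simpler illustration of the offloading idea in general (this uses Hugging Face accelerate's device_map, not FlexGen), weights that do not fit on the GPU can be spilled to CPU RAM and disk; the model name and folder are only examples:

import torch
from transformers import AutoModelForCausalLM

# device_map="auto" (requires the accelerate package) places layers on GPU,
# then CPU, then disk; offload_folder receives the weights that do not fit
# in GPU or CPU memory.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-1.3b",
    device_map="auto",
    offload_folder="offload",
    torch_dtype=torch.float16,
)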

PEFT – Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning – https://arxiv.org/pdf/2303.15647.pdf

“expanding the context size leads to a quadratic increase in inference costs”

The paper identifies three main classes of PEFT methods:

  • Addition-based (within additive methods, the paper distinguishes two large groups: adapter-like methods and soft prompts),
  • Selection-based, and
  • Reparametrization-based.
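
As a sketch of one reparametrization-based method (LoRA) using the Hugging Face peft library; the base model and target modules are assumptions for a GPT-2-style architecture:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("distilgpt2")

# LoRA injects small low-rank update matrices into the attention projections;
# only those additional parameters are trained.
config = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=32,
                    lora_dropout=0.05, target_modules=["c_attn"])
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of the base model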

General strategies for inference concurrency, courtesy chatgpt:

To process multiple concurrent inference requests without interference between them, a model can use techniques such as parallelization and batching.

Parallelization involves splitting the workload across multiple processing units, such as CPUs or GPUs, so that multiple requests can be processed simultaneously without interfering with each other. This can be achieved using frameworks such as TensorFlow or PyTorch, which provide support for parallel processing.

Batching involves grouping multiple requests together and processing them as a single batch. This can increase the efficiency of the model by reducing the overhead associated with processing each request individually. Batching can be particularly effective for models that are optimized for throughput rather than latency.

Another technique that can be used is dynamic scheduling, which involves assigning resources to requests based on their priority and the availability of resources at a given time. This can help ensure that high-priority requests are processed quickly without interfering with lower-priority requests.
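
As a minimal sketch of batching with the transformers library (the model and prompts are just examples; production servers layer dynamic/continuous batching and scheduling on top of this idea):

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token   # distilgpt2 has no pad token by default
tokenizer.padding_side = "left"             # left-pad so generation continues from the prompt
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# Several concurrent requests padded to a common length and run as one batch.
requests = ["The moon is", "Large language models are", "Batching helps because"]
batch = tokenizer(requests, return_tensors="pt", padding=True)
outputs = model.generate(**batch, max_new_tokens=20, pad_token_id=tokenizer.eos_token_id)
for text in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(text)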

Feature Vectors, Embeddings, Vector Databases, Feature Stores

An ML model consists of a set of weights (or a set of numerical values) that transform inputs to outputs (along with a nonlinear transform such as a sigmoid function). The weights are often organized as vectors or matrices. Consider neural networks, decision trees and support vector machines as types of ML models for this discussion.

Vectors representing features of the data (input or intermediate data) are called feature vectors, or simply vectors. They are also called embeddings, that is, embeddings of the data in a vector space. We discussed such vectors in https://securemachinery.com/2019/05/24/transformer-gpt-2/.

The term “embedding” comes from the idea that the vectors “embed” the original data into a lower-dimensional space. The embedding process involves a combination of statistical and computational techniques, such as factorization and neural networks, that learn to map the input data into the vector space in a way that preserves the relevant properties of the original data.

The widespread use of vectors to represent words in machine learning took off in 2013 with the publication of the paper “Distributed Representations of Words and Phrases and their Compositionality” by Tomas Mikolov et al. This paper introduced the word2vec algorithm, which generates dense vector representations of words based on their distributional properties in a large corpus of text. The size of the vector or embedding in a word embedding model is a hyperparameter that needs to be determined before training the model. It is typically chosen based on the size of the vocabulary and the complexity of the task at hand. In practice, the vector size is often set to between 100 and 300 dimensions, but this can vary depending on the specific application and the available computational resources. The optimal vector size can be determined through experimentation and hyperparameter tuning.
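
A minimal gensim sketch of training word2vec embeddings, with vector_size as the dimension hyperparameter discussed above (the toy corpus is made up):

from gensim.models import Word2Vec

# Toy corpus: each "sentence" is a list of tokens.
sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "log"],
    ["cats", "and", "dogs", "are", "pets"],
]

# vector_size is the embedding dimension (typically 100-300 on real corpora).
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, epochs=50)
print(model.wv["cat"][:5])           # first few components of the embedding
print(model.wv.most_similar("cat"))  # nearest words by cosine similarity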

One difference between embeddings and feature vectors is that embeddings are typically learned automatically from the data, while feature vectors are typically chosen based on domain knowledge or feature engineering. However these two terms are often used interchangeably. Here is a video going over how the embeddings are obtained from words in a sentence with a bag of words approach- https://www.youtube.com/watch?v=viZrOnJclY0 .

Pinecone, Milvus, Facebook AI Similarity Search (FAISS), Google Vertex Matching engine are examples of Vector databases.

The challenge in implementing a vector database is that traditional databases are not optimized for handling high-dimensional vector data, which is often used in machine learning and data science applications.

Vector data is typically represented as arrays of numbers, where each number represents a feature or attribute of the data. For example, an image might be represented as a high-dimensional vector where each dimension represents the color value of a specific pixel. In contrast to traditional databases, where each record consists of a set of fields or columns, vector databases need to store and index large volumes of high-dimensional data in a way that supports efficient similarity search.

In traditional databases, queries are typically based on simple comparisons of scalar values, such as equality or range queries. However, in vector databases, similarity search is the primary operation, which requires specialized algorithms and data structures to efficiently compute the similarity between vectors. These algorithms are designed to handle high-dimensional data and minimize the amount of computation needed to compare vectors, which can be computationally expensive.

There are several specialized algorithms that are commonly used in vector databases to support efficient similarity search. Here are some examples:

  1. Euclidean Distance: This is a distance metric that measures the straight-line distance between two points in Euclidean space. It is commonly used in vector databases to compute the distance or similarity between vectors.
  2. Cosine Similarity: This is a similarity metric that measures the cosine of the angle between two vectors. It is commonly used in text-based applications to measure the similarity between documents or word embeddings.
  3. Locality-Sensitive Hashing (LSH): This is a technique used to hash high-dimensional vectors into lower-dimensional buckets based on their similarity. It is commonly used in vector databases to speed up similarity search by reducing the number of comparisons needed to find similar vectors.
  4. Product Quantization: This is a technique used to divide high-dimensional vectors into smaller subvectors and quantize them separately. It is commonly used in vector databases to reduce the dimensionality of the data and speed up similarity search.
  5. Inverted Indexing: This is a technique used to index the vectors based on the values of their individual dimensions. It is commonly used in text-based applications to speed up search queries by indexing the terms in the document.
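
A small sketch of the first two metrics plus an exact FAISS index (index choice and parameters are illustrative only):

import numpy as np
import faiss

rng = np.random.default_rng(0)
db = rng.standard_normal((1000, 128)).astype("float32")   # 1000 stored vectors
query = rng.standard_normal((1, 128)).astype("float32")

# Euclidean distance and cosine similarity against the whole database.
euclidean = np.linalg.norm(db - query, axis=1)
cosine = (db @ query.T).ravel() / (np.linalg.norm(db, axis=1) * np.linalg.norm(query))
print(euclidean.argmin(), cosine.argmax())

# The same search with a FAISS flat (exact) L2 index.
index = faiss.IndexFlatL2(128)
index.add(db)
distances, ids = index.search(query, 5)  # 5 nearest neighbors
print(ids)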

Pinecone provides several indexing and search algorithms, including approximate nearest neighbor search, that are selected automatically based on the properties of the data and the search requirements. You can also tune this behavior, for example by specifying the distance metric (such as cosine, euclidean, or dot product) when creating an index, or by adjusting query parameters such as the number of results to return.

While OpenSearch is not specifically designed as a vector database like Pinecone, it provides vector search capabilities through its support for nearest neighbor search. OpenSearch uses k-nearest neighbor (k-NN) search to find the k nearest neighbors of a query vector in a high-dimensional space, and supports approximate nearest neighbor search through HNSW-based libraries such as nmslib and faiss.

To use vector search in OpenSearch, you first index your vector data using the knn_vector field type with a declared dimension. You can then perform a nearest neighbor search by specifying the query vector and the number of nearest neighbors to return. OpenSearch also provides vector scoring, which lets you rank, boost, or filter search results based on their similarity to a query vector.
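
A rough sketch of what this can look like with the opensearch-py client and the k-NN plugin; the host, index name, field name and query structure are assumptions and may vary by OpenSearch version:

from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

# Index with a knn_vector field; the k-NN plugin must be enabled.
client.indices.create(
    index="log-vectors",
    body={
        "settings": {"index": {"knn": True}},
        "mappings": {"properties": {"embedding": {"type": "knn_vector", "dimension": 3}}},
    },
)
client.index(index="log-vectors",
             body={"embedding": [0.1, 0.2, 0.3], "message": "disk full"},
             refresh=True)

# Approximate nearest-neighbor query for the 2 closest vectors.
results = client.search(
    index="log-vectors",
    body={"size": 2, "query": {"knn": {"embedding": {"vector": [0.1, 0.2, 0.25], "k": 2}}}},
)
print(results["hits"]["hits"])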

What kind of vectorization schemes are useful for log processing ?

When processing log data, the goal is typically to extract useful information from the log entries and transform them into a format that can be easily analyzed and searched. Vectorization is a common technique used for this purpose, and there are several vectorization schemes that are applicable to log processing. Here are some examples:

  1. Bag-of-words: This is a vectorization scheme that represents a document as a bag of words, where each word is represented by a dimension in the vector and the value of the dimension is the frequency of the word in the document. Bag-of-words can be used to represent log entries as a vector of words, which can be used for tasks such as text classification and anomaly detection.
  2. TF-IDF: This is a vectorization scheme that represents a document as a weighted combination of its term frequency and inverse document frequency. TF-IDF can be used to represent log entries as a vector of weighted words, which can be used for tasks such as information retrieval and text mining.
  3. Word embeddings: This is a vectorization scheme that represents words as dense vectors in a high-dimensional space, where the distance between vectors reflects the semantic similarity between the words. Word embeddings can be used to represent log entries as a vector of word embeddings, which can be used for tasks such as text classification and entity recognition.
  4. Sequence embeddings: This is a vectorization scheme that represents a sequence of words as a dense vector in a high-dimensional space, where the distance between vectors reflects the similarity between the sequences. Sequence embeddings can be used to represent log entries as a vector of sequence embeddings, which can be used for tasks such as sequence classification and anomaly detection.
  5. One-hot encoding: This is a vectorization scheme that represents categorical data as binary vectors, where each dimension corresponds to a possible category and the value of the dimension is 1 if the data belongs to that category and 0 otherwise. One-hot encoding can be used to represent log entries as a vector of categorical features, which can be used for tasks such as classification and clustering.

By using a suitable vectorization scheme, log data can be transformed into a format that can be easily analyzed and searched, enabling tasks such as anomaly detection, root cause analysis, and performance optimization.
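
For example, a minimal TF-IDF sketch with scikit-learn over made-up log lines, followed by a cosine-similarity lookup of the kind that could feed search or anomaly detection:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

logs = [
    "ERROR disk full on /dev/sda1",
    "INFO user login succeeded for alice",
    "ERROR disk quota exceeded on /dev/sdb2",
    "INFO user login succeeded for bob",
]

vectorizer = TfidfVectorizer()          # one dimension per token, weighted by TF-IDF
X = vectorizer.fit_transform(logs)      # sparse matrix: n_logs x vocabulary_size

# Which stored log line is most similar to a new entry?
query = vectorizer.transform(["ERROR disk full on /dev/sdc1"])
print(cosine_similarity(query, X).argmax())  # expect one of the disk ERROR lines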

Vector database versus Feature store – what’s the difference ?

Both vector databases and feature stores are used to manage and serve high-dimensional data, such as embeddings, vectors, and other numerical representations, but there are some key differences between the two.

A vector database is a database optimized for storing and querying high-dimensional vector data. It provides efficient indexing and search algorithms, such as approximate nearest neighbor search, that allow for fast and scalable similarity search. Vector databases are commonly used in machine learning applications, such as recommendation systems and natural language processing, where the goal is to find similar items or entities based on their vector representations.

A feature store, on the other hand, is a centralized repository for machine learning features that provides a way to store, manage, and share feature data across different applications and teams. It is designed to help data scientists and machine learning engineers build, test, and deploy machine learning models more efficiently by providing a unified interface for accessing and managing features.

While both vector databases and feature stores can store and serve high-dimensional data, the main difference is their focus and use case. Vector databases are designed for efficient similarity search, while feature stores are designed for feature management and sharing across different applications and teams. In practice, they can complement each other in many machine learning workflows, with the vector database providing the efficient similarity search capabilities and the feature store providing a centralized and standardized way to manage and share feature data.

Anyscale – Using an embeddings database to train an LLM using Ray – https://www.anyscale.com/blog/llm-open-source-search-engine-langchain-ray

OpenAI embeddings example – https://github.com/openai/openai-cookbook/blob/main/examples/Question_answering_using_embeddings.ipynb

HuggingFace sentence embeddings article – https://medium.com/huggingface/universal-word-sentence-embeddings-ce48ddc8fc3a

AWS – https://medium.com/@shankar.arunp/augmenting-large-language-models-with-verified-information-sources-leveraging-aws-sagemaker-and-f6be17fb10a8

Langchain example

LangChain enables agentic code in which an LLM-driven agent decides which tools to invoke, as in the example below.

from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)  # deterministic completions; needs OPENAI_API_KEY
tools = load_tools(["serpapi", "llm-math"], llm=llm)  # web search (needs SERPAPI_API_KEY) and a calculator
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)  # ReAct-style agent
agent.run("how can one fine-tune a generative ai llm model ?")

Here’s the output, showing it “thinking” through the steps to answer the question posed.

$ python langchain-test.py


> Entering new AgentExecutor chain...
 I need to understand the process of fine-tuning a generative ai llm model
Action: Search
Action Input: "fine-tuning generative ai llm model"
Observation: A beginner-friendly introduction to fine-tuning Large language models using the LangChain framework on your domain data.
Thought: I need to understand the specific steps of fine-tuning a generative ai llm model
Action: Search
Action Input: "steps to fine-tune generative ai llm model"
Observation: This step involves training the pre-trained LLM on the task-specific dataset. The training process involves optimizing the model's weights and ...
Thought: I now know the final answer
Final Answer: The process of fine-tuning a generative ai llm model involves training the pre-trained LLM on the task-specific dataset and optimizing the model's weights and
 parameters.

> Finished chain.

This required Python 3.10.10

Langchain interface to Vector Stores – https://python.langchain.com/en/latest/modules/indexes/vectorstores.html

Langchain gallery – https://github.com/kyrolabs/awesome-langchain

https://blog.langchain.dev/going-beyond-chatbots-how-to-make-gpt-4-output-structured-data-using-langchain/

EC2 P5 UltraClusters

Each P5 EC2 instance has

  • eight NVIDIA H100 GPUs capable of 16 petaFLOPs of mixed-precision performance
  • 640 GB of high-bandwidth memory, 80GB in each GPU
  • 3,200 Gbps networking connectivity (8x more than the previous generation)

The increased performance of P5 instances accelerates the time-to-train machine learning (ML) models by up to 6x (reducing training time from days to hours), and the additional GPU memory helps customers train larger, more complex models.

P5 instances are expected to lower the cost to train ML models by up to 40% over the previous generation, providing customers greater efficiency over less flexible cloud offerings or expensive on-premises systems.

https://nvidianews.nvidia.com/news/aws-and-nvidia-collaborate-on-next-generation-infrastructure-for-training-large-machine-learning-models-and-building-generative-ai-applications

Nvidia H100 GPU overview and data sheet – https://resources.nvidia.com/en-us-tensor-core/gtc22-whitepaper-hopper

Diagram of P4d UltraClusters

P4d consists of 8 A100 GPUs, with 40GB GPU Memory each

P4de consists of 8 A100 80GB GPUs, with 80GB GPU memory each

Nvidia blog on HGX baseboard supporting 8 A100 GPUs – https://developer.nvidia.com/blog/introducing-hgx-a100-most-powerful-accelerated-server-platform-for-ai-hpc/

A100 80GB data sheet – https://www.nvidia.com/en-us/data-center/a100/

MIG support in A100 – https://developer.nvidia.com/blog/getting-the-most-out-of-the-a100-gpu-with-multi-instance-gpu/ and MIG user guide – https://docs.nvidia.com/datacenter/tesla/mig-user-guide

MIG support in AWS EC2 instance type P4d and in AWS EKS – https://developer.nvidia.com/blog/amazon-elastic-kubernetes-services-now-offers-native-support-for-nvidia-a100-multi-instance-gpus/

GCP A2 adds 16 A100 GPUs to a node – https://cloud.google.com/blog/products/compute/announcing-google-cloud-a2-vm-family-based-on-nvidia-a100-gpu

https://cloud.google.com/blog/products/containers-kubernetes/gke-now-supports-multi-instance-gpus

Running more pods/gpu on EKS with MIG – https://medium.com/itnext/run-more-pods-per-gpu-with-nvidia-multi-instance-gpu-d4f7fb07c9b5

Nvidia Embraces The CPU World With “Grace” Arm Server Chip

EC2 Trainium UltraClusters

Each EC2 Trn1 instance has

  • up to 16 AWS Trainium accelerators purpose built to accelerate DL training and deliver up to 3.4 petaflops of FP16/BF16 compute power. Each accelerator includes two second-generation NeuronCores
  • 512 GB of shared accelerator memory (HBM) with 9.8 TB/s of total memory bandwidth
  • 1600 Gbps of Elastic Fabric Adapter (EFAv2)

An EC2 Trn1 UltraCluster consists of densely packed, co-located racks of Trn1 compute instances interconnected by non-blocking, petabyte-scale networking. AWS describes it as its largest UltraCluster to date, offering 6 exaflops of compute power on demand with up to 30,000 Trainium chips.

https://aws.amazon.com/blogs/machine-learning/scaling-large-language-model-llm-training-with-amazon-ec2-trn1-ultraclusters/

Weights vs Activations

Why do Activations need more bits (16) than weights (8) ? source – https://stackoverflow.com/questions/72397839/why-do-activations-need-more-bits-16bit-than-weights-8bit-in-tensor-flows-n

Answer:

Activations are the actual signals propagating through the network. They have nothing to do with the activation function; this is just a name collision. They can be kept at higher accuracy because they are not part of the model, so they do not affect storage, download size, or (much) memory usage: if you are not training the model, you never store activations beyond the current one.

For example, for an MLP (multi-layer perceptron) we have something along the lines of

a_1 = relu(W_1 x + b_1)
a_2 = relu(W_2 a_1 + b_2)
...
a_n = W_n a_{n-1} + b_n

where each W and b is stored as an 8-bit parameter, and the activations are a_1, …, a_n. The point is that you only need the previous and current layer: to calculate a_i you just need a_{i-1}, not the earlier activations, so storing them at higher accuracy during computation is a good tradeoff.
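
A toy numpy sketch of this point: the weights can live in int8, while the single “live” activation is kept in float32 and overwritten layer by layer (the dequantization scale is made up):

import numpy as np

rng = np.random.default_rng(0)

# Three layers of int8 weights and biases, with one shared (made-up) dequantization scale.
W = [rng.integers(-127, 128, size=(16, 16), dtype=np.int8) for _ in range(3)]
b = [rng.integers(-127, 128, size=16, dtype=np.int8) for _ in range(3)]
scale = 0.01

a = rng.standard_normal(16).astype(np.float32)  # input activation
for i in range(3):
    z = (W[i].astype(np.float32) * scale) @ a + b[i].astype(np.float32) * scale
    a = np.maximum(z, 0.0) if i < 2 else z      # relu on hidden layers; only `a` is kept
print(a[:4])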

Reasoning, Acting and Composing. ReAct and Self-Ask papers

Reasoning and actions synergize. The ReAct paper interleaves reasoning traces and task-specific actions to achieve a synergy between the two.

A reasoning trace is a record or a description of the mental steps or thought process used to arrive at a particular conclusion or solution. It is a detailed account of how someone reasons through a problem or question, including the assumptions made, the evidence considered, the inferences drawn, and the logical steps taken to reach a conclusion. By examining the reasoning trace, one can identify potential biases, errors in reasoning, or gaps in logic that may have influenced the person’s decision-making process.

A task-specific action is an action that can help a reasoning task. This depends on the task at hand. Some examples

  1. In a mathematical problem-solving task, a task-specific action might be to break down a complex problem into smaller, more manageable parts.
  2. In a critical thinking task, a task-specific action might be to evaluate the evidence provided and identify any biases or assumptions that might be influencing the conclusion.
  3. In a decision-making task, a task-specific action might be to weigh the pros and cons of each available option and consider how each option aligns with one’s goals or values.
  4. In a scientific inquiry task, a task-specific action might be to design a controlled experiment to test a hypothesis and systematically collect and analyze data to draw conclusions.
  5. In a legal reasoning task, a task-specific action might be to interpret and analyze case law and statutes, apply legal principles to the facts of a case, and argue persuasively for a particular legal outcome.

Task-specific actions can vary widely depending on the task and the context, but they generally involve applying relevant knowledge, skills, and strategies to solve a particular problem or achieve a specific goal.

From the ReAct paper – “The best approach overall is a combination of ReAct and CoT that allows for the use of both internal knowledge and externally obtained information during reasoning. On ALFWorld and WebShop, two or even one-shot ReAct prompting is able to outperform imitation or reinforcement learning methods trained with 10^3 ∼ 10^5 task instances, with an absolute improvement of 34% and 10% in success rates respectively. We also demonstrate the importance of sparse, versatile reasoning in decision making by showing consistent advantages over controlled baselines with actions only. Besides general applicability and performance boost, the combination of reasoning and acting also contributes to model interpretability, trustworthiness, and diagnosability across all domains, as humans can readily distinguish information from model’s internal knowledge versus external environments, as well as inspect reasoning traces to understand the decision basis of model actions.”

The Self-Ask paper discusses compositional reasoning and narrowing the “compositionality gap”.

Compositional reasoning is the ability to combine smaller pieces of knowledge or information to deduce new knowledge or solve a problem. It involves taking a set of facts or ideas and using them to create a new idea or answer a question that cannot be answered by any single fact alone. This type of reasoning is important in many areas, including natural language understanding, problem solving, and decision-making. Compositional reasoning allows us to use our knowledge in a more flexible and adaptive way, and is essential for many advanced cognitive tasks.

The compositionality gap is a metric used to measure the ability of language models to perform compositional reasoning tasks. It is defined as the ratio of the number of compositional questions for which the model answers the sub-questions correctly but not the overall question, to the total number of compositional questions. In other words, it measures how often models can correctly answer all sub-problems but not generate the overall solution. A high compositionality gap indicates that the model is struggling with compositional reasoning, while a low gap indicates that the model is better at composing multiple facts to answer complex questions.

The paper proposes a solution called “self-ask,” a new method of prompting language models to perform compositional reasoning tasks. With self-ask, the model explicitly asks itself follow-up questions before answering the initial question. By breaking down the reasoning process into smaller steps, the model is better able to combine relevant information from different sources and answer multi-hop questions correctly. Additionally, self-ask allows for plugging in a search engine to answer the follow-up questions, which further improves accuracy. The paper shows that self-ask narrows the compositionality gap by reasoning explicitly instead of implicitly.
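
Here is a sketch of what a self-ask style prompt can look like; the exemplar follows the format described in the paper, and wiring the follow-up questions to a real LLM or a search engine is left out:

# One few-shot exemplar in the self-ask format, followed by the new question.
SELF_ASK_PROMPT = """Question: Who lived longer, Theodor Haecker or Harry Vaughan Watkins?
Are follow up questions needed here: Yes.
Follow up: How old was Theodor Haecker when he died?
Intermediate answer: Theodor Haecker was 65 years old when he died.
Follow up: How old was Harry Vaughan Watkins when he died?
Intermediate answer: Harry Vaughan Watkins was 69 years old when he died.
So the final answer is: Harry Vaughan Watkins.

Question: {question}
Are follow up questions needed here:"""

prompt = SELF_ASK_PROMPT.format(
    question="Who was president of the U.S. when superconductivity was discovered?")
# The prompt is then sent to an LLM; each generated "Follow up:" line can be
# answered by the model itself or routed to a search engine before continuing.
print(prompt)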

Apache Iceberg

What does Apache Iceberg do ?

  • manages large slow-changing tabular data and gives a sql interface to the data so that it can be queried efficiently
  • splits table data into partitioned files and stores those files in an object store such as S3. Partitions can be filtered based on the partition key(s). The partitioning is “hidden partitioning”, meaning it is done by the system for you, without exposing the details to the client.
  • separates out metadata management from the data. metadata is not stored in the data files.
  • separates the table schema from the data: a change of column name will not affect the data files. See Schema Evolution.
  • allows accessing data as it existed at a specific point in time. This Time Travel feature is useful for auditing, debugging and reproducing issues that occurred in the past. Time travel is implemented using “snapshot isolation”, which allows multiple versions of the same table to exist at the same time. (Copy on Write is used in the implementation.)
  • provides ACID compliant transactions for data modifications and snapshot isolation for queries, which help ensure consistency and correctness of data
  • does all this through a lightweight design with minimal coordination
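
A rough PySpark sketch of hidden partitioning and time travel; the catalog and table names are hypothetical, and it assumes the Iceberg Spark runtime with a configured catalog (TIMESTAMP AS OF needs a recent Spark/Iceberg version):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iceberg-demo").getOrCreate()

# Hidden partitioning: the table is partitioned by days(event_ts) and region,
# but queries just filter on the columns and Iceberg prunes partitions for you.
spark.sql("""
    CREATE TABLE IF NOT EXISTS demo.db.events (
        id BIGINT, region STRING, event_ts TIMESTAMP
    ) USING iceberg PARTITIONED BY (days(event_ts), region)
""")

spark.sql("INSERT INTO demo.db.events VALUES (1, 'us-west-1', current_timestamp())")

# Time travel: read the table as of an earlier snapshot timestamp
# (must correspond to an existing snapshot of the table).
spark.sql("SELECT * FROM demo.db.events TIMESTAMP AS OF '2023-06-01 00:00:00'").show()
spark.sql("SELECT * FROM demo.db.events.snapshots").show()  # snapshot metadata table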

Figure. The Iceberg table format is used by multiple engines and is capable of writing to multiple storage types. source.

Ryan Blue’s discussion on the rationale for the design is here and a presentation with performance improvements is at https://conferences.oreilly.com/strata/strata-ny-2018/cdn.oreillystatic.com/en/assets/1/event/278/Introducing%20Iceberg_%20Tables%20designed%20for%20object%20stores%20Presentation.pdf

“By building support for Iceberg, data warehouses can skip the query layer and share data directly. Iceberg was built on the assumption that there is no single query layer. Instead, many different processes all use the same underlying data and coordinate through the table format along with a very lightweight catalog. Iceberg enables direct data access needed by all of these use cases and, uniquely, does it without compromising the SQL behavior of data warehouses.”

The client is a java jar file which can be embedded.

How does iceberg store files in s3 ?

The top level directory contains the table’s metadata files including the schema and partition information. The metadata files are stored in S3 object store using the table name as the s3 prefix.

The data files are stored in a directory structure that reflects the table partitioning. Partition values are encoded in the directory name.

s3://bucket-name/table-name/date=YYYY-MM-DD/region=us-west-1/0001.parquet

s3://bucket-name/table-name/date=YYYY-MM-DD/region=us-west-1/0002.parquet

Why a new table format – https://github.com/Netflix/iceberg

A hands-on look at Iceberg tables by Dremio is here.

A blog on the Adobe experience with Iceberg is here.

A blog on creating a real-time datawarehouse with Flink and Iceberg – https://www.alibabacloud.com/blog/flink-%20-iceberg-how-to-construct-a-whole-scenario-real-time-data-warehouse_597824

Apache Yunikorn

YuniKorn is an alternative to the default Kubernetes scheduler that benefits complex and mixed workloads. It provides advanced scheduling options like workload queueing and shared quotas, which improves the user experience and reduces cost through better resource utilization.

Gang Scheduling refers to a scheduling algorithm for parallel systems that schedules related threads or processes to run simultaneously on different processors. In the distributed computing world, this refers to the mechanism to schedule correlated tasks in an All or Nothing manner.

Bin packing refers to the process of allocation and reallocation of pods to nodes in a way that achieves a high utilization of the nodes. When a node has a low level of utilization, its pods are moved to a node with the highest level of utilization and that has space for the pods available; after which the low utilization node is freed and released.

Yunikorn scheduler talk at ApacheCon’21 – link.

https://yunikorn.apache.org/community/events/#past-conference–meetup-recordings

Pinterest talk on their use of Yunikorn – link.

Hugging Face – AI models and datasets hub

Hugging Face hosts on the order of 100,000 pre-trained models on its hub that can be used for various NLP tasks. The Hugging Face transformers library, a popular choice for NLP tasks such as text classification and machine translation, currently supports over 100 model architectures, including popular models such as BERT, GPT-2, and RoBERTa. In addition, Hugging Face provides tools and libraries that allow users to fine-tune and customize these models for specific tasks or datasets.

The datasets can be loaded using the python datasets package (pip install datasets). An overview is here.

A Hugging Face Course – https://github.com/huggingface/course

Hugging Face on AWS blog – https://aws.amazon.com/blogs/machine-learning/aws-and-hugging-face-collaborate-to-simplify-and-accelerate-adoption-of-natural-language-processing-models/

CEO Clement Delangue calls it the “GitHub of machine learning.” It is this emphasis on an open, collaborative approach that made investors confident in the company’s $2 billion valuation, he said. “That’s what is really important to us, makes us successful and makes us different from others in the space.”

DistilBERT is a smaller, faster, and cheaper version of the BERT language model, developed by Hugging Face by controlling the loss function while training a ‘student model’ from a ‘teacher model’. It bucks the trend towards larger models and instead focuses on training a more efficient model. It has been “distilled” to reduce its size and computational requirements, making it faster to train and more efficient to run. Despite being smaller than BERT, DistilBERT is able to achieve similar or even slightly better performance on many NLP tasks. The triple loss function is devised to include a distillation loss, a training loss and a cosine-distance loss.

Examples of popular pre-trained models available on the Hugging Face platform include:

  1. GPT-2: GPT-2 (Generative Pre-training Transformer 2) is a large-scale language model developed by OpenAI that can be used for tasks such as language translation and text generation.
  2. BERT: BERT (Bidirectional Encoder Representations from Transformers) is a language model developed by Google that can be used for tasks such as language translation and text classification.
  3. RoBERTa: RoBERTa (Robustly Optimized BERT Approach) is a language model developed by Facebook that is based on the BERT model and can be used for tasks such as language translation and text classification.
  4. T5: T5 (Text-To-Text Transfer Transformer) is a language model developed by Google that can be used for tasks such as language translation and text summarization.
  5. DistilBERT, described above. To generate text with DistilBERT, you would typically fine-tune the model on a specific task, such as machine translation or language generation, using a dataset that is relevant to the task. Once the model has been fine-tuned, you can use it to generate text by providing it with a prompt or seed text and letting it predict the next word or sequence of words.

Docs on text generation – https://huggingface.co/transformers/v3.1.0/main_classes/model.html?highlight=generate

Here’s an example of using transformers to generate some text.

from transformers import AutoTokenizer, AutoModelWithLMHead

# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('distilgpt2')
model = AutoModelWithLMHead.from_pretrained('distilgpt2')

# Encode the prompt
input_context_prompt = "Men on the moon "
input_ids = tokenizer.encode(input_context_prompt, return_tensors='pt')  # encode input context

# Generate text
outputs = model.generate(input_ids=input_ids, max_length=40, temperature=0.9, num_return_sequences=10, do_sample=True)  

# Sample candidate outputs and print
for i in range(10): #  10 output sequences were generated
    print('Generated {}: {}'.format(i, tokenizer.decode(outputs[i], skip_special_tokens=True)))

Note the temperature parameter in model.generate(). A temperature near zero means the generation process will almost always choose the most likely next word. A higher temperature allows less likely words to be included in the generation process.
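
A small numpy illustration of what temperature does to the next-token distribution (the logits are made up):

import numpy as np

def softmax_with_temperature(logits, temperature):
    z = logits / temperature
    z = z - z.max()              # numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = np.array([3.0, 1.0, 0.2])            # made-up scores for three candidate tokens
print(softmax_with_temperature(logits, 0.5))  # sharper: the top token dominates
print(softmax_with_temperature(logits, 1.0))  # the model's raw distribution
print(softmax_with_temperature(logits, 2.0))  # flatter: less likely tokens get more mass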

Airflow and Orchestration

download_images >> train >> serve

This line sets the sequence of operations for an ML pipeline in Airflow. source
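
A minimal sketch of the DAG such a line might come from; the dag_id, schedule and Python callables are placeholders:

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def _download_images(): ...
def _train(): ...
def _serve(): ...

with DAG(
    dag_id="ml_pipeline",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    download_images = PythonOperator(task_id="download_images", python_callable=_download_images)
    train = PythonOperator(task_id="train", python_callable=_train)
    serve = PythonOperator(task_id="serve", python_callable=_serve)

    # The >> operator declares the dependency chain shown above.
    download_images >> train >> serve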

A metaphor to think of Airflow is that of an air-traffic controller that is orchestrating, sequencing, mediating, managing the flow of the flights of airplanes (source). It is an example of the mediator pattern which decouples dependencies in a complex system. The airplanes do not talk directly to each other, they talk to the air-traffic controller.

A functional alternative to Airflow is to use a bunch of cron jobs to schedule bash scripts. Airflow instead defines pipelines as Directed Acyclic Graphs (DAGs) in python code. This critical talk on “Don’t use Apache Airflow” describes it as cron on steroids.

A complete example of an ML pipeline built with airflow that outputs the results to a streamlit app – https://github.com/bhlr/docker-airflow-streamlit

Each operation calls an operator to do the job locally or remotely.

How does it perform an operation remotely on another node ? ssh/remote execution ? docker daemon ? k8s operator ? There can be many different ways – this logic is encapsulated by an Executor.

Local Executors – SequentialExecutor, LocalExecutor (run tasks on the scheduler’s own machine)

Remote Executors – CeleryExecutor, KubernetesExecutor (farm tasks out to worker nodes or pods)

A thread on airflow and alternatives- https://news.ycombinator.com/item?id=23349507 .

https://github.com/pditommaso/awesome-pipeline – A number of pipeline tools for ETL

Intro talk on Airflow by Astronomer – https://www.youtube.com/watch?v=GIztRAHc3as ,

and on an ETL use case with Snowflake – https://www.youtube.com/watch?v=3-XGY0bGJ6g

How can one compose these DAGs further and manage cross-DAG dependencies? An approach is discussed in https://medium.com/quintoandar-tech-blog/effective-cross-dags-dependency-in-apache-airflow-1885dc7ece9f to define an explicit mediator between multiple DAGs.

Security of Solidity Smart Contracts using DistilBERT

Smart Contracts are relatively short blocks of code that run on the Ethereum Virtual Machine (EVM) and deal with tokens of value. For example, a contract may release funds when certain preconditions are met, such as time elapsed or a signed request received. The number of smart contracts and the value of transactions in smart contracts have grown quite a bit in the last few years along with the prices of cryptocurrencies. The code of a Smart Contract is always publicly available as bytecode, which can be reverse engineered, and the source code in the Solidity language is often publicly available as well. As a result, bugs in smart contracts have become attractive exploit targets. The EVM is a distributed computing construct that runs in parallel on a network of participating nodes, which coordinate their actions through a consensus mechanism and protocol that runs between the nodes.

A collection of links on this topic –

https://blog.sigmaprime.io/solidity-security.html

https://rekt.news/superfluid-rekt/ – a newsletter reporting high-level analyses of recent attacks.

https://secureum.xyz

https://solidity-by-example.org/variables/ – Solidity has 3 types of variables: 1. local (inside a function), 2. state (inside a contract, outside functions), 3. global (e.g. block.timestamp, msg.sender – chain-level variables that provide info about the blockchain).

https://solidity-by-example.org/data-locations (storage, memory, calldata)

https://solidity-by-example.org/visibility/ (public, private, internal, external)

https://solidity-by-example.org/function-modifier (onlyOwner to restrict access, validAddress to validate address, noReentrancy to prevent reentrancy) Incorrect reentrancy is a source of bugs.

https://www.saurik.com/optimism.html – instrumenting the blockchain to find gaps (EthDenver talk).

Security of Bridges. Bridges are implemented as smart contracts between two different chains.

https://www.bitdefender.com/blog/hotforsecurity/smart-contract-exploit-costs-nomad-crypto-bridge-200-million/

Sequence diagram of a bridge operation in https://blog.harmony.one/introducing-horizon-an-ethereum-harmony-cross-chain-bridge/

Within the last year, bridges have accounted for a majority of the total funds stolen across the crypto ecosystem. Massive bridge hacks have occurred on average every few months, each losing extremely large amounts of user funds. Some bridge hacks in the last couple of years have included the Axie Infinity Ronin bridge hack, losing users $625 million; the Wormhole bridge hack, costing users $300 million; the Harmony bridge hack, losing users $100 million; and just this last week the Nomad bridge hack, losing users almost $200 million.

Methods for Detecting attacks

  • Code reviews for reentrancy bugs
  • Detection of source of a txn as a bad actor
  • Using ML for code analysis and bad actor detection

https://github.com/DicksonWu654/ethdenverhack – This submission attempts to use ML to detect reentrancy attacks in Solidity code, applying transfer learning to DistilBERT: it trains on examples of good and bad smart contract code and uses the trained model to flag bad code in new samples.

from transformers import TFDistilBertModel, DistilBertTokenizerFast  # using a Hugging Face model
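
A minimal sketch of the transfer-learning idea, not the submission's actual code: it uses the PyTorch DistilBERT classes rather than the TF ones imported above, and the Solidity snippets and labels are toy examples:

import torch
from transformers import DistilBertTokenizerFast, DistilBertForSequenceClassification

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

# Label 1 = reentrancy-prone (external call before state update), 0 = safer pattern.
snippets = [
    "function withdraw() public { msg.sender.call{value: balances[msg.sender]}(''); balances[msg.sender] = 0; }",
    "function withdraw() public { uint a = balances[msg.sender]; balances[msg.sender] = 0; payable(msg.sender).transfer(a); }",
]
labels = torch.tensor([1, 0])
enc = tokenizer(snippets, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for _ in range(3):  # a few passes over the toy batch
    out = model(**enc, labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Score a new snippet: probability of class 1 (vulnerable).
model.eval()
with torch.no_grad():
    new = tokenizer(["function pay() public { msg.sender.call{value: 1 ether}(''); }"],
                    return_tensors="pt", truncation=True)
    print(model(**new).logits.softmax(dim=-1)[0, 1].item())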

These guys had a funny presentation – https://www.youtube.com/watch?v=9oLuxJdrZwo

Machine Learning Security

Seven security concerns in Machine Learning (ML) –

  1. Data privacy and security: ML requires large amounts of data to be trained, and this data may contain sensitive or personal information. Appropriate measures need to be put in place to prevent data from being accessed by unauthorized parties.
  2. Notebooks security: ML typically requires Jupyter or similar notebooks to be served for data scientists to work on data, code, and models, both individually and collaboratively. These notebooks need to be access controlled and protected from unauthorized access. This includes the code and git repos that host the code, and the model artifacts that the notebook uses or creates.
  3. Model serving and inference security: ML models in production are commonly served and accessed over inference endpoints and such endpoints need authentication, authorization, encryption for protection against misuse. During model upgrades to an endpoint or changes to an endpoint and its configuration, a number of attacks are possible that are typical of a devops/devsecops pipeline. These need to be protected against.
  4. Model security: Models can be vulnerable to attacks such as adversarial inputs, such as when an attacker intentionally manipulates the input to the model in order to cause it to make incorrect predictions. Another example is when the model makes an egregiously bad decision on an input, for example a self-driving car hitting an obstacle instead of avoiding it. It is important to harden the model and bound the decisions that come from its use.
  5. Misuse: Even if a model works as designed, it can be misused, for example by generating fake or misleading content. It is important to consider the potential unintended consequences of using models and to put safeguards in place to prevent their misuse.
  6. Bias: ML models can sometimes exhibit biases due to the data they are trained on. There should be a plan to identify biases in a model and take steps to mitigate them.
  7. Intellectual property: ML models may be protected by intellectual property laws, and it is important to respect these laws and obtain the appropriate licenses when using language models developed by others.