TorchScript for Model Optimization and Model Serving

TorchScript is an intermediate representation of a PyTorch model that can be optimized and run in a non-Python environment, making PyTorch models suitable for deployment. It is part of the PyTorch ecosystem (see the Introduction to TorchScript tutorial at https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html and the TorchScript JIT documentation at https://pytorch.org/docs/stable/jit.html).
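As a minimal sketch of the basic workflow (the module and file name here are illustrative), a model can be compiled to TorchScript with torch.jit.script, serialized, and reloaded without the original Python class definition:

import torch

class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.linear(x))

# Compile the module to TorchScript.
scripted = torch.jit.script(MyModel())

# Save a self-contained archive; it can be reloaded from Python
# (torch.jit.load) or from C++ (torch::jit::load) without the
# original source code.
scripted.save("my_model.pt")
loaded = torch.jit.load("my_model.pt")
print(loaded(torch.randn(1, 4)))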

Why is TorchScript needed? Python, while excellent for ML model development (interpreted, REPL-friendly, simple, and integrated with a large number of ML libraries), also has characteristics that make it less suitable for production model deployments. These include interpretation overhead, complex dependency management, high memory/CPU overheads, and the lack of easy integration with native technologies such as C++ for high performance and for embedded systems. TorchScript provides tools for optimizations such as operator fusion and static graph analysis, which can improve efficiency and performance during inference. Optimizing models is crucial for embedded systems with limited resources.
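As an illustration of those optimization hooks (a sketch; the layers and sizes are arbitrary), torch.jit.freeze and torch.jit.optimize_for_inference run graph-level passes such as constant folding, conv+batchnorm folding, and operator fusion on a scripted model in eval mode:

import torch

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3),
    torch.nn.BatchNorm2d(8),
    torch.nn.ReLU(),
).eval()

scripted = torch.jit.script(model)

# freeze() inlines parameters and submodules as constants, which
# enables passes like folding the BatchNorm into the Conv.
frozen = torch.jit.freeze(scripted)

# optimize_for_inference() applies additional inference-only passes
# (e.g., operator fusion) where the backend supports them.
optimized = torch.jit.optimize_for_inference(frozen)
print(optimized(torch.randn(1, 3, 32, 32)).shape)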

PyTorch introduced eager/dynamic execution, which had the advantage of faster user feedback but the disadvantage of offering fewer of the optimizations possible in static-graph approaches such as TensorFlow's.

A blog post on key points to grasp about TorchScript, https://medium.com/@hihuaweizhu/key-points-to-grasp-for-torchscript-beginners-c02cf94aaa50, makes several good points, including that TorchScript is a statically typed subset of Python (not every Python construct is supported) and that its variables are statically typed.
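The static typing is easy to see in practice (a toy function, not taken from the cited post): arguments default to Tensor unless annotated, and the annotations are enforced when the function is called:

import torch
from typing import List

@torch.jit.script
def total_length(items: List[str], repeat: int) -> int:
    # Types are fixed at compile time; passing, say, a float for
    # `repeat` raises a RuntimeError at call time.
    n = 0
    for item in items:
        n += len(item)
    return n * repeat

print(total_length(["a", "bc"], 2))   # 6
# total_length(["a"], 2.0)            # RuntimeError: expected int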

A discussion of eager mode versus script mode at https://towardsdatascience.com/pytorch-jit-and-torchscript-c2a77bac0fff suggests the benefit of TorchScript is more about the development/production split (rather than training/inference), with the production version requiring performance optimizations and portability. Quote: “With TorchScript, PyTorch aims to create a unified framework from research to production. TorchScript will take your PyTorch modules as input and convert them into a production-friendly format.”
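The research-to-production split shows up concretely in the two ways of capturing a model: tracing records the single execution path taken for an example input, while scripting compiles the source and preserves control flow. A small sketch of the difference:

import torch

def clip_positive(x):
    # Data-dependent control flow.
    if x.sum() > 0:
        return x
    return -x

# Tracing bakes in the branch taken for this example input
# (PyTorch emits a TracerWarning about this).
traced = torch.jit.trace(clip_positive, torch.ones(3))

# Scripting compiles the source, so both branches survive.
scripted = torch.jit.script(clip_positive)

x = -torch.ones(3)
print(traced(x))    # wrong: the positive branch was baked in
print(scripted(x))  # correct: returns -x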

NVIDIA uses TorchScript to facilitate the deployment and optimization of PyTorch models within its ecosystem: with the Torch-TensorRT library, TorchScript models are compiled to TensorRT, NVIDIA's high-performance inference runtime.
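A minimal sketch of that path, assuming the torch_tensorrt package is installed and a CUDA-capable NVIDIA GPU is available (the model and input shape are illustrative):

import torch
import torch_tensorrt  # requires an NVIDIA GPU, CUDA, and TensorRT

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, kernel_size=3),
    torch.nn.ReLU(),
).eval().cuda()

# Compile ahead of time into a module whose graph runs on TensorRT.
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.float},  # add torch.half for FP16
)
print(trt_model(torch.randn(1, 3, 224, 224).cuda()).shape)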

AWS's ML software stack, Neuron, supports tracing PyTorch models into TorchScript: https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-trace.html (compare the upstream torch.jit.trace at https://pytorch.org/docs/master/generated/torch.jit.trace.html#torch.jit.trace). An example of a Neuron SDK trace for PyTorch: https://github.com/aws-neuron/aws-neuron-sdk/issues/371.
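Following the torch_neuronx.trace API documented at the link above, a minimal sketch (the toy model is illustrative; running it requires an AWS Inferentia/Trainium instance with the Neuron SDK installed):

import torch
import torch_neuronx  # AWS Neuron SDK

model = torch.nn.Linear(10, 5).eval()
example = torch.rand(1, 10)

# Compiles the model for Neuron devices and returns a TorchScript
# module with the compiled graph embedded.
neuron_model = torch_neuronx.trace(model, example)
neuron_model.save("model_neuron.pt")

# The artifact reloads with plain torch.jit.load.
restored = torch.jit.load("model_neuron.pt")
print(restored(example))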

PyTorch/XLA is another project in this vein: it integrates PyTorch with Google's XLA compiler to enable running PyTorch models on Google TPUs.
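A minimal sketch of the PyTorch/XLA device API (assumes the torch_xla package and access to a TPU or other XLA device):

import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()  # acquire the XLA device, e.g. a TPU core
model = torch.nn.Linear(10, 5).to(device)
x = torch.rand(1, 10, device=device)

out = model(x)
# XLA executes lazily; mark_step() compiles and runs the pending graph.
xm.mark_step()
print(out)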

GraphCore produces hardware for deep learning called the Intelligence Processing Unit (IPU). The primary software framework GraphCore provides for executing machine learning models on its IPUs is Poplar, which allows running models from TensorFlow and PyTorch. Poplar optimizes computations for the unique architecture of GraphCore's IPUs, including optimizations for memory bandwidth, parallel processing, and other hardware-specific features.
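On the PyTorch side, the interface to Poplar is the PopTorch library; a minimal sketch, assuming the poptorch package and access to an IPU (the model is illustrative):

import torch
import poptorch  # GraphCore's PyTorch interface, built on Poplar

model = torch.nn.Sequential(
    torch.nn.Linear(10, 5),
    torch.nn.ReLU(),
).eval()

# Wrapping the model compiles it (via Poplar) for the IPU on first call.
opts = poptorch.Options()
inference_model = poptorch.inferenceModel(model, opts)

print(inference_model(torch.rand(4, 10)))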