Intel, Habana Labs and Hugging Face advance deep learning software


Intel, Habana Labs, and Hugging Face have continued to improve efficiency and lower barriers to AI adoption over the past year through open-source initiatives, integrated developer experiences, and scientific research.

In a statement, Intel said the work has led to significant breakthroughs in the efficiency of building and training high-quality transformer models.

Transformer models deliver state-of-the-art results across a range of machine learning and deep learning tasks, including natural language processing (NLP), computer vision (CV), and speech. Training these deep learning models at scale demands significant computational power, making the process time-consuming, difficult, and expensive.

The goal of Intel’s continuing collaboration with Hugging Face, as part of the Intel Disruptor Program, is to increase adoption of training and inference solutions optimised for the latest Intel® Xeon® Scalable processors and Habana Gaudi® and Gaudi®2 accelerators. The collaboration brings the most advanced deep learning innovations from the Intel AI Toolkit to the Hugging Face open-source community and informs future Intel® architectural innovation. This work has produced advances in distributed fine-tuning on Intel Xeon processors, built-in optimisations, accelerated training with Habana Gaudi, and few-shot learning.

Distributed Fine-Tuning on the Intel Xeon Platform

When training on a single CPU node is too slow, data scientists turn to distributed training, in which clustered servers each keep a copy of the model, train it on a subset of the training dataset, and exchange results across nodes via the Intel® oneAPI Collective Communications Library (oneCCL) to converge to a final model more quickly. Transformers now supports this capability natively, which makes distributed fine-tuning simpler for data scientists.
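As a rough sketch of how the oneCCL backend is wired into a plain PyTorch distributed run (the import name of the oneCCL bindings has varied across releases, and the model name is illustrative):

```python
import torch
import torch.distributed as dist
import oneccl_bindings_for_pytorch  # noqa: F401 -- registers the "ccl" backend
from transformers import AutoModelForSequenceClassification

# A launcher such as mpirun or torchrun sets RANK, WORLD_SIZE and MASTER_ADDR;
# init_process_group reads them to join the cluster over the oneCCL backend.
dist.init_process_group(backend="ccl")

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
ddp_model = torch.nn.parallel.DistributedDataParallel(model)
# Each node now trains on its own shard of the dataset, and gradients are
# all-reduced across nodes through oneCCL after every backward pass.
```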

One example is using a distributed cluster of servers built on Intel Xeon Scalable processors to speed up PyTorch training of transformer models. Intel created the Intel® Extension for PyTorch to take advantage of hardware features in the most recent Intel Xeon Scalable processors, including Intel® Advanced Matrix Extensions (Intel® AMX), AVX-512, and Intel Vector Neural Network Instructions (VNNI). This software library offers automatic speedups for inference and training.
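A minimal sketch of the library's main entry point (the model, optimiser, and learning rate are illustrative):

```python
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
# ipex.optimize applies operator fusion and, on recent Xeon processors,
# bfloat16 acceleration backed by the AMX and AVX-512 instruction sets.
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)
```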

The Hugging Face Transformers library also includes a Trainer API, which makes it easy to begin training without writing a training loop from scratch. The Trainer provides an API for hyperparameter search and supports several search backends, including SigOpt, Intel's hosted hyperparameter optimisation service. This lets data scientists reach an optimal model more efficiently.
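A minimal sketch of that workflow, assuming a SigOpt account and API token are already configured (the toy dataset, model name, and search space below are illustrative, and the search-space format differs per backend):

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
toy = Dataset.from_dict({"text": ["good", "bad"] * 8, "label": [1, 0] * 8})
toy = toy.map(lambda x: tokenizer(x["text"], truncation=True,
                                  padding="max_length", max_length=16))

def model_init():
    # The Trainer is built with model_init rather than a fixed model,
    # so every trial starts from a fresh copy of the weights.
    return AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

trainer = Trainer(
    model_init=model_init,
    args=TrainingArguments(output_dir="hp_search"),
    train_dataset=toy,
    eval_dataset=toy,
)

# SigOpt-style search space: a list of parameter definitions.
def hp_space(trial):
    return [
        {"name": "learning_rate", "type": "double",
         "bounds": {"min": 1e-6, "max": 1e-4}},
        {"name": "per_device_train_batch_size", "type": "categorical",
         "categorical_values": ["8", "16"]},
    ]

best_run = trainer.hyperparameter_search(hp_space=hp_space,
                                         backend="sigopt", n_trials=10)
```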

Optimum Developer Experience

Hugging Face designed Optimum, an open-source library, to ease transformer acceleration across a growing range of training and inference devices. Beginners can use Optimum out of the box, thanks to built-in optimisation algorithms and ready-made scripts, while experts can keep tuning for maximum efficiency.

The Optimum Intel interface connects the Transformers library to the various tools and libraries supplied by Intel to accelerate end-to-end pipelines on Intel platforms. Built on top of the Intel® Neural Compressor, it provides a unified experience for popular network compression techniques such as quantisation, pruning, and knowledge distillation. With the Optimum Intel library, developers can, for example, apply post-training quantisation to a transformer model and then compare model metrics on evaluation datasets.
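As an illustrative sketch of dynamic post-training quantisation (class and config names have shifted across optimum-intel and neural-compressor releases, so treat them as assumptions to check against the installed versions):

```python
from neural_compressor.config import PostTrainingQuantConfig
from optimum.intel import INCQuantizer
from transformers import AutoModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Dynamic post-training quantisation needs no calibration dataset:
# weights are quantised ahead of time and activations on the fly.
quantizer = INCQuantizer.from_pretrained(model)
quantizer.quantize(
    quantization_config=PostTrainingQuantConfig(approach="dynamic"),
    save_directory="distilbert-sst2-int8",
)
```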

Optimum Intel also provides a straightforward interface for optimising transformer models, converting them to OpenVINO intermediate representation format, and running OpenVINO inference.
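A minimal sketch of that path (on older optimum-intel releases the export flag is spelled from_transformers=True rather than export=True; the checkpoint is illustrative):

```python
from optimum.intel import OVModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly.
model = OVModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("OpenVINO inference through Optimum Intel."))
```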

Accelerated Training with Habana Gaudi

Hugging Face and Habana Labs are cooperating to make training large-scale, high-quality transformer models easier and faster. With a few lines of code, data scientists and machine learning engineers can accelerate transformer deep learning training on Habana processors – Gaudi and Gaudi2 – using Habana’s SynapseAI® software suite and the Hugging Face Optimum-Habana open-source library, as the sketch below shows.
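A minimal example in the style of the Optimum-Habana documentation, assuming a machine with Gaudi HPUs (the toy dataset is a placeholder, and Habana/bert-base-uncased is one of the ready-made Gaudi configurations on the Hub):

```python
from datasets import Dataset
from optimum.habana import GaudiTrainer, GaudiTrainingArguments
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
toy = Dataset.from_dict({"text": ["good", "bad"] * 8, "label": [1, 0] * 8})
toy = toy.map(lambda x: tokenizer(x["text"], truncation=True,
                                  padding="max_length", max_length=16))

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
args = GaudiTrainingArguments(
    output_dir="gaudi_out",
    use_habana=True,      # run on HPU devices rather than CPU/GPU
    use_lazy_mode=True,   # Habana's lazy-execution graph mode
    gaudi_config_name="Habana/bert-base-uncased",
)

GaudiTrainer(model=model, args=args, train_dataset=toy).train()
```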

The Optimum-Habana library supports a range of computer vision, natural language, and multimodal models. BERT, ALBERT, DistilBERT, RoBERTa, Vision Transformer (ViT), Swin Transformer, T5, GPT-2, Wav2Vec2, and Stable Diffusion are among the supported and tested model architectures. More than 40,000 models based on these architectures are currently available on the Hugging Face Hub, and developers can easily run them on Gaudi and Gaudi2 using Optimum-Habana.

The cost-to-performance ratio of the Habana Gaudi solution, which powers Amazon’s EC2 DL1 instances, is up to 40 per cent better than that of comparable training solutions, allowing customers to train more while spending less. Gaudi2, built on the same high-efficiency architecture as first-generation Gaudi, promises similarly strong price performance.

The Optimum-Habana package now includes Habana DeepSpeed support, which makes it simple to configure and train large language models at scale on Gaudi devices using DeepSpeed optimisations. The Optimum-Habana DeepSpeed usage guide covers the details.
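Wiring DeepSpeed in is then mostly a matter of pointing the training arguments at a standard DeepSpeed JSON config, roughly as below (the file name is a placeholder):

```python
from optimum.habana import GaudiTrainingArguments

# "ds_config.json" is a placeholder for a standard DeepSpeed configuration
# file declaring, for instance, the ZeRO stage and precision settings.
args = GaudiTrainingArguments(
    output_dir="gaudi_ds_out",
    use_habana=True,
    use_lazy_mode=True,
    deepspeed="ds_config.json",
)
```

Multi-card runs are then typically launched with the gaudi_spawn.py script shipped with Optimum-Habana's examples.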

The current version of Optimum-Habana also supports the Stable Diffusion pipeline from the Hugging Face Diffusers library, giving the Hugging Face developer community cost-effective text-to-image generation on Habana Gaudi.
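A short sketch following Optimum-Habana's published example (the checkpoint, scheduler, and Gaudi configuration names come from that example and may change between releases):

```python
from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline

model_name = "runwayml/stable-diffusion-v1-5"
scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler")
pipeline = GaudiStableDiffusionPipeline.from_pretrained(
    model_name,
    scheduler=scheduler,
    use_habana=True,
    use_hpu_graphs=True,   # capture the compute graph once and replay it
    gaudi_config="Habana/stable-diffusion",
)
images = pipeline(prompt="an astronaut riding a horse on the moon").images
```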

Few-shot Learning in Production

SetFit, a framework for few-shot fine-tuning of Sentence Transformers, was recently introduced by Intel Labs, Hugging Face, and the UKP Lab. Few-shot learning with pre-trained language models has emerged as a promising answer to a real challenge for data scientists: making the most of data when labelled examples are scarce.

Existing few-shot fine-tuning techniques require handcrafted prompts or verbalisers to convert examples into a format suitable for the underlying language model. SetFit dispenses with prompts altogether by generating rich embeddings directly from a small number of labelled text examples.

Researchers designed SetFit to work with any Sentence Transformer on the Hugging Face Hub, which means text can be classified in multiple languages by fine-tuning a multilingual checkpoint.
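An end-to-end sketch with the library's original trainer class (newer setfit releases rename SetFitTrainer to Trainer; the checkpoint and the handful of labelled examples are illustrative):

```python
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

# A handful of labelled examples per class is often enough for SetFit.
train_ds = Dataset.from_dict({
    "text": ["loved every minute", "a total waste of time",
             "brilliant and moving", "dull and predictable"],
    "label": [1, 0, 1, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(model=model, train_dataset=train_ds)
trainer.train()

print(model(["an instant classic", "utterly boring"]))
```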
