Use Any Language Model from the Hugging Face Model Hub

How to leverage Haystack's native integration with Hugging Face's model hub.

Haystack, the open-source framework for composable natural language processing (NLP), provides the building blocks for customized semantic search and question answering (QA) pipelines. With Haystack, you can set up a production-ready NLP system in no time. A key tool for working in Haystack is the Hugging Face Model Hub, a platform for sharing NLP models.

What is the Hugging Face model hub?

Hugging Face's model hub serves as an interface to pre-trained machine learning (ML) models. Model training requires significant resources to which not everyone has access, and retraining a model can be an unnecessary waste of computational power. The model hub takes advantage of the fact that you can save a trained model’s parameters for future use. The HF platform allows machine learning practitioners to share their own pre-trained model checkpoints that anybody can use, whether for inference or as a basis for further fine-tuning.

How does it work?

Hugging Face hosts models in the form of git-based repositories. This means the model hub comes with version control, and you can use common git commands to push, update, or clone the model repositories. To learn more, have a look at the Hugging Face Hub documentation.
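"""
For instance, here is a minimal sketch of fetching a file from a model repository programmatically, using the huggingface_hub Python client as an alternative to raw git commands; the repository name below is just an example:

```python
# A minimal sketch using the huggingface_hub client; the repo_id
# is an example model repository, not the only option.
from huggingface_hub import hf_hub_download

# Download a single file from a model repository on the hub.
# Files are cached locally, so repeated calls are cheap.
config_path = hf_hub_download(
    repo_id="deepset/roberta-base-squad2",
    filename="config.json",
)
print(config_path)  # path to the locally cached file
```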

Is it only for Transformer models?

The model hub is not restricted to Transformer models. In fact, any model that is available as a TensorFlow or PyTorch checkpoint can be uploaded to the HF model hub. In practice, given the impressive results that Transformers have achieved in ML, and in NLP in particular, most of the models shared on the hub are Transformer-based.
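
To illustrate, loading such a checkpoint from the hub with the transformers library looks roughly like this; the model name is an example, and any compatible repository works the same way:

```python
# A minimal sketch, assuming the transformers library is installed;
# "bert-base-uncased" is an example hub repository.
from transformers import AutoModel, AutoTokenizer

# Both the model weights (PyTorch or TensorFlow) and the tokenizer
# files are pulled from the same hub repository.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
```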

Is it only for NLP models?

The model hub is not just for NLP models, though NLP applications like question answering, text classification, and sentiment analysis are clearly its focus. You can also find models for computer vision and audio tasks.

How to use Hugging Face models in Haystack pipelines

Haystack NLP pipelines are directed acyclic graphs (DAGs). Pipeline graphs consist of a succession of nodes connected by edges, where every node performs exactly one task. For instance, in a basic extractive QA pipeline, the Retriever node handles document retrieval, while the Reader node applies a Transformer-based question answering model to the documents. 
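
As a rough sketch, assuming the Haystack 1.x API, a basic extractive QA pipeline can be wired up like this; the document contents and model name are placeholders:

```python
# A sketch assuming a recent Haystack 1.x release (InMemoryDocumentStore
# gained BM25 support via use_bm25=True); models and documents are examples.
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import BM25Retriever, FARMReader
from haystack.pipelines import ExtractiveQAPipeline

document_store = InMemoryDocumentStore(use_bm25=True)
document_store.write_documents(
    [{"content": "Haystack is an open source NLP framework."}]
)

# The Retriever node narrows down candidate documents;
# the Reader node runs a Transformer QA model over them.
retriever = BM25Retriever(document_store=document_store)
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")

pipeline = ExtractiveQAPipeline(reader=reader, retriever=retriever)
result = pipeline.run(query="What is Haystack?")
print(result["answers"][0].answer)
```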

Pipelines are flexible: you can add or remove nodes according to your use case. Haystack offers ready-made nodes for summarization, classification, text generation, and many other tasks, but you can also design your own custom nodes. For instance, you could set up a decision node with more than one outgoing edge. What makes nodes so convenient is that they handle model loading for you: all you need to do is pass the model’s repository name when instantiating a node, as in the sketch below.
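
For illustration, and again assuming Haystack 1.x node names, here is how different node types each accept a hub repository name at instantiation; the model names are examples from the hub:

```python
# A sketch assuming Haystack 1.x; both model names are examples.
from haystack.nodes import FARMReader, TransformersSummarizer

# Swap the QA model simply by passing a different hub repo name.
reader = FARMReader(model_name_or_path="deepset/minilm-uncased-squad2")

# Other node types load hub models the same way.
summarizer = TransformersSummarizer(model_name_or_path="facebook/bart-large-cnn")
```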

Advantages of combining Haystack and models from the Hugging Face model hub

In addition to the advantages outlined above, there are a few more reasons for using Haystack as your interface to HF pre-trained models:

  1. Composability. Haystack allows you to implement pipelines of varying degrees of complexity, from “vanilla” semantic QA systems to pipelines with parallel branches and decision nodes. You can add and combine nodes flexibly with just a few lines of code. 
  2. Debugging. The more complex a pipeline gets, the harder it becomes to debug. Haystack strives to make debugging your pipelines as easy as possible.
  3. Serialization. Haystack’s use of the YAML format makes it easy to define and save a pipeline configuration. Use YAML to share the architecture of your QA system with others, or to deploy it to production, as sketched below.
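
As a sketch of point 3, assuming the Haystack 1.x API and reusing the `pipeline` object from the extractive QA example above ("pipeline.yaml" is a placeholder filename):

```python
# A sketch assuming Haystack 1.x; `pipeline` is the ExtractiveQAPipeline
# built earlier, and "pipeline.yaml" is a placeholder path.
from pathlib import Path
from haystack.pipelines import Pipeline

# Save the pipeline's configuration (nodes, models, parameters) to YAML.
pipeline.save_to_yaml(Path("pipeline.yaml"))

# Later, or on another machine, reconstruct the pipeline from that file.
restored = Pipeline.load_from_yaml(Path("pipeline.yaml"))
```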

Start using Haystack now!