How to use Llama for text generation
A practical guide to getting started with Meta’s new large language model
If you have followed the latest trends in AI, you have probably heard of Llama, or even its successor, Llama 2. No, we're not talking about the domesticated South American animal; we're talking about Meta's latest open-source large language model, which has had a large influence on AI and natural language processing over the past year.
In this article, I will go over how large language models work, explain the architecture behind the latest version of Llama (Llama 2), and demonstrate how you can use Llama in Python.
What is a large language model (LLM)?
To explain how Llama works, we need to start with a general explanation of large language models (LLMs). A language model is a probabilistic model of human language. In other words, given a section of text, a language model predicts the next word by producing a probability distribution over the words in its vocabulary. This concept is similar to the idea behind Word2Vec and its continuous bag-of-words/skip-gram models, which predict target and surrounding context words in text.
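To make the "probability distribution over the vocabulary" idea concrete, here is a minimal toy sketch: a bigram model that counts which words follow each word in a tiny made-up corpus and turns those counts into next-word probabilities. This is only an illustration of the concept; Llama uses a transformer network over much longer contexts, not word-pair counts, and the corpus and function names here are purely hypothetical.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus -- purely made up, not Llama's training data.
corpus = "the cat sat on the mat the cat ate the food".split()

# Count bigrams: how often each word follows each context word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_distribution(word):
    """Probability distribution over the next word, given one context word."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

print(next_word_distribution("the"))
# -> {'cat': 0.5, 'mat': 0.25, 'food': 0.25}
```

A real LLM does the same thing in spirit, producing a distribution over its whole vocabulary at every step, but it conditions on the entire preceding text rather than a single word.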