Large Language Models, or LLMs, are statistical models of very large corpora of text. An LLM learns how often certain words and concepts appear near each other, and then uses those statistics to generate new text, usually in response to prompts. LLMs are sometimes called "auto-correct on steroids" because they predict the next word in a sequence much like auto-correct does; the difference is that LLMs have vastly more data to draw on, which makes them capable of generating human-like text in a wide variety of styles and contexts.

[Large language model - Wikipedia](https://en.wikipedia.org/wiki/Large_language_model):

> A **large language model** (**LLM**) is a computational [model](https://en.wikipedia.org/wiki/Model#Conceptual_model "Model") notable for its ability to achieve general-purpose language generation and other [natural language processing](https://en.wikipedia.org/wiki/Natural_language_processing "Natural language processing") tasks such as [classification](https://en.wikipedia.org/wiki/Statistical_classification "Statistical classification"). Based on [language models](https://en.wikipedia.org/wiki/Language_model "Language model"), LLMs acquire these abilities by learning statistical relationships from vast amounts of text during a computationally intensive [self-supervised](https://en.wikipedia.org/wiki/Self-supervised_learning "Self-supervised learning") and [semi-supervised](https://en.wikipedia.org/wiki/Semi-supervised_learning "Semi-supervised learning") training process. LLMs can be used for text generation, a form of [generative AI](https://en.wikipedia.org/wiki/Generative_artificial_intelligence "Generative artificial intelligence"), by taking an input text and repeatedly predicting the next token or word.

---

**Relates to**: [[AI]], [[Generative AI|GenAI]]
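
As a toy illustration of the "predict the next word from statistics" idea above, the sketch below builds a bigram model in plain Python: it counts which word follows which in a made-up corpus, then samples new text one word at a time from those counts. This is nothing like a real neural LLM, but the autoregressive loop (take the text so far, predict the next word, append, repeat) has the same shape. The corpus and function names here are invented for illustration.

```python
import random
from collections import Counter, defaultdict

# Tiny, made-up corpus; real LLMs train on vast amounts of text.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count how often each word follows each other word (bigram statistics).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    """Repeatedly sample the next word from the counts -- the same
    autoregressive loop an LLM runs, minus the neural network."""
    words = [start]
    for _ in range(length):
        candidates = follows[words[-1]]
        if not candidates:  # dead end: no word ever followed this one
            break
        nxt = random.choices(list(candidates),
                             weights=list(candidates.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the dog sat on the mat and the cat"
```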