Perplexity: A measurement of how well a language model predicts text. Lower perplexity means the model is less 'surprised' by the text it sees. Technically, it is the exponentiated average negative log-likelihood of the text under the model. Useful for comparing models, but it doesn't always correlate with usefulness on specific tasks.
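As a minimal sketch of the definition above: given the (natural) log-probability the model assigns to each token, perplexity is the exponential of the average negative log-likelihood. The function name and example values here are illustrative, not from any particular library.

```python
import math

def perplexity(token_log_probs):
    # Exponentiated average negative log-likelihood:
    # exp(-(1/N) * sum of log p(token_i))
    avg_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_nll)

# A model that spreads probability uniformly over 4 choices
# per token has perplexity 4: it is "choosing among 4 options".
uniform_logps = [math.log(0.25)] * 4
ppl = perplexity(uniform_logps)
```

Intuitively, a perplexity of N means the model is, on average, as uncertain as if it were picking uniformly among N tokens at each step.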
Evaluation: The process of measuring how well an AI model performs on its intended task.
Language Model: An AI model trained to understand and generate human language.
Benchmark: A standardized test or dataset used to measure and compare AI model performance.
Activation Function: A mathematical function applied to a neuron's output that introduces non-linearity into the network.
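To make this concrete, here is a small sketch of a single neuron: a weighted sum of inputs passed through an activation function (ReLU is used here as one common choice; the names are illustrative).

```python
def relu(x: float) -> float:
    # ReLU, a widely used activation: passes positive values through,
    # clamps negative values to zero, making the neuron non-linear.
    return x if x > 0 else 0.0

def neuron(inputs, weights, bias):
    # Linear part: weighted sum of inputs plus a bias term...
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # ...then the activation introduces the non-linearity.
    return relu(z)
```

Without the activation, stacking many such neurons would still only compute a linear function of the inputs; the non-linearity is what lets networks model more complex relationships.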
Adam: An optimization algorithm that combines the advantages of two other methods, AdaGrad and RMSProp.
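As a rough sketch of a single Adam update for one scalar parameter (default hyperparameters follow common convention; the function name is illustrative): it tracks exponential moving averages of the gradient and its square, corrects their early-step bias, and scales the step accordingly.

```python
def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    # Moving average of the gradient (momentum-like first moment).
    m = b1 * m + (1 - b1) * grad
    # Moving average of the squared gradient (RMSProp-like second moment).
    v = b2 * v + (1 - b2) * grad * grad
    # Bias correction: the averages start at zero, so early values
    # underestimate the true moments; t is the 1-based step count.
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    # Per-parameter adaptive step.
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v
```

The second-moment scaling adapts the step size per parameter (as in AdaGrad/RMSProp), while the first moment smooths noisy gradients.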
AGI: Artificial General Intelligence; a hypothetical AI system with human-level capability across a broad range of tasks rather than a single narrow one.