Contrastive Language-Image Pre-training (CLIP). An OpenAI model that learns to connect images and text by training on roughly 400 million image-caption pairs. It maps images and natural language descriptions into a shared embedding space, so each can be searched or classified using the other. CLIP powers many image generation and search systems.
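As a minimal sketch, the snippet below does zero-shot image classification with CLIP. It assumes the Hugging Face transformers and Pillow packages and the public openai/clip-vit-base-patch32 checkpoint; the image URL is a placeholder.

```python
# Sketch: score an image against candidate captions with CLIP.
from PIL import Image
import requests
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Placeholder URL; any RGB image works.
image = Image.open(requests.get("https://example.com/cat.jpg", stream=True).raw)
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

# Encode the image and the candidate captions, then compare their embeddings.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)  # similarities -> probabilities

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```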
Multimodal AI. AI models that can understand and generate multiple types of data, such as text, images, audio, and video.
Contrastive learning. A self-supervised learning approach where the model learns by pulling representations of similar pairs of examples together and pushing dissimilar pairs apart.
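A minimal sketch of one common formulation, an InfoNCE-style loss in PyTorch: matching rows of two embedding batches (two views of the same examples) are the similar pairs, and every other pairing in the batch acts as a dissimilar pair.

```python
# Sketch: InfoNCE-style contrastive loss (illustration of the idea only).
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.07):
    """z1, z2: (batch, dim) embeddings of two views of the same examples."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / temperature       # pairwise similarities
    targets = torch.arange(z1.size(0))     # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

# Usage with random embeddings, for illustration.
loss = contrastive_loss(torch.randn(8, 128), torch.randn(8, 128))
```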
Zero-shot learning. A model's ability to perform a task it was never explicitly trained on, with no examples provided.
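For illustration, the Hugging Face transformers zero-shot-classification pipeline scores a sentence against arbitrary labels the model was never fine-tuned on; the example sentence and labels below are made up for the sketch.

```python
# Sketch: zero-shot text classification via an NLI model.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = classifier(
    "The new GPU doubles training throughput.",
    candidate_labels=["hardware", "cooking", "sports"],  # labels never seen in training
)
print(result["labels"][0], result["scores"][0])  # top label and its score
```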
Activation function. A mathematical function applied to a neuron's output that introduces non-linearity into the network.
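A minimal sketch of two common activation functions applied to example pre-activation values, using plain NumPy.

```python
# Sketch: ReLU and sigmoid activations.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)        # zero for negative inputs, identity otherwise

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))  # squashes any input into (0, 1)

z = np.array([-2.0, -0.5, 0.0, 1.5])  # example pre-activation values
print(relu(z), sigmoid(z))
```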
Adam (Adaptive Moment Estimation). An optimization algorithm that combines the strengths of two earlier methods, AdaGrad and RMSProp.
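A minimal sketch of a single Adam update step in NumPy, following the standard published update rule; the hyperparameter defaults shown are the commonly used ones.

```python
# Sketch: one Adam parameter update.
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad        # running first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # running second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Usage on a toy parameter vector and gradient (t counts steps from 1).
theta, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
theta, m, v = adam_step(theta, np.array([0.1, -0.2, 0.3]), m, v, t=1)
```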
AGI (Artificial General Intelligence). A hypothetical form of AI that could match or exceed human performance across a broad range of cognitive tasks, rather than excelling only at a single narrow task.