Reward model: A model trained to predict how helpful, harmless, and honest a response is, based on human preferences. It provides the reward signal for RLHF training. Getting the reward model right is crucial: if it rewards the wrong things, the final model will optimize for the wrong behavior.
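As a rough sketch of how such a model is commonly trained (assuming PyTorch; the function name and the scores below are made up for illustration), reward models typically learn from pairwise comparisons, pushing the score of the human-preferred response above the score of the rejected one:

```python
import torch
import torch.nn.functional as F

def reward_model_loss(score_chosen: torch.Tensor, score_rejected: torch.Tensor) -> torch.Tensor:
    # Pairwise preference loss: small when the chosen response scores higher
    # than the rejected one, large when the ranking is reversed.
    return -F.logsigmoid(score_chosen - score_rejected).mean()

# Hypothetical scalar scores a reward model assigned to two response pairs.
chosen = torch.tensor([1.8, 0.4])    # responses the human labelers preferred
rejected = torch.tensor([0.9, 0.7])  # responses the labelers rejected
loss = reward_model_loss(chosen, rejected)
print(loss.item())  # backpropagating this loss teaches the model to rank responses like humans do
```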
RLHF: Reinforcement Learning from Human Feedback. A technique for fine-tuning a language model with reinforcement learning, using a reward model trained on human preferences to supply the reward signal.
AI alignment: The research field focused on making sure AI systems do what humans actually want them to do.
Reinforcement learning: A learning approach where an agent learns by interacting with an environment and receiving rewards or penalties.
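A minimal toy sketch of that loop (plain Python; the two-lever environment and the epsilon-greedy rule are invented for illustration): the agent tries actions, receives rewards, and gradually prefers the action that pays off more often.

```python
import random

class SlotMachineEnv:
    """Toy environment: two levers, and the 'right' lever pays out more often."""
    def step(self, action: str) -> float:
        payout_prob = 0.7 if action == "right" else 0.3
        return 1.0 if random.random() < payout_prob else 0.0

env = SlotMachineEnv()
values = {"left": 0.0, "right": 0.0}  # the agent's estimated value of each action
counts = {"left": 0, "right": 0}

for _ in range(1000):
    # Epsilon-greedy: mostly exploit the best-looking action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(["left", "right"])
    else:
        action = max(values, key=values.get)
    reward = env.step(action)                      # environment returns a reward
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]  # running average

print(values)  # the agent learns that 'right' has the higher value (about 0.7 vs 0.3)
```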
Activation function: A mathematical function applied to a neuron's output that introduces non-linearity into the network.
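For example (a minimal sketch in plain Python, with made-up weights and inputs), two common activation functions applied to a neuron's raw linear output:

```python
import math

def relu(x: float) -> float:
    # ReLU passes positive values through and zeroes out negatives.
    return max(0.0, x)

def sigmoid(x: float) -> float:
    # Sigmoid squashes any input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# A neuron's raw output: weighted sum of inputs plus a bias (values are illustrative).
pre_activation = 0.8 * 1.5 + 0.2 * (-2.0) + 0.1
print(relu(pre_activation), sigmoid(pre_activation))  # the non-linearity applied on top
```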
Adam: An optimization algorithm that combines the advantages of two other methods, AdaGrad and RMSProp.
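A didactic single-parameter sketch of the Adam update (plain Python; the hyperparameter values and the toy objective f(x) = x^2 are chosen for illustration): it keeps running averages of the gradient and its square, corrects their early bias, and scales each step accordingly.

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad        # running mean of gradients (momentum-like)
    v = beta2 * v + (1 - beta2) * grad ** 2   # running mean of squared gradients (RMSProp-like)
    m_hat = m / (1 - beta1 ** t)              # bias correction for the early steps
    v_hat = v / (1 - beta2 ** t)
    theta -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(x) = x^2, whose gradient is 2x, starting from x = 3.
x, m, v = 3.0, 0.0, 0.0
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t)
print(x)  # x has moved from 3.0 to near the minimum at 0
```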
AGI: Artificial General Intelligence.