
Reinforcement Learning from Human Feedback

Reinforcement Learning from Human Feedback (RLHF), also known as reinforcement learning from human preferences, is a technique that trains a "reward model" directly from human feedback and then uses that model as the reward function when optimizing an agent's policy with a reinforcement learning (RL) algorithm.
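The two stages can be illustrated with a short sketch. The example below is a minimal, toy illustration in PyTorch: a reward model is fit to pairwise human preferences with a Bradley-Terry style loss, and a policy is then optimized against that learned reward with a simple REINFORCE update. The state size, network sizes, synthetic preference data, and the use of REINFORCE rather than PPO are illustrative assumptions, not any particular production RLHF recipe.

```python
# Minimal RLHF sketch in PyTorch on a toy bandit-style task.
# All dimensions, data, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
STATE_DIM, N_ACTIONS = 4, 3

def featurize(state, action):
    # Concatenate the state with a one-hot encoding of the action.
    one_hot = torch.zeros(N_ACTIONS)
    one_hot[action] = 1.0
    return torch.cat([state, one_hot])

# ----- Stage 1: fit a reward model from pairwise human preferences -----
# Each record says: for this state, humans preferred a_win over a_lose.
reward_model = nn.Sequential(
    nn.Linear(STATE_DIM + N_ACTIONS, 32), nn.ReLU(), nn.Linear(32, 1)
)
prefs = [(torch.randn(STATE_DIM), 0, 1) for _ in range(64)]  # toy preference data

opt_rm = torch.optim.Adam(reward_model.parameters(), lr=1e-2)
for _ in range(200):
    loss = 0.0
    for state, a_win, a_lose in prefs:
        r_win = reward_model(featurize(state, a_win))
        r_lose = reward_model(featurize(state, a_lose))
        # Bradley-Terry style loss: the preferred action should score higher.
        loss = loss - torch.nn.functional.logsigmoid(r_win - r_lose).squeeze()
    opt_rm.zero_grad()
    loss.backward()
    opt_rm.step()

# ----- Stage 2: optimize the policy against the learned reward -----
policy = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(), nn.Linear(32, N_ACTIONS))
opt_pi = torch.optim.Adam(policy.parameters(), lr=1e-2)

for _ in range(200):
    state = torch.randn(STATE_DIM)
    dist = torch.distributions.Categorical(logits=policy(state))
    action = dist.sample()
    # The learned reward model stands in for a hand-written reward function.
    with torch.no_grad():
        reward = reward_model(featurize(state, action.item())).squeeze()
    # Simple REINFORCE update; full RLHF pipelines typically use PPO
    # plus a KL penalty toward a reference policy.
    loss_pi = -dist.log_prob(action) * reward
    opt_pi.zero_grad()
    loss_pi.backward()
    opt_pi.step()
```

In this sketch the policy never sees a hand-coded reward: its only training signal comes from the reward model, which was itself fit to human preference comparisons.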
