Expressive robot that communicates through emotions only.
Reachy Mini listens to you and responds with procedurally generated motion and sound based on the PAD emotional model. No speech, just pure expression.
git clone https://huggingface.co/spaces/RemiFabre/feeling_machine
cd feeling_machine && uv sync
cd src/feeling_machine && python main.py --gradio

Instead of playing pre-recorded animations, Feeling Machine generates unique expressive movements on the fly using the PAD (Pleasure-Arousal-Dominance) emotional model - a psychological framework that represents emotions as points in a 3D space.
Pre-recorded emotions look the same every time. Procedural generation creates infinite variations - each expression is unique while staying emotionally consistent.
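To make that concrete, here is a minimal sketch of the idea; the PAD coordinates and the jitter amount are illustrative assumptions, not the project's actual values:

```python
import random

# Illustrative PAD coordinates in [-1, 1] (assumed values, not the project's own mapping).
PAD_PRESETS = {
    "joy":     ( 0.8,  0.7,  0.4),
    "sadness": (-0.6, -0.5, -0.4),
    "anger":   (-0.5,  0.8,  0.6),
    "fear":    (-0.6,  0.7, -0.7),
    "neutral": ( 0.0,  0.0,  0.0),
}

def sample_pad(emotion: str, jitter: float = 0.1) -> tuple[float, float, float]:
    """Return a slightly perturbed PAD point so each expression comes out unique."""
    p, a, d = PAD_PRESETS[emotion]
    noise = lambda: random.uniform(-jitter, jitter)
    clamp = lambda x: max(-1.0, min(1.0, x))
    return clamp(p + noise()), clamp(a + noise()), clamp(d + noise())

print(sample_pad("joy"))  # e.g. (0.74, 0.76, 0.43) -- different every run, still "joy"
```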
The PAD values control head movement amplitude, speed, antenna positions, and sound characteristics. Joy produces quick, bouncy movements with high-pitched sounds. Sadness creates slow, drooping motions with lower tones.
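As a rough illustration of how such a mapping could work, the sketch below derives motion and sound parameters from a PAD triple; the parameter names, ranges, and formulas are assumptions for illustration, not the project's actual code:

```python
from dataclasses import dataclass

@dataclass
class ExpressionParams:
    head_amplitude: float  # radians of head sway
    speed: float           # movement speed multiplier
    head_height: float     # vertical head offset; dominant emotions sit higher
    antenna_angle: float   # degrees; positive = perked up, negative = drooping
    pitch_hz: float        # base pitch of the generated sound

def pad_to_params(pleasure: float, arousal: float, dominance: float) -> ExpressionParams:
    """Toy mapping: arousal drives speed and amplitude, pleasure drives antennas
    and pitch, dominance drives posture. The real mapping is likely richer."""
    return ExpressionParams(
        head_amplitude=0.1 + 0.2 * (arousal + 1) / 2,
        speed=0.5 + 1.5 * (arousal + 1) / 2,
        head_height=0.3 * dominance,
        antenna_angle=45 * pleasure,              # joyful -> perked, sad -> drooping
        pitch_hz=220 + 220 * (pleasure + 1) / 2,  # happier -> higher tone
    )

# Joy (high P/A) yields fast, bouncy motion with a higher pitch;
# sadness (low P/A) yields slow, drooping motion with a lower tone.
print(pad_to_params(0.8, 0.7, 0.4))
print(pad_to_params(-0.6, -0.5, -0.4))
```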
Available preset emotions: joy, happiness, anger, fear, sadness, surprise, boredom, uncertainty, disgust, neutral.
Or the AI can specify custom P/A/D values for nuanced emotional expressions.
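What a custom P/A/D specification might look like is sketched below; `CustomEmotion` and its fields are hypothetical stand-ins for illustration, since the project's actual interface is not shown here:

```python
from dataclasses import dataclass

# Hypothetical container for a custom PAD point -- the real entry point may differ.
@dataclass
class CustomEmotion:
    pleasure: float   # -1 (displeasure) .. +1 (pleasure)
    arousal: float    # -1 (calm) .. +1 (excited)
    dominance: float  # -1 (submissive) .. +1 (dominant)

# A nuanced state between surprise and uncertainty:
# mildly pleasant, highly aroused, low dominance.
nervous_surprise = CustomEmotion(pleasure=0.2, arousal=0.8, dominance=-0.3)
print(nervous_surprise)
```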
PAD-based motion and sound generation by Anaelle Jaffré, from the I3R student project.