Resynth
A Machine That Listens. A Machine That Feels
Type
Personal Project
Timeframe
In Progress
Toolkit
Figma, DistilRoBERTa (AI), Tone.js
Year
2024
🧠 Overview
Resynth is an audiovisual installation that transforms African political speeches into living emotional expressions through light and sound. Built with AI and sound synthesis, the system listens not only to what is said but to how it is felt, making the hidden emotional layers behind political rhetoric visible, audible, and alive.
🎯 Why We Built This
In political spaces, words carry weight, but the emotions beneath them often go unnoticed. Resynth was created to challenge that. By focusing on African political speech, we aim to expose emotional currents that are often erased or overlooked in mainstream media and data systems. It's not about what's right or wrong, but about feeling what was meant to be felt.

🧩 How It Works
Step 1 – Speech Transcription
We collect political speeches from African leaders and transcribe the spoken content into text.
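A minimal sketch of this step, assuming the speeches are transcribed with Whisper running through transformers.js; the project doesn't name its transcription tool, and the model id and file path below are illustrative.

```ts
// Hypothetical transcription step: the project doesn't name its
// speech-to-text tool, so this assumes Whisper via transformers.js.
import { pipeline } from '@xenova/transformers';

// Load a small English speech-recognition model once at startup.
const transcriber = await pipeline(
  'automatic-speech-recognition',
  'Xenova/whisper-tiny.en',
);

// Transcribe one recorded speech (a URL or an audio buffer).
const { text } = await transcriber('speeches/address.wav'); // illustrative path
console.log(text);
```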
Step 2 – Emotion Detection with AI
A machine learning model (DistilRoBERTa) analyzes each line of text and tags it with one of seven emotions: joy, anger, sadness, fear, disgust, surprise, or neutrality.
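A minimal sketch of this tagging step, assuming an emotion-tuned DistilRoBERTa checkpoint (e.g., j-hartmann/emotion-english-distilroberta-base, which predicts exactly these seven labels) run through the transformers.js text-classification pipeline; the ONNX model id and sample lines are illustrative.

```ts
// Emotion tagging with an emotion-tuned DistilRoBERTa.
// The model id below is an assumed ONNX port for transformers.js.
import { pipeline } from '@xenova/transformers';

const classifier = await pipeline(
  'text-classification',
  'Xenova/emotion-english-distilroberta-base', // assumed model id
);

// Tag each transcribed line with its dominant emotion.
const lines = [
  'We will not be silenced.',
  'Together, our future is bright.',
];
for (const line of lines) {
  const [{ label, score }] = await classifier(line);
  console.log(`${label} (${score.toFixed(2)}): ${line}`);
}
```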
Step 3 – Emotional Synthesis
Each emotion is mapped to a distinct color, a motion behavior (e.g., spiraling for surprise, drooping for sadness), and a sound synthesized with Tone.js. Emotions are blended in real time to mimic the layered nature of real human feeling.
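A minimal sketch of that mapping, with Tone.js voicing a chord per emotion; the spiral and droop motions come from the description above, while the colors, notes, and remaining motion names are illustrative placeholders.

```ts
// Sketch of the emotion-to-output mapping. Only "spiral" (surprise) and
// "droop" (sadness) come from the project; everything else is a placeholder.
import * as Tone from 'tone';

type Emotion = 'joy' | 'anger' | 'sadness' | 'fear' | 'disgust' | 'surprise' | 'neutral';

interface Expression {
  color: string;   // CSS color driving the visuals
  motion: string;  // name of the motion behavior
  notes: string[]; // chord voiced by the synth
}

const PALETTE: Record<Emotion, Expression> = {
  joy:      { color: '#ffd166', motion: 'bloom',  notes: ['C4', 'E4', 'G4'] },
  anger:    { color: '#ef476f', motion: 'shake',  notes: ['C3', 'F#3'] },
  sadness:  { color: '#26547c', motion: 'droop',  notes: ['A2', 'C3', 'E3'] },
  fear:     { color: '#6a4c93', motion: 'twitch', notes: ['B2', 'C3'] },
  disgust:  { color: '#5f7d4f', motion: 'recoil', notes: ['D3', 'Eb3'] },
  surprise: { color: '#06d6a0', motion: 'spiral', notes: ['E4', 'B4'] },
  neutral:  { color: '#9a9a9a', motion: 'drift',  notes: ['G3'] },
};

const synth = new Tone.PolySynth(Tone.Synth).toDestination();

// Render one tagged line: tint the scene and sound the emotion's chord.
function express(emotion: Emotion) {
  const { color, motion, notes } = PALETTE[emotion];
  document.body.style.backgroundColor = color; // stand-in for the real visuals
  console.log(`motion: ${motion}`);
  synth.triggerAttackRelease(notes, '2n');
}

// Browsers require a user gesture before audio can start.
document.addEventListener('click', async () => {
  await Tone.start();
  express('surprise');
});
```

The real installation blends several emotions at once; this sketch voices only the dominant tag per line.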
🎨 Design & Interaction
Visuals shift dynamically: colors pulse and forms twitch or spiral while matching audio plays in sync. The goal was to design a UI that disappears into the experience, allowing users to feel the performance of the speech through both senses.
🧠 The Role of UI
My focus was on creating an interface that complements the emotion engine without overpowering it. I designed the interactions to feel intuitive and alive, allowing users to focus entirely on the feeling, not the function.