Pre-launch · Concept stage

Audia

A brain-computer interface–powered music companion that detects your mood in real time and curates a dynamic playlist for focus, rest, or late-night grind sessions.


The problem

We all get stuck in the same playlists. You loop one genre for too long, or you know you want a different energy but can't quite put your finger on the exact song.

Manually searching, skipping, and tweaking playlists breaks focus, kills flow, and often leaves you more frustrated than when you started.

The solution · Audia

Audia uses a small, non-invasive brain–computer interface (BCI) that sits comfortably on your head. It detects your mood and cognitive state, suggests a few songs, learns your preferences, and builds a dynamic playlist around you.

Whether you're on a 5-minute class break, a 4-hour lock-in study session, or a 5 AM workout, Audia continuously adapts the BPM, intensity, and style of your music in real time—so you don't have to.

Why Hande × Audia is a strong founder–idea fit

BCI experience

Hands-on work with the Muse 2 headset and previous brain–computer interface projects builds intuition for what's feasible in non-invasive neurotechnology.

Product & dev

Prior experience building a Swift iOS app and a MERN web app lowers the friction of prototyping Audia's first product across mobile and web.

User testing & outreach

Consulting work with MasterCard Foundation and other projects builds skills in user research, structured experimentation, and talking to early adopters.

Core values

🧶 Personalized · 🪼 Unique · 🦦 Dynamic · 🧥 Private (by design)

Audia aims to be emotionally aware, context-sensitive, and privacy-preserving—tuned to your brain and your goals, not just your streaming history.

The idea · First principles

Software: decoding mood & preference

Long-term goal: build a model that learns the relationship between a user's brain activity and their response to different songs, without requiring constant manual feedback.

  • Model questions

    • Which brain regions should we target?
    • What should be unsupervised vs. supervised?
    • How should the architecture encode temporal EEG patterns?
  • Song features to explore

    • Genre, BPM, language, era
    • Context tags (study, breakup, workout, etc.)
    • Per-user & cross-user like rate
    • Linking songs to specific memories (e.g. films)
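As a toy framing of the preference-learning question above (every feature name, the synthetic data, and the model itself are placeholders for illustration, not a committed design): concatenate a song's features with a summarized EEG response and fit a like-probability with a plain logistic model.

```python
import numpy as np

rng = np.random.default_rng(0)

def featurize(song, eeg_summary):
    """Toy feature vector: [bpm/200, has-study-tag, EEG focus & affect proxies]."""
    return np.array([song["bpm"] / 200.0,
                     1.0 if "study" in song["tags"] else 0.0,
                     *eeg_summary])

def train_logistic(X, y, lr=0.5, steps=500):
    """Plain gradient-descent logistic regression: p(like | features)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Synthetic listening history: this user "likes" songs heard while focused.
songs = [{"bpm": int(rng.integers(60, 180)), "tags": ["study"]} for _ in range(200)]
eeg = rng.normal(size=(200, 2))          # [focus proxy, affect proxy]
likes = (eeg[:, 0] > 0).astype(float)    # ground truth tied to the focus proxy
X = np.array([featurize(s, e) for s, e in zip(songs, eeg)])
w = train_logistic(X, likes)
p = 1.0 / (1.0 + np.exp(-X @ w))
print(((p > 0.5) == likes).mean())  # should recover the synthetic rule
```

The real question Audia has to answer is where `eeg_summary` comes from without constant manual feedback; the supervised/unsupervised split above applies exactly there.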

Brain-state signals to decode

  • Concentration vs. stress vs. drowsiness
  • Positive vs. negative affect during a song
  • Annoyance/friction signals when something isn't working
  • Alignment with desired state (e.g. “more energy” vs. “calm”)
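One way to make these signals concrete is classic EEG band power. The sketch below is illustrative only: the beta/alpha ratio as a concentration-vs-drowsiness proxy is a textbook heuristic, and the window shape, sampling rate, and threshold are assumptions, not a validated pipeline.

```python
import numpy as np

def band_power(window, fs, lo, hi):
    """Mean spectral power of `window` (channels x samples) in [lo, hi] Hz."""
    freqs = np.fft.rfftfreq(window.shape[-1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(window, axis=-1)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[..., mask].mean()

def focus_proxy(window, fs=256):
    """Crude beta/alpha ratio: higher values loosely track alertness."""
    alpha = band_power(window, fs, 8, 12)    # relaxed wakefulness
    beta = band_power(window, fs, 13, 30)    # active concentration
    return beta / (alpha + 1e-9)

# 4 channels x 2 s of synthetic alpha-dominant (10 Hz) signal
t = np.arange(0, 2, 1 / 256)
window = np.tile(np.sin(2 * np.pi * 10 * t), (4, 1))
print(focus_proxy(window) < 1.0)  # alpha-dominant → low focus proxy
```

A real decoder would need artifact rejection, per-user calibration, and validated labels; this only shows the shape of the computation.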

Example dataset: myBrainTunes

A study from Imperial College London recorded EEG responses of 721 subjects to 30 songs using Emotiv EPOC+ headsets (14 channels, 256 Hz). This kind of dataset can inform Audia's early modeling directions.

Read about myBrainTunes ↗
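To give a sense of the data scale (shapes only; the array layout and "liked" label are invented for illustration, not the dataset's real format): slicing one per-song recording from a 14-channel, 256 Hz headset into fixed training windows.

```python
import numpy as np

FS, CHANNELS = 256, 14   # Emotiv EPOC+: 14 channels at 256 Hz

def windows(recording, label, win_s=2.0, hop_s=1.0):
    """Slice one (channels x samples) recording into overlapping windows."""
    win, hop = int(win_s * FS), int(hop_s * FS)
    n = (recording.shape[1] - win) // hop + 1
    X = np.stack([recording[:, i * hop : i * hop + win] for i in range(n)])
    y = np.full(n, label)
    return X, y

# One subject listening to one 30 s clip, labeled "liked" (=1)
rec = np.random.default_rng(1).normal(size=(CHANNELS, 30 * FS))
X, y = windows(rec, label=1)
print(X.shape)  # (29, 14, 512): 29 windows of 14 channels x 512 samples
```

At 721 subjects × 30 songs, even short clips yield hundreds of thousands of such windows, which is what makes a dataset like this useful for early modeling.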

Hardware: headset or headphone?

Audia is envisioned as a small, comfortable, non-invasive BCI: something closer to headphones than lab equipment.

  • Key decisions

    • Build custom hardware vs. leverage existing BCI platforms (e.g. Emotiv, Muse).
    • Headset vs. in-ear (AirPods-style) form factor.
    • Which cortical regions to cover for music & mood.
  • Inspiration & references

    • Apple's BCI-related patents for audio devices.
    • Emotiv's headset lineup: emotiv.com.
    • Research on designing BCIs that bridge neurons and digital systems.

Current landscape · Where Audia fits

Endel

Personalized soundscapes · No BCI

Visit Endel ↗

Endel creates real-time adaptive soundscapes for focus, relaxation, and sleep, using inputs like time of day, weather, heart rate, and location. It's a popular and well-designed tool for productivity and wellness.

The gap

  • No brain–computer interface integration.
  • Limited access to the user's real cognitive state beyond proxies (time, weather, heart rate).
  • Uses proprietary soundscapes rather than deeply personalized music tied to a user's brain responses.

Audia's angle

  • Directly links EEG signals to music response and preference.
  • Aims for highly personalized playlists, not just generic soundscapes.
  • Designed for dynamic adjustments: if a track starts to break focus, Audia adapts in real time.
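The dynamic-adjustment idea can be sketched as a tiny feedback loop (all names and the policy are hypothetical; real playback would go through a streaming API): if the focus proxy drops while a track plays, steer the next pick toward a calmer BPM.

```python
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    bpm: int
    intensity: float  # 0..1

def next_track(library, current, focus_delta):
    """Toy policy: if focus dropped during the current track, aim for a
    lower-BPM target; otherwise keep the energy where it is."""
    target_bpm = current.bpm - 30 if focus_delta < 0 else current.bpm
    return min(library, key=lambda t: abs(t.bpm - target_bpm))

library = [Track("lofi", 80, 0.3), Track("house", 122, 0.7), Track("dnb", 174, 0.9)]
now = Track("house", 122, 0.7)
print(next_track(library, now, focus_delta=-0.4).title)  # picks "lofi"
```

The interesting design work is in `focus_delta`: deciding how much evidence of broken focus to require before interrupting the current vibe.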

Research notes & reading list

Early ideation is grounded in existing research on music, focus, and brain–computer interfaces.

Audia is currently in the exploration phase. The next step is to talk to listeners, students, developers, and BCI researchers.

Want to collaborate, share datasets, or just jam on the idea?

Email Hande about Audia