Sensing Beyond Light: Why APG Changes the Game for In-Ear Sensing

30th April 2026

For years, photoplethysmography (PPG) has been the default way of introducing biosensing into consumer devices. It started on the wrist, made its way into earbuds, and today it powers many of the health features that users are beginning to expect.
The principle is straightforward: shine light into the skin, measure how it reflects, and translate that into physiological data. It works, it is a mature technology, and it helped define an entire category.

But as hearables evolve from audio devices into continuous sensing platforms, the question is no longer just whether we can measure something; it's how we do it, and what we might be missing. Because PPG, for all its success, is still fundamentally limited by what light can capture.

Aurisense: A New Approach

Now consider a different approach.

Not one that necessarily adds more sensors or increases system complexity, but one that starts from a simple observation: the ear is already one of the most information-rich and underutilized sensing locations on the human body.
It's enclosed and in direct proximity to critical physiological signals. More importantly, it's already the center of any acoustic system. So instead of shining light into the skin, what if you used sound to read the body? This is where audioplethysmography (APG) comes in. At USound, this approach is realized through Aurisense, a platform that turns the ear canal itself into an acoustic sensing environment.

At first glance, APG might sound like just another method of data acquisition. But that misses the point. APG is not just about replacing light with sound. It's about using the ear canal as a sensing environment and doing so in a way that goes far beyond the audible domain. USound's approach leverages ultrasound signals generated by our specifically engineered MEMS speakers, operating well above the human hearing threshold. These signals propagate through the ear canal and return information that reflects what's happening inside the body.

All of this happens continuously. And crucially, it happens in parallel to normal audio playback on the same hardware: while the user listens to music, the system listens to the body, at the very same time.
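The frequency separation that makes this parallelism possible can be illustrated with a toy simulation. The sketch below is an assumption-laden illustration, not USound's actual signal chain: a 440 Hz tone stands in for music, a quiet 30 kHz tone stands in for the ultrasonic probe, and both share one virtual driver signal. Because they occupy disjoint bands, almost all the energy a listener perceives sits below 20 kHz while the probe remains cleanly recoverable above it:

```python
import numpy as np

fs = 96_000                      # sample rate high enough to carry ultrasound
t = np.arange(fs) / fs           # 1 second of samples

# Hypothetical signals: a 440 Hz tone stands in for music playback,
# a quiet 30 kHz tone stands in for the ultrasonic sensing probe.
music = 0.8 * np.sin(2 * np.pi * 440 * t)
probe = 0.05 * np.sin(2 * np.pi * 30_000 * t)
driver_out = music + probe       # both ride on the same driver signal

def band_share(x, fs, lo, hi):
    """Fraction of total signal energy inside the [lo, hi) band."""
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return power[(freqs >= lo) & (freqs < hi)].sum() / power.sum()

audible_share = band_share(driver_out, fs, 0, 20_000)       # what the user hears
ultra_share = band_share(driver_out, fs, 20_000, fs / 2)    # what the sensor uses

print(f"audible band: {audible_share:.4f}, ultrasonic band: {ultra_share:.4f}")
```

Since the probe lies entirely above the audible band, the sensing path can isolate it with a simple high-pass filter while playback goes on undisturbed.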

This is where the difference becomes more fundamental. PPG is confined to an optical interaction at the skin surface. It observes changes in blood volume and derives insights from that single dimension. It is effective, but ultimately limited by what light can capture.

APG operates in a different domain entirely. Instead of relying on the visible spectrum, it moves into ultrasound frequencies beyond what we can hear. Inside the ear canal, these ultrasonic signals propagate through a complex physical environment, interacting with tissue, pressure, and structural characteristics in ways that go far beyond surface-level observation. What comes back is not just a reflection, but a response shaped by multiple factors. Every heartbeat, every subtle physiological change, and even each individual user influences how these signals behave.
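To make the heartbeat-to-echo idea concrete, here is a deliberately simplified simulation; the parameters (30 kHz probe, 2% modulation depth, 72 bpm) are invented for illustration and do not reflect USound's algorithm. The heartbeat weakly amplitude-modulates the returning ultrasound; coherent demodulation followed by a low-pass step recovers the pulse rate:

```python
import numpy as np

fs = 96_000                       # sample rate
dur = 5                           # seconds of simulated echo
t = np.arange(fs * dur) / fs

f_carrier = 30_000                # ultrasonic probe frequency (assumed)
f_heart = 1.2                     # 72 bpm heartbeat, the simulated ground truth

# Toy echo model: the returning ultrasound is weakly amplitude-modulated
# by the heartbeat (2 % modulation depth, an illustrative number).
echo = (1 + 0.02 * np.sin(2 * np.pi * f_heart * t)) * np.sin(2 * np.pi * f_carrier * t)

# Coherent demodulation: multiply by the carrier, then low-pass below 10 Hz
# (a crude FFT mask here) to keep only the slow physiological envelope.
mixed = echo * np.sin(2 * np.pi * f_carrier * t)
spec = np.fft.rfft(mixed)
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
spec[freqs > 10] = 0
envelope = np.fft.irfft(spec)

# The dominant non-DC frequency of the envelope is the heart rate.
power = np.abs(np.fft.rfft(envelope - envelope.mean()))
power[0] = 0
bpm = 60 * freqs[np.argmax(power)]
print(f"recovered heart rate: {bpm:.1f} bpm")
```

A real system would of course contend with motion artifacts, multipath reflections, and individual ear geometry; the point of the sketch is only the principle that physiology modulates the acoustic response and can be read back out.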

The result is a fundamentally richer dataset. Not just a pulse derived from light absorption, but a combination of acoustic and mechanical signatures that reflect what is happening inside the body.

Audible sound is what defines the user experience: music, voice, interaction. It operates in a frequency range optimized for perception. Aurisense, on the other hand, operates in a range optimized for sensing. By separating these domains, Aurisense leverages the full capability of the USound MEMS speakers, not just to deliver sound, but to probe the environment they operate in. The ear canal becomes more than a pathway for audio; it becomes a sensing volume where information can be actively captured and interpreted.

This shift, from using sound purely for output to using it as a sensing mechanism, expands what hearables are capable of. It moves biosensing from a surface-level measurement to a deeper interaction with the body’s physical state.

A New Architecture

Another implication is architectural. PPG requires dedicated optical hardware: emitters, detectors, and extensive mechanical integration effort.

APG does not.

By leveraging USound MEMS speakers, APG turns the existing acoustic system of the earbud into a sensing device without adding complexity. The same components that deliver sound become the interface for capturing physiological data.
That shift, from adding hardware to unlocking capability, changes how products are designed.

But the real impact shows up over time.
Because once sensing is no longer tied to a specific sensor, it becomes a question of interpretation. PPG delivers a defined set of metrics. Aurisense delivers a richer dataset, one that can evolve. New algorithms can extract new insights. New applications can emerge without redesigning the device. The system becomes more capable not by changing what it is, but by better understanding what it already captures.
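As a hedged illustration of that point (simulated data, invented band limits), the sketch below takes a single baseband envelope, already containing both a cardiac and a respiratory component, and extracts two different vital rates purely by asking different questions of the same signal:

```python
import numpy as np

fs = 100                          # baseband sample rate after demodulation (assumed)
dur = 20                          # seconds
t = np.arange(fs * dur) / fs

# Simulated envelope, as might be extracted from the ultrasonic echo:
# a 1.2 Hz cardiac component (72 bpm) plus a 0.25 Hz respiratory one (15 breaths/min).
envelope = 0.01 * np.sin(2 * np.pi * 1.2 * t) + 0.02 * np.sin(2 * np.pi * 0.25 * t)

def dominant_freq(x, fs, lo, hi):
    """Strongest spectral peak inside the [lo, hi] band, in Hz."""
    power = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return freqs[band][np.argmax(power[band])]

heart_bpm = 60 * dominant_freq(envelope, fs, 0.7, 3.0)     # cardiac band
breaths_pm = 60 * dominant_freq(envelope, fs, 0.1, 0.5)    # respiratory band

print(f"{heart_bpm:.0f} bpm, {breaths_pm:.0f} breaths/min")
```

The hardware is unchanged between the two extractions; only the analysis differs, which is the sense in which the system grows more capable through interpretation alone.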

Standing Out

The hearables market is moving into a phase where differentiation is increasingly difficult. Audio quality is expected. Noise cancellation is expected. Even health tracking is becoming standard. What comes next will be defined by how much more these devices can understand about the user. PPG was the first step; it brought biosensing into the mainstream. APG points to what comes after. Not by refining the same approach, but by shifting the perspective entirely.

From light to ultrasound.
From observing the body to interacting with it acoustically.
From simple signals to multi-dimensional data.

And from devices that play audio to devices that quietly, continuously, understand the body without the user ever noticing.


Kristóf Dornbach

Kristóf Dornbach is inspired by the challenge of turning technological potential into practical reality. As Product Manager at USound, he applies his strategic mindset and technical expertise to deliver impactful solutions in a dynamic market.