The way to get the best latency on a device is not to have your processing loop directly driven by audio hardware events/interrupts, for several reasons:
1) Events/interrupts can be delayed (usually by (bad) drivers disabling interrupts for a long time).
2) Latency is not adjustable. You may want to give your processing more time if you know more work needs to be done, or if the system is under variable load.
3) You will usually finish generating your audio too early. E.g.: if the interrupt fires every 2ms but your processing only takes 1ms, you are adding an unnecessary 1ms of latency by starting as soon as the interrupt is received.
4) The device will consume more power, as it cannot know when it will next wake up (theoretically; I do not think it makes a difference in practice).
Modern mobile devices use DMA to copy audio data from the application processor (AP) to the audio DSP. The DMA engine periodically copies the content of an AP buffer to a DSP buffer; this is called a DMA burst.
You want to track these bursts and wake up just early enough before the next one to generate your audio data and write it to the AP buffer, plus a safety margin.
This lets you monitor system performance and application load to adjust that safety margin and optimize latency. It also allows the scheduler to know far in advance when your app will need to be woken up.
The Android AAudio API [1] implements what I just described, as well as memory-mapping the AP buffer into the application process to achieve zero copy. It is the way to achieve the lowest latency on Android.
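For reference, a minimal AAudio setup along these lines might look like the sketch below (Android NDK, API 26+; error handling is omitted, and the one-burst buffer sizing is an illustrative choice, not a prescription). The data callback replaces an interrupt-driven loop: AAudio wakes the app just in time relative to the burst schedule.

```c
#include <aaudio/AAudio.h>
#include <stddef.h>

// Called by AAudio when the stream needs more frames.
static aaudio_data_callback_result_t on_audio(AAudioStream *stream,
                                              void *userData,
                                              void *audioData,
                                              int32_t numFrames) {
    float *out = (float *)audioData;
    for (int32_t i = 0; i < numFrames; i++)
        out[i] = 0.0f; // render your audio here (mono float stream)
    return AAUDIO_CALLBACK_RESULT_CONTINUE;
}

AAudioStream *open_low_latency_stream(void) {
    AAudioStreamBuilder *builder = NULL;
    AAudioStream *stream = NULL;
    AAudio_createStreamBuilder(&builder);
    // Low-latency path; EXCLUSIVE sharing enables the memory-mapped
    // (zero-copy) buffer when the device supports it.
    AAudioStreamBuilder_setPerformanceMode(builder,
                                           AAUDIO_PERFORMANCE_MODE_LOW_LATENCY);
    AAudioStreamBuilder_setSharingMode(builder, AAUDIO_SHARING_MODE_EXCLUSIVE);
    AAudioStreamBuilder_setFormat(builder, AAUDIO_FORMAT_PCM_FLOAT);
    AAudioStreamBuilder_setChannelCount(builder, 1);
    AAudioStreamBuilder_setDataCallback(builder, on_audio, NULL);
    AAudioStreamBuilder_openStream(builder, &stream);
    AAudioStreamBuilder_delete(builder);
    // Keep the buffer as small as the burst size allows to minimize latency.
    AAudioStream_setBufferSizeInFrames(stream,
                                       AAudioStream_getFramesPerBurst(stream));
    AAudioStream_requestStart(stream);
    return stream;
}
```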
I believe Apple's low-latency APIs use a similar interrupt-free design.
Source: worked 3 years on the Android audio framework at Google.
Disclaimer: Opinions expressed are solely my own and do not express the views or opinions of my employer.
You are right, AAudio has only been available since Oreo, and the device is three versions older. Additionally, Netflix's video playback is not a low-latency use case, so it shouldn't use AAudio even on recent devices.
My comment was about the parent's assertion that low-latency apps should be scheduled by interrupts.
[1]: https://developer.android.com/ndk/guides/audio/aaudio/aaudio