>You can't and shouldn't rely on your audio handler getting called on time via a timer in order to keep playback stable, especially not on a non latency sensitive use case
Why not? According to [1], timers are how Windows, CoreAudio, and PulseAudio all work under the hood, and on Windows and in PulseAudio the timer-based approach replaced the previous interrupt-based implementations. On the application side of these APIs, Windows' WASAPI code example uses Sleep polling [2], PulseAudio's write callback is optional and VLC doesn't use it [3], foobar2000 has a polling-based output mode [4], Windows has specific APIs for audio-thread scheduling [5], etc.
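For concreteness, the sleep-and-poll pattern from the WASAPI example looks roughly like the sketch below. This is not real WASAPI code; `FakeDevice` and all the constants are made up to simulate a device draining its ring buffer in real time, but the loop structure (prefill, sleep about half a buffer period, ask how much space opened up, top the buffer back up) is the same:

```python
import time

SAMPLE_RATE = 48000
BUFFER_FRAMES = 4800      # hypothetical 100 ms device ring buffer
PERIOD_S = 0.05           # poll every 50 ms, half the buffer duration

class FakeDevice:
    """Simulated audio device that drains frames in real time."""
    def __init__(self):
        self.filled = 0
        self.last = time.monotonic()
        self.underruns = 0

    def free_space(self):
        # Drain however many frames real time says were consumed.
        now = time.monotonic()
        consumed = int((now - self.last) * SAMPLE_RATE)
        self.last = now
        if consumed > self.filled:
            self.underruns += 1   # buffer ran dry: audible glitch
            self.filled = 0
        else:
            self.filled -= consumed
        return BUFFER_FRAMES - self.filled

    def write(self, frames):
        self.filled += frames

dev = FakeDevice()
dev.write(BUFFER_FRAMES)          # prefill, as the WASAPI example does
for _ in range(20):               # ~1 second of simulated playback
    time.sleep(PERIOD_S)
    free = dev.free_space()       # how much room opened up since last poll
    dev.write(free)               # refill only that much

print("underruns:", dev.underruns)
```

The point is that glitch-free playback only requires the poll to wake up some time before the buffer fully drains, which is why a generous buffer plus an ordinary timer is good enough for non-latency-sensitive playback.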
Is this a specific deficiency of Android?
[1] https://fedoraproject.org/wiki/Features/GlitchFreeAudio
[2] https://docs.microsoft.com/en-us/windows/win32/coreaudio/ren...
[3] http://www.videolan.org/developers/vlc/modules/audio_output/...
[4] http://wiki.hydrogenaud.io/index.php?title=Foobar2000:Compon...
[5] https://docs.microsoft.com/en-us/windows/win32/procthread/mu...