I am trying to understand how a program outputs sound at a low level. I got this question while thinking about programming a synthesizer.
Unlike playback of a fixed file (like an mp3), a synthesizer creates the sound in real time.
Imagine I have a synthesizer program with a GUI: a slider changes the pitch of a simple sine wave. There has to be some rate at which the program sends the pitch values (or samples) to a system API, which somehow transforms the digital data into an analog signal for the speakers.
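To make my mental model concrete, here is a rough sketch of how I imagine the digital side works: the program repeatedly fills fixed-size buffers of samples, and the slider's frequency value is read at each buffer boundary. (This is just my guess; the buffer size and the `sine_block` helper are made up, not from any real audio API.)

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second
BLOCK_SIZE = 512     # samples per buffer handed to the audio API

def sine_block(freq, phase, amplitude=0.5):
    """Generate one buffer of sine samples, continuing from `phase`."""
    t = np.arange(BLOCK_SIZE) / SAMPLE_RATE
    samples = amplitude * np.sin(2 * np.pi * freq * t + phase)
    # Return the advanced phase so consecutive blocks join without clicks,
    # even if `freq` changes between blocks (slider moved).
    new_phase = (phase + 2 * np.pi * freq * BLOCK_SIZE / SAMPLE_RATE) % (2 * np.pi)
    return samples.astype(np.float32), new_phase

# Something (the sound system?) would repeatedly request blocks like this
# and push them to the digital-to-analog converter:
block, phase = sine_block(freq=440.0, phase=0.0)
```

I imagine a library such as `sounddevice` or `pyaudio` would call a function like this from an audio callback, but I am not sure if that is the right picture.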
- How does this whole process work?
- How could I access such an API on a Unix system with e.g. Python?
- Can you recommend any resources for further reading on sound synthesis or real-time data processing?
Thanks for taking the time to read the question! :)