What could be causing distorted audio when streaming audio from Blazor to the Web Audio API?


Streaming Generated Audio from Blazor (via invokeMethodAsync) to the Web Audio API (AudioWorkletNode)

Something is seriously wrong with the sound being produced (it's like a machine gun, with other distortions on top).

I am using invokeMethodAsync on the JavaScript main thread to invoke a function in Blazor WASM that returns two channels of float arrays, and then sending them on to the worklet processor thread. The web app feels very responsive with MIDI input (also done through an interop), with very little latency, but the sound I am getting is very distorted.
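
For reference, setupAudioNode below is called from the Blazor component through JS interop, with instrumentReference being a DotNetObjectReference to the component. Roughly like this (a simplified sketch; initAudio and the empty audioSwitch object are placeholder names, not my exact code):

// Rough sketch of the wiring: the Blazor component invokes this via IJSRuntime,
// passing DotNetObjectReference.Create(this) as instrumentReference.
window.initAudio = async function (instrumentReference) {
    // Holder object that setupAudioNode fills with the AudioContext and worklet node
    const audioSwitch = {};
    await setupAudioNode(audioSwitch, instrumentReference);
    return audioSwitch;
};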

Where could the problem be:

  1. The wave generation in C#:

        
        [JSInvokable("ProcessAudioData")]
        public float[][] ProcessAudioData()
        {
            float[] leftSamples = new float[bufferSize];
            float[] rightSamples = new float[bufferSize];

            for (int i = 0; i < bufferSize; i++)
            {
                float mixedSample = 0.0f;

                foreach (var note in activeNotes)
                {
                    int noteKey = note.Key;
                    float frequency = (float)note.Value;
                    float amplitude = (float)activeAmplitudes[noteKey];
                    float phaseIncrement = (float)(2.0 * Math.PI * frequency / sampleRate);

                    mixedSample += (float)(amplitude * Math.Sin(phase));
                    phase += phaseIncrement;
                }

                // Clamp value between -1 and 1 to avoid clipping
                mixedSample = (float)Math.Max(-1.0, Math.Min(1.0, mixedSample));

                // Assign the mixed sample to the same index in the left and right sample arrays
                leftSamples[i] = mixedSample;
                rightSamples[i] = mixedSample;
            }

            // Return the arrays in a 2D array
            return new[] { leftSamples, rightSamples };
        }



  2. Receiving the 2D array on the JS main thread (see the timing note after the worklet code):
async function setupAudioNode(audioSwitch, instrumentReference) {
    const bufferSize = 128;
    const numberOfChannels = 2;
    const audioContext = new AudioContext();

    console.log('Create audioWorklet');
    console.log('audioContext:', audioContext);

    const sharedBufferSize = bufferSize * numberOfChannels * Float32Array.BYTES_PER_ELEMENT;
    const sharedBuffer = new SharedArrayBuffer(sharedBufferSize);

    const stateBuffer = new SharedArrayBuffer(4); // Holds one Int32
    const stateIntArray = new Int32Array(stateBuffer);


    audioSwitch.audioContext = audioContext;

    await audioContext.audioWorklet.addModule('/AudioNode/AudioProcessor.js');

    if (instrumentReference) {
        const workletNode = new AudioWorkletNode(audioContext, 'audio-worklet-processor', {
            numberOfOutputs: 1,
            outputChannelCount: [2],
            channelCountMode: 'explicit',
            channelInterpretation: 'speakers',
            processorOptions: { sharedBuffer, stateBuffer }
        });
        workletNode.connect(audioContext.destination);

        // Resume the audio context
        await audioContext.resume();
        console.log('audioContext state:', audioContext.state);



        async function sendAudioData() {

            if (Atomics.load(stateIntArray, 0) === 0) { // 0 - buffer has been consumed and can be refilled, 1 - buffer is filled and waiting for the worklet

                // Get the float arrays directly from the .NET method
                const audioData = await instrumentReference.invokeMethodAsync('ProcessAudioData');

                const leftFloatArray = new Float32Array(audioData[0]);
                const rightFloatArray = new Float32Array(audioData[1]);

                // Fill SharedArrayBuffer
                new Float32Array(sharedBuffer, 0, bufferSize).set(leftFloatArray);
                new Float32Array(sharedBuffer, bufferSize * Float32Array.BYTES_PER_ELEMENT, bufferSize).set(rightFloatArray);

                // Set the state to "ready"
                Atomics.store(stateIntArray, 0, 1);
                Atomics.notify(stateIntArray, 0, 1); // Notify the worklet that the state has changed            
            }

            // Schedule the next audio data fetch
            setTimeout(sendAudioData, bufferSize / audioContext.sampleRate * 1000);
        }




        // Start fetching audio data
        sendAudioData();


        audioSwitch.workletNode = workletNode;
    }

    return audioContext;
}
  3. Between the main thread and the worklet processor thread:
class AudioProcessor extends AudioWorkletProcessor {
    constructor(options) {
        super(options);
        console.log("AudioProcessor created");
        this.sharedBuffer = options.processorOptions.sharedBuffer;
        this.stateBuffer = options.processorOptions.stateBuffer
        this.stateIntArray = new Int32Array(this.stateBuffer);
        this.leftData = new Float32Array(this.sharedBuffer, 0, 128);
        this.rightData = new Float32Array(this.sharedBuffer, 128 * Float32Array.BYTES_PER_ELEMENT, 128);

    }

    process(inputs, outputs, parameters) {
        if (Atomics.load(this.stateIntArray, 0) === 1) {
            const outputBuffer = outputs[0];

            outputBuffer[0].set(this.leftData);
            outputBuffer[1].set(this.rightData);


            // Set the state to "consumed"
            Atomics.store(this.stateIntArray, 0, 0);


        }
        // Return true to keep the processor alive, even when no new data was available for this render quantum
        return true;
    }
}


registerProcessor('audio-worklet-processor', AudioProcessor);
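
For a sense of the timing involved: the worklet consumes a 128-frame block every render quantum, which is the deadline the setTimeout loop in sendAudioData is trying to meet. A quick back-of-the-envelope check (44100 Hz is just the typical default; the real value is audioContext.sampleRate):

const bufferSize = 128;
const sampleRate = 44100; // placeholder for audioContext.sampleRate
const blockMs = bufferSize / sampleRate * 1000; // ≈ 2.9 ms per 128-frame block
console.log(`a fresh block is needed roughly every ${blockMs.toFixed(1)} ms`);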

Ideally I would want to call the C# code from a web worker and then share the data between the threads via a SharedArrayBuffer; I think that might become possible in .NET 8. I could be doing something very stupid here, and it might be impossible; I am very new to this. Any insight would be much appreciated, and I am happy to give more information.
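
To make that idea concrete, this is roughly the shape I have in mind. It is an untested sketch using only standard web APIs; AudioFeeder.js and fillBuffers are made-up names, and the open question is whether the worker could actually call into the .NET code to generate the samples:

// Main thread: create the SharedArrayBuffers as before and hand them to the worker.
// SharedArrayBuffers are shared by reference, not copied, when posted to a worker.
const worker = new Worker('/AudioNode/AudioFeeder.js');
worker.postMessage({ sharedBuffer, stateBuffer });

// AudioFeeder.js (dedicated worker): runs the polling loop off the main thread.
onmessage = (e) => {
    const { sharedBuffer, stateBuffer } = e.data;
    const state = new Int32Array(stateBuffer);
    const left = new Float32Array(sharedBuffer, 0, 128);
    const right = new Float32Array(sharedBuffer, 128 * Float32Array.BYTES_PER_ELEMENT, 128);

    // Stand-in for the real sample generation (which would ideally call into the .NET code);
    // it just writes silence here so the sketch is self-contained.
    function fillBuffers(l, r) {
        l.fill(0);
        r.fill(0);
    }

    function loop() {
        if (Atomics.load(state, 0) === 0) {   // the worklet has consumed the previous block
            fillBuffers(left, right);
            Atomics.store(state, 0, 1);       // mark the buffer as filled
        }
        // Poll frequently; a worker could instead block with Atomics.wait
        // if the worklet also called Atomics.notify after consuming.
        setTimeout(loop, 1);
    }
    loop();
};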
