The ScriptProcessorNode is deprecated. Use AudioWorkletNode instead

“The ScriptProcessorNode is deprecated. Use AudioWorkletNode instead.” is a JavaScript warning from my openFrameworks / ofxPd / Emscripten website: https://gameoflife3d.handmadeproductions.de/ Is it possible to replace it for openFrameworks / Emscripten, and could the latency benefit from it? Right now there is an audio latency of around 200 ms with a buffer size of 4096 samples, which I try to compensate for with a video delay… Perhaps this points in the right direction: Audio Worklet and WebAssembly | WebAudio Samples?
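For a rough sense of how much of that 200 ms comes from the buffer alone: one 4096-sample buffer at 44.1 kHz is already about 93 ms, and a ScriptProcessorNode typically queues at least one more buffer on top of that, plus hardware latency. A minimal sketch of the arithmetic (the 44.1 kHz rate is an assumption; check `context.sampleRate` on the actual page):

```javascript
// Latency contributed by one processing buffer alone, in milliseconds.
// The real end-to-end figure also includes browser and hardware buffering.
function bufferLatencyMs(bufferSize, sampleRate) {
    return (bufferSize / sampleRate) * 1000;
}

console.log(bufferLatencyMs(4096, 44100).toFixed(1)); // "92.9" - one ScriptProcessorNode buffer
console.log(bufferLatencyMs(1024, 44100).toFixed(1)); // "23.2"
console.log(bufferLatencyMs(128, 44100).toFixed(1));  // "2.9" - one AudioWorklet render quantum
```

This is why moving the actual sample generation into an AudioWorklet (which always renders in 128-frame quanta) can in principle reduce latency, while merely routing a 4096-sample buffer through a worklet cannot.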

I still use the ScriptProcessorNode, now with a buffer size of 1024. The latency is quite small, but it only works without artifacts on a fast computer: https://simplesequencer.handmadeproductions.de/
I also tried to implement the AudioWorkletNode and got some sound with it. The problem is that there are more artifacts than with the ScriptProcessorNode (I am sure I did something wrong in the implementation).
In https://github.com/openframeworks/openFrameworks/blob/master/addons/ofxEmscripten/libs/html5audio/lib/emscripten/library_html5audio.js I replaced:

			dynCall('viiii',callback, [bufferSize,inputChannels,outputChannels,userData]);

			if(outputChannels>0){
				for(c=0;c<outputChannels;++c){
					var outChannel = event.outputBuffer.getChannelData(c);
					for(i=0,j=c;i<bufferSize;++i,j+=outputChannels){
						outChannel[i] = outbufferArray[j];
					}
				}
			}

with this:

        dynCall("viiii", callback, [bufferSize, inputChannels, outputChannels, userData]);

        if (outputChannels > 0) {
            context.audioWorklet.addModule('bypass-processor.js').then(() => {
                const bypasser = new AudioWorkletNode(context, 'bypass-processor');
                // Use outputChannels here; hardcoding 2 would break for other channel counts.
                var myArrayBuffer = context.createBuffer(outputChannels, bufferSize, context.sampleRate);
                for (var channel = 0; channel < outputChannels; ++channel) {
                    var outChannel = myArrayBuffer.getChannelData(channel);
                    for (var i = 0, j = channel; i < bufferSize; ++i, j += outputChannels) {
                        outChannel[i] = outbufferArray[j];
                    }
                }
                // Get an AudioBufferSourceNode.
                // This is the AudioNode to use when we want to play an AudioBuffer.
                var source = context.createBufferSource();
                // Set the buffer on the AudioBufferSourceNode.
                source.buffer = myArrayBuffer;
                source.connect(bypasser).connect(context.destination);
                source.start();
            });
        }
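As a side note, the inner loop in both versions de-interleaves the Emscripten heap buffer (frames stored as [L0, R0, L1, R1, …]) into Web Audio's planar per-channel arrays. Pulled out as a standalone sketch (the function name is mine, not from the addon):

```javascript
// De-interleave an interleaved sample buffer into one Float32Array per
// channel, matching the j = channel; j += channels stride in the loop above.
function deinterleave(interleaved, channels, bufferSize) {
    const planar = [];
    for (let c = 0; c < channels; ++c) {
        const ch = new Float32Array(bufferSize);
        for (let i = 0, j = c; i < bufferSize; ++i, j += channels) {
            ch[i] = interleaved[j];
        }
        planar.push(ch);
    }
    return planar;
}

// Stereo example: [L0, R0, L1, R1] becomes [[L0, L1], [R0, R1]].
console.log(deinterleave(new Float32Array([1, 2, 3, 4]), 2, 2));
```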

And added the file bypass-processor.js as an AudioWorklet processor (taken from this example: Hello Audio Worklet! | WebAudio Samples):

// Copyright (c) 2017 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

/**
 * A simple bypass node demo.
 *
 * @class BypassProcessor
 * @extends AudioWorkletProcessor
 */
class BypassProcessor extends AudioWorkletProcessor {

    // When constructor() is undefined, the default constructor will be
    // implicitly used.

    process(inputs, outputs) {
        // By default, the node has single input and output.
        const input = inputs[0];
        const output = outputs[0];

        for (let channel = 0; channel < output.length; ++channel) {
            output[channel].set(input[channel]);
        }

        return true;
    }
}

registerProcessor('bypass-processor', BypassProcessor);

The audio is still triggered by the ScriptProcessorNode and onaudioprocess. And maybe the AudioWorklet needs to be implemented with a ring buffer, because the buffer size is larger than the 128-sample render quantum?

https://googlechromelabs.github.io/web-audio-samples/audio-worklet/design-pattern/wasm-ring-buffer/
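Yes, something along those lines: since process() always works in 128-sample render quanta, a larger producer block has to be decoupled through a ring buffer. A minimal single-producer/single-consumer sketch (names and sizes are illustrative; for a real worklet it would have to be backed by a SharedArrayBuffer, as in the linked design-pattern example):

```javascript
// Minimal float ring buffer sketch: the main thread pushes large blocks
// (e.g. 2048 samples), the worklet pulls 128 samples per render quantum.
class RingBuffer {
    constructor(capacity) {
        this.buf = new Float32Array(capacity);
        this.readIndex = 0;
        this.writeIndex = 0;
        this.available = 0;
    }
    // Write a block; if the buffer overflows, the oldest samples are overwritten.
    push(samples) {
        for (let i = 0; i < samples.length; ++i) {
            this.buf[this.writeIndex] = samples[i];
            this.writeIndex = (this.writeIndex + 1) % this.buf.length;
        }
        this.available = Math.min(this.available + samples.length, this.buf.length);
    }
    // Fill `out` completely, or return false on underrun (caller keeps silence).
    pull(out) {
        if (this.available < out.length) return false;
        for (let i = 0; i < out.length; ++i) {
            out[i] = this.buf[this.readIndex];
            this.readIndex = (this.readIndex + 1) % this.buf.length;
        }
        this.available -= out.length;
        return true;
    }
}

const rb = new RingBuffer(4096);
rb.push(new Float32Array([1, 2, 3, 4]));
const quantum = new Float32Array(2);
console.log(rb.pull(quantum), quantum); // true [1, 2]
```

Returning false on underrun instead of blocking matters here: process() runs on the real-time audio thread and must never wait.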

I actually found a way to use the AudioWorklet. For stability reasons I set the buffer size to 2048. Now the sound is just bypassed through the AudioWorklet; I am not sure it is an improvement, because the samples are still fired from onaudioprocess in the main thread (it would be great if they were produced in the AudioWorklet processor instead)…

I implemented the change in this patch:

https://simplesequencer.handmadeproductions.de/

This is what I changed in library_html5audio.js (at the end of html5audio_stream_create()):

    AUDIO.contexts[context_id].audioWorklet.addModule('bypass-processor.js').then(() => {
        const bypasser = new AudioWorkletNode(AUDIO.contexts[context_id], 'bypass-processor');
        stream.connect(AUDIO.ffts[context_id]).connect(bypasser).connect(AUDIO.contexts[context_id].destination);
    });

    //stream.connect(AUDIO.ffts[context_id]);
    AUDIO.streams[id] = stream;
    return id;

And this is bypass-processor.js:

// Copyright (c) 2017 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

/**
 * A simple bypass node demo.
 *
 * @class BypassProcessor
 * @extends AudioWorkletProcessor
 */
class BypassProcessor extends AudioWorkletProcessor {

    // When constructor() is undefined, the default constructor will be
    // implicitly used.

    process(inputs, outputs) {
        // By default, the node has single input and output.
        if (inputs[0].length > 0) {

            const input = inputs[0];
            const output = outputs[0];

            for (let channel = 0; channel < output.length; ++channel) {
                output[channel].set(input[channel]);
            }
        }
        return true;
    }
}

registerProcessor('bypass-processor', BypassProcessor);

Now I am quite sure that my recent changes do not improve the audio latency (it is just bypassing the audio stream through the AudioWorklet processor). I guess that I need to import the Wasm memory and functions into the AudioWorklet processor (or share them with the main thread somehow…).

This is the relevant code from https://github.com/openframeworks/openFrameworks/blob/master/addons/ofxEmscripten/libs/html5audio/lib/emscripten/library_html5audio.js

		var inbufferArray = Module.HEAPF32.subarray(inbuffer>>2,(inbuffer>>2)+bufferSize*inputChannels);
		var outbufferArray = Module.HEAPF32.subarray(outbuffer>>2,(outbuffer>>2)+bufferSize*outputChannels);
...
dynCall('viiii',callback, [bufferSize,inputChannels,outputChannels,userData]);

But with my lacking knowledge it is not possible for me to import it into the AudioWorklet processor.
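One possible direction (a sketch, not a tested ofxEmscripten patch): allocate a SharedArrayBuffer on the main thread, hand it to the worklet once via `node.port.postMessage({ sab })`, and let both sides create typed-array views over the same memory. The flag-plus-samples layout below is my own assumption for illustration:

```javascript
// Sketch: one shared memory block visible to both the main thread and
// the worklet. Layout (one Int32 "ready" flag followed by interleaved
// float samples) is an assumption, not the ofxEmscripten design.
const FRAMES = 2048, CHANNELS = 2;
const sab = new SharedArrayBuffer(
    Int32Array.BYTES_PER_ELEMENT +
    FRAMES * CHANNELS * Float32Array.BYTES_PER_ELEMENT);
const ready = new Int32Array(sab, 0, 1);
const samples = new Float32Array(sab, Int32Array.BYTES_PER_ELEMENT);

// Producer side (today: onaudioprocess): write samples, then raise the flag.
samples[0] = 0.5;
Atomics.store(ready, 0, 1);

// The worklet's process() would build identical views from the sab it
// received via port.postMessage and read once the flag is set:
const workletView = new Float32Array(sab, Int32Array.BYTES_PER_ELEMENT);
console.log(Atomics.load(ready, 0), workletView[0]); // 1 0.5
```

Note that SharedArrayBuffer in browsers requires the page to be cross-origin isolated (COOP/COEP response headers), which matters for a statically hosted Emscripten site. Emscripten's own pthreads/shared-memory builds export the Wasm heap as a SharedArrayBuffer, which is what the wasm-ring-buffer design-pattern example linked above builds on.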
