AudioWorkletProcessor

Baseline: Widely available

This feature is well established and works across many devices and browser versions. It’s been available across browsers since April 2021.

The AudioWorkletProcessor interface of the Web Audio API represents the audio processing code behind a custom AudioWorkletNode. It lives in the AudioWorkletGlobalScope and runs on the Web Audio rendering thread, while the AudioWorkletNode based on it runs on the main thread.

Constructor

Note: AudioWorkletProcessor and classes that derive from it cannot be instantiated directly from user-supplied code. They are only created internally when the associated AudioWorkletNode is created. The constructor of the derived class is called with an options object, so you can perform custom initialization procedures; see the constructor page for details.

AudioWorkletProcessor()

Creates a new instance of an AudioWorkletProcessor object.
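For example, a derived processor's constructor might read data that was passed in through the node's options. The sketch below assumes this pattern; the processor name and the gain key inside processorOptions are made up for illustration:

js
// A hypothetical subclass whose constructor reads custom data from the
// options object. The processorOptions contents (the "gain" key) are
// purely illustrative; they come from whoever constructs the node.
class ScaledNoiseProcessor extends AudioWorkletProcessor {
  constructor(options) {
    super();
    this.gain = options.processorOptions?.gain ?? 1;
  }

  process(inputs, outputs) {
    const output = outputs[0];
    output.forEach((channel) => {
      for (let i = 0; i < channel.length; i++) {
        channel[i] = (Math.random() * 2 - 1) * this.gain;
      }
    });
    return true;
  }
}

registerProcessor("scaled-noise-processor", ScaledNoiseProcessor);

The main thread would then supply the data when creating the node, e.g. new AudioWorkletNode(audioContext, "scaled-noise-processor", { processorOptions: { gain: 0.5 } }).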

Properties

port Read only

Returns a MessagePort used for bidirectional communication between the processor and the AudioWorkletNode it belongs to. The other end is available under the port property of the node.
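As a hedged sketch of how this port can be used (the processor name and message shape below are illustrative, not part of the API), a processor might report a value back to the main thread like this:

js
// In the processor file (audio rendering thread): send a message to the node.
// The message payload is illustrative.
class MeterProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    const input = inputs[0];
    if (input.length > 0) {
      // As an example, report the first sample of the first channel.
      this.port.postMessage({ firstSample: input[0][0] });
    }
    return true;
  }
}

registerProcessor("meter-processor", MeterProcessor);

On the main thread, the matching end is reached through the node's port property, for example node.port.onmessage = (event) => console.log(event.data.firstSample);.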

Methods

The AudioWorkletProcessor interface does not define any methods of its own. However, you must provide a process() method, which is called in order to process the audio stream.

Events

The AudioWorkletProcessor interface doesn't respond to any events.

Usage notes

Deriving classes

To define custom audio processing code you have to derive a class from the AudioWorkletProcessor interface. The derived class must define a process method, which is not part of this interface. That method gets called for each block of 128 sample-frames and takes input and output arrays, as well as the calculated values of custom AudioParams (if they are defined), as parameters. You can use the inputs and audio parameter values to fill the outputs array, which by default holds silence.
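A minimal sketch of such a derived class, assuming a single input and a single output with matching channel counts (the processor name is illustrative), could look like this:

js
// A pass-through processor: copies each input channel to the corresponding
// output channel, leaving the audio unchanged.
class PassThroughProcessor extends AudioWorkletProcessor {
  process(inputs, outputs, parameters) {
    const input = inputs[0];
    const output = outputs[0];
    for (let channel = 0; channel < input.length; channel++) {
      // Each channel is a Float32Array of (usually) 128 sample-frames.
      output[channel].set(input[channel]);
    }
    return true;
  }
}

registerProcessor("pass-through-processor", PassThroughProcessor);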

Optionally, if you want custom AudioParams on your node, you can supply a parameterDescriptors property as a static getter on the processor. The array of AudioParamDescriptor-based objects returned is used internally to create the AudioParams during the instantiation of the AudioWorkletNode.

The resulting AudioParams reside in the parameters property of the node and can be automated using standard methods such as linearRampToValueAtTime. Their calculated values will be passed into the process() method of the processor for you to shape the node output accordingly.
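As a hedged sketch of what this might look like, a processor could declare a custom customGain parameter (the name, default, and range below are illustrative) and read its calculated values in process():

js
// Sketch of a processor exposing a custom "customGain" AudioParam.
class GainProcessor extends AudioWorkletProcessor {
  static get parameterDescriptors() {
    return [
      {
        name: "customGain",
        defaultValue: 1,
        minValue: 0,
        maxValue: 1,
        automationRate: "a-rate",
      },
    ];
  }

  process(inputs, outputs, parameters) {
    const input = inputs[0];
    const output = outputs[0];
    const gain = parameters.customGain;
    for (let channel = 0; channel < input.length; channel++) {
      for (let i = 0; i < input[channel].length; i++) {
        // An a-rate parameter array has either 128 values or a single value.
        const g = gain.length > 1 ? gain[i] : gain[0];
        output[channel][i] = input[channel][i] * g;
      }
    }
    return true;
  }
}

registerProcessor("gain-processor", GainProcessor);

On the main thread the parameter would then be available as node.parameters.get("customGain") and could be automated with, for example, linearRampToValueAtTime(0, audioContext.currentTime + 1).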

Processing audio

The steps to create a custom audio processing mechanism are:

  1. Create a separate file;

  2. In the file:

    1. Extend the AudioWorkletProcessor class (see "Deriving classes" section) and supply your own process() method in it;
    2. Register the processor using AudioWorkletGlobalScope.registerProcessor() method;
  3. Load the file using addModule() method on your audio context's audioWorklet property;

  4. Create an AudioWorkletNode based on the processor. The processor will be instantiated internally by the AudioWorkletNode constructor.

  5. Connect the node to the other nodes.

Examples

In the example below we create a custom AudioWorkletNode that outputs white noise.

First, we need to define a custom AudioWorkletProcessor, which will output white noise, and register it. Note that this should be done in a separate file.

js
// white-noise-processor.js
class WhiteNoiseProcessor extends AudioWorkletProcessor {
  process(inputs, outputs, parameters) {
    const output = outputs[0];
    output.forEach((channel) => {
      for (let i = 0; i < channel.length; i++) {
        channel[i] = Math.random() * 2 - 1;
      }
    });
    return true;
  }
}

registerProcessor("white-noise-processor", WhiteNoiseProcessor);

Next, in our main script file we'll load the processor, create an instance of AudioWorkletNode, passing it the name of the processor, then connect the node to an audio graph.

js
const audioContext = new AudioContext();
await audioContext.audioWorklet.addModule("white-noise-processor.js");
const whiteNoiseNode = new AudioWorkletNode(
  audioContext,
  "white-noise-processor",
);
whiteNoiseNode.connect(audioContext.destination);

Specifications

Specification: Web Audio API (# AudioWorkletProcessor)

Browser compatibility


See also