Web MIDI API is an interesting beast. Although it has been around for almost five years, it is still supported only by Chromium-based browsers. But this does not prevent us from creating a full-fledged synthesizer in Angular. It's time to take the Web Audio API to the next level!




Earlier, I talked about working with the Web Audio API declaratively in Angular.


Programming music is fun, of course, but what if we want to play it? In the 80s, MIDI appeared: a standard for messaging between electronic instruments. It is actively used to this day, and Chrome supports it natively. This means that if you have a synthesizer or a MIDI keyboard, you can connect it to your computer and read what you are playing. You can even control devices from your computer by sending outgoing messages. Let's see how to do this properly in Angular.


Web MIDI API


There is not much documentation on this API on the Internet apart from the specification. You request access to MIDI devices through navigator.requestMIDIAccess() and get a MIDIAccess object with all inputs and outputs. These inputs and outputs, also called ports, are native EventTargets. Data exchange is carried out through MIDIMessageEvents that contain Uint8Array messages. Each message is no more than 3 bytes long. The first element of the array is called the status byte. Each number corresponds to a specific role of the message, for example, pressing a key or moving a parameter slider. In the case of a pressed key, the second byte tells which key was pressed, and the third, how loudly the note was played. A full description of the messages can be found on the official MIDI website. In Angular, we work with events through Observables, so the first step is to bring the Web MIDI API to RxJs.
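
To make the byte layout concrete, here is a minimal sketch of reading these messages with the bare API, before we wrap anything in Angular (assuming a Chromium browser; no library code involved):

navigator.requestMIDIAccess().then(access => {
  access.inputs.forEach(input => {
    input.onmidimessage = ({data}) => {
      if (!data) {
        return;
      }

      // A message is at most 3 bytes: [status byte, key number, velocity]
      const [status, key, velocity] = data;

      console.log(`status ${status}, key ${key}, velocity ${velocity}`);
    };
  });
});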


Dependency Injection


To subscribe to events, we first need to get the MIDIAccess object to reach the ports. requestMIDIAccess() returns a Promise, and RxJs will turn it into an Observable for us. We can create an InjectionToken for this using the NAVIGATOR token from @ng-web-apis/common, so that we do not access the global object directly:


export const MIDI_ACCESS = new InjectionToken<Promise<MIDIAccess>>(
  'Promise for MIDIAccess object',
  {
    factory: () => {
      const navigatorRef = inject(NAVIGATOR);

      return navigatorRef.requestMIDIAccess
        ? navigatorRef.requestMIDIAccess()
        : Promise.reject(new Error('Web MIDI API is not supported'));
    },
  },
);

Now we can subscribe to all MIDI events. You can create the Observable in one of two ways:


  1. Create a service that inherits from Observable, as we did with the Geolocation API (see the sketch after this list)
  2. Create a factory token that turns this Promise into a stream of MIDIMessageEvents
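
A minimal sketch of the first option, assuming the same pattern as the Geolocation API wrapper (the class name is illustrative):

@Injectable({
  providedIn: 'root',
})
export class MidiMessagesService extends Observable<MIDIMessageEvent> {
  constructor(@Inject(MIDI_ACCESS) midiAccess: Promise<MIDIAccess>) {
    // The service itself is an Observable of all incoming messages
    super(subscriber =>
      from(midiAccess)
        .pipe(
          switchMap(access =>
            merge(
              ...Array.from(access.inputs.values()).map(input =>
                fromEvent(
                  input as FromEventTarget<MIDIMessageEvent>,
                  'midimessage',
                ),
              ),
            ),
          ),
        )
        .subscribe(subscriber),
    );
  }
}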

Since we need very few transformations this time, the token is quite suitable. With error handling included, the code subscribing to all events looks like this:


export const MIDI_MESSAGES = new InjectionToken<Observable<MIDIMessageEvent>>(
  'All incoming MIDI messages stream',
  {
    factory: () =>
      from(inject(MIDI_ACCESS).catch((e: Error) => e)).pipe(
        switchMap(access =>
          access instanceof Error
            ? throwError(access)
            : merge(
                ...Array.from(access.inputs).map(([_, input]) =>
                  fromEvent(
                    input as FromEventTarget<MIDIMessageEvent>,
                    'midimessage',
                  ),
                ),
              ),
        ),
        share(),
      ),
  },
);

If we need a specific port, for example, to send an outgoing message, we get it from MIDIAccess. To do this, let's add another token and prepare a factory for convenient use:


export function outputById(id: string): Provider[] {
  return [
    {
      provide: MIDI_OUTPUT_QUERY,
      useValue: id,
    },
    {
      provide: MIDI_OUTPUT,
      deps: [MIDI_ACCESS, MIDI_OUTPUT_QUERY],
      useFactory: outputByIdFactory,
    },
  ];
}

export function outputByIdFactory(
  midiAccess: Promise<MIDIAccess>,
  id: string,
): Promise<MIDIOutput | undefined> {
  return midiAccess.then(access => access.outputs.get(id));
}
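
Once the promise resolves, such a port can be used to send outgoing messages. A hypothetical usage, assuming the MIDI_OUTPUT token from above was provided (0x90 and 0x80 are noteOn and noteOff on the first channel, key 60 is middle C):

constructor(
  @Inject(MIDI_OUTPUT) outputPromise: Promise<MIDIOutput | undefined>,
) {
  outputPromise.then(output => {
    if (output) {
      output.send([0x90, 60, 0x7f]); // play middle C at full velocity
      output.send([0x80, 60, 0x00], performance.now() + 500); // release it 500 ms later
    }
  });
}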

By the way, did you know that there is no need to spread a Provider[] array when you add it to the metadata? The providers field of Angular decorators supports multidimensional arrays, so you can simply write:

providers: [
  outputById('someId'),
  ANOTHER_TOKEN,
  SomeService,
]

If you are interested in practical little things like this in Angular, I invite you to read our series of tweets with helpful tips.

In the same way, you can get input ports, as well as request them by name.
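
A sketch of a similar provider for requesting an input port by name (the token and factory names here are illustrative, the library's actual API may differ):

export function inputByName(name: string): Provider[] {
  return [
    {
      provide: MIDI_INPUT_QUERY,
      useValue: name,
    },
    {
      provide: MIDI_INPUT,
      deps: [MIDI_ACCESS, MIDI_INPUT_QUERY],
      useFactory: inputByNameFactory,
    },
  ];
}

export function inputByNameFactory(
  midiAccess: Promise<MIDIAccess>,
  name: string,
): Promise<MIDIInput | undefined> {
  // Unlike outputs.get(id), searching by name requires iterating the ports
  return midiAccess.then(access =>
    Array.from(access.inputs.values()).find(input => input.name === name),
  );
}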


Operators


To work with the stream of events, we need to create our own operators. After all, we do not want to dig into the raw data array every time.
Operators can be divided into two groups:


  • Filtering. They screen out events that do not interest us, for example, if we only want to listen to played keys or to the volume slider.
  • Converting. They transform values for us, for example, keeping only the message data array and discarding the remaining fields of the event (see the sketch below).
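
For example, the toData operator used later in this article could look like this (a minimal sketch; the library's actual implementation may differ):

export function toData(): OperatorFunction<MIDIMessageEvent, Uint8Array> {
  return source => source.pipe(map(({data}) => data));
}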

This is how we can listen to events from a specific channel:


export function filterByChannel(
  channel: MidiChannel,
): MonoTypeOperatorFunction<MIDIMessageEvent> {
  return source => source.pipe(filter(({data}) => data[0] % 16 === channel));
}

Status bytes are organized in groups of 16: 128-143 are responsible for released keys (noteOff) on each of the 16 channels, and 144-159 for pressed keys (noteOn). Thus, if we take the remainder of dividing the status byte by 16, we get the channel number. For example, status byte 147 is 144 + 3: a noteOn on channel 3.


If we are only interested in playing notes, this operator will help:


export function notes(): MonoTypeOperatorFunction<MIDIMessageEvent> {
  return source =>
    source.pipe(
      // Only let note messages through: 128-143 is noteOff, 144-159 is noteOn
      filter(({data}) => between(data[0], 128, 159)),
      map(event => {
        // Turn noteOff into noteOn with zero velocity
        if (between(event.data[0], 128, 143)) {
          event.data[0] += 16;
          event.data[2] = 0;
        }

        return event;
      }),
    );
}

Some MIDI devices send an explicit noteOff message when you release a key, while others instead send a noteOn message with zero velocity. This operator normalizes that behavior, reducing all messages to noteOn: we simply shift the status byte by 16 so that noteOff messages move into noteOn territory, and set the velocity to zero.

Now you can build chains of operators to get the stream that we need:


readonly notes$ = this.messages$.pipe(
  catchError(() => EMPTY),
  notes(),
  toData(),
);

constructor(
  @Inject(MIDI_MESSAGES)
  private readonly messages$: Observable<MIDIMessageEvent>,
) {}

It's time to put all this into practice!


Create a synthesizer


With a little help from the library for the Web Audio API, which we discussed earlier, we will create a nice-sounding synthesizer in just a couple of directives. Then we will feed it the notes we play through the stream described above.


As a starting point, let's use the last code sample. For the synthesizer to be polyphonic, we need to track all the notes played. To do this, we use the scan operator:


readonly notes$ = this.messages$.pipe(
  catchError(() => EMPTY),
  notes(),
  toData(),
  scan(
    // Released keys stay in the map with zero volume
    (map, [_, note, volume]) => map.set(note, volume),
    new Map<number, number>(),
  ),
);

So that the sound does not cut off abruptly and does not always play at the same volume, we will create a full-fledged ADSR pipe. The previous article had a simplified version of it. Let me remind you that the idea of ADSR is to change the volume of a note as follows:


[Figure: the ADSR envelope, showing attack, decay, sustain and release stages of a note's volume]


The note does not start abruptly, it is held at a certain volume while the key is pressed, and then it gradually fades out.


@Pipe({
  name: 'adsr',
})
export class AdsrPipe implements PipeTransform {
  transform(
    value: number,
    attack: number,
    decay: number,
    sustain: number,
    release: number,
  ): AudioParamInput {
    return value
      ? [
          {
            value: 0,
            duration: 0,
            mode: 'instant',
          },
          {
            value,
            duration: attack,
            mode: 'linear',
          },
          {
            value: sustain,
            duration: decay,
            mode: 'linear',
          },
        ]
      : {
          value: 0,
          duration: release,
          mode: 'linear',
        };
  }
}

Now, when a key is pressed, the volume rises linearly during the attack time, then falls to the sustain level during the decay time. And when the key is released, the volume drops to zero over the release time.

With this pipe we can sketch the synthesizer in the template:


<ng-container *ngFor="let note of notes | keyvalue; trackBy: noteKey">
  <ng-container
    waOscillatorNode
    detune="5"
    autoplay
    [frequency]="toFrequency(note.key)"
  >
    <ng-container
      waGainNode
      gain="0"
      [gain]="note.value | adsr: 0:0.1:0.02:1"
    >
      <ng-container waAudioDestinationNode></ng-container>
    </ng-container>
  </ng-container>
  <ng-container
    waOscillatorNode
    type="sawtooth"
    autoplay
    [frequency]="toFrequency(note.key)"
  >
    <ng-container
      waGainNode
      gain="0"
      [gain]="note.value | adsr: 0:0.1:0.02:1"
    >
      <ng-container waAudioDestinationNode></ng-container>
      <ng-container [waOutput]="convolver"></ng-container>
    </ng-container>
  </ng-container>
</ng-container>
<ng-container
  #convolver="AudioNode"
  waConvolverNode
  buffer="assets/audio/response.wav"
>
  <ng-container waAudioDestinationNode></ng-container>
</ng-container>
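
The toFrequency method is not shown in the article; a minimal sketch based on the standard equal temperament formula, where MIDI key 69 is A4 at 440 Hz:

toFrequency(note: number): number {
  // Each semitone up multiplies the frequency by 2^(1/12)
  return 440 * Math.pow(2, (note - 69) / 12);
}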

We iterate over the collected notes using the built-in keyvalue pipe, tracking them by the number of the played key. Then two oscillators play the corresponding frequencies. And at the end, a ConvolverNode adds a reverb effect. A pretty simple construction and very little code, but we get a good-sounding, ready-to-use instrument. Try the demo in Chrome:


https://ng-web-apis.github.io/midi


If you don’t have a MIDI keyboard, you can click on the notes with the mouse.


A live demo is available here; however, the browser will not allow MIDI access inside the iframe: https://stackblitz.com/edit/angular-midi

Conclusion


In Angular, we are used to working with events through RxJs, and the Web MIDI API is not much different from the usual DOM events. With a couple of tokens and a few architectural decisions, we were able to easily add MIDI support to our Angular application. The described solution is available as the open-source library @ng-web-apis/midi. It is part of a larger project called Web APIs for Angular. Our goal is to create lightweight, high-quality wrappers for using native APIs in Angular applications. So if you need, for example, the Payment Request API or the Intersection Observer, take a look at all our releases.


If you are curious about what interesting things you can do in Angular with the Web MIDI API, I invite you to learn to play the keys with my pet project Jamigo.app.
