Lots of improvements for ofxPDSP (addon for synthesis and generative music)


#1

Hello people! I’ve worked a lot on ofxPDSP in the last months, so now it has lots of new examples, ranging from the most basic ones to a more complex wolfram dub example, plus new classes that make coding with it faster. There is also a page in the docs with a more organized list of classes:
http://npisanti.com/ofxPDSP/md__modules.html
New features since my last post include a class to use the computer keyboard as a musical keyboard, OSC input/output, an ofxPDSPScope class to monitor signals, more classes with internal ofParameter to use with ofParameterGroup, graphics for engine.score to monitor sequences, and lots of other improvements and fixes — everything used and commented in the included examples.

link to github:

wolfram-dub example

another new example:

some of the old examples, for those who don’t know about ofxPDSP yet

(the graincloud example now lets you load your samples by pressing the ‘L’ key)

(midi polysynth)

( serial out to arduino )


#2

Also, thanks to the people of Void I made some big improvements to ofxPDSP, so now it has full support for all the oF platforms: Linux (x86/ARM), OSX, Windows, iOS, Android. Lately I’ve also worked on cleaning the API and making the examples clearer.

the main documentation is now this page:
http://npisanti.com/ofxPDSP/md__modules.html


#3

I have question about patching and chaining audio and effects. I have looked at the examples and have a couple of questions.

  1. Are all the modules (compressors, eqs, phasers; except the engine) mono processing? I see this in the input example:

engine.audio_in(0) >> compressor >> channelGain >> engine.audio_out(0);

Assuming it is only passing a mono signal through these lines

So for these lines:

  engine.audio_in(0) >> compressor >> channelGain         >> engine.audio_out(0);
                                      channelGain         >> engine.audio_out(1);

I am guessing that here the compressor and channelGain are mono, and the above lines send the same signal to the left and right outputs.

  2. If this is the case, is there a pan control (mono sources panning left and right across a stereo output), other than just using a gain on each channel?

I know I can first alter the number of channels in the compressor, using

compressor.channels(2);

But what does this do — does it make 2 separate compressors? If the right side goes over the compressor threshold, is the gain on both channels reduced? Or do the channels act independently?

  3. What about passing mono sources to other stereo effects (if any of them are stereo, but I think the IR reverb is)? Do I need to make the reverb 2 channels (like reverb.channels(2)), or only if I want to send it a stereo source?

  4. I think I got a little confused by some of the syntax. It seems that the connection process sometimes does not require any specification of input and output, except for the engine (this makes sense), but in the grain cloud example I see

cloud.out_L() >> ampControl[0] * dB(12.0f) >> engine.audio_out(0); 
cloud.out_R() >> ampControl[1] * dB(12.0f) >> engine.audio_out(1);

Here an output is specified (but the grain cloud does not have an input, so I am wondering why this choice of syntax; assuming I missed something).

Can I just use cloud >> ampControl, as they are both stereo (if I don’t need to do anything different with the individual channels)?

I expected it to be more consistent, but I am assuming there is a logic behind the difference — for example, why there is a difference between the way the channels are accessed on the cloud, the ampControl and the engine.

Is there a reason it is not

cloud[0] >> ampcontrol[0] >> engine.audio_out()[0];

or

cloud.out(0) >> ampcontrol.out(0) >> engine.audio_out(0);

Or

cloud.outL() >> ampControl.outL() >> engine.audio_outL();

To keep the syntax consistent?

Or can I use say:

compressor.output() >>  ampControl[0] * dB(12.0f) >> engine.audio_out(0);

Where I reference the output of the compressor? It seems not to work for me, but I don’t have a handle on the syntax.

  5. Maybe overall, is there a way to know how effects work: whether they pass through audio and have a wet and dry, or whether they receive a send and only output wet? Whether they have mono or stereo inputs and outputs? I think using the >> operator to patch between units makes a very nice system, but it means that a connection cannot be checked until runtime.

  6. How does the summing mixer work? I see you can connect multiple outputs simultaneously.

  // we patch our audio input
    engine.audio_in(0) >> compressor >> channelGain         >> engine.audio_out(0);
                                        channelGain         >> engine.audio_out(1);
                          compressor >> delaySend >> delayL >> engine.audio_out(0);
                                        delaySend >> delayR >> engine.audio_out(1);

In these lines 2 sources are going to the output. How is the summing handled? Is everything just added together?

  7. Is there a way to programmatically build a chain? Say, from an XML file that has a list of effects, to then create a signal flow and GUI that add effects to a chain?

  8. How can I easily get the max and min values for effects parameters?

Oh, and the last part I forgot about: I am trying to make a vector of certain tools. When I try to make a vector of

pdsp::PatchNode

I get an error:

[pdsp] warning: copy used on module (GrainCloud), undefined behavior
[pdsp] warning! patch node copy constructed
Assertion failed: false, file c:\openframeworks\addons\ofxpdsp\src\dsp\pdspfunctions.h, line 31

Is there a way to work with these in a vector? It works fine with an array, but then I am stuck when trying to allocate at run-time.


#4

Oh, so many questions! I’ll try to answer each question in a different post, so starting with 1:

Are all the modules (compressors, eqs, phasers; except the engine) mono processing? I see this in the input example:

All the units, which are the basic building blocks, are mono, but some modules are stereo. The multichannel API with the [] operator is something that came later, and it works just for some units; basically, stereo modules have in_0()/in_1() and out_0()/out_1(), or in_L()/in_R() and out_L()/out_R() methods. The compressor has two inputs and two outputs, and it also has an option for stereo linking that is activated by default. There is no channels() method for the compressor. When stereo linking is active, the compression of both channels is linked.
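Conceptually, stereo linking means the gain reduction is computed from the louder of the two channel envelopes and then applied identically to both channels, so the stereo image doesn’t shift when only one side gets loud. Here is a minimal, hypothetical sketch of that idea (not the actual ofxPDSP internals; `LinkedGain` and its members are made up for illustration):

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical sketch of stereo-linked downward compression:
// detection uses the max of both channel envelopes, and the same
// linear gain is applied to BOTH channels.
struct LinkedGain {
    float threshold; // linear amplitude threshold, e.g. 0.5f
    float ratio;     // compression ratio, e.g. 4.0f for 4:1

    // returns the linear gain to apply to both channels
    float gainFor(float envL, float envR) const {
        float env = std::max(envL, envR);      // linked detection
        if (env <= threshold) return 1.0f;     // below threshold: unity gain
        float over = env / threshold;          // how far over, linear
        float compressed = threshold * std::pow(over, 1.0f / ratio);
        return compressed / env;               // same reduction on L and R
    }
};
```

With unlinked channels you would instead compute a separate gain per channel from its own envelope, which can make the image wander.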

If this is the case is there a pan control (mono sources panning left and right across a stereo output), other than just using a gain on each channel?

there is a pdsp::Panner unit with mono in and stereo out, for real-time panning, even at audio signal rate:
http://npisanti.com/ofxPDSP/classpdsp_1_1_panner.html
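For reference, a common way a mono-to-stereo panner works is an equal-power pan law: the pan position maps to an angle, and cos/sin keep the total power constant as the source moves. This is just a sketch of that general law (I’m not claiming pdsp::Panner uses exactly this curve; `panGains` is a hypothetical name):

```cpp
#include <cmath>
#include <utility>

// Equal-power pan law sketch: pan in [-1, 1] maps to an angle in
// [0, pi/2]; cos gives the left gain, sin the right gain, so
// L^2 + R^2 stays constant across the whole pan range.
std::pair<float, float> panGains(float pan) {
    const float theta = (pan + 1.0f) * 0.25f * 3.14159265358979f;
    return { std::cos(theta), std::sin(theta) }; // {left, right}
}
```

At center (pan = 0) both gains are about 0.707 (-3 dB), which is why equal-power panning avoids the volume dip of a plain linear crossfade.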

What about passing mono sources to other stereo effects (if any of them are stereo, but I think the IR reverb is)? Do I need to make the reverb 2 channels (like reverb.channels(2)), or only if I want to send it a stereo source?

the IR reverb has stereo inputs and outputs; you can call them with L/R or with 0/1, it is the same. For mono in / stereo out you have to patch the same source to both in_L() and in_R(). pdsp::BasiVerb is a mono in / stereo out reverb, so you just patch to its input.


#5

I think I got a little confused by some of the syntax. It seems that the connection process sometimes does not require any specification of input and output, except for the engine (this makes sense), but in the grain cloud example I see

cloud.out_L() >> ampControl[0] * dB(12.0f) >> engine.audio_out(0); 
cloud.out_R() >> ampControl[1] * dB(12.0f) >> engine.audio_out(1);

Here an output is specified (but the grain cloud does not have an input so I am wondering why this choice in syntax, assuming I missed something).

For the classes that support the [] operator, it selects both the input AND the output. For all classes, when no input or output method is used, a default input or output is selected; so when you select an input you can still chain with >>, as the default output is used. The [] operator is an addition of the last months, to quickly use pdsp::ParameterAmp and pdsp::ParameterGain. Multichannel use with those classes is easier now, but maybe I should have stuck to just the old syntax, to avoid confusion.

I will try to clarify this in the multisample example.

Some of the things in the syntax are there just for compatibility with old versions. I’d prefer adding in_0()/in_1() and out_0()/out_1(), or in_L()/in_R() and out_L()/out_R(), to all the stereo modules, to let users do what they want; anyway, maybe the clearest convention would be L/R for stereo modules and [] for the classes that use it. The [] operator is mostly used not for L/R channels, but for multiple voices. Maybe calling the method channels() was confusing; I should have called it resize() and let you think of those classes as vectors of the same class, with just one thing in common between all the indices.


#6

Maybe overall, is there a way to know how effects work: whether they pass through audio and have a wet and dry, or whether they receive a send and only output wet? Whether they have mono or stereo inputs and outputs? I think using the >> operator to patch between units makes a very nice system, but it means that a connection cannot be checked until runtime.

Most of the effects don’t have a wet/dry control, because they could be used as insert or send, so the choice is up to the user. Just add two pdsp::LinearCrossfader units for wet/dry (maybe I should make a dry/wet stereo module, well, I’ll put it in the //todos).
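The math a linear crossfader applies per sample is just a weighted sum of the dry and wet signals. A minimal sketch of that idea (the function name is hypothetical; pdsp::LinearCrossfader works by patching two inputs and a fade control, this only shows the underlying formula):

```cpp
// Linear wet/dry mix, per sample:
// fade = 0 gives dry only, fade = 1 gives wet only,
// values in between blend the two signals linearly.
float crossfade(float dry, float wet, float fade) {
    return dry * (1.0f - fade) + wet * fade;
}
```

So an insert-style wet/dry around a send-only effect is just: run the source through the effect, then crossfade the effect output against the untouched source.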


#7

6. How does the summing mixer work? I see you can connect multiple outputs simultaneously.

  // we patch our audio input
    engine.audio_in(0) >> compressor >> channelGain         >> engine.audio_out(0);
                                        channelGain         >> engine.audio_out(1);
                          compressor >> delaySend >> delayL >> engine.audio_out(0);
                                        delaySend >> delayR >> engine.audio_out(1);

In these lines 2 sources are going to the output. How is the summing handled? Is everything just added together?

Things are just added together. All the summing is SIMD-accelerated.
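To make the semantics concrete: when several outputs are patched to the same input, each source buffer is summed sample by sample into the destination. A scalar sketch of that (ofxPDSP does the equivalent with SIMD; `sumInputs` is a made-up name for illustration):

```cpp
#include <cstddef>
#include <vector>

// Conceptual model of multiple >> connections into one input:
// every source buffer is added, sample by sample, into one
// destination buffer, starting from silence.
std::vector<float> sumInputs(const std::vector<std::vector<float>>& sources,
                             std::size_t bufferSize) {
    std::vector<float> out(bufferSize, 0.0f);   // start from silence
    for (const auto& src : sources)
        for (std::size_t i = 0; i < bufferSize; ++i)
            out[i] += src[i];                   // accumulate each source
    return out;
}
```

This is also why you never need an explicit mixer object for simple cases: patching N sources into one input already mixes them.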


#8

Is there a way to programmatically build a chain? Say, from an XML file that has a list of effects, to then create a signal flow and GUI that add effects to a chain?

ofxPDSP is a library mostly for creative coding; the end user is expected to be a coder and change things in code, so I mostly made my API choices with that in mind (about this: I’m very interested in making the API as clear as possible, so your confusion about the inputs/outputs will be taken very seriously). But what you are describing should be possible: everything would have to be allocated and deallocated with pointers. Anyway, usually I just expect a std::vector with the right size at startup, so things could break.


#9

How can I easily get the max and min values for effects parameters?

Max and min values should be in the class reference. If they are not, submit an issue on GitHub and I will fix the docs. I am becoming aware that the doxygen docs are not that clear, so I am also open to alternatives for making them prettier.


#10

finally: i don’t want things to be copied around. As i stated here: https://github.com/npisanti/ofxPDSP/issues/15

I don’t want any pdsp unit to be copied around, because what happens to the other units patched to it? My options for the API, when for example a class B has to be copied to B’ and A is connected to B, would be:

  1. patch all the connections from A to B and B’
  2. don’t patch anything to B’

Both those choices can lead to confusion or undefined behavior, so I preferred being clear and avoiding copies; all the inputs and outputs emit warnings that can be traced when a copy happens.


#11

@npisanti

Wow, thanks so much for the fast reply, it clears up a lot of other questions I had as well. I am really enjoying this addon, it has a lot of amazing possibilities. Sorry if my questions are really obvious.

Everything is perfectly clear, but I think I was not so clear in asking 7. and my last question.

First, 7: Is there a way to add effects from an XML?

I am coding with this (obviously), but I am trying to make a system for a user who does not code. So far so good: using the modular nature of your code I can read some data from a settings file, and the user can make some true/false entries and restart with new configurations. So far this is working perfectly. I wanted to add effect chains to this, so I am looking to read some kind of input at run time, like from an XML or any other data source, and add effects to a chain (I don’t need it to happen while the chain is active, just in my setup). In the most basic case, an example could be:

if (hasCompressor) {
    engine.audio_in(0) >> compressor >> channelGain >> engine.audio_out(0);
} else {
    engine.audio_in(0) >> channelGain >> engine.audio_out(0);
}

Which is easy and works well. But what I would like to do is build the chain piece by piece, and so far it does not work (but maybe I have to go through this amazing response in more detail first). I am not sure if I can do this already, but I have made up my scenario below.

If I had something that would let me make a signal placeholder (I have made one up for this pseudocode), I imagine it would let me complete the chain with the syntax you have now (while no audio is passing),

like

pdsp::signalEndPointHolder temporaryEndPoint; // this is my imaginary object that will let me break up the creation of a chain

engine.audio_in(0) >> temporaryEndPoint;

if (hasCompressor)
{
      temporaryEndPoint >> compressor >> temporaryEndPoint;
}

if (hasPhaser)
{
     temporaryEndPoint >> phaser >> temporaryEndPoint;
}

etc., continuing until the chain has added all the effects I want, and ending by closing the chain:

temporaryEndPoint >> engine.audio_out(0);

Maybe there is a way to do this already?
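The general pattern here, independent of ofxPDSP, is to keep track of the chain’s current tail and conditionally extend it. Below is a self-contained sketch using plain per-sample function objects instead of pdsp units (the `Chain`/`Stage` names are made up; with pdsp you would presumably keep a pointer to the last patched unit and connect each new effect to it, but I have not verified that against the library):

```cpp
#include <functional>
#include <utility>
#include <vector>

// A processing stage is anything that maps one sample to another.
using Stage = std::function<float(float)>;

// Chain keeps an ordered list of stages; append() extends the tail,
// process() runs a sample through every stage in order.
struct Chain {
    std::vector<Stage> stages;

    void append(Stage s) { stages.push_back(std::move(s)); }

    float process(float x) const {
        for (const auto& s : stages) x = s(x);
        return x;
    }
};
```

Usage mirrors your pseudocode: read the config, then `if (hasCompressor) chain.append(...);`, `if (hasPhaser) chain.append(...);`, and the chain is "closed" simply by using it.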

and in reference to your answer to me getting the error:

[pdsp] warning! patch node copy constructed

I apologize, I am no programmer; I think my question was more basic. I have a signal chain that has a pdsp::PatchNode in it. I want to have multiple versions of this chain, with different sources, all playing at once. I had done this with an array and it was fine. Is there a way to have a vector of components of a signal chain, that will allow me to dynamically add layers to my sound?

so just some alternative (not an array) to using

std::vector<pdsp::PatchNode> myNodes;
pdsp::PatchNode myNodeToPush;
myNodes.push_back(myNodeToPush);

As this is what gives me the error

[pdsp] warning! patch node copy constructed

Many thanks again for the code and answering all the questions, sorry for the barrage.


#12

Resizing an array at run-time will invoke the copy constructor, which I want to avoid. The only way of doing what you want to do is by using a std::vector of pointers to pdsp::PatchNode:

std::vector<pdsp::PatchNode*> nodes;

then

pdsp::PatchNode* myNodeToPush = new pdsp::PatchNode();
nodes.push_back(myNodeToPush);

or you could do:

nodes.push_back( new pdsp::PatchNode() );

that is the same.

On quitting the app, you have to remember to delete all the objects you have dynamically allocated.

If you need a tutorial about pointers and memory, there is one in the ofBook:
https://openframeworks.cc/ofBook/chapters/memory.html
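If you would rather not manage `delete` by hand, the same pattern works with `std::unique_ptr`: the vector owns the objects, can grow at run time without ever copying them, and frees everything automatically. A self-contained sketch, using a stand-in non-copyable `Node` struct rather than the real pdsp::PatchNode (the units themselves never move in memory even when the vector of pointers reallocates, which is the property that matters for patched connections):

```cpp
#include <memory>
#include <vector>

// Stand-in for a non-copyable pdsp unit: copying is deleted,
// just as ofxPDSP warns/asserts against copies of its units.
struct Node {
    Node() = default;
    Node(const Node&) = delete;
    Node& operator=(const Node&) = delete;
    float value = 0.0f;
};

// Growable collection of Nodes, no copies, no manual delete:
// push_back moves only the pointers; each Node stays put on the
// heap, and everything is destroyed when the vector goes away.
std::vector<std::unique_ptr<Node>> makeNodes(std::size_t n) {
    std::vector<std::unique_ptr<Node>> nodes;
    for (std::size_t i = 0; i < n; ++i)
        nodes.push_back(std::make_unique<Node>());
    return nodes;
}
```

This is equivalent to the raw-pointer version above, minus the cleanup loop on quit.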


#13

I am not sure exactly how the doxygen docs should work. You have put a lot of work into the comments (and I think those are the docs). Here is a section of the GrainCloud.h file:

    /*!
    @brief Sets "pitch_jitter" as selected input and returns this module ready to be patched. Grain pitch jitter in semitones.
    */
    Patchable& in_pitch_jitter();

These hints are shown when I hover over the function in visual studio and when I option click in xcode, so when I have

cloud.in_pitch_jitter();

in my code, I do see comments in Xcode and Visual Studio, but no maximum and minimum values or the data type. In my mind, I imagined that here is the place to add the details, like this:

    /*!
    @brief Sets "pitch_jitter" as selected input and returns this module ready to be patched. Grain pitch jitter in semitones with a value range between -12.0 and 12.0 semitones expressed as a float.
    */
    Patchable& in_pitch_jitter();

This would make it very convenient when coding to see quickly what the ranges are.

Due to the structure of the code, jumping to the definition of the same function reveals:

pdsp::Patchable& pdsp::GrainCloud::in_pitch_jitter(){
    return in("pitch_jitter");
}

Which does not contain any information.

But maybe this is not the way to view what you are talking about. If it is the doxygen docs, then I will start an issue on github, but maybe I am just looking in the wrong place.


#14

the doxygen comments are used to generate the docs, so you should check the docs online (or generate them on your machine with the given doxygen file).
http://npisanti.com/ofxPDSP/md__modules.html

I have not made the docs with doxygen support in IDEs in mind, as I don’t even use an IDE (just a text editor and the command line).

Not all the inputs have a maximum and minimum range; there is no maximum or minimum for the pitch jitter of pdsp::GrainCloud (obviously, if you go too far outside the audible range, things will become inaudible).


#15
  4. I think I got a little confused by some of the syntax. It seems that the connection process sometimes does not require any specification of input and output, except for the engine (this makes sense), but in the grain cloud example I see
cloud.out_L() >> ampControl[0] * dB(12.0f) >> engine.audio_out(0); 
cloud.out_R() >> ampControl[1] * dB(12.0f) >> engine.audio_out(1);

after some thinking, I decided to totally revisit the multichannel API; now channels are selected with the .ch( size_t index ) method for everything that has more than one channel, so the code above will look like this:

cloud.ch(0) >> ampControl.ch(0) * dB(12.0f) >> engine.audio_out(0); 
cloud.ch(1) >> ampControl.ch(1) * dB(12.0f) >> engine.audio_out(1);

All the old functions, like the [] operator, in_L(), in_R(), out_L(), out_R(), in_0(), in_1(), out_0() and out_1(), are deprecated, but they are still functional and I will keep long support for them for backward compatibility (where "long support" means they will stay around for 18 months or longer). The deprecation messages will let you know what to use instead.
Multichannel modules will automatically allocate channels when you query them, and stereo modules will let you know if you go outside the available channels (just 2).