Real time video delay | port from Processing


#1

Hello everybody
I have a little piece of code in Processing, but I would like to have the same functionality in oF.
The code records and plays back a few seconds of webcam video mixed with the live image.

The problem is: I don’t know where to start.
I have been looking in the addons to see if there is some example that could be a starting point, but I had no luck so far.
ofxVideoBuffer, ofxVideoUtils and ofxPlaymodes were my bets. But they let me down.

The ofxVideoBuffer basic video grabber example and the multi-tap example, which seemed (by their names) good starting points, are missing a header I couldn’t find anywhere: ofxVideoFrame.h

With ofxVideoUtils I got: …/BufferLoaderEvents.h:29: error: Poco/Exception.h: No such file or directory

and the ofxPlaymodes examples likewise complain about not finding Poco/*, as in: …/addons/ofxPlaymodes/src/utils/pmUtils.h:11: error: Poco/Timestamp.h: No such file or directory
#include "Poco/Timestamp.h"
^

I would rather use a vanilla solution, but I don’t know where to start. Any direction, either toward a solution or toward something I should learn in order to figure it out, will be greatly appreciated.

best regards

Here goes the Processing code:

import processing.video.*;
Capture video;
int capW = 640;  //match camera resolution here
int capH = 480;
float yoff = 50.0;  // 2nd dimension of perlin noise
float delayTime;

int nDelayFrames = 100; // about 3 seconds
int currentFrame = nDelayFrames-1;
int currentFrame2;
int currentFrame3;

int numPixels;
int[] previousFrame;

PImage frames[];
PImage framesHV[];
PImage framesV[];
PImage videoFlipH;

void setup() {
  size(640, 480);  //set monitor size here
  frameRate(200);
  video = new Capture(this, capW, capH );
  video.start();
  frames = new PImage [nDelayFrames];
  framesHV = new PImage[nDelayFrames];
  framesV = new PImage[nDelayFrames];
  videoFlipH = new PImage(video.width, video.height);
  for (int i = 0; i < nDelayFrames; i++) {
    frames[i] = createImage(capW, capH, ARGB);
    framesHV[i] = createImage(capW, capH, ARGB);
    framesV[i] = createImage(capW, capH, ARGB);
  }
  numPixels = video.width * video.height;
  // Create an array to store the previously captured frame
  previousFrame = new int[numPixels];
  loadPixels();
}

void draw() {

  float delayTime = constrain(map(noise(yoff)*10, 1, 7, 1, 100), 1, 100);    // Option #2: 1D Noise
  yoff = (yoff+0.01) % nDelayFrames;
  nDelayFrames = int(delayTime);

  if (video.available()) {
    video.read();
    video.loadPixels(); // Make its pixels[] array available
    for (int loc = 0; loc < width*height; loc++) {
      color currColor = video.pixels[loc];
      color prevColor = previousFrame[loc];

      int currR = (currColor >> 16) & 0xFF;
      int currG = (currColor >> 8) & 0xFF;
      int currB = currColor & 0xFF;
      
      // Compute the difference of the red, green, and blue values,
      // then sum and divide the colors so the result is black and white.
      // Look at the FrameDifferencing example if you want it in color.
      int newR = abs(int(currR+(currG+currB)/2)/2);
      int newG = abs(int(currG+(currR+currB)/2)/2);
      int newB = abs(int(currB+(currG+currR)/2)/2);
      // keep the color values between 0 and 255
      newR = newR < 0 ? 0 : newR > 255 ? 255 : newR;
      newG = newG < 0 ? 0 : newG > 255 ? 255 : newG;
      newB = newB < 0 ? 0 : newB > 255 ? 255 : newB;

      // Render the difference image to the screen
      video.pixels[loc] = 0xff000000 | (newR << 16) | (newG << 8) | newB;

      //previousFrame[loc] = lerpColor (previousFrame[loc], currColor, 0.1);
    }
    
    for (int x = 0; x < video.width; x++) {
      for (int y = 0; y < video.height; y++) {
        framesHV[currentFrame].pixels[y*(video.width) + x] = video.pixels[(video.height - 1 - y)*video.width+(video.width-(x+1))];
        framesV[currentFrame].pixels[y*(video.width) + x] = video.pixels[(video.height - 1 - y)*(video.width) + x];
        videoFlipH.pixels[y*video.width + x] = video.pixels[y*video.width+(video.width-(x+1))];
      }
    }
    arrayCopy(video.pixels, frames[currentFrame].pixels);
    frames[currentFrame].updatePixels();
    framesHV[currentFrame].updatePixels();
    framesV[currentFrame].updatePixels();
    tint(255, 167);
    updatePixels();
    videoFlipH.updatePixels();
    currentFrame = (currentFrame-1 + nDelayFrames) % nDelayFrames;
    currentFrame2 = (currentFrame +30)%nDelayFrames;  //+30= delay time. must be less than nDelayFrames
    currentFrame3 = (currentFrame +60)%nDelayFrames;  //+60= delay time. must be less than nDelayFrames

    image(framesHV[currentFrame], 0, 0, width, height);
    blend(frames[currentFrame2], 0, 0, width, height, 0, 0, width, height, OVERLAY);  //try with ADD, DARKEST etc here. see blend help 
    blend(framesV[currentFrame3], 0, 0, width, height, 0, 0, width, height, SOFT_LIGHT);  //try with ADD, DARKEST etc here. see blend help
    blend(videoFlipH, 0, 0, width, height, 0, 0, width, height, LIGHTEST);  //try with ADD, DARKEST etc here. see blend help
  }
 //   println(nDelayFrames);
  println(int(frameRate));
}

#2

Hi,
no idea about the addons you mention, but as of the latest release you need to add Poco as an addon; it used to be included in oF. So if Poco is needed, just include ofxPoco.
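For reference, with the makefile/projectGenerator workflow, addons are listed one per line in the project's addons.make file. A sketch, assuming the addon folder inside addons/ is named ofxPoco:

```
ofxPoco
```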

As for the main question, it is quite straightforward. In your code you have some arrays that hold frames:

PImage frames[];
PImage framesHV[];
PImage framesV[];

In oF, just make these of type ofImage.
It is up to you whether you use an array or a vector; I’d recommend a vector, as you can allocate it dynamically.
The rest of the code should be very similar.
For accessing individual pixels of an ofImage you call ofImageInstance.getPixels().getData()[pixel_index].
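As an aside, the indexing behind getData() can be sketched in plain C++. This is not oF code; FlatRgbBuffer and its members are hypothetical stand-ins, assuming the default 3-channel interleaved RGB layout:

```cpp
#include <cstddef>
#include <vector>

// Plain-C++ sketch, not oF: FlatRgbBuffer is a hypothetical stand-in for the
// flat byte buffer returned by getPixels().getData(), assuming a 3-channel
// interleaved RGB layout (R, G, B bytes of pixel 0, then pixel 1, ...).
struct FlatRgbBuffer {
    int width, height;
    std::vector<unsigned char> data; // size = width * height * 3

    FlatRgbBuffer(int w, int h)
        : width(w), height(h), data((std::size_t)w * h * 3, 0) {}

    // Index of the first (red) byte of pixel (x, y) in the flat buffer.
    std::size_t pixelIndex(int x, int y) const {
        return ((std::size_t)y * width + x) * 3;
    }

    // Read/write access to one channel (0 = R, 1 = G, 2 = B) of pixel (x, y).
    unsigned char& at(int x, int y, int channel) {
        return data[pixelIndex(x, y) + channel];
    }
};
```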
hope this helps.
best


#3

Thanks for your answer.
It did show me a path to follow.

I made it, and it works. Although it is redundant, clumsy and inefficient, it’s a start.

I didn’t really understand how to use this, and couldn’t find references on the web:

I found something that helped me in the book Mastering openFrameworks: the Working with video chapter, Radial Slit-scan example.

Well, at the moment it is a Frankenstein, but this is what I have:

ofApp.h

#pragma once

#include "ofMain.h"

class ofApp : public ofBaseApp{
	public:
		void setup();
		void update();
		void draw();

        ofVideoGrabber vidGrabber;
        ofPixels videoInverted;
        ofTexture videoTexture;

        unsigned char* myVideo;
        
        // I'd rather use vector here, 
        // but I couldn't find the equivalent or something
        // to substitute .push_front(pixels) in .cpp
        deque<ofPixels> frames0; 
        deque<ofPixels> frames1;
        deque<ofPixels> frames2;

        float yoff;  // 2nd dimension of perlin noise
        float delayTime;
        int nDelayFrames;
        int currentFrame0;
        int currentFrame1;
        int currentFrame2;


        // Main processing function which computes the 
        // pixel color (x, y) using frames buffer
        ofColor getPixelColor0( int x, int y );
        ofColor getPixelColor1( int x, int y );
        ofColor getPixelColor2( int x, int y );

        ofBlendMode blendMode;

        ofImage framesHV;
        ofImage framesOrig;
        ofImage framesV;
        ofImage videoFlip;

        int camWidth;
        int camHeight;
};

ofApp.cpp

#include "ofApp.h"

//--------------------------------------------------------------
void ofApp::setup(){
    camWidth = 640;  // try to grab at this size.
    camHeight = 480;


    vidGrabber.setDesiredFrameRate(30);
    vidGrabber.initGrabber(camWidth, camHeight);

    ofSetVerticalSync(true);

    nDelayFrames = 100; //Set buffer size
    yoff = 50.0; // 2nd dimension of perlin noise
    currentFrame0 = nDelayFrames-1;

    videoFlip.allocate(256, 256, OF_IMAGE_COLOR_ALPHA);
    blendMode = OF_BLENDMODE_ALPHA;
}

//--------------------------------------------------------------

void ofApp::update(){
    ofBackground(100, 100, 100);
    vidGrabber.update();

    if(vidGrabber.isFrameNew()){
        ofPixels & pixels0 = vidGrabber.getPixels();
        ofPixels & pixels1 = vidGrabber.getPixels();
        ofPixels & pixels2 = vidGrabber.getPixels();
        videoFlip = vidGrabber.getPixels();
        videoFlip.mirror(0, 1);

        // Now, what follows is really redundant
        // I couldn't find another way to have each video 
        // running in its own frame number
        // ////////////////////////////////////////////////////////////////

        //Push the new frame to the beginning of the frame list
        frames0.push_front(pixels0);

        //If number of buffered frames > nDelayFrames,
        //then pop the oldest frame
        if ( frames0.size() > nDelayFrames ) {
            frames0.pop_back();
        }

        if ( !pixels0.isAllocated() ) {
            pixels0 = frames0[0];
        }

        //Getting video frame size for formulas simplification
        int w = vidGrabber.getWidth();
        int h = vidGrabber.getHeight();
        //Scan all the pixels
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                //Get pixel color
                ofColor color0 = getPixelColor0( x, y );
                //Set pixel to image pixels
                pixels0.setColor( x, y, color0 );
            }
        }
        //Set new pixels values to the image
        framesHV.setFromPixels( pixels0 );
        framesHV.mirror(1, 1);

        // ////////////////////////////////////////////////////////////////

        frames1.push_front(pixels1);

        if ( frames1.size() > nDelayFrames ) {
            frames1.pop_back();
        }

        if ( !pixels1.isAllocated() ) {
            pixels1 = frames1[0];
        }

        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                ofColor color1 = getPixelColor1( x, y );
                pixels1.setColor( x, y, color1 );
            }
        }
        framesOrig.setFromPixels( pixels1 );

        // ////////////////////////////////////////////////////////////////

        frames2.push_front(pixels2);

        if ( frames2.size() > nDelayFrames ) {
            frames2.pop_back();
        }
        if ( !pixels2.isAllocated() ) {
            pixels2 = frames2[0];
        }
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                ofColor color2 = getPixelColor2( x, y );
                pixels2.setColor( x, y, color2 );
            }
        }
        framesV.setFromPixels( pixels2 );
        framesV.mirror( 1, 0 );
    }

    float delayTime = ofClamp(ofMap(ofNoise(yoff)*10, 1, 7, 1, 100), 1, 100);
    
    yoff = fmod((yoff+0.01), nDelayFrames);

    nDelayFrames = int(delayTime);

    currentFrame0 = ( currentFrame0 - 1 + nDelayFrames ) % nDelayFrames;
    currentFrame1 = ( currentFrame1 + 30 ) % nDelayFrames;
    currentFrame2 = ( currentFrame2 + 60 ) % nDelayFrames;  
}

//--------------------------------------------------------------
void ofApp::draw(){

    ofEnableBlendMode(OF_BLENDMODE_ALPHA);
    ofSetHexColor(0xffffff);

    framesHV.draw(0, 0);

    blendMode = OF_BLENDMODE_MULTIPLY;
    ofEnableBlendMode(blendMode);
    framesOrig.draw(0, 0);

    blendMode = OF_BLENDMODE_SCREEN;
    ofEnableBlendMode(blendMode);
    ofSetColor(255, 255, 255, 128);
    videoFlip.draw(0, 0);

    cout << currentFrame0 << endl;
}


// redundant again
// how could I make one function that could return three values here:
//--------------------------------------------------------------
ofColor ofApp::getPixelColor0( int x, int y ){

    int n = frames0.size() - 1;

    int i0 = ofClamp( currentFrame0, 0, n );
    //Getting the frame colors
    ofColor color0 = frames0[ i0 ].getColor( x, y );
    //Interpolate colors - this is the function result
    return color0;
}
//--------------------------------------------------------------
ofColor ofApp::getPixelColor1( int x, int y ){

    int n = frames1.size() - 1;

    int i1 = ofClamp( currentFrame1, 0, n );
    //Getting the frame colors
    ofColor color1 = frames1[ i1 ].getColor( x, y );
    //Interpolate colors - this is the function result
    return color1;
}
//--------------------------------------------------------------
ofColor ofApp::getPixelColor2( int x, int y ){

    int n = frames2.size() - 1;

    int i2 = ofClamp( currentFrame2, 0, n );
    //Getting the frame colors
    ofColor color2 = frames2[ i2 ].getColor( x, y );
    //Interpolate colors - this is the function result
    return color2;
}

thanks again

best regards,
Gil


#4

Hi,
right, it is super redundant.
There’s a lot of stuff that doesn’t even show up in the end result.
All those for loops are just iterating through the pixels and copying them, which is super inefficient. You can just use = and it will do a memcpy of the pixels, which is a lot better.
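The copy-vs-loop point can be sketched in plain C++, with std::vector standing in for ofPixels (copySlow and copyFast are hypothetical names):

```cpp
#include <cstddef>
#include <vector>

// Plain-C++ sketch of the point above, with std::vector standing in for
// ofPixels: an element-wise loop and a whole-container assignment produce
// the same copy, but the assignment is one bulk operation (a memcpy for
// plain bytes) instead of one read and write per element.
std::vector<unsigned char> copySlow(const std::vector<unsigned char>& src) {
    std::vector<unsigned char> dst(src.size());
    for (std::size_t i = 0; i < src.size(); ++i) { // per-element copy
        dst[i] = src[i];
    }
    return dst;
}

std::vector<unsigned char> copyFast(const std::vector<unsigned char>& src) {
    return src; // one bulk copy, like `targetPixels = sourcePixels;` in oF
}
```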

As for

ImageInstance.getPixels().getData()[pixel_index]

it is the way to access individual pixel data.
Take a look at this, as it might help.

also look at the ofBook
http://openframeworks.cc/learning/

hope this helps.
best


#5

Hi.
I couldn’t get back to this for a while, but now I have.
I think I have much better code now.
I’m getting really low fps, though, so I still have to improve it. I just don’t know how.

Thanks for showing me the path, @roymacdonald.

Here is the new code

ofApp.h

#pragma once
#include "ofMain.h"
class ofApp : public ofBaseApp {
public:
    void setup();
    void update();
    void draw();
    void windowResized(int w, int h);
    ofVideoGrabber vidGrabber;
    int camWidth;
    int camHeight;

    int nDelayFrames;
    int delayedFrame;
    int delayedFrame1;
    int delayedFrame2;
    ofImage videoFlipH;
    ofImage videoFlipV;
    ofImage videoRotate;
    deque<ofPixels> frames0;
    deque<ofPixels> frames1;
    deque<ofPixels> frames2;

    // Main processing function which computes the
    // pixel color (x, y) using frames buffer
    ofColor getPixelColor0( int x, int y );
    ofColor getPixelColor1( int x, int y );
    ofColor getPixelColor2( int x, int y );
    ofColor color0;
    ofColor color1;
    ofColor color2;

    void setupSignedNoiseDemo();
    void updateSignedNoiseDemo();
    int *signedNoiseData;
    int nSignedNoiseData;

    float radialNoiseCursor;
};

ofApp.cpp

#include "ofApp.h"
void ofApp::setup(){
    camWidth = 640;  // try to grab at this size.
    camHeight = 480;
    vidGrabber.setDeviceID(0);
    vidGrabber.setDesiredFrameRate(60);
    vidGrabber.initGrabber(camWidth, camHeight);
    ofSetVerticalSync(true);

    nDelayFrames = 100; //Set buffer size
    delayedFrame = nDelayFrames-1;
    setupSignedNoiseDemo();
}

//--------------------------------------------------------------

void ofApp::setupSignedNoiseDemo(){
    // Setup and allocate resources used in the signed noise demo.
    nSignedNoiseData = 100; // we'll store a history of 100 numbers
    signedNoiseData = new int[nSignedNoiseData];
    for (int i=0; i<nSignedNoiseData; i++){
        signedNoiseData[i] = 0;
    }
    radialNoiseCursor = 0.0;
}

//--------------------------------------------------------------
void ofApp::update(){
    ofBackground(100, 100, 100);
    vidGrabber.update();

    if(vidGrabber.isFrameNew()){
        ofPixels pixels0 = vidGrabber.getPixels();
        ofPixels pixels1 = vidGrabber.getPixels();
        ofPixels pixels2 = vidGrabber.getPixels();

        //Push the new frame to the beginning of the frame list
        frames0.push_front(pixels0);
        frames1.push_front(pixels1);
        frames2.push_front(pixels2);

        if ( frames0.size() > nDelayFrames ) {
            frames0.pop_back();
        }
        if ( frames1.size() > nDelayFrames ) {
            frames1.pop_back();
        }
        if ( frames2.size() > nDelayFrames ) {
            frames2.pop_back();
        }
        if ( !pixels0.isAllocated() ) {
            pixels0 = frames0[0]; //is index 0 the newest or the oldest?
        }
        if ( !pixels1.isAllocated() ) {
            pixels1 = frames1[0];
        }
        if ( !pixels2.isAllocated() ) {
            pixels2 = frames2[0];
        }
        //Getting video frame size for formulas simplification
        int w = vidGrabber.getWidth();
        int h = vidGrabber.getHeight();
        //Scan all the pixels
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                //Get pixel color
                ofColor color0 = getPixelColor0( x, y );
                // ofColor *color0 = new ofColor(getPixelColor0( x, y ));
                ofColor color1 = getPixelColor1( x, y );
                ofColor color2 = getPixelColor2( x, y );
                //Set pixel to image pixels
                pixels0.setColor( x, y, color0 );
                pixels1.setColor( x, y, color1 );
                pixels2.setColor( x, y, color2 );
            }
        }

        videoFlipH.setFromPixels( pixels0 );
        videoRotate.setFromPixels( pixels1 );
        videoFlipV.setFromPixels( pixels2 );

        delayedFrame = signedNoiseData[0] + (nSignedNoiseData/4) % nSignedNoiseData;
        delayedFrame1 = delayedFrame + (nSignedNoiseData/3) % nSignedNoiseData;
        delayedFrame2 = delayedFrame1 + (nSignedNoiseData/2 ) % nSignedNoiseData;
    }

    updateSignedNoiseDemo();

    std::stringstream strm;
        strm << "fps: " << ofGetFrameRate();
    ofSetWindowTitle(strm.str());
}
//--------------------------------------------------------------

void ofApp::updateSignedNoiseDemo(){

    // Shift all of the old data forward through the array
    for (int i=(nSignedNoiseData-1); i>0; i--){
        signedNoiseData[i] = signedNoiseData[i-1];
    }
    // Compute the latest data, and insert it at the head of the array.
    // Here is where ofSignedNoise is requested.
    float noiseStep    = 0.2;
    float noiseAmount  = 50;
    signedNoiseData[0] = noiseAmount * ofSignedNoise( radialNoiseCursor );
    radialNoiseCursor += noiseStep;
}

//--------------------------------------------------------------
void ofApp::draw(){
   ofSetColor(255);

    vidGrabber.draw(20, 20);
    //ofEnableAlphaBlending();
    ofEnableBlendMode(OF_BLENDMODE_SCREEN);
    ofSetColor(255,255,255,191);
    videoFlipH.draw(20 + camWidth, 20, - 1* camWidth, camHeight);
    ofDisableAlphaBlending();
    ofEnableBlendMode(OF_BLENDMODE_MULTIPLY);
    ofSetColor(255,255,255,127);
    videoRotate.draw(20 + camWidth, 20 + camHeight, -1 * camWidth, -1 * camHeight);
    ofSetColor(255,255,255,64);
    videoFlipV.draw(20, 20 + camHeight, camWidth, - 1* camHeight);
    ofDisableBlendMode();
}

//--------------------------------------------------------------
ofColor ofApp::getPixelColor0( int x, int y ){
    int n = frames0.size() - 1;

    int i0 = ofClamp( delayedFrame, 0, n );
    //Getting the frame colors
    color0 = frames0[ i0 ].getColor( x, y );
    // this here instead of in update makes a cool(d)effect:
    //    delayedFrame = ( delayedFrame - 1 + nDelayFrames ) % nDelayFrames;
    return color0;
}

ofColor ofApp::getPixelColor1( int x, int y ){
    int n = frames1.size() - 1;
    int i1 = ofClamp( delayedFrame1, 0, n );
    color1 = frames1[ i1 ].getColor( x, y );
    return color1;
}

ofColor ofApp::getPixelColor2( int x, int y ){
    int n = frames2.size() - 1;
    int i2 = ofClamp( delayedFrame2, 0, n );
    color2 = frames2[ i2 ].getColor( x, y );
    return color2;
}

//--------------------------------------------------------------
void ofApp::windowResized(int w, int h){
}

#6

Hey, sorry for the delay.
From a very fast read of your code, the problem seems to be the deque, which doesn’t seem to be necessary and is considerably slower than a std::vector.
hope it helps


#7

Hi Roy.
I tried to make this exact change, but couldn’t find the way to substitute this:
frames1 is a vector, and pixels1 = vidGrabber.getPixels();
frames1.push_front(pixels1);
Vectors don’t have a .push_front. I thought that just .front would do the job, but it seems it’s not the same. Do you know how I could do that?

thanks once more



#8

Oops. Sorry.
Right now frames1 is a deque, but if I make it a vector it goes like I said.
Now I’m guessing that stl::vector is not the same as e.g. vector.
Is that right?



#9

deque can be much faster than a vector if you need to push and pop on the front and back. The problem with your example is that you are making a lot of copies and also creating and destroying ofPixels all the time.

things like:

        ofPixels pixels0 = vidGrabber.getPixels();
        ofPixels pixels1 = vidGrabber.getPixels();
        ofPixels pixels2 = vidGrabber.getPixels();

can be changed with

        ofPixels & pixels0 = vidGrabber.getPixels();
        ofPixels & pixels1 = vidGrabber.getPixels();
        ofPixels & pixels2 = vidGrabber.getPixels();

which gets a reference instead of a copy. But also, when you push or pop from the deque, it destroys and creates new ofPixels, which can be really slow.

Also, getPixelColor is pretty slow; using ofPixels iterators to iterate through all the pixels is much faster.
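A plain C++ sketch of the difference (not oF code; the accessor lambda mimics per-pixel getColor/setColor index math, assuming a 3-channel interleaved buffer):

```cpp
#include <cstddef>
#include <vector>

// Plain-C++ sketch, not oF: inverting an interleaved 3-channel buffer two
// ways. The accessor version recomputes the (x, y) -> index arithmetic for
// every channel of every pixel (the shape of getColor/setColor); the
// iterator version walks the flat buffer once, which is what ofPixels
// iterators let you do.
void invertViaAccessor(std::vector<unsigned char>& px, int w, int h) {
    auto at = [&](int x, int y, int c) -> unsigned char& {
        return px[((std::size_t)y * w + x) * 3 + c]; // index math per access
    };
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            for (int c = 0; c < 3; ++c)
                at(x, y, c) = 255 - at(x, y, c);
}

void invertViaIterator(std::vector<unsigned char>& px) {
    for (auto it = px.begin(); it != px.end(); ++it)
        *it = 255 - *it; // single linear pass
}
```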


#10

Hi Arturo,

I tried having this before: ofPixels & pixels0 = vidGrabber.getPixels();

but then all the delayed buffers were being read at the same time. So to make it work I had one of these for each of pixels0, pixels1 and pixels2:

for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
        color0 = getPixelColor0( x, y );
        pixels0.setColor( x, y, color0 );
    }
}
videoFlipH.setFromPixels( pixels0 );

Having three of these looked redundant and clumsy, but I couldn’t find a way to make the delayedFrames independent.

I’ll work on substituting ofPixels iterators for the getPixelColor calls.

thanks


#11

Hey,
I think that this would be so much easier if you use a circular buffer.
Check this

ofApp.h

#pragma once
#include "ofMain.h"
class circularPixelBuffer{
public:
	circularPixelBuffer(){
		currentIndex = 0;
	}
	void setup(int numFrames){
		frames.resize(numFrames);
		currentIndex = numFrames -1;
	}
	void pushPixels(ofPixels& pix){
		currentIndex--;
		if (currentIndex < 0) {
			currentIndex = frames.size() -1;
		}
		frames[currentIndex] = pix;
	}
	
	ofPixels& getDelayedPixels(size_t delay){
		if(delay < frames.size()){
			return frames[ofWrap(delay + currentIndex, 0, frames.size())];
		}
		return frames[0];
	}
	
protected:
	int currentIndex;
	vector<ofPixels> frames;
};
class ofApp : public ofBaseApp {
public:
	void setup();
	void update();
	void draw();
	ofVideoGrabber vidGrabber;
	int camWidth;
	int camHeight;
	
	int nDelayFrames;
	
	circularPixelBuffer buffer;

};

ofApp.cpp

#include "ofApp.h"
void ofApp::setup(){
	camWidth = 640;  // try to grab at this size.
	camHeight = 480;
	vidGrabber.setDeviceID(0);
	vidGrabber.setDesiredFrameRate(60);
	vidGrabber.initGrabber(camWidth, camHeight);
	ofSetVerticalSync(true);
	
	nDelayFrames = 100; //Set buffer size
	buffer.setup(nDelayFrames);
}

//--------------------------------------------------------------
void ofApp::update(){

	vidGrabber.update();
	
	if(vidGrabber.isFrameNew()){
		buffer.pushPixels(vidGrabber.getPixels());
	}
	
}


//--------------------------------------------------------------
void ofApp::draw(){
	ofSetColor(255);
	
	vidGrabber.draw(20, 20);

	int ind = ofMap(ofGetMouseX(), 0, ofGetWidth(), 0, nDelayFrames-1, true);

	
	ofDrawBitmapStringHighlight(ofToString(ind), 20, camHeight+40);
	
	ofTexture tex;
	tex.loadData(buffer.getDelayedPixels(ind));
	tex.draw(20, camHeight+ 40);
	
}

Hope it helps.
cheers
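The wraparound arithmetic of the buffer above can be isolated and tested in plain C++ by storing ints instead of ofPixels. CircularBuffer here is a hypothetical mirror of circularPixelBuffer, with integer modulo in place of ofWrap:

```cpp
#include <cstddef>
#include <vector>

// Plain-C++ sketch: the index arithmetic of circularPixelBuffer above,
// storing ints instead of ofPixels so the wraparound logic can be tested in
// isolation. push() walks the write index backwards with wrap; getDelayed()
// reads forward from it, so delay == 0 is the newest frame.
class CircularBuffer {
public:
    void setup(int numFrames) {
        frames.assign(numFrames, 0);
        currentIndex = numFrames - 1;
    }
    // Newest value always lands at currentIndex, moving backwards with wrap.
    void push(int value) {
        currentIndex--;
        if (currentIndex < 0) currentIndex = (int)frames.size() - 1;
        frames[currentIndex] = value;
    }
    // delay == 0 is the newest frame, delay == size - 1 the oldest.
    int getDelayed(int delay) const {
        if (delay >= 0 && delay < (int)frames.size())
            return frames[(std::size_t)(currentIndex + delay) % frames.size()];
        return frames[0];
    }
private:
    int currentIndex = 0;
    std::vector<int> frames;
};
```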


#12

Hey Roy.

Thank you! It worked really well!
I’m just working on solving a minor problem that arose, and as soon as I solve it I’ll post the final result here.