ofSerial slow compared to native serial monitor?

Hey, all. I’m pulling a 10-bit value off an analog input of an ATmega168 and writing that value to the serial port. If I use a serial monitor, like the one in the Arduino IDE, I get a very snappy response and no lag. Right now I’m constructing a 49-byte packet (16 three-digit values plus a stop character) and sending it over the serial bus. Initially I’m just trying to emulate the serial monitor in OF, before moving on to a much fancier oscilloscope-like app that reads this custom sensor I’ve built. The sensor functions like a strain gauge. The code on the microcontroller reads the ADC, and when a “trigger” value is reached, the 16 samples are taken, the stop character is added, and the packet is sent.

In the code below (and on the microcontroller) I’ve limited the number of samples to 5, again each 3 digits plus the stop character, as that’s where I notice the first speed hit. In the Arduino IDE serial monitor there is no slowdown; in OF it gets progressively more sluggish the more samples I try to move across the bus.

Any idea where I should start looking? What might I be doing wrong? Most of the code is modified from the serial example.

  
#include "testApp.h"

//--------------------------------------------------------------
void testApp::setup(){

	ofSetVerticalSync(true);

	ofBackground(255,255,255);
	ofSetLogLevel(OF_LOG_NOTICE);

	serial.setup("COM5", 9600); //open the first device

}

//--------------------------------------------------------------
void testApp::update(){
	//ofSetBackgroundAuto(false);

	int nTimesRead  = 0;
	int nBytesRead  = 0;
	int nRead       = 0;  // a temp variable to keep count per read

	unsigned char bytesReturned[16];
	unsigned char bytesReadString[16];

	//clear our buffers
	memset( bytesReadString, 0, 16 );
	memset( bytesReturned, 0, 16 );

	//we read as much as possible so we make sure we get the newest data
	while ( (nRead = serial.readBytes( bytesReturned, 16 )) > 0 ) {
		nTimesRead++;
		nBytesRead = nRead;
	}

	//if we have got all bytes
	if(bytesReturned[15] == 'X') {
		//lets update our buffer
		memcpy( bytesReadString, bytesReturned, 15 );
		printf( "%s", bytesReadString );
		printf( "\n" );
		str = ofToString(bytesReadString);
	}
	//serial.flush();

	int x = ofGetFrameNum();

	point.x = x;
	point.y = ofToFloat(str);

}

If I add a

  
sleep(50);  

delay at the bottom of the update() loop, I get MUCH better behavior. Still wondering if there’s a better way to do what I’m doing.

NovySan

Generally your OF app, or Processing app for that matter, is going to run much faster than anything on a µC, so you’ll either want to read data only when your packet size is reached, i.e.

  
if(serial.available() > 15) {  
// do all your reading  
} else {  
// don't bother  
}  
  

Or, in your case, you can just read and assemble bytes over several update() cycles, because you have a stop character. That just requires making bytesReturned a property of the class, keeping track of the last index you wrote into bytesReturned, and then reading however much is available in the buffer until it’s full.
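
Something along these lines, just as a rough sketch of that second approach; it assumes bytesReturned and an index bytesRead are members of testApp (with bytesRead initialized to 0 in setup()), and keeps the 16-byte packet and 'X' stop character from your code:

  
// in testApp.h (assumed members):  
//   unsigned char bytesReturned[16];  
//   int bytesRead; // how many bytes of the current packet we have so far  
  
void testApp::update(){  
  
	// read whatever is waiting, but never past the end of the packet buffer  
	while ( serial.available() > 0 && bytesRead < 16 ) {  
		int nRead = serial.readBytes( &bytesReturned[bytesRead], 16 - bytesRead );  
		if ( nRead <= 0 ) break; // OF_SERIAL_NO_DATA or OF_SERIAL_ERROR  
		bytesRead += nRead;  
	}  
  
	// once the buffer is full, check the stop character and use the packet  
	if ( bytesRead == 16 ) {  
		if ( bytesReturned[15] == 'X' ) {  
			unsigned char bytesReadString[16];  
			memset( bytesReadString, 0, 16 );  
			memcpy( bytesReadString, bytesReturned, 15 );  
			str = ofToString(bytesReadString);  
		}  
		// either way, start collecting the next packet  
		bytesRead = 0;  
	}  
  
	point.x = ofGetFrameNum();  
	point.y = ofToFloat(str);  
}  

Since nothing here waits on the serial port, update() stays fast and a partial packet simply gets finished on a later frame.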

Also, there may be some oddness on the Windows side, but I’d recommend trying those approaches first before digging in too much more.

Thanks! I’ll look into it and let you know.

NovySan