encode doubles into binary

hey all

i’m trying to encode a vector of doubles into binary (i.e. chars) to send over UDP.
Currently i’m testing with:

  
  
	// reinterpret the double's 8 bytes as raw chars and send them
	double testVal = ofGetSeconds();  
	  
	char *charEncoded = (char *) &testVal;  
	  
	_sender.SendAll(charEncoded, sizeof(double));   // sizeof(double) == 8  
  

when i test this with netcat, i get data coming out in my terminal
(although it’s only showing 6 bytes per message instead of 8, which i presume is because the terminal can’t display the other 2 characters; piping netcat’s output through xxd or hexdump should show all 8)

is there a quick / neat method to do that for a whole vector? (see the memcpy sketch after the list)
i plan for my message format to be:

byte 0 = number of values in the first vector
byte 1 = number of values in the second vector
bytes 2, 3, 4, etc = data from the first vector, then the second
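
for reference, here’s a minimal sketch of packing that layout in one shot with memcpy; the encodeSlices name and the caller-provided buffer are just assumptions for illustration, and it relies on vector storage being contiguous:

	#include <cstring>  
	#include <vector>  
	  
	// pack two vectors of doubles as [countX][countY][dataX...][dataY...]  
	// 'buffer' must hold at least 2 + 8 * (x.size() + y.size()) bytes  
	int encodeSlices(const std::vector<double> &x,  
	                 const std::vector<double> &y,  
	                 unsigned char *buffer)  
	{  
		buffer[0] = (unsigned char) x.size();   // assumes <= 255 values each  
		buffer[1] = (unsigned char) y.size();  
	  
		// &x[0] is valid for non-empty vectors (x.data() in C++11)  
		if (!x.empty())  
			std::memcpy(buffer + 2, &x[0], x.size() * sizeof(double));  
		if (!y.empty())  
			std::memcpy(buffer + 2 + x.size() * sizeof(double),  
			            &y[0], y.size() * sizeof(double));  
	  
		return (int)(2 + (x.size() + y.size()) * sizeof(double));  
	}  

the per-byte copy loops below do the same job; memcpy just collapses each one into a single call.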

here’s my current messy approach:

  
	//message =  
	//[ 0 1 ] [ 2 3 ... ]  
	// 0 = SliceCountX (up to 255)          <--- 255 * 8 = max number of bytes occupied by these slices  
	// 1 = SliceCountY (up to 255)  
	// 2, 3, etc = data for slicesX then slicesY  
	  
	int length = (_SliceCountX + _SliceCountY) * 8 + 2;  
	unsigned char *output = new unsigned char[length];  
	unsigned char *dblchar;  
	double tempdouble;  
	  
	output[0] = (unsigned char) _SliceCountX;  
	output[1] = (unsigned char) _SliceCountY;  
	  
	// write through a separate cursor so 'output' keeps pointing at the start  
	unsigned char *cursor = output + 2;  
	  
	for (int iSlice = 0; iSlice < _SliceCountX; iSlice++) {  
		tempdouble = _valuesX.at(iSlice);  
		dblchar = (unsigned char *) &tempdouble;   // view the double's 8 bytes  
		  
		for (int iByte = 0; iByte < 8; iByte++)  
			cursor[iByte] = dblchar[iByte];  
		  
		cursor += 8;  
	}  
	  
	for (int iSlice = 0; iSlice < _SliceCountY; iSlice++) {  
		tempdouble = _valuesY.at(iSlice);  
		dblchar = (unsigned char *) &tempdouble;  
		  
		for (int iByte = 0; iByte < 8; iByte++)  
			cursor[iByte] = dblchar[iByte];  
		  
		cursor += 8;  
	}  
	  
	_freshDataAvailable = false;  
	  
	memcpy(binary, output, length);  
	delete[] output;   // this allocation was leaking before  
	  
	return length;  
  

since my char string has plenty of ‘\0’s in it, is there any way to print that variable up to its entire length (or a selected length), rather than stopping at the first ‘\0’?
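
for what it’s worth, printf("%s") and cout << both stop at the first NUL, but length-based output doesn’t. a minimal sketch (the function names here are just for illustration):

	#include <cstdio>  
	  
	// write 'length' raw bytes, embedded '\0's included  
	void printRaw(const unsigned char *data, int length) {  
		fwrite(data, 1, length, stdout);  
	}  
	  
	// or dump each byte as hex for a readable view  
	void printHex(const unsigned char *data, int length) {  
		for (int i = 0; i < length; i++)  
			printf("%02x ", data[i]);  
		printf("\n");  
	}  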

had some progress with this.
data seems to be a bit corrupted on the receiving end, where i’m decoding in .NET with:

  
BitConverter.ToDouble()  

it seems the values are kind of OK after decoding but not exact, and anything < 2 comes up completely wrong.

either that means i’m doing something wrong,
or .NET is using a different standard for doubles than C++

There are different formats for encoding floats (and doubles):

http://www.codeproject.com/KB/applicati-…-umber.aspx

However, this post seems to indicate that different C++ compilers will encode the same constant the same way:
http://stackoverflow.com/questions/2085-…-tostringf2

It’s ultimately the processor that works with the floating-point values, though, so if the two machines are different architectures, that may explain the difference.
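
one way to check is to dump the byte pattern of a known constant on both machines and compare. a minimal sketch (both the iPhone’s ARM and x86 Windows should be little-endian IEEE 754, so the bytes ought to match):

	#include <cstdio>  
	  
	// print the byte pattern of a known constant so both ends can compare  
	// on a little-endian IEEE 754 machine, 1.5 should print as:  
	// 00 00 00 00 00 00 f8 3f  
	int main() {  
		double probe = 1.5;  
		const unsigned char *bytes = (const unsigned char *) &probe;  
		for (unsigned i = 0; i < sizeof(probe); i++)  
			printf("%02x ", bytes[i]);  
		printf("\n");  
		return 0;  
	}  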

thanks for the info!

i messed around a bit more and found out that c# was using 7-bit decoding at the other end, which turned out to be the problem (it was only obvious in my examples when there was a negative value in the exponent)

i set it up with my own ‘single floats’ (int16 for the significand, int16 for the exponent), roughly as sketched below, and that worked
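
the idea was something like this, using frexp/ldexp (the exact int16 split here is an assumption for illustration, and it keeps only ~15 bits of precision):

	#include <cmath>  
	  
	// split a double into a 15-bit significand and an exponent, both 16-bit  
	void encodeCustom(double v, short &significand, short &exponent) {  
		int exp;  
		double frac = frexp(v, &exp);            // v == frac * 2^exp, 0.5 <= |frac| < 1  
		significand = (short)(frac * 32767.0);  
		exponent = (short) exp;  
	}  
	  
	double decodeCustom(short significand, short exponent) {  
		return ldexp(significand / 32767.0, exponent);  
	}  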

but i’ve now gone back to system doubles (53-bit significand iirc) and that’s working now
sending from iphone oF, receiving in windows c#

thanks again