accessing variables from another thread causes crashes?

I’m working on an application where lots of particles interact with blobs captured using the ofxOpenCv addon.
To speed things up i made a threaded class that handles the collision between the particles and the blobs. It reads the particles position, compares it with all the points in the blobs and adds a force to the particle towards the nearest point. Both the particles and the contourFinder exist in the main testApp thread, the threaded class only has pointers to them. Everything seems to work just fine but every once in a while (sometimes after a couple minutes, sometimes after an hour or so) the application crashes, and the debugger points to the opencv addon’s contourFinder, more precisely the part where the blobs are added to the blobs vector.
Could it be that the thread is accessing the blobs simultaneously as the contourFinder is writing them, and that is what is causing the application to crash?
this is my first time working with threads, so i’m a bit lost and this is all i could think of, considering i have used the opencv addon many times without problems.

Any help will be much appreciated



I’d be very careful with using a thread to access a vector. In my experience stl containers are not thread safe at all…

(actually, just tonight I was helping JGL with a similar problem: a vector of threaded objects was crashing… putting them into an array fixed it).

you can lock / unlock around the findContours and your thread’s access of the contourFinder. another idea is to copy the ofBlobs into an array per frame. Then, access that array in the thread.
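A rough sketch of the second idea — copying the blob data into a plain shared array once per frame, and letting the thread read only the copy. `Blob`, `publishBlobs`, `readBlobs` and `MAX_BLOBS` are made-up stand-ins for the ofxOpenCv types, and `std::mutex` stands in for openFrameworks' lock/unlock:

```cpp
#include <cstring>
#include <mutex>

struct Blob { float x, y; };           // stand-in for ofxCvBlob data

const int MAX_BLOBS = 64;

std::mutex blobMutex;
Blob blobCopy[MAX_BLOBS];              // plain array shared with the thread
int  blobCount = 0;

// main thread, once per frame, right after contourFinder.findContours():
void publishBlobs(const Blob* found, int n) {
    std::lock_guard<std::mutex> lock(blobMutex);   // lock only for the memcpy
    blobCount = (n < MAX_BLOBS) ? n : MAX_BLOBS;
    std::memcpy(blobCopy, found, blobCount * sizeof(Blob));
}

// collision thread: reads the copy, never touches the contourFinder
int readBlobs(Blob* out) {
    std::lock_guard<std::mutex> lock(blobMutex);
    std::memcpy(out, blobCopy, blobCount * sizeof(Blob));
    return blobCount;
}
```

The lock is held only for the duration of a memcpy, so neither thread ever stalls for long, and the contourFinder's internal vectors are never touched from the worker thread.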

hope that helps!!

generally accessing variables/members across threads is quite tricky. i am working a lot with a lockless ringbuffer to send data from a display thread to an audio thread. this really works well and without mutexing or locking.

i use this short template class:

// lock-free ringbuffer: safe for exactly one writer thread and one reader thread
#define ringbuffer_next(x) (( (x) + 1) % size)
template<class T, int size> class Ringbuffer {
public:
    Ringbuffer(T _zero) { zero = _zero; writePtr = 0; readPtr = 0; for(int i = 0; i < size; i++) buffer[i] = zero; }
    // writer thread only
    bool add(T sg) {
        if(ringbuffer_next(writePtr) == readPtr)
            return false;                        // buffer is full
        buffer[ringbuffer_next(writePtr)] = sg;  // write the data first...
        writePtr = ringbuffer_next(writePtr);    // ...then publish the slot
        return true;
    }
    bool isEmpty() { return readPtr == writePtr; }
    // reader thread only
    T get() {
        if(isEmpty())
            return zero;
        readPtr = ringbuffer_next(readPtr);
        T r = buffer[readPtr];
        buffer[readPtr] = zero;
        return r;
    }
private:
    T buffer[size];
    int writePtr;
    int readPtr;
    T zero;
};
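For reference, here is a compact, self-contained variant of the same single-producer/single-consumer idea with a usage sketch (the names differ from the class above; this is an illustration, not the author's exact code). One slot is sacrificed so that "full" (`next(write) == read`) and "empty" (`read == write`) can be told apart, so a buffer of N slots holds at most N-1 items:

```cpp
// single-producer / single-consumer ring buffer sketch
template<class T, int N>
class RingbufferSketch {
public:
    RingbufferSketch() : writePtr(0), readPtr(0) {}
    // producer thread only
    bool add(const T& v) {
        int next = (writePtr + 1) % N;
        if(next == readPtr) return false;  // full
        buffer[next] = v;                  // write data first...
        writePtr = next;                   // ...then publish the slot
        return true;
    }
    bool isEmpty() const { return readPtr == writePtr; }
    // consumer thread only
    bool get(T& out) {
        if(isEmpty()) return false;
        readPtr = (readPtr + 1) % N;
        out = buffer[readPtr];
        return true;
    }
private:
    T buffer[N];
    int writePtr, readPtr;
};
```

The important constraint is that only one thread ever calls add() and only one thread ever calls get(); with two producers or two consumers the pointer updates race and the structure breaks. (Strictly speaking, a production version also needs atomic pointer loads/stores with ordering guarantees; on the single-core machines of the era plain ints happened to work.)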



that is a very cool data structure haha

lockless ringbuffer = wow. will have to give that a shot.

@pelintra yeah, you have to manually ensure that only one thing is accessing the vector at any one time. the easiest way to do this is to use a ‘mutex’ (short for ‘mutual exclusion’). it’s like the talking stick at a meeting of hippies: there’s only one talking stick (the mutex) and if any of the people (threads) want to talk (access the vector) they need to be holding the talking stick.

so, every time you access the vector from any thread, you have to try and grab the mutex, and if you can’t get it, you have to wait until you can. when you’ve got it, you can read/write; once you’ve finished, you have to release it so that other threads can use it.

this is how i managed the multithreaded OSC receiving in ofxOsc. have a look at addons/ofxOsc/src/ofxOscReceiver.cpp. for example, in the ProcessMessage() method:

	// grab a lock on the queue  
	// add incoming message on to the queue  
	messages.push_back( ofMessage );  
	// release the lock  
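Written out with a standard std::mutex (ofxOsc itself uses its own locking calls; the names below are stand-ins), the same pattern looks like this:

```cpp
#include <deque>
#include <mutex>
#include <string>

std::mutex queueMutex;
std::deque<std::string> messages;   // shared: network thread writes, main thread reads

// called from the network thread
void processMessage(const std::string& msg) {
    queueMutex.lock();              // grab a lock on the queue
    messages.push_back(msg);        // add incoming message on to the queue
    queueMutex.unlock();            // release the lock
}

// called from the main thread
bool getNextMessage(std::string& out) {
    std::lock_guard<std::mutex> lock(queueMutex);  // RAII: released on return
    if(messages.empty()) return false;
    out = messages.front();
    messages.pop_front();
    return true;
}
```

Note that the lock guards every access to the deque, on both sides; protecting only the writer would not help.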

hope this helps…

Hi. thanks for the help.

@damian, thanks for the tip, i think i finally understand what a mutex and its lock/unlock do.
i tried wrapping the contourFinder in lock()/unlock() calls, and it works, all threads wait for each other to finish, but the framerate drops a lot!!! i get about 8fps

@zach i ended up doing what you said about copying everything into arrays, and substituted every std::vector that was being accessed in the threaded class with arrays, and so far so good, i’ve been testing the application for a couple of hours now without any crashes :stuck_out_tongue:

yeah, that’ll happen. ideally you want to be inside lock()/unlock() blocks for the minimum amount of time possible. try doing the contourFinder stuff on a local copy of the data, and once all the processing is complete grab a lock on the _shared_ data and copy from local to shared. hope that makes sense…
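A minimal sketch of that pattern, with hypothetical names and std::mutex standing in for openFrameworks' lock/unlock: the worker does all the heavy math on purely local data and only holds the lock for the final cheap copy into the shared buffer.

```cpp
#include <mutex>
#include <vector>

std::mutex sharedMutex;
std::vector<float> sharedForces;   // written by the worker, read by the main thread

void workerStep(const std::vector<float>& input) {
    // 1. heavy processing on a local copy -- no lock held here
    std::vector<float> local(input.size());
    for(std::size_t i = 0; i < input.size(); i++)
        local[i] = input[i] * 2.0f;          // placeholder for the collision math

    // 2. lock only for the quick copy into the shared buffer
    std::lock_guard<std::mutex> lock(sharedMutex);
    sharedForces = local;
}
```

The main thread does the mirror image: lock, copy sharedForces into its own local buffer, unlock, then use the local copy freely.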


Hey Rui, one of the big common dangers of multithreading is that once you fix all the crashing and freezing with mutexes, your threads may end up not running simultaneously at all! (which is what’s happening in your case)
I have a big rant about it here…/813/0

What you want to do (in my experience) is have the contourfinder process info for the next frame while your other classes process/draw the results the contourfinder produced for the last frame. Then wait for both threads to finish and sync (the contourfinder feeds its results into your other thread). I explain it a bit more, with the code I went for, in the thread I linked to above.

This is the method I used on the glastonbury pi project (optical flow on 6 cameras + fluid dynamics + thousands of particles etc.) and it’s pretty stable. All 8 threads run in parallel; when they are all done, they exchange numbers, then the main::update() continues and then so does the main::draw(). There may be more optimal ways (e.g. in my solution update() + draw() don’t continue till all the other threads are done. There could be smarter solutions which allow some threads - e.g. vision - to run even while main::draw is running… but my brain was fried by then so I just went with what worked…).

Hope that helps!

The thread I linked to is quite long and has loads of crap regarding other thread-related crashes I was having all over the place, so it’s really just the 1st post (to set the context) and the last post (solution) that are relevant…

one thing I recommend is to really stress test the thing that caused the crash – add random sleeps, call functions multiple times (even hundreds of times), etc – since crashes are often timing related and can take hours to emerge. It’s good to try to make it happen quicker

about STL and thread safety - there is a lot of info if you poke around:

Standard synchronization rules apply–multiple simultaneous container reads are fine but if you’re going to be writing then you will need to synchronize access.

I do think you’ll want to mutex somewhere, if you are both writing and reading, but damian is right, you’ll see best results to cut the mutex just down to memcpy to and from a shared copy, and to keep the processing down to a minimum.

also, the keyword volatile can be helpful - there is info here:…-hared-data

also, with damian’s approach, I think you might also look at how the thread you created is timed. How often does it run compared to the testApp update? it might need to be tuned a bit to get fewer collisions…

  • zach

Hi. thanks again for the replies. it’s been really helpful.

i have one question though: must ALL shared data be accessed/written using mutex locks/unlocks? i’m talking about ints, booleans, primitive data types… or are these types thread safe?

@Damian, yes, using mutex lock/unlock only where it is strictly needed helps a lot, i notice the particles are a little less responsive but nothing too noticeable.
The way to solve this would be to redesign the program so it has less data being passed around, fewer times. I want to try to have all threads keep a copy of the data they need to work on and only use the mutex while they copy the data.
something like

//make a local copy of the data...  
//do all the calculations....  
//copy data back into the main thread....  

that should work a lot better than what i have now

@memo, i had read that thread before, but never got to the end of it :stuck_out_tongue:
the class that calculates the collisions is doing really heavy calculations. it’s running a lot slower than the main testApp thread, so i don’t think having that wait for the collisions to be ready would be a good idea. But since the particles have their positions updated in the main testApp thread, and they also get random velocity added, it hides the fact that they are not being very responsive. it looks really cool nonetheless… :slight_smile:

I am getting memory crashes when using a queue or vector in a thread.

Is it generally better to use a fixed array than these?

This might be of interest…

Creating a thread safe producer consumer queue in C++ without using locks…-locks.aspx

ThreadSafe Vector Access?…-07516.html

again, I think all STL container stuff is MEGA unsafe across threads, unless you get a version that’s specified to be thread safe. I’ve gotten burnt from this. Switching to arrays helps a great deal.

take care!

So, the problem with dynamic memory structures and multithreading is usually the index/size bookkeeping.

I suppose chris is talking about the issues with the http utils. reviewing the code, I think the problem is something like this:

Imagine there’s a vector with 4 elements, a thread that adds a new one, and another thread that removes one. The first thread adds the new element and internally the vector reads the size in order to increment it. The size is 4, so after the increment it should be 5; meanwhile the other thread removes an element and decrements the size to 3. The first thread then writes back the result of its increment, 5, when it should be 4. From here on the vector is corrupted and at some point you are going to access a position of the vector that “doesn’t exist”.
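That interleaving can be spelled out with a plain int — no real threads involved, just thread A's read and write with thread B's decrement landing in between:

```cpp
// Simulating the lost-update race described above. Thread A (push_back)
// wants size = size + 1; thread B (pop_back) wants size = size - 1.
// If B runs between A's read and A's write, A's stale value wins.
int simulateLostUpdate() {
    int vecSize = 4;           // the vector holds 4 elements
    int aRead   = vecSize;     // thread A reads size: 4
    vecSize     = vecSize - 1; // thread B removes an element: size is now 3
    vecSize     = aRead + 1;   // thread A writes back its stale 4 + 1
    return vecSize;            // 5 -- but only 4 elements really exist
}
```

The correct final size after one add and one remove is 4; the interleaved version reports 5, so the container now "sees" an element that was never written.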

Yup, the problem with threads and STL containers or other non-trivial data structures is that these objects have to keep an internal state which is managed through calls to the class’s methods, which are not thread-safe.

The solution here is either to wrap your STL data structure in a class which synchronizes access to the underlying object, or to use thread-safe data structures like the lock-free ringbuffer or techniques like double-buffering your data.
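The first option — wrapping the container so that every access takes a lock, and callers can't forget to — can be as small as this sketch (a hypothetical class, using std::mutex):

```cpp
#include <mutex>
#include <vector>

// Every public method takes the lock before touching the vector,
// so the internal size/index bookkeeping is never seen half-updated.
template<class T>
class SafeVector {
public:
    void push(const T& v) {
        std::lock_guard<std::mutex> lock(m);
        data.push_back(v);
    }
    bool pop(T& out) {                 // LIFO pop; returns false when empty
        std::lock_guard<std::mutex> lock(m);
        if(data.empty()) return false;
        out = data.back();
        data.pop_back();
        return true;
    }
    std::size_t size() {
        std::lock_guard<std::mutex> lock(m);
        return data.size();
    }
private:
    std::mutex m;
    std::vector<T> data;
};
```

One caveat with this design: compound operations (e.g. "check size, then pop") still need an external lock, because another thread can slip in between the two calls.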


how about sharing arrays? just regular arrays.
I was reading somewhere online that the problem is with sharing data that is multibyte, so for example, if 2 threads are reading and writing to that data it may happen that one is reading while the other one didn’t have enough time to write all the bytes, there fore it will read gibberish and crash. So i assume sharing unsigned chars should be ok? since they only have one byte. and i also read that sharing ints is ok as well (not sure if this is true as ints have 32 bytes). so my question is if its ok to share arrays of bytes, say pixel information (arrays of unsigned chars)? how about arrays of floats or other primitive data types?



i would not use arrays but maybe it’s possible.


The problem is not so much the data type but what you do with it. if you read some data to, for example, draw it, it doesn’t matter if it’s not complete: you’ll get an incorrect image, or, if reading from video, the image being drawn will be half of one frame and half of the next, but it won’t crash.

If you use the shared data to make a jump, as the condition in an if or a for, or to access a memory position as in the case of the vector size/index, you’ll have problems.

Using locks you ensure that the data you’re reading is not modified in the meantime.

it would be ok to share an array of unsigned bytes, yes. an array of floats, no. a float is a 32-bit data structure, which means it is made up of four bytes. there’s no way of knowing, in a multi-threaded environment, what will happen if you issue a read and a write to the same memory location at the same time: either operation may be interrupted halfway through.

with anything that has a >1 byte granularity, all bets are off. (actually, strictly speaking, with a 1 byte granularity all bets are still off, because perhaps your CPU architecture has a 4-bit processing pipeline… haha).
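This is also the answer to the earlier question about sharing plain ints: in modern C++ a single primitive value shared between threads should be a std::atomic (not available when this thread was written), which guarantees the whole value is read or written in one indivisible step and never torn:

```cpp
#include <atomic>

std::atomic<int> frameCount(0);   // safe to touch from any thread

// any thread can do:
void onFrame() {
    frameCount.fetch_add(1);      // atomic read-modify-write, no lost updates
}

int currentFrame() {
    return frameCount.load();     // atomic read, never half-written
}
```

For single values this replaces both the mutex and the (unreliable) volatile trick; for whole structures you still need a mutex or a queue.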

and to further comment on the STL stuff, the STL interface doesn’t actually say _how_ data should be stored. for example, most implementations of vector allocate an initial number of slots for items (say 16), and when this is full, they allocate a whole new block of memory that is twice as large (32 slots) and copy the existing contents into the new array. (this is why it’s always a good idea to .reserve() space on your vector if you’re going to be adding to/removing from it a lot). so, the threading issue is more complicated than just index and size issues.
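The reallocation behaviour is easy to observe: an unreserved vector moves its storage repeatedly as it grows, while reserve() guarantees the storage stays put until the reserved capacity is exceeded. A small demonstration (standard C++, not tied to the forum code):

```cpp
#include <vector>

// counts how many times the vector's capacity changed (each change is a
// reallocation that copies every existing element to new memory)
int countReallocations(int n) {
    std::vector<int> v;
    std::vector<int>::size_type lastCap = v.capacity();
    int reallocs = 0;
    for(int i = 0; i < n; i++) {
        v.push_back(i);
        if(v.capacity() != lastCap) { reallocs++; lastCap = v.capacity(); }
    }
    return reallocs;
}

// with reserve() up front, the storage pointer never moves
bool reserveKeepsStorageStable(int n) {
    std::vector<int> v;
    v.reserve(n);                  // one allocation, up front
    const int* before = v.data();
    for(int i = 0; i < n; i++) v.push_back(i);
    return v.data() == before;     // true: no reallocation happened
}
```

Each of those reallocations is a window where another thread holding a stale pointer or iterator into the old block reads freed memory, which is exactly the kind of intermittent crash described at the top of this thread.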

this thread is amazing! thanks damian for the reply, at least i know i can share pixel information from one thread to the other.
but for my particular application, the one with all the particles, i came to a solution: i made it single threaded!!! hahaha
now, instead of all particles searching for the closest point in the blobs, they follow a predefined blob point! the results are very similar and it’s fast enough to run single threaded. And i have soooo much stuff to do, and so little time, i can’t afford to waste any more time on this (unfortunately)

i’ve learned that with threads comes great power, but that power is hard to tame! haha

I’ve changed ofxHttpUtils to use fixed size arrays instead of vectors, but I am still getting memory read or write crash alert boxes.

Do you have to lock and unlock even with ints? Maybe something else is breaking it.