Does that resolve the problem or is there still a race condition?
Have not tested yet, but will now.
At first glance, it looks like you changed the semaphore_info struct to always contain a reference count, and in DestroySemaphoreInfo you copy the semaphore_info pointer, null the original, and then unlock the mutex as often as necessary before deleting it.
So, if the mutex was previously locked by the same thread that calls DestroySemaphoreInfo, the unlocking should work fine, but it is unfortunately still not safe against race conditions with AcquireSemaphoreInfo:
There, the global semaphore_mutex is unlocked before the call to LockSemaphoreInfo, which might then bail out with semaphore_info == NULL...
Also, any other thread currently waiting on the semaphore_info (inside pthread_mutex_lock(&semaphore_info->mutex)) might relock the mutex at the same moment the last iteration of the unlocking while-loop finishes, or have its semaphore deleted while it is still waiting on it.
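To make the two windows concrete, here is a minimal single-file sketch of the pattern described above. The structs and function bodies are hypothetical simplified stand-ins, not the actual MagickCore code; the comments mark where the races can strike.

```c
#include <pthread.h>
#include <stdlib.h>

/* Hypothetical simplified stand-in for the MagickCore structure. */
typedef struct _SemaphoreInfo
{
  pthread_mutex_t mutex;
  long lock_count;        /* how often the mutex is currently locked */
} SemaphoreInfo;

static pthread_mutex_t semaphore_mutex = PTHREAD_MUTEX_INITIALIZER;

static SemaphoreInfo *AllocateSemaphoreInfo(void)
{
  SemaphoreInfo *info = calloc(1, sizeof(*info));
  pthread_mutex_init(&info->mutex, NULL);
  return info;
}

static void LockSemaphoreInfo(SemaphoreInfo *info)
{
  if (info == NULL)
    return;                /* the bail-out mentioned above */
  pthread_mutex_lock(&info->mutex);
  info->lock_count++;
}

static void AcquireSemaphoreInfo(SemaphoreInfo **info)
{
  pthread_mutex_lock(&semaphore_mutex);
  if (*info == NULL)
    *info = AllocateSemaphoreInfo();
  pthread_mutex_unlock(&semaphore_mutex);
  /* RACE WINDOW 1: another thread may run DestroySemaphoreInfo right
   * here, so the next call can see *info == NULL and bail out. */
  LockSemaphoreInfo(*info);
}

static void DestroySemaphoreInfo(SemaphoreInfo **info)
{
  SemaphoreInfo *p;

  pthread_mutex_lock(&semaphore_mutex);
  p = *info;
  *info = NULL;
  pthread_mutex_unlock(&semaphore_mutex);
  if (p == NULL)
    return;
  while (p->lock_count-- > 0)
    pthread_mutex_unlock(&p->mutex);
  /* RACE WINDOW 2: a thread still blocked in pthread_mutex_lock on
   * p->mutex may relock it right now, and destroying a mutex that is
   * still being waited on is undefined behaviour. */
  pthread_mutex_destroy(&p->mutex);
  free(p);
}
```

Single-threaded, the sequence works as intended; the point is only that nothing in it prevents the two interleavings marked above.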
So it looks like I didn't think my initial proposal through to the end.
Having LockSemaphoreInfo grab the global semaphore_mutex to work around this seems like a very bad idea: you'd end up with some sort of "Big Kernel Lock", preventing most parallel execution or creating deadlocks.
Back to the drawing board:
The root cause of the problem is that DestroySemaphoreInfo gets called while there might still be threads using the semaphore.
So, would it be an option to simply NOT destroy the wand_semaphore each time the count of active wands reaches 0?
Deleting and re-creating the wand_ids SplayTree each time would still be possible, but depending on the application it might be faster to simply keep it alive as well.
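A sketch of what that keep-alive approach could look like. The names (wand_semaphore, active_wands, AcquireWandId, RelinquishWandId) are hypothetical stand-ins for the MagickWand internals, and a plain counter stands in for the wand_ids splay tree:

```c
#include <pthread.h>
#include <stddef.h>

/* Created once, never destroyed while the program runs. */
static pthread_mutex_t wand_semaphore = PTHREAD_MUTEX_INITIALIZER;
static size_t active_wands = 0;   /* stand-in for the wand_ids tree */

static size_t AcquireWandId(void)
{
  size_t id;

  pthread_mutex_lock(&wand_semaphore);
  id = ++active_wands;            /* stand-in for the splay-tree insert */
  pthread_mutex_unlock(&wand_semaphore);
  return id;
}

static void RelinquishWandId(void)
{
  pthread_mutex_lock(&wand_semaphore);
  if (active_wands > 0)
    active_wands--;
  /* Even when active_wands reaches 0, wand_semaphore is NOT destroyed,
   * so a concurrent AcquireWandId can never race against its teardown. */
  pthread_mutex_unlock(&wand_semaphore);
}
```

The design point is simply that a lock which is never destroyed cannot be destroyed out from under a waiter, which removes the race by construction rather than by cleverer locking.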
On program termination, there'd now be a leak of a few bytes for the semaphore_info and a (hopefully empty) splay tree, as well as a pthread_mutex_t.
To clean them up, an approach like the DestroyPixelCacheResources function from magick/cache.c, called from MagickCoreTerminus, could be used; or would the extra exported function break binary compatibility?
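For illustration, such a terminus-time cleanup could look roughly like the following. The function names are hypothetical (modeled on the DestroyPixelCacheResources idea, not taken from the actual sources), and it assumes what MagickCoreTerminus already guarantees: at that point no other thread is using the library anymore.

```c
#include <pthread.h>
#include <stdlib.h>

/* Hypothetical long-lived resource, kept alive for the whole run. */
static pthread_mutex_t *wand_semaphore = NULL;

static void InitWandResources(void)
{
  if (wand_semaphore == NULL)
    {
      wand_semaphore = malloc(sizeof(*wand_semaphore));
      pthread_mutex_init(wand_semaphore, NULL);
    }
}

static void DestroyWandResources(void)
{
  /* Safe only because terminus runs single-threaded: no thread can be
   * holding or waiting on wand_semaphore at this point. */
  if (wand_semaphore != NULL)
    {
      pthread_mutex_destroy(wand_semaphore);
      free(wand_semaphore);
      wand_semaphore = NULL;
    }
}
```

Making the function idempotent (safe to call twice) keeps terminus ordering simple; whether exporting it breaks binary compatibility is the open question above.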