
A slightly more correct fix for issues with reordering of unlock calls. We
want to flush all pending writes (i.e., the data being protected) out of the
memory manager before we write the spinlock unlock. Only a wmb is needed
instead of a full mb, which is at least slightly less intrusive. Also, after
much thought, no memory barrier is needed in init.

This commit was SVN r13649.
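
For illustration, here is a minimal standalone sketch of the same barrier
placement using C11 atomics. All names here (sketch_lock_t, sketch_lock,
sketch_unlock, protected_data) are hypothetical and this is not Open MPI's
actual opal_atomic_* implementation; a C11 release fence is the closest
portable analogue to the wmb used in the real code.

#include <stdatomic.h>
#include <stdio.h>

/* Hypothetical spinlock type, for illustration only. */
typedef struct { atomic_int lock; } sketch_lock_t;

static int protected_data;   /* the data guarded by the lock */

static inline void sketch_lock(sketch_lock_t *l)
{
    /* Spin until we swap 0 -> 1.  Acquire ordering keeps the
     * critical section's reads and writes from moving above the
     * lock acquisition. */
    while (atomic_exchange_explicit(&l->lock, 1, memory_order_acquire))
        ;
}

static inline void sketch_unlock(sketch_lock_t *l)
{
    /* A write/release barrier is enough here: it only has to flush
     * the pending writes to the protected data before the store
     * that releases the lock.  A full barrier would additionally
     * order later reads, which the unlock path does not need. */
    atomic_thread_fence(memory_order_release);
    atomic_store_explicit(&l->lock, 0, memory_order_relaxed);
}

int main(void)
{
    /* Plain initialization; no barrier is needed because the lock
     * cannot be visible to other threads before it is published,
     * matching the commit's observation about init. */
    sketch_lock_t l = { 0 };

    sketch_lock(&l);
    protected_data = 42;
    sketch_unlock(&l);

    printf("%d\n", protected_data);
    return 0;
}

On strongly ordered CPUs such as x86 the release fence typically costs only a
compiler barrier, while on weakly ordered CPUs (e.g. POWER) it emits an actual
store fence; that asymmetry is why a wmb is cheaper than a full mb.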
This commit is contained in:
Brian Barrett 2007-02-14 16:37:31 +00:00
parent 2483cefc57
commit 2a16c094f7


@@ -337,7 +337,6 @@ static inline void
 opal_atomic_init( opal_atomic_lock_t* lock, int value )
 {
     lock->u.lock = value;
-    opal_atomic_mb();
 }
@@ -364,12 +363,8 @@ opal_atomic_lock(opal_atomic_lock_t *lock)
 static inline void
 opal_atomic_unlock(opal_atomic_lock_t *lock)
 {
-    /*
-    opal_atomic_cmpset_rel( &(lock->u.lock),
-                            OPAL_ATOMIC_LOCKED, OPAL_ATOMIC_UNLOCKED);
-    */
+    opal_atomic_wmb();
     lock->u.lock=OPAL_ATOMIC_UNLOCKED;
-    opal_atomic_mb();
 }
 
 #endif /* OPAL_HAVE_ATOMIC_SPINLOCKS */