Import BSDDB 4.7.25 (as of svn r89086)
110	mutex/README	Normal file
@@ -0,0 +1,110 @@
# $Id: README,v 12.1 2005/07/20 16:51:55 bostic Exp $

Note: this only applies to locking using test-and-set and fcntl calls,
pthreads were added after this was written.

Resource locking routines: lock based on a DB_MUTEX.  All this gunk
(including trying to make assembly code portable) is necessary because
System V semaphores require system calls for uncontested locks and we
don't want to make two system calls per resource lock.

First, this is how it works.  The DB_MUTEX structure contains a resource
test-and-set lock (tsl), a file offset, a pid for debugging and statistics
information.

If HAVE_MUTEX_FCNTL is NOT defined (that is, we know how to do
test-and-sets for this compiler/architecture combination), we try and
lock the resource tsl some number of times (based on the number of
processors).  If we can't acquire the mutex that way, we use a system
call to sleep for 1ms, 2ms, 4ms, etc.  (The time is bounded at 10ms for
mutexes backing logical locks and 25ms for data structures, just in
case.)  Using the timer backoff means that there are two assumptions:
that mutexes are held for brief periods (never over system calls or I/O)
and mutexes are not hotly contested.

If HAVE_MUTEX_FCNTL is defined, we use a file descriptor to do byte
locking on a file at a specified offset.  In this case, ALL of the
locking is done in the kernel.  Because file descriptors are allocated
per process, we have to provide the file descriptor as part of the lock
call.  We still have to do timer backoff because we need to be able to
block ourselves, that is, the lock manager causes processes to wait by
having the process acquire a mutex and then attempting to re-acquire the
mutex.  There's no way to use kernel locking to block yourself, that is,
if you hold a lock and attempt to re-acquire it, the attempt will
succeed.

Next, let's talk about why it doesn't work the way a reasonable person
would think it should work.

Ideally, we'd have the ability to try to lock the resource tsl, and if
that fails, increment a counter of waiting processes, then block in the
kernel until the tsl is released.  The process holding the resource tsl
would see the wait counter when it went to release the resource tsl, and
would wake any waiting processes up after releasing the lock.  This would
actually require both another tsl (call it the mutex tsl) and
synchronization between the call that blocks in the kernel and the actual
resource tsl.  The mutex tsl would be used to protect accesses to the
DB_MUTEX itself.  Locking the mutex tsl would be done by a busy loop,
which is safe because processes would never block holding that tsl (all
they would do is try to obtain the resource tsl and set/check the wait
count).  The problem in this model is that the blocking call into the
kernel requires a blocking semaphore, i.e. one whose normal state is
locked.

The only portable forms of locking under UNIX are fcntl(2) on a file
descriptor/offset, and System V semaphores.  Neither of these locking
methods is sufficient to solve the problem.

The problem with fcntl locking is that only the process that obtained the
lock can release it.  Remember, we want the normal state of the kernel
semaphore to be locked.  So, if the creator of the DB_MUTEX were to
initialize the lock to "locked", then a second process locks the resource
tsl, and then a third process needs to block, waiting for the resource
tsl; when the second process wants to wake up the third process, it can't
because it's not the holder of the lock!  For the second process to be
the holder of the lock, we would have to make a system call per
uncontested lock, which is what we were trying to get away from in the
first place.

There are some hybrid schemes, such as signaling the holder of the lock,
or using a different blocking offset depending on which process is
holding the lock, but it gets complicated fairly quickly.  I'm open to
suggestions, but I'm not holding my breath.

Regardless, we use this form of locking when we don't have any other
choice, because it doesn't have the limitations found in System V
semaphores, and because the normal state of the kernel object in that
case is unlocked, so the process releasing the lock is also the holder
of the lock.

The System V semaphore design has a number of other limitations that make
it inappropriate for this task.  Namely:

First, the semaphore key name space is separate from the file system name
space (although there exist methods for using file names to create
semaphore keys).  If we use a well-known key, there's no reason to believe
that any particular key will not already be in use, either by another
instance of the DB application or some other application, in which case
the DB application will fail.  If we create a key, then we have to use a
file system name to rendezvous and pass around the key.

Second, System V semaphores traditionally have compile-time, system-wide
limits on the number of semaphore keys that you can have.  Typically, that
number is far too low for any practical purpose.  Since the semaphores
permit more than a single slot per semaphore key, we could try and get
around that limit by using multiple slots, but that means that the file
that we're using for rendezvous is going to have to contain slot
information as well as semaphore key information, and we're going to be
reading/writing it on every db_mutex_t init or destroy operation.  Anyhow,
similar compile-time, system-wide limits on the numbers of slots per
semaphore key kick in, and you're right back where you started.

My fantasy is that once POSIX.1 standard mutexes are in widespread use,
we can switch to them.  My guess is that it won't happen, because the
POSIX semaphores are only required to work for threads within a process,
and not independent processes.

Note: there are races in the statistics code, but since it's just that,
I didn't bother fixing them.  (The fix requires a mutex tsl, so, when/if
this code is fixed to do rational locking (see above), then change the
statistics update code to acquire/release the mutex tsl.)
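A minimal sketch of the timer-backoff acquisition loop described above, in C.
The helpers tsl_try_set() and sleep_ms() are hypothetical stand-ins; the real
code uses architecture-specific test-and-set macros, scales the spin count by
the processor count, and sleeps via __os_yield().  The key property is the one
the note above relies on: the uncontested path never enters the kernel.

#define	SPINS		50	/* scaled by the CPU count in the real code */
#define	MAX_BACKOFF_MS	25	/* 10ms for logical locks, 25ms for data structures */

/*
 * Spin on the resource test-and-set lock a bounded number of times, then
 * sleep 1ms, 2ms, 4ms, ... up to the bound, and start spinning again.
 */
static void
resource_lock(volatile int *tsl)
{
	int i, ms;

	for (ms = 1;;) {
		for (i = 0; i < SPINS; ++i)
			if (tsl_try_set(tsl))	/* hypothetical test-and-set */
				return;
		sleep_ms(ms);			/* hypothetical timed sleep */
		if ((ms <<= 1) > MAX_BACKOFF_MS)
			ms = MAX_BACKOFF_MS;
	}
}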
237	mutex/mut_alloc.c	Normal file
@@ -0,0 +1,237 @@
/*-
 * See the file LICENSE for redistribution information.
 *
 * Copyright (c) 1999,2008 Oracle.  All rights reserved.
 *
 * $Id: mut_alloc.c 63573 2008-05-23 21:43:21Z trent.nelson $
 */

#include "db_config.h"

#include "db_int.h"
#include "dbinc/mutex_int.h"

/*
 * __mutex_alloc --
 *	Allocate a mutex from the mutex region.
 *
 * PUBLIC: int __mutex_alloc __P((ENV *, int, u_int32_t, db_mutex_t *));
 */
int
__mutex_alloc(env, alloc_id, flags, indxp)
	ENV *env;
	int alloc_id;
	u_int32_t flags;
	db_mutex_t *indxp;
{
	int ret;

	/* The caller may depend on us to initialize. */
	*indxp = MUTEX_INVALID;

	/*
	 * If this is not an application lock, and we've turned off locking,
	 * or the ENV handle isn't thread-safe, and this is a thread lock
	 * or the environment isn't multi-process by definition, there's no
	 * need to mutex at all.
	 */
	if (alloc_id != MTX_APPLICATION &&
	    (F_ISSET(env->dbenv, DB_ENV_NOLOCKING) ||
	    (!F_ISSET(env, ENV_THREAD) &&
	    (LF_ISSET(DB_MUTEX_PROCESS_ONLY) ||
	    F_ISSET(env, ENV_PRIVATE)))))
		return (0);

	/* Private environments never share mutexes. */
	if (F_ISSET(env, ENV_PRIVATE))
		LF_SET(DB_MUTEX_PROCESS_ONLY);

	/*
	 * If we have a region in which to allocate the mutexes, lock it and
	 * do the allocation.
	 */
	if (MUTEX_ON(env))
		return (__mutex_alloc_int(env, 1, alloc_id, flags, indxp));

	/*
	 * We have to allocate some number of mutexes before we have a region
	 * in which to allocate them.  We handle this by saving up the list of
	 * flags and allocating them as soon as we have a handle.
	 *
	 * The list of mutexes to alloc is maintained in pairs: first the
	 * alloc_id argument, second the flags passed in by the caller.
	 */
	if (env->mutex_iq == NULL) {
		env->mutex_iq_max = 50;
		if ((ret = __os_calloc(env, env->mutex_iq_max,
		    sizeof(env->mutex_iq[0]), &env->mutex_iq)) != 0)
			return (ret);
	} else if (env->mutex_iq_next == env->mutex_iq_max - 1) {
		env->mutex_iq_max *= 2;
		if ((ret = __os_realloc(env,
		    env->mutex_iq_max * sizeof(env->mutex_iq[0]),
		    &env->mutex_iq)) != 0)
			return (ret);
	}
	*indxp = env->mutex_iq_next + 1;	/* Correct for MUTEX_INVALID. */
	env->mutex_iq[env->mutex_iq_next].alloc_id = alloc_id;
	env->mutex_iq[env->mutex_iq_next].flags = flags;
	++env->mutex_iq_next;

	return (0);
}

/*
 * __mutex_alloc_int --
 *	Internal routine to allocate a mutex.
 *
 * PUBLIC: int __mutex_alloc_int
 * PUBLIC:	__P((ENV *, int, int, u_int32_t, db_mutex_t *));
 */
int
__mutex_alloc_int(env, locksys, alloc_id, flags, indxp)
	ENV *env;
	int locksys, alloc_id;
	u_int32_t flags;
	db_mutex_t *indxp;
{
	DB_ENV *dbenv;
	DB_MUTEX *mutexp;
	DB_MUTEXMGR *mtxmgr;
	DB_MUTEXREGION *mtxregion;
	int ret;

	dbenv = env->dbenv;
	mtxmgr = env->mutex_handle;
	mtxregion = mtxmgr->reginfo.primary;
	ret = 0;

	/*
	 * If we're not initializing the mutex region, then lock the region to
	 * allocate new mutexes.  Drop the lock before initializing the mutex,
	 * mutex initialization may require a system call.
	 */
	if (locksys)
		MUTEX_SYSTEM_LOCK(env);

	if (mtxregion->mutex_next == MUTEX_INVALID) {
		__db_errx(env,
		    "unable to allocate memory for mutex; resize mutex region");
		if (locksys)
			MUTEX_SYSTEM_UNLOCK(env);
		return (ENOMEM);
	}

	*indxp = mtxregion->mutex_next;
	mutexp = MUTEXP_SET(*indxp);
	DB_ASSERT(env,
	    ((uintptr_t)mutexp & (dbenv->mutex_align - 1)) == 0);
	mtxregion->mutex_next = mutexp->mutex_next_link;

	--mtxregion->stat.st_mutex_free;
	++mtxregion->stat.st_mutex_inuse;
	if (mtxregion->stat.st_mutex_inuse > mtxregion->stat.st_mutex_inuse_max)
		mtxregion->stat.st_mutex_inuse_max =
		    mtxregion->stat.st_mutex_inuse;
	if (locksys)
		MUTEX_SYSTEM_UNLOCK(env);

	/* Initialize the mutex. */
	memset(mutexp, 0, sizeof(*mutexp));
	F_SET(mutexp, DB_MUTEX_ALLOCATED |
	    LF_ISSET(DB_MUTEX_LOGICAL_LOCK | DB_MUTEX_PROCESS_ONLY));

	/*
	 * If the mutex is associated with a single process, set the process
	 * ID.  If the application ever calls DbEnv::failchk, we'll need the
	 * process ID to know if the mutex is still in use.
	 */
	if (LF_ISSET(DB_MUTEX_PROCESS_ONLY))
		dbenv->thread_id(dbenv, &mutexp->pid, NULL);

#ifdef HAVE_STATISTICS
	mutexp->alloc_id = alloc_id;
#else
	COMPQUIET(alloc_id, 0);
#endif

	if ((ret = __mutex_init(env, *indxp, flags)) != 0)
		(void)__mutex_free_int(env, locksys, indxp);

	return (ret);
}

/*
 * __mutex_free --
 *	Free a mutex.
 *
 * PUBLIC: int __mutex_free __P((ENV *, db_mutex_t *));
 */
int
__mutex_free(env, indxp)
	ENV *env;
	db_mutex_t *indxp;
{
	/*
	 * There is no explicit ordering in how the regions are cleaned up
	 * and/or discarded when an environment is destroyed (either a
	 * private environment is closed or a public environment is removed).
	 * The way we deal with mutexes is to clean up all remaining mutexes
	 * when we close the mutex environment (because we have to be able to
	 * do that anyway, after a crash), which means we don't have to deal
	 * with region cleanup ordering on normal environment destruction.
	 * All that said, what it really means is we can get here without a
	 * mpool region.  It's OK, the mutex has been, or will be, destroyed.
	 *
	 * If the mutex has never been configured, we're done.
	 */
	if (!MUTEX_ON(env) || *indxp == MUTEX_INVALID)
		return (0);

	return (__mutex_free_int(env, 1, indxp));
}

/*
 * __mutex_free_int --
 *	Internal routine to free a mutex.
 *
 * PUBLIC: int __mutex_free_int __P((ENV *, int, db_mutex_t *));
 */
int
__mutex_free_int(env, locksys, indxp)
	ENV *env;
	int locksys;
	db_mutex_t *indxp;
{
	DB_MUTEX *mutexp;
	DB_MUTEXMGR *mtxmgr;
	DB_MUTEXREGION *mtxregion;
	db_mutex_t mutex;
	int ret;

	mutex = *indxp;
	*indxp = MUTEX_INVALID;

	mtxmgr = env->mutex_handle;
	mtxregion = mtxmgr->reginfo.primary;
	mutexp = MUTEXP_SET(mutex);

	DB_ASSERT(env, F_ISSET(mutexp, DB_MUTEX_ALLOCATED));
	F_CLR(mutexp, DB_MUTEX_ALLOCATED);

	ret = __mutex_destroy(env, mutex);

	if (locksys)
		MUTEX_SYSTEM_LOCK(env);

	/* Link the mutex on the head of the free list. */
	mutexp->mutex_next_link = mtxregion->mutex_next;
	mtxregion->mutex_next = mutex;
	++mtxregion->stat.st_mutex_free;
	--mtxregion->stat.st_mutex_inuse;

	if (locksys)
		MUTEX_SYSTEM_UNLOCK(env);

	return (ret);
}
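A hypothetical internal caller of the allocation interface above.  The
MTX_APPLICATION and DB_MUTEX_SELF_BLOCK identifiers and the __mutex_* routines
are the ones used in this import; the wrapper function itself and its error
handling are illustrative only.

static int
subsystem_setup(ENV *env)
{
	db_mutex_t mtx;
	int ret;

	/*
	 * Allocate a self-blocking mutex; if the mutex region hasn't been
	 * created yet, __mutex_alloc queues the request instead.
	 */
	if ((ret = __mutex_alloc(env,
	    MTX_APPLICATION, DB_MUTEX_SELF_BLOCK, &mtx)) != 0)
		return (ret);

	/* Exercise the mutex once, then return it to the free list. */
	if ((ret = __mutex_lock(env, mtx)) == 0)
		ret = __mutex_unlock(env, mtx);

	/* __mutex_free overwrites the slot with MUTEX_INVALID. */
	(void)__mutex_free(env, &mtx);
	return (ret);
}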
70	mutex/mut_failchk.c	Normal file
@@ -0,0 +1,70 @@
/*-
 * See the file LICENSE for redistribution information.
 *
 * Copyright (c) 2005,2008 Oracle.  All rights reserved.
 *
 * $Id: mut_failchk.c 63573 2008-05-23 21:43:21Z trent.nelson $
 */

#include "db_config.h"

#include "db_int.h"
#include "dbinc/mutex_int.h"

/*
 * __mut_failchk --
 *	Check for mutexes held by dead processes.
 *
 * PUBLIC: int __mut_failchk __P((ENV *));
 */
int
__mut_failchk(env)
	ENV *env;
{
	DB_ENV *dbenv;
	DB_MUTEX *mutexp;
	DB_MUTEXMGR *mtxmgr;
	DB_MUTEXREGION *mtxregion;
	db_mutex_t i;
	int ret;
	char buf[DB_THREADID_STRLEN];

	dbenv = env->dbenv;
	mtxmgr = env->mutex_handle;
	mtxregion = mtxmgr->reginfo.primary;
	ret = 0;

	MUTEX_SYSTEM_LOCK(env);
	for (i = 1; i <= mtxregion->stat.st_mutex_cnt; ++i, ++mutexp) {
		mutexp = MUTEXP_SET(i);

		/*
		 * We're looking for per-process mutexes where the process
		 * has died.
		 */
		if (!F_ISSET(mutexp, DB_MUTEX_ALLOCATED) ||
		    !F_ISSET(mutexp, DB_MUTEX_PROCESS_ONLY))
			continue;

		/*
		 * The thread that allocated the mutex may have exited, but
		 * we cannot reclaim the mutex if the process is still alive.
		 */
		if (dbenv->is_alive(
		    dbenv, mutexp->pid, 0, DB_MUTEX_PROCESS_ONLY))
			continue;

		__db_msg(env, "Freeing mutex for process: %s",
		    dbenv->thread_id_string(dbenv, mutexp->pid, 0, buf));

		/* Unlock and free the mutex. */
		if (F_ISSET(mutexp, DB_MUTEX_LOCKED))
			MUTEX_UNLOCK(env, i);

		if ((ret = __mutex_free_int(env, 0, &i)) != 0)
			break;
	}
	MUTEX_SYSTEM_UNLOCK(env);

	return (ret);
}
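The reclamation loop above relies on the application-supplied is_alive
callback (dbenv->is_alive).  A minimal sketch of such a callback for a POSIX
system, assuming kill(pid, 0) is an acceptable liveness probe; the signature
mirrors the call made above, and registering the callback on the DB_ENV
handle is not shown.

#include <sys/types.h>
#include <errno.h>
#include <signal.h>

#include <db.h>		/* assumed to provide DB_ENV and db_threadid_t */

/*
 * Report whether the process that allocated a mutex still exists.  No
 * signal is delivered; kill(pid, 0) only checks that the target can be
 * signaled.  The thread id is ignored, matching the per-process check
 * made by __mut_failchk().
 */
static int
my_is_alive(DB_ENV *dbenv, pid_t pid, db_threadid_t tid, u_int32_t flags)
{
	(void)dbenv;
	(void)tid;
	(void)flags;

	return (kill(pid, 0) == 0 || errno != ESRCH);
}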
189	mutex/mut_fcntl.c	Normal file
@@ -0,0 +1,189 @@
/*-
 * See the file LICENSE for redistribution information.
 *
 * Copyright (c) 1996,2008 Oracle.  All rights reserved.
 *
 * $Id: mut_fcntl.c 63573 2008-05-23 21:43:21Z trent.nelson $
 */

#include "db_config.h"

#include "db_int.h"
#include "dbinc/mutex_int.h"

/*
 * __db_fcntl_mutex_init --
 *	Initialize a fcntl mutex.
 *
 * PUBLIC: int __db_fcntl_mutex_init __P((ENV *, db_mutex_t, u_int32_t));
 */
int
__db_fcntl_mutex_init(env, mutex, flags)
	ENV *env;
	db_mutex_t mutex;
	u_int32_t flags;
{
	COMPQUIET(env, NULL);
	COMPQUIET(mutex, MUTEX_INVALID);
	COMPQUIET(flags, 0);

	return (0);
}

/*
 * __db_fcntl_mutex_lock
 *	Lock on a mutex, blocking if necessary.
 *
 * PUBLIC: int __db_fcntl_mutex_lock __P((ENV *, db_mutex_t));
 */
int
__db_fcntl_mutex_lock(env, mutex)
	ENV *env;
	db_mutex_t mutex;
{
	DB_ENV *dbenv;
	DB_MUTEX *mutexp;
	DB_MUTEXMGR *mtxmgr;
	DB_MUTEXREGION *mtxregion;
	struct flock k_lock;
	int locked, ms, ret;

	dbenv = env->dbenv;

	if (!MUTEX_ON(env) || F_ISSET(dbenv, DB_ENV_NOLOCKING))
		return (0);

	mtxmgr = env->mutex_handle;
	mtxregion = mtxmgr->reginfo.primary;
	mutexp = MUTEXP_SET(mutex);

	CHECK_MTX_THREAD(env, mutexp);

#ifdef HAVE_STATISTICS
	if (F_ISSET(mutexp, DB_MUTEX_LOCKED))
		++mutexp->mutex_set_wait;
	else
		++mutexp->mutex_set_nowait;
#endif

	/* Initialize the lock. */
	k_lock.l_whence = SEEK_SET;
	k_lock.l_start = mutex;
	k_lock.l_len = 1;

	for (locked = 0;;) {
		/*
		 * Wait for the lock to become available; wait 1ms initially,
		 * up to 1 second.
		 */
		for (ms = 1; F_ISSET(mutexp, DB_MUTEX_LOCKED);) {
			__os_yield(NULL, 0, ms * US_PER_MS);
			if ((ms <<= 1) > MS_PER_SEC)
				ms = MS_PER_SEC;
		}

		/* Acquire an exclusive kernel lock. */
		k_lock.l_type = F_WRLCK;
		if (fcntl(env->lockfhp->fd, F_SETLKW, &k_lock))
			goto err;

		/* If the resource is still available, it's ours. */
		if (!F_ISSET(mutexp, DB_MUTEX_LOCKED)) {
			locked = 1;

			F_SET(mutexp, DB_MUTEX_LOCKED);
			dbenv->thread_id(dbenv, &mutexp->pid, &mutexp->tid);
		}

		/* Release the kernel lock. */
		k_lock.l_type = F_UNLCK;
		if (fcntl(env->lockfhp->fd, F_SETLK, &k_lock))
			goto err;

		/*
		 * If we got the resource lock we're done.
		 *
		 * !!!
		 * We can't check to see if the lock is ours, because we may
		 * be trying to block ourselves in the lock manager, and so
		 * the holder of the lock that's preventing us from getting
		 * the lock may be us!  (Seriously.)
		 */
		if (locked)
			break;
	}

#ifdef DIAGNOSTIC
	/*
	 * We want to switch threads as often as possible.  Yield every time
	 * we get a mutex to ensure contention.
	 */
	if (F_ISSET(dbenv, DB_ENV_YIELDCPU))
		__os_yield(env, 0, 0);
#endif
	return (0);

err:	ret = __os_get_syserr();
	__db_syserr(env, ret, "fcntl lock failed");
	return (__env_panic(env, __os_posix_err(ret)));
}

/*
 * __db_fcntl_mutex_unlock --
 *	Release a mutex.
 *
 * PUBLIC: int __db_fcntl_mutex_unlock __P((ENV *, db_mutex_t));
 */
int
__db_fcntl_mutex_unlock(env, mutex)
	ENV *env;
	db_mutex_t mutex;
{
	DB_ENV *dbenv;
	DB_MUTEX *mutexp;
	DB_MUTEXMGR *mtxmgr;
	DB_MUTEXREGION *mtxregion;

	dbenv = env->dbenv;

	if (!MUTEX_ON(env) || F_ISSET(dbenv, DB_ENV_NOLOCKING))
		return (0);

	mtxmgr = env->mutex_handle;
	mtxregion = mtxmgr->reginfo.primary;
	mutexp = MUTEXP_SET(mutex);

#ifdef DIAGNOSTIC
	if (!F_ISSET(mutexp, DB_MUTEX_LOCKED)) {
		__db_errx(env, "fcntl unlock failed: lock already unlocked");
		return (__env_panic(env, EACCES));
	}
#endif

	/*
	 * Release the resource.  We don't have to acquire any locks because
	 * processes trying to acquire the lock are waiting for the flag to
	 * go to 0.  Once that happens the waiters will serialize acquiring
	 * an exclusive kernel lock before locking the mutex.
	 */
	F_CLR(mutexp, DB_MUTEX_LOCKED);

	return (0);
}

/*
 * __db_fcntl_mutex_destroy --
 *	Destroy a mutex.
 *
 * PUBLIC: int __db_fcntl_mutex_destroy __P((ENV *, db_mutex_t));
 */
int
__db_fcntl_mutex_destroy(env, mutex)
	ENV *env;
	db_mutex_t mutex;
{
	COMPQUIET(env, NULL);
	COMPQUIET(mutex, MUTEX_INVALID);

	return (0);
}
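For reference, the kernel-lock step used above reduced to plain POSIX
fcntl(2), independent of any Berkeley DB types.  The one-byte-per-mutex
offset and the blocking F_SETLKW versus non-blocking F_SETLK split follow
the code above; the spin/backoff loop and the DB_MUTEX_LOCKED flag are what
the Berkeley DB layer adds around this primitive.

#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

/*
 * Acquire (blocking) or release an exclusive one-byte lock at "offset".
 * Returns 0 on success, -1 on error.
 */
static int
lock_byte(int fd, off_t offset, int acquire)
{
	struct flock fl;

	fl.l_type = acquire ? F_WRLCK : F_UNLCK;
	fl.l_whence = SEEK_SET;
	fl.l_start = offset;
	fl.l_len = 1;

	/* F_SETLKW waits for the lock; releasing never needs to wait. */
	return (fcntl(fd, acquire ? F_SETLKW : F_SETLK, &fl));
}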
332	mutex/mut_method.c	Normal file
@@ -0,0 +1,332 @@
|
||||
/*-
|
||||
* See the file LICENSE for redistribution information.
|
||||
*
|
||||
* Copyright (c) 1996,2008 Oracle. All rights reserved.
|
||||
*
|
||||
* $Id: mut_method.c 63573 2008-05-23 21:43:21Z trent.nelson $
|
||||
*/
|
||||
|
||||
#include "db_config.h"
|
||||
|
||||
#include "db_int.h"
|
||||
#include "dbinc/mutex_int.h"
|
||||
|
||||
/*
|
||||
* __mutex_alloc_pp --
|
||||
* Allocate a mutex, application method.
|
||||
*
|
||||
* PUBLIC: int __mutex_alloc_pp __P((DB_ENV *, u_int32_t, db_mutex_t *));
|
||||
*/
|
||||
int
|
||||
__mutex_alloc_pp(dbenv, flags, indxp)
|
||||
DB_ENV *dbenv;
|
||||
u_int32_t flags;
|
||||
db_mutex_t *indxp;
|
||||
{
|
||||
DB_THREAD_INFO *ip;
|
||||
ENV *env;
|
||||
int ret;
|
||||
|
||||
env = dbenv->env;
|
||||
|
||||
switch (flags) {
|
||||
case 0:
|
||||
case DB_MUTEX_PROCESS_ONLY:
|
||||
case DB_MUTEX_SELF_BLOCK:
|
||||
break;
|
||||
default:
|
||||
return (__db_ferr(env, "DB_ENV->mutex_alloc", 0));
|
||||
}
|
||||
|
||||
ENV_ENTER(env, ip);
|
||||
ret = __mutex_alloc(env, MTX_APPLICATION, flags, indxp);
|
||||
ENV_LEAVE(env, ip);
|
||||
|
||||
return (ret);
|
||||
}
|
||||
|
||||
/*
|
||||
* __mutex_free_pp --
|
||||
* Destroy a mutex, application method.
|
||||
*
|
||||
* PUBLIC: int __mutex_free_pp __P((DB_ENV *, db_mutex_t));
|
||||
*/
|
||||
int
|
||||
__mutex_free_pp(dbenv, indx)
|
||||
DB_ENV *dbenv;
|
||||
db_mutex_t indx;
|
||||
{
|
||||
DB_THREAD_INFO *ip;
|
||||
ENV *env;
|
||||
int ret;
|
||||
|
||||
env = dbenv->env;
|
||||
|
||||
if (indx == MUTEX_INVALID)
|
||||
return (EINVAL);
|
||||
|
||||
/*
|
||||
* Internally Berkeley DB passes around the db_mutex_t address on
|
||||
* free, because we want to make absolutely sure the slot gets
|
||||
* overwritten with MUTEX_INVALID. We don't export MUTEX_INVALID,
|
||||
* so we don't export that part of the API, either.
|
||||
*/
|
||||
ENV_ENTER(env, ip);
|
||||
ret = __mutex_free(env, &indx);
|
||||
ENV_LEAVE(env, ip);
|
||||
|
||||
return (ret);
|
||||
}
|
||||
|
||||
/*
|
||||
* __mutex_lock --
|
||||
* Lock a mutex, application method.
|
||||
*
|
||||
* PUBLIC: int __mutex_lock_pp __P((DB_ENV *, db_mutex_t));
|
||||
*/
|
||||
int
|
||||
__mutex_lock_pp(dbenv, indx)
|
||||
DB_ENV *dbenv;
|
||||
db_mutex_t indx;
|
||||
{
|
||||
DB_THREAD_INFO *ip;
|
||||
ENV *env;
|
||||
int ret;
|
||||
|
||||
env = dbenv->env;
|
||||
|
||||
if (indx == MUTEX_INVALID)
|
||||
return (EINVAL);
|
||||
|
||||
ENV_ENTER(env, ip);
|
||||
ret = __mutex_lock(env, indx);
|
||||
ENV_LEAVE(env, ip);
|
||||
return (ret);
|
||||
}
|
||||
|
||||
/*
|
||||
* __mutex_unlock --
|
||||
* Unlock a mutex, application method.
|
||||
*
|
||||
* PUBLIC: int __mutex_unlock_pp __P((DB_ENV *, db_mutex_t));
|
||||
*/
|
||||
int
|
||||
__mutex_unlock_pp(dbenv, indx)
|
||||
DB_ENV *dbenv;
|
||||
db_mutex_t indx;
|
||||
{
|
||||
DB_THREAD_INFO *ip;
|
||||
ENV *env;
|
||||
int ret;
|
||||
|
||||
env = dbenv->env;
|
||||
|
||||
if (indx == MUTEX_INVALID)
|
||||
return (EINVAL);
|
||||
|
||||
ENV_ENTER(env, ip);
|
||||
ret = __mutex_unlock(env, indx);
|
||||
ENV_LEAVE(env, ip);
|
||||
return (ret);
|
||||
}
|
||||
|
||||
/*
|
||||
* __mutex_get_align --
|
||||
* DB_ENV->mutex_get_align.
|
||||
*
|
||||
* PUBLIC: int __mutex_get_align __P((DB_ENV *, u_int32_t *));
|
||||
*/
|
||||
int
|
||||
__mutex_get_align(dbenv, alignp)
|
||||
DB_ENV *dbenv;
|
||||
u_int32_t *alignp;
|
||||
{
|
||||
ENV *env;
|
||||
|
||||
env = dbenv->env;
|
||||
|
||||
if (MUTEX_ON(env)) {
|
||||
/* Cannot be set after open, no lock required to read. */
|
||||
*alignp = ((DB_MUTEXREGION *)
|
||||
env->mutex_handle->reginfo.primary)->stat.st_mutex_align;
|
||||
} else
|
||||
*alignp = dbenv->mutex_align;
|
||||
return (0);
|
||||
}
|
||||
|
||||
/*
|
||||
* __mutex_set_align --
|
||||
* DB_ENV->mutex_set_align.
|
||||
*
|
||||
* PUBLIC: int __mutex_set_align __P((DB_ENV *, u_int32_t));
|
||||
*/
|
||||
int
|
||||
__mutex_set_align(dbenv, align)
|
||||
DB_ENV *dbenv;
|
||||
u_int32_t align;
|
||||
{
|
||||
ENV *env;
|
||||
|
||||
env = dbenv->env;
|
||||
|
||||
ENV_ILLEGAL_AFTER_OPEN(env, "DB_ENV->set_mutex_align");
|
||||
|
||||
if (align == 0 || !POWER_OF_TWO(align)) {
|
||||
__db_errx(env,
|
||||
"DB_ENV->mutex_set_align: alignment value must be a non-zero power-of-two");
|
||||
return (EINVAL);
|
||||
}
|
||||
|
||||
dbenv->mutex_align = align;
|
||||
return (0);
|
||||
}
|
||||
|
||||
/*
|
||||
* __mutex_get_increment --
|
||||
* DB_ENV->mutex_get_increment.
|
||||
*
|
||||
* PUBLIC: int __mutex_get_increment __P((DB_ENV *, u_int32_t *));
|
||||
*/
|
||||
int
|
||||
__mutex_get_increment(dbenv, incrementp)
|
||||
DB_ENV *dbenv;
|
||||
u_int32_t *incrementp;
|
||||
{
|
||||
/*
|
||||
* We don't maintain the increment in the region (it just makes
|
||||
* no sense). Return whatever we have configured on this handle,
|
||||
* nobody is ever going to notice.
|
||||
*/
|
||||
*incrementp = dbenv->mutex_inc;
|
||||
return (0);
|
||||
}
|
||||
|
||||
/*
|
||||
* __mutex_set_increment --
|
||||
* DB_ENV->mutex_set_increment.
|
||||
*
|
||||
* PUBLIC: int __mutex_set_increment __P((DB_ENV *, u_int32_t));
|
||||
*/
|
||||
int
|
||||
__mutex_set_increment(dbenv, increment)
|
||||
DB_ENV *dbenv;
|
||||
u_int32_t increment;
|
||||
{
|
||||
ENV *env;
|
||||
|
||||
env = dbenv->env;
|
||||
|
||||
ENV_ILLEGAL_AFTER_OPEN(env, "DB_ENV->set_mutex_increment");
|
||||
|
||||
dbenv->mutex_cnt = 0;
|
||||
dbenv->mutex_inc = increment;
|
||||
return (0);
|
||||
}
|
||||
|
||||
/*
|
||||
* __mutex_get_max --
|
||||
* DB_ENV->mutex_get_max.
|
||||
*
|
||||
* PUBLIC: int __mutex_get_max __P((DB_ENV *, u_int32_t *));
|
||||
*/
|
||||
int
|
||||
__mutex_get_max(dbenv, maxp)
|
||||
DB_ENV *dbenv;
|
||||
u_int32_t *maxp;
|
||||
{
|
||||
ENV *env;
|
||||
|
||||
env = dbenv->env;
|
||||
|
||||
if (MUTEX_ON(env)) {
|
||||
/* Cannot be set after open, no lock required to read. */
|
||||
*maxp = ((DB_MUTEXREGION *)
|
||||
env->mutex_handle->reginfo.primary)->stat.st_mutex_cnt;
|
||||
} else
|
||||
*maxp = dbenv->mutex_cnt;
|
||||
return (0);
|
||||
}
|
||||
|
||||
/*
|
||||
* __mutex_set_max --
|
||||
* DB_ENV->mutex_set_max.
|
||||
*
|
||||
* PUBLIC: int __mutex_set_max __P((DB_ENV *, u_int32_t));
|
||||
*/
|
||||
int
|
||||
__mutex_set_max(dbenv, max)
|
||||
DB_ENV *dbenv;
|
||||
u_int32_t max;
|
||||
{
|
||||
ENV *env;
|
||||
|
||||
env = dbenv->env;
|
||||
|
||||
ENV_ILLEGAL_AFTER_OPEN(env, "DB_ENV->set_mutex_max");
|
||||
|
||||
dbenv->mutex_cnt = max;
|
||||
dbenv->mutex_inc = 0;
|
||||
return (0);
|
||||
}
|
||||
|
||||
/*
|
||||
* __mutex_get_tas_spins --
|
||||
* DB_ENV->mutex_get_tas_spins.
|
||||
*
|
||||
* PUBLIC: int __mutex_get_tas_spins __P((DB_ENV *, u_int32_t *));
|
||||
*/
|
||||
int
|
||||
__mutex_get_tas_spins(dbenv, tas_spinsp)
|
||||
DB_ENV *dbenv;
|
||||
u_int32_t *tas_spinsp;
|
||||
{
|
||||
ENV *env;
|
||||
|
||||
env = dbenv->env;
|
||||
|
||||
if (MUTEX_ON(env)) {
|
||||
/* Cannot be set after open, no lock required to read. */
|
||||
*tas_spinsp = ((DB_MUTEXREGION *)env->
|
||||
mutex_handle->reginfo.primary)->stat.st_mutex_tas_spins;
|
||||
} else
|
||||
*tas_spinsp = dbenv->mutex_tas_spins;
|
||||
return (0);
|
||||
}
|
||||
|
||||
/*
|
||||
* __mutex_set_tas_spins --
|
||||
* DB_ENV->mutex_set_tas_spins.
|
||||
*
|
||||
* PUBLIC: int __mutex_set_tas_spins __P((DB_ENV *, u_int32_t));
|
||||
*/
|
||||
int
|
||||
__mutex_set_tas_spins(dbenv, tas_spins)
|
||||
DB_ENV *dbenv;
|
||||
u_int32_t tas_spins;
|
||||
{
|
||||
ENV *env;
|
||||
|
||||
env = dbenv->env;
|
||||
|
||||
/*
|
||||
* Bound the value -- less than 1 makes no sense, greater than 1M
|
||||
* makes no sense.
|
||||
*/
|
||||
if (tas_spins == 0)
|
||||
tas_spins = 1;
|
||||
else if (tas_spins > 1000000)
|
||||
tas_spins = 1000000;
|
||||
|
||||
/*
|
||||
* There's a theoretical race here, but I'm not interested in locking
|
||||
* the test-and-set spin count. The worst possibility is a thread
|
||||
* reads out a bad spin count and spins until it gets the lock, but
|
||||
* that's awfully unlikely.
|
||||
*/
|
||||
if (MUTEX_ON(env))
|
||||
((DB_MUTEXREGION *)env->mutex_handle
|
||||
->reginfo.primary)->stat.st_mutex_tas_spins = tas_spins;
|
||||
else
|
||||
dbenv->mutex_tas_spins = tas_spins;
|
||||
return (0);
|
||||
}
|
||||
394	mutex/mut_pthread.c	Normal file
@@ -0,0 +1,394 @@
|
||||
/*-
|
||||
* See the file LICENSE for redistribution information.
|
||||
*
|
||||
* Copyright (c) 1999,2008 Oracle. All rights reserved.
|
||||
*
|
||||
* $Id: mut_pthread.c 63573 2008-05-23 21:43:21Z trent.nelson $
|
||||
*/
|
||||
|
||||
#include "db_config.h"
|
||||
|
||||
#include "db_int.h"
|
||||
|
||||
/*
|
||||
* This is where we load in architecture/compiler specific mutex code.
|
||||
*/
|
||||
#define LOAD_ACTUAL_MUTEX_CODE
|
||||
#include "dbinc/mutex_int.h"
|
||||
|
||||
#ifdef HAVE_MUTEX_SOLARIS_LWP
|
||||
#define pthread_cond_destroy(x) 0
|
||||
#define pthread_cond_signal _lwp_cond_signal
|
||||
#define pthread_cond_wait _lwp_cond_wait
|
||||
#define pthread_mutex_destroy(x) 0
|
||||
#define pthread_mutex_lock _lwp_mutex_lock
|
||||
#define pthread_mutex_trylock _lwp_mutex_trylock
|
||||
#define pthread_mutex_unlock _lwp_mutex_unlock
|
||||
#endif
|
||||
#ifdef HAVE_MUTEX_UI_THREADS
|
||||
#define pthread_cond_destroy(x) cond_destroy
|
||||
#define pthread_cond_signal cond_signal
|
||||
#define pthread_cond_wait cond_wait
|
||||
#define pthread_mutex_destroy mutex_destroy
|
||||
#define pthread_mutex_lock mutex_lock
|
||||
#define pthread_mutex_trylock mutex_trylock
|
||||
#define pthread_mutex_unlock mutex_unlock
|
||||
#endif
|
||||
|
||||
#define PTHREAD_UNLOCK_ATTEMPTS 5
|
||||
|
||||
/*
|
||||
* IBM's MVS pthread mutex implementation returns -1 and sets errno rather than
|
||||
* returning errno itself. As -1 is not a valid errno value, assume functions
|
||||
* returning -1 have set errno. If they haven't, return a random error value.
|
||||
*/
|
||||
#define RET_SET(f, ret) do { \
|
||||
if (((ret) = (f)) == -1 && ((ret) = errno) == 0) \
|
||||
(ret) = EAGAIN; \
|
||||
} while (0)
|
||||
|
||||
/*
|
||||
* __db_pthread_mutex_init --
|
||||
* Initialize a pthread mutex.
|
||||
*
|
||||
* PUBLIC: int __db_pthread_mutex_init __P((ENV *, db_mutex_t, u_int32_t));
|
||||
*/
|
||||
int
|
||||
__db_pthread_mutex_init(env, mutex, flags)
|
||||
ENV *env;
|
||||
db_mutex_t mutex;
|
||||
u_int32_t flags;
|
||||
{
|
||||
DB_MUTEX *mutexp;
|
||||
DB_MUTEXMGR *mtxmgr;
|
||||
DB_MUTEXREGION *mtxregion;
|
||||
int ret;
|
||||
|
||||
mtxmgr = env->mutex_handle;
|
||||
mtxregion = mtxmgr->reginfo.primary;
|
||||
mutexp = MUTEXP_SET(mutex);
|
||||
ret = 0;
|
||||
|
||||
#ifdef HAVE_MUTEX_PTHREADS
|
||||
{
|
||||
pthread_condattr_t condattr, *condattrp = NULL;
|
||||
pthread_mutexattr_t mutexattr, *mutexattrp = NULL;
|
||||
|
||||
if (!LF_ISSET(DB_MUTEX_PROCESS_ONLY)) {
|
||||
RET_SET((pthread_mutexattr_init(&mutexattr)), ret);
|
||||
#ifndef HAVE_MUTEX_THREAD_ONLY
|
||||
if (ret == 0)
|
||||
RET_SET((pthread_mutexattr_setpshared(
|
||||
&mutexattr, PTHREAD_PROCESS_SHARED)), ret);
|
||||
#endif
|
||||
mutexattrp = &mutexattr;
|
||||
}
|
||||
|
||||
if (ret == 0)
|
||||
RET_SET((pthread_mutex_init(&mutexp->mutex, mutexattrp)), ret);
|
||||
if (mutexattrp != NULL)
|
||||
(void)pthread_mutexattr_destroy(mutexattrp);
|
||||
if (ret == 0 && LF_ISSET(DB_MUTEX_SELF_BLOCK)) {
|
||||
if (!LF_ISSET(DB_MUTEX_PROCESS_ONLY)) {
|
||||
RET_SET((pthread_condattr_init(&condattr)), ret);
|
||||
if (ret == 0) {
|
||||
condattrp = &condattr;
|
||||
#ifndef HAVE_MUTEX_THREAD_ONLY
|
||||
RET_SET((pthread_condattr_setpshared(
|
||||
&condattr, PTHREAD_PROCESS_SHARED)), ret);
|
||||
#endif
|
||||
}
|
||||
}
|
||||
|
||||
if (ret == 0)
|
||||
RET_SET(
|
||||
(pthread_cond_init(&mutexp->cond, condattrp)), ret);
|
||||
|
||||
F_SET(mutexp, DB_MUTEX_SELF_BLOCK);
|
||||
if (condattrp != NULL)
|
||||
(void)pthread_condattr_destroy(condattrp);
|
||||
}
|
||||
|
||||
}
|
||||
#endif
|
||||
#ifdef HAVE_MUTEX_SOLARIS_LWP
|
||||
/*
|
||||
* XXX
|
||||
* Gcc complains about missing braces in the static initializations of
|
||||
* lwp_cond_t and lwp_mutex_t structures because the structures contain
|
||||
* sub-structures/unions and the Solaris include file that defines the
|
||||
* initialization values doesn't have surrounding braces. There's not
|
||||
* much we can do.
|
||||
*/
|
||||
if (LF_ISSET(DB_MUTEX_PROCESS_ONLY)) {
|
||||
static lwp_mutex_t mi = DEFAULTMUTEX;
|
||||
|
||||
mutexp->mutex = mi;
|
||||
} else {
|
||||
static lwp_mutex_t mi = SHAREDMUTEX;
|
||||
|
||||
mutexp->mutex = mi;
|
||||
}
|
||||
if (LF_ISSET(DB_MUTEX_SELF_BLOCK)) {
|
||||
if (LF_ISSET(DB_MUTEX_PROCESS_ONLY)) {
|
||||
static lwp_cond_t ci = DEFAULTCV;
|
||||
|
||||
mutexp->cond = ci;
|
||||
} else {
|
||||
static lwp_cond_t ci = SHAREDCV;
|
||||
|
||||
mutexp->cond = ci;
|
||||
}
|
||||
F_SET(mutexp, DB_MUTEX_SELF_BLOCK);
|
||||
}
|
||||
#endif
|
||||
#ifdef HAVE_MUTEX_UI_THREADS
|
||||
{
|
||||
int type;
|
||||
|
||||
type = LF_ISSET(DB_MUTEX_PROCESS_ONLY) ? USYNC_THREAD : USYNC_PROCESS;
|
||||
|
||||
ret = mutex_init(&mutexp->mutex, type, NULL);
|
||||
if (ret == 0 && LF_ISSET(DB_MUTEX_SELF_BLOCK)) {
|
||||
ret = cond_init(&mutexp->cond, type, NULL);
|
||||
|
||||
F_SET(mutexp, DB_MUTEX_SELF_BLOCK);
|
||||
}}
|
||||
#endif
|
||||
|
||||
if (ret != 0) {
|
||||
__db_err(env, ret, "unable to initialize mutex");
|
||||
}
|
||||
return (ret);
|
||||
}
|
||||
|
||||
/*
|
||||
* __db_pthread_mutex_lock
|
||||
* Lock on a mutex, blocking if necessary.
|
||||
*
|
||||
* PUBLIC: int __db_pthread_mutex_lock __P((ENV *, db_mutex_t));
|
||||
*/
|
||||
int
|
||||
__db_pthread_mutex_lock(env, mutex)
|
||||
ENV *env;
|
||||
db_mutex_t mutex;
|
||||
{
|
||||
DB_ENV *dbenv;
|
||||
DB_MUTEX *mutexp;
|
||||
DB_MUTEXMGR *mtxmgr;
|
||||
DB_MUTEXREGION *mtxregion;
|
||||
int i, ret;
|
||||
|
||||
dbenv = env->dbenv;
|
||||
|
||||
if (!MUTEX_ON(env) || F_ISSET(dbenv, DB_ENV_NOLOCKING))
|
||||
return (0);
|
||||
|
||||
mtxmgr = env->mutex_handle;
|
||||
mtxregion = mtxmgr->reginfo.primary;
|
||||
mutexp = MUTEXP_SET(mutex);
|
||||
|
||||
CHECK_MTX_THREAD(env, mutexp);
|
||||
|
||||
#if defined(HAVE_STATISTICS) && !defined(HAVE_MUTEX_HYBRID)
|
||||
/*
|
||||
* We want to know which mutexes are contentious, but don't want to
|
||||
* do an interlocked test here -- that's slower when the underlying
|
||||
* system has adaptive mutexes and can perform optimizations like
|
||||
* spinning only if the thread holding the mutex is actually running
|
||||
* on a CPU. Make a guess, using a normal load instruction.
|
||||
*/
|
||||
if (F_ISSET(mutexp, DB_MUTEX_LOCKED))
|
||||
++mutexp->mutex_set_wait;
|
||||
else
|
||||
++mutexp->mutex_set_nowait;
|
||||
#endif
|
||||
|
||||
RET_SET((pthread_mutex_lock(&mutexp->mutex)), ret);
|
||||
if (ret != 0)
|
||||
goto err;
|
||||
|
||||
if (F_ISSET(mutexp, DB_MUTEX_SELF_BLOCK)) {
|
||||
/*
|
||||
* If we are using hybrid mutexes then the pthread mutexes
|
||||
* are only used to wait after spinning on the TAS mutex.
|
||||
* Set the wait flag before checking to see if the mutex
|
||||
* is still locked. The holder will clear the bit before
|
||||
* checking the wait flag.
|
||||
*/
|
||||
#ifdef HAVE_MUTEX_HYBRID
|
||||
mutexp->wait++;
|
||||
MUTEX_MEMBAR(mutexp->wait);
|
||||
#endif
|
||||
while (F_ISSET(mutexp, DB_MUTEX_LOCKED)) {
|
||||
RET_SET((pthread_cond_wait(
|
||||
&mutexp->cond, &mutexp->mutex)), ret);
|
||||
/*
|
||||
* !!!
|
||||
* Solaris bug workaround:
|
||||
* pthread_cond_wait() sometimes returns ETIME -- out
|
||||
* of sheer paranoia, check both ETIME and ETIMEDOUT.
|
||||
* We believe this happens when the application uses
|
||||
* SIGALRM for some purpose, e.g., the C library sleep
|
||||
* call, and Solaris delivers the signal to the wrong
|
||||
* LWP.
|
||||
*/
|
||||
if (ret != 0 && ret != EINTR &&
|
||||
#ifdef ETIME
|
||||
ret != ETIME &&
|
||||
#endif
|
||||
ret != ETIMEDOUT) {
|
||||
(void)pthread_mutex_unlock(&mutexp->mutex);
|
||||
goto err;
|
||||
}
|
||||
}
|
||||
|
||||
#ifdef HAVE_MUTEX_HYBRID
|
||||
mutexp->wait--;
|
||||
#else
|
||||
F_SET(mutexp, DB_MUTEX_LOCKED);
|
||||
dbenv->thread_id(dbenv, &mutexp->pid, &mutexp->tid);
|
||||
#endif
|
||||
|
||||
/*
|
||||
* According to HP-UX engineers contacted by Netscape,
|
||||
* pthread_mutex_unlock() will occasionally return EFAULT
|
||||
* for no good reason on mutexes in shared memory regions,
|
||||
* and the correct caller behavior is to try again. Do
|
||||
* so, up to PTHREAD_UNLOCK_ATTEMPTS consecutive times.
|
||||
* Note that we don't bother to restrict this to HP-UX;
|
||||
* it should be harmless elsewhere. [#2471]
|
||||
*/
|
||||
i = PTHREAD_UNLOCK_ATTEMPTS;
|
||||
do {
|
||||
RET_SET((pthread_mutex_unlock(&mutexp->mutex)), ret);
|
||||
} while (ret == EFAULT && --i > 0);
|
||||
if (ret != 0)
|
||||
goto err;
|
||||
} else {
|
||||
#ifdef DIAGNOSTIC
|
||||
if (F_ISSET(mutexp, DB_MUTEX_LOCKED)) {
|
||||
char buf[DB_THREADID_STRLEN];
|
||||
(void)dbenv->thread_id_string(dbenv,
|
||||
mutexp->pid, mutexp->tid, buf);
|
||||
__db_errx(env,
|
||||
"pthread lock failed: lock currently in use: pid/tid: %s",
|
||||
buf);
|
||||
ret = EINVAL;
|
||||
goto err;
|
||||
}
|
||||
#endif
|
||||
F_SET(mutexp, DB_MUTEX_LOCKED);
|
||||
dbenv->thread_id(dbenv, &mutexp->pid, &mutexp->tid);
|
||||
}
|
||||
|
||||
#ifdef DIAGNOSTIC
|
||||
/*
|
||||
* We want to switch threads as often as possible. Yield every time
|
||||
* we get a mutex to ensure contention.
|
||||
*/
|
||||
if (F_ISSET(dbenv, DB_ENV_YIELDCPU))
|
||||
__os_yield(env, 0, 0);
|
||||
#endif
|
||||
return (0);
|
||||
|
||||
err: __db_err(env, ret, "pthread lock failed");
|
||||
return (__env_panic(env, ret));
|
||||
}
|
||||
|
||||
/*
|
||||
* __db_pthread_mutex_unlock --
|
||||
* Release a mutex.
|
||||
*
|
||||
* PUBLIC: int __db_pthread_mutex_unlock __P((ENV *, db_mutex_t));
|
||||
*/
|
||||
int
|
||||
__db_pthread_mutex_unlock(env, mutex)
|
||||
ENV *env;
|
||||
db_mutex_t mutex;
|
||||
{
|
||||
DB_ENV *dbenv;
|
||||
DB_MUTEX *mutexp;
|
||||
DB_MUTEXMGR *mtxmgr;
|
||||
DB_MUTEXREGION *mtxregion;
|
||||
int i, ret;
|
||||
|
||||
dbenv = env->dbenv;
|
||||
|
||||
if (!MUTEX_ON(env) || F_ISSET(dbenv, DB_ENV_NOLOCKING))
|
||||
return (0);
|
||||
|
||||
mtxmgr = env->mutex_handle;
|
||||
mtxregion = mtxmgr->reginfo.primary;
|
||||
mutexp = MUTEXP_SET(mutex);
|
||||
|
||||
#if !defined(HAVE_MUTEX_HYBRID) && defined(DIAGNOSTIC)
|
||||
if (!F_ISSET(mutexp, DB_MUTEX_LOCKED)) {
|
||||
__db_errx(
|
||||
env, "pthread unlock failed: lock already unlocked");
|
||||
return (__env_panic(env, EACCES));
|
||||
}
|
||||
#endif
|
||||
if (F_ISSET(mutexp, DB_MUTEX_SELF_BLOCK)) {
|
||||
RET_SET((pthread_mutex_lock(&mutexp->mutex)), ret);
|
||||
if (ret != 0)
|
||||
goto err;
|
||||
|
||||
F_CLR(mutexp, DB_MUTEX_LOCKED);
|
||||
|
||||
RET_SET((pthread_cond_signal(&mutexp->cond)), ret);
|
||||
if (ret != 0)
|
||||
goto err;
|
||||
} else
|
||||
F_CLR(mutexp, DB_MUTEX_LOCKED);
|
||||
|
||||
/* See comment above; workaround for [#2471]. */
|
||||
i = PTHREAD_UNLOCK_ATTEMPTS;
|
||||
do {
|
||||
RET_SET((pthread_mutex_unlock(&mutexp->mutex)), ret);
|
||||
} while (ret == EFAULT && --i > 0);
|
||||
|
||||
err: if (ret != 0) {
|
||||
__db_err(env, ret, "pthread unlock failed");
|
||||
return (__env_panic(env, ret));
|
||||
}
|
||||
return (ret);
|
||||
}
|
||||
|
||||
/*
|
||||
* __db_pthread_mutex_destroy --
|
||||
* Destroy a mutex.
|
||||
*
|
||||
* PUBLIC: int __db_pthread_mutex_destroy __P((ENV *, db_mutex_t));
|
||||
*/
|
||||
int
|
||||
__db_pthread_mutex_destroy(env, mutex)
|
||||
ENV *env;
|
||||
db_mutex_t mutex;
|
||||
{
|
||||
DB_MUTEX *mutexp;
|
||||
DB_MUTEXMGR *mtxmgr;
|
||||
DB_MUTEXREGION *mtxregion;
|
||||
int ret, t_ret;
|
||||
|
||||
if (!MUTEX_ON(env))
|
||||
return (0);
|
||||
|
||||
mtxmgr = env->mutex_handle;
|
||||
mtxregion = mtxmgr->reginfo.primary;
|
||||
mutexp = MUTEXP_SET(mutex);
|
||||
|
||||
ret = 0;
|
||||
if (F_ISSET(mutexp, DB_MUTEX_SELF_BLOCK)) {
|
||||
RET_SET((pthread_cond_destroy(&mutexp->cond)), ret);
|
||||
if (ret != 0)
|
||||
__db_err(env, ret, "unable to destroy cond");
|
||||
}
|
||||
RET_SET((pthread_mutex_destroy(&mutexp->mutex)), t_ret);
|
||||
if (t_ret != 0) {
|
||||
__db_err(env, t_ret, "unable to destroy mutex");
|
||||
if (ret == 0)
|
||||
ret = t_ret;
|
||||
}
|
||||
return (ret);
|
||||
}
|
||||
380	mutex/mut_region.c	Normal file
@@ -0,0 +1,380 @@
|
||||
/*-
|
||||
* See the file LICENSE for redistribution information.
|
||||
*
|
||||
* Copyright (c) 1996,2008 Oracle. All rights reserved.
|
||||
*
|
||||
* $Id: mut_region.c 63573 2008-05-23 21:43:21Z trent.nelson $
|
||||
*/
|
||||
|
||||
#include "db_config.h"
|
||||
|
||||
#include "db_int.h"
|
||||
#include "dbinc/log.h"
|
||||
#include "dbinc/lock.h"
|
||||
#include "dbinc/mp.h"
|
||||
#include "dbinc/txn.h"
|
||||
#include "dbinc/mutex_int.h"
|
||||
|
||||
static size_t __mutex_align_size __P((ENV *));
|
||||
static int __mutex_region_init __P((ENV *, DB_MUTEXMGR *));
|
||||
static size_t __mutex_region_size __P((ENV *));
|
||||
|
||||
/*
|
||||
* __mutex_open --
|
||||
* Open a mutex region.
|
||||
*
|
||||
* PUBLIC: int __mutex_open __P((ENV *, int));
|
||||
*/
|
||||
int
|
||||
__mutex_open(env, create_ok)
|
||||
ENV *env;
|
||||
int create_ok;
|
||||
{
|
||||
DB_ENV *dbenv;
|
||||
DB_MUTEXMGR *mtxmgr;
|
||||
DB_MUTEXREGION *mtxregion;
|
||||
db_mutex_t mutex;
|
||||
u_int32_t cpu_count;
|
||||
u_int i;
|
||||
int ret;
|
||||
|
||||
dbenv = env->dbenv;
|
||||
|
||||
/*
|
||||
* Initialize the ENV handle information if not already initialized.
|
||||
*
|
||||
* Align mutexes on the byte boundaries specified by the application.
|
||||
*/
|
||||
if (dbenv->mutex_align == 0)
|
||||
dbenv->mutex_align = MUTEX_ALIGN;
|
||||
if (dbenv->mutex_tas_spins == 0) {
|
||||
cpu_count = __os_cpu_count();
|
||||
if ((ret = __mutex_set_tas_spins(dbenv, cpu_count == 1 ?
|
||||
cpu_count : cpu_count * MUTEX_SPINS_PER_PROCESSOR)) != 0)
|
||||
return (ret);
|
||||
}
|
||||
|
||||
/*
|
||||
* If the user didn't set an absolute value on the number of mutexes
|
||||
* we'll need, figure it out. We're conservative in our allocation,
|
||||
* we need mutexes for DB handles, group-commit queues and other things
|
||||
* applications allocate at run-time. The application may have kicked
|
||||
* up our count to allocate its own mutexes, add that in.
|
||||
*/
|
||||
if (dbenv->mutex_cnt == 0)
|
||||
dbenv->mutex_cnt =
|
||||
__lock_region_mutex_count(env) +
|
||||
__log_region_mutex_count(env) +
|
||||
__memp_region_mutex_count(env) +
|
||||
__txn_region_mutex_count(env) +
|
||||
dbenv->mutex_inc + 100;
|
||||
|
||||
/* Create/initialize the mutex manager structure. */
|
||||
if ((ret = __os_calloc(env, 1, sizeof(DB_MUTEXMGR), &mtxmgr)) != 0)
|
||||
return (ret);
|
||||
|
||||
/* Join/create the mutex region. */
|
||||
mtxmgr->reginfo.env = env;
|
||||
mtxmgr->reginfo.type = REGION_TYPE_MUTEX;
|
||||
mtxmgr->reginfo.id = INVALID_REGION_ID;
|
||||
mtxmgr->reginfo.flags = REGION_JOIN_OK;
|
||||
if (create_ok)
|
||||
F_SET(&mtxmgr->reginfo, REGION_CREATE_OK);
|
||||
if ((ret = __env_region_attach(env,
|
||||
&mtxmgr->reginfo, __mutex_region_size(env))) != 0)
|
||||
goto err;
|
||||
|
||||
/* If we created the region, initialize it. */
|
||||
if (F_ISSET(&mtxmgr->reginfo, REGION_CREATE))
|
||||
if ((ret = __mutex_region_init(env, mtxmgr)) != 0)
|
||||
goto err;
|
||||
|
||||
/* Set the local addresses. */
|
||||
mtxregion = mtxmgr->reginfo.primary =
|
||||
R_ADDR(&mtxmgr->reginfo, mtxmgr->reginfo.rp->primary);
|
||||
mtxmgr->mutex_array = R_ADDR(&mtxmgr->reginfo, mtxregion->mutex_off);
|
||||
|
||||
env->mutex_handle = mtxmgr;
|
||||
|
||||
/* Allocate initial queue of mutexes. */
|
||||
if (env->mutex_iq != NULL) {
|
||||
DB_ASSERT(env, F_ISSET(&mtxmgr->reginfo, REGION_CREATE));
|
||||
for (i = 0; i < env->mutex_iq_next; ++i) {
|
||||
if ((ret = __mutex_alloc_int(
|
||||
env, 0, env->mutex_iq[i].alloc_id,
|
||||
env->mutex_iq[i].flags, &mutex)) != 0)
|
||||
goto err;
|
||||
/*
|
||||
* Confirm we allocated the right index, correcting
|
||||
* for avoiding slot 0 (MUTEX_INVALID).
|
||||
*/
|
||||
DB_ASSERT(env, mutex == i + 1);
|
||||
}
|
||||
__os_free(env, env->mutex_iq);
|
||||
env->mutex_iq = NULL;
|
||||
|
||||
/*
|
||||
* This is the first place we can test mutexes and we need to
|
||||
* know if they're working. (They CAN fail, for example on
|
||||
* SunOS, when using fcntl(2) for locking and using an
|
||||
* in-memory filesystem as the database environment directory.
|
||||
* But you knew that, I'm sure -- it probably wasn't worth
|
||||
* mentioning.)
|
||||
*/
|
||||
mutex = MUTEX_INVALID;
|
||||
if ((ret =
|
||||
__mutex_alloc(env, MTX_MUTEX_TEST, 0, &mutex) != 0) ||
|
||||
(ret = __mutex_lock(env, mutex)) != 0 ||
|
||||
(ret = __mutex_unlock(env, mutex)) != 0 ||
|
||||
(ret = __mutex_free(env, &mutex)) != 0) {
|
||||
__db_errx(env,
|
||||
"Unable to acquire/release a mutex; check configuration");
|
||||
goto err;
|
||||
}
|
||||
}
|
||||
|
||||
return (0);
|
||||
|
||||
err: env->mutex_handle = NULL;
|
||||
if (mtxmgr->reginfo.addr != NULL)
|
||||
(void)__env_region_detach(env, &mtxmgr->reginfo, 0);
|
||||
|
||||
__os_free(env, mtxmgr);
|
||||
return (ret);
|
||||
}
|
||||
|
||||
/*
|
||||
* __mutex_region_init --
|
||||
* Initialize a mutex region in shared memory.
|
||||
*/
|
||||
static int
|
||||
__mutex_region_init(env, mtxmgr)
|
||||
ENV *env;
|
||||
DB_MUTEXMGR *mtxmgr;
|
||||
{
|
||||
DB_ENV *dbenv;
|
||||
DB_MUTEX *mutexp;
|
||||
DB_MUTEXREGION *mtxregion;
|
||||
db_mutex_t i;
|
||||
int ret;
|
||||
void *mutex_array;
|
||||
|
||||
dbenv = env->dbenv;
|
||||
|
||||
COMPQUIET(mutexp, NULL);
|
||||
|
||||
if ((ret = __env_alloc(&mtxmgr->reginfo,
|
||||
sizeof(DB_MUTEXREGION), &mtxmgr->reginfo.primary)) != 0) {
|
||||
__db_errx(env,
|
||||
"Unable to allocate memory for the mutex region");
|
||||
return (ret);
|
||||
}
|
||||
mtxmgr->reginfo.rp->primary =
|
||||
R_OFFSET(&mtxmgr->reginfo, mtxmgr->reginfo.primary);
|
||||
mtxregion = mtxmgr->reginfo.primary;
|
||||
memset(mtxregion, 0, sizeof(*mtxregion));
|
||||
|
||||
if ((ret = __mutex_alloc(
|
||||
env, MTX_MUTEX_REGION, 0, &mtxregion->mtx_region)) != 0)
|
||||
return (ret);
|
||||
|
||||
mtxregion->mutex_size = __mutex_align_size(env);
|
||||
|
||||
mtxregion->stat.st_mutex_align = dbenv->mutex_align;
|
||||
mtxregion->stat.st_mutex_cnt = dbenv->mutex_cnt;
|
||||
mtxregion->stat.st_mutex_tas_spins = dbenv->mutex_tas_spins;
|
||||
|
||||
/*
|
||||
* Get a chunk of memory to be used for the mutexes themselves. Each
|
||||
* piece of the memory must be properly aligned, and that alignment
|
||||
* may be more restrictive than the memory alignment returned by the
|
||||
* underlying allocation code. We already know how much memory each
|
||||
* mutex in the array will take up, but we need to offset the first
|
||||
* mutex in the array so the array begins properly aligned.
|
||||
*
|
||||
* The OOB mutex (MUTEX_INVALID) is 0. To make this work, we ignore
|
||||
* the first allocated slot when we build the free list. We have to
|
||||
* correct the count by 1 here, though, otherwise our counter will be
|
||||
* off by 1.
|
||||
*/
|
||||
if ((ret = __env_alloc(&mtxmgr->reginfo,
|
||||
mtxregion->stat.st_mutex_align +
|
||||
(mtxregion->stat.st_mutex_cnt + 1) * mtxregion->mutex_size,
|
||||
&mutex_array)) != 0) {
|
||||
__db_errx(env,
|
||||
"Unable to allocate memory for mutexes from the region");
|
||||
return (ret);
|
||||
}
|
||||
|
||||
mtxregion->mutex_off_alloc = R_OFFSET(&mtxmgr->reginfo, mutex_array);
|
||||
mutex_array = ALIGNP_INC(mutex_array, mtxregion->stat.st_mutex_align);
|
||||
mtxregion->mutex_off = R_OFFSET(&mtxmgr->reginfo, mutex_array);
|
||||
mtxmgr->mutex_array = mutex_array;
|
||||
|
||||
/*
|
||||
* Put the mutexes on a free list and clear the allocated flag.
|
||||
*
|
||||
* The OOB mutex (MUTEX_INVALID) is 0, skip it.
|
||||
*
|
||||
* The comparison is <, not <=, because we're looking ahead one
|
||||
* in each link.
|
||||
*/
|
||||
for (i = 1; i < mtxregion->stat.st_mutex_cnt; ++i) {
|
||||
mutexp = MUTEXP_SET(i);
|
||||
mutexp->flags = 0;
|
||||
mutexp->mutex_next_link = i + 1;
|
||||
}
|
||||
mutexp = MUTEXP_SET(i);
|
||||
mutexp->flags = 0;
|
||||
mutexp->mutex_next_link = MUTEX_INVALID;
|
||||
mtxregion->mutex_next = 1;
|
||||
mtxregion->stat.st_mutex_free = mtxregion->stat.st_mutex_cnt;
|
||||
mtxregion->stat.st_mutex_inuse = mtxregion->stat.st_mutex_inuse_max = 0;
|
||||
|
||||
return (0);
|
||||
}
|
||||
|
||||
/*
|
||||
* __mutex_env_refresh --
|
||||
* Clean up after the mutex region on a close or failed open.
|
||||
*
|
||||
* PUBLIC: int __mutex_env_refresh __P((ENV *));
|
||||
*/
|
||||
int
|
||||
__mutex_env_refresh(env)
|
||||
ENV *env;
|
||||
{
|
||||
DB_MUTEXMGR *mtxmgr;
|
||||
DB_MUTEXREGION *mtxregion;
|
||||
REGINFO *reginfo;
|
||||
int ret;
|
||||
|
||||
mtxmgr = env->mutex_handle;
|
||||
reginfo = &mtxmgr->reginfo;
|
||||
mtxregion = mtxmgr->reginfo.primary;
|
||||
|
||||
/*
|
||||
* If a private region, return the memory to the heap. Not needed for
|
||||
* filesystem-backed or system shared memory regions, that memory isn't
|
||||
* owned by any particular process.
|
||||
*/
|
||||
if (F_ISSET(env, ENV_PRIVATE)) {
|
||||
#ifdef HAVE_MUTEX_SYSTEM_RESOURCES
|
||||
/*
|
||||
* If destroying the mutex region, return any system resources
|
||||
* to the system.
|
||||
*/
|
||||
__mutex_resource_return(env, reginfo);
|
||||
#endif
|
||||
/* Discard the mutex array. */
|
||||
__env_alloc_free(
|
||||
reginfo, R_ADDR(reginfo, mtxregion->mutex_off_alloc));
|
||||
}
|
||||
|
||||
/* Detach from the region. */
|
||||
ret = __env_region_detach(env, reginfo, 0);
|
||||
|
||||
__os_free(env, mtxmgr);
|
||||
|
||||
env->mutex_handle = NULL;
|
||||
|
||||
return (ret);
|
||||
}
|
||||
|
||||
/*
|
||||
* __mutex_align_size --
|
||||
* Return how much memory each mutex will take up if an array of them
|
||||
* are to be properly aligned, individually, within the array.
|
||||
*/
|
||||
static size_t
|
||||
__mutex_align_size(env)
|
||||
ENV *env;
|
||||
{
|
||||
DB_ENV *dbenv;
|
||||
|
||||
dbenv = env->dbenv;
|
||||
|
||||
return ((size_t)DB_ALIGN(sizeof(DB_MUTEX), dbenv->mutex_align));
|
||||
}
|
||||
|
||||
/*
|
||||
* __mutex_region_size --
|
||||
* Return the amount of space needed for the mutex region.
|
||||
*/
|
||||
static size_t
|
||||
__mutex_region_size(env)
|
||||
ENV *env;
|
||||
{
|
||||
DB_ENV *dbenv;
|
||||
size_t s;
|
||||
|
||||
dbenv = env->dbenv;
|
||||
|
||||
s = sizeof(DB_MUTEXMGR) + 1024;
|
||||
|
||||
/* We discard one mutex for the OOB slot. */
|
||||
s += __env_alloc_size(
|
||||
(dbenv->mutex_cnt + 1) *__mutex_align_size(env));
|
||||
|
||||
return (s);
|
||||
}
|
||||
|
||||
#ifdef HAVE_MUTEX_SYSTEM_RESOURCES
|
||||
/*
|
||||
* __mutex_resource_return
|
||||
* Return any system-allocated mutex resources to the system.
|
||||
*
|
||||
* PUBLIC: void __mutex_resource_return __P((ENV *, REGINFO *));
|
||||
*/
|
||||
void
|
||||
__mutex_resource_return(env, infop)
|
||||
ENV *env;
|
||||
REGINFO *infop;
|
||||
{
|
||||
DB_MUTEX *mutexp;
|
||||
DB_MUTEXMGR *mtxmgr, mtxmgr_st;
|
||||
DB_MUTEXREGION *mtxregion;
|
||||
db_mutex_t i;
|
||||
void *orig_handle;
|
||||
|
||||
/*
|
||||
* This routine is called in two cases: when discarding the regions
|
||||
* from a previous Berkeley DB run, during recovery, and two, when
|
||||
* discarding regions as we shut down the database environment.
|
||||
*
|
||||
* Walk the list of mutexes and destroy any live ones.
|
||||
*
|
||||
* This is just like joining a region -- the REGINFO we're handed is
|
||||
* the same as the one returned by __env_region_attach(), all we have
|
||||
* to do is fill in the links.
|
||||
*
|
||||
* !!!
|
||||
* The region may be corrupted, of course. We're safe because the
|
||||
* only things we look at are things that are initialized when the
|
||||
* region is created, and never modified after that.
|
||||
*/
|
||||
memset(&mtxmgr_st, 0, sizeof(mtxmgr_st));
|
||||
mtxmgr = &mtxmgr_st;
|
||||
mtxmgr->reginfo = *infop;
|
||||
mtxregion = mtxmgr->reginfo.primary =
|
||||
R_ADDR(&mtxmgr->reginfo, mtxmgr->reginfo.rp->primary);
|
||||
mtxmgr->mutex_array = R_ADDR(&mtxmgr->reginfo, mtxregion->mutex_off);
|
||||
|
||||
/*
|
||||
* This is a little strange, but the mutex_handle is what all of the
|
||||
* underlying mutex routines will use to determine if they should do
|
||||
* any work and to find their information. Save/restore the handle
|
||||
* around the work loop.
|
||||
*
|
||||
* The OOB mutex (MUTEX_INVALID) is 0, skip it.
|
||||
*/
|
||||
orig_handle = env->mutex_handle;
|
||||
env->mutex_handle = mtxmgr;
|
||||
for (i = 1; i <= mtxregion->stat.st_mutex_cnt; ++i, ++mutexp) {
|
||||
mutexp = MUTEXP_SET(i);
|
||||
if (F_ISSET(mutexp, DB_MUTEX_ALLOCATED))
|
||||
(void)__mutex_destroy(env, i);
|
||||
}
|
||||
env->mutex_handle = orig_handle;
|
||||
}
|
||||
#endif
|
||||
483	mutex/mut_stat.c	Normal file
@@ -0,0 +1,483 @@
/*-
 * See the file LICENSE for redistribution information.
 *
 * Copyright (c) 1996,2008 Oracle.  All rights reserved.
 *
 * $Id: mut_stat.c 63573 2008-05-23 21:43:21Z trent.nelson $
 */

#include "db_config.h"

#include "db_int.h"
#include "dbinc/db_page.h"
#include "dbinc/db_am.h"
#include "dbinc/mutex_int.h"

#ifdef HAVE_STATISTICS
static int __mutex_print_all __P((ENV *, u_int32_t));
static const char *__mutex_print_id __P((int));
static int __mutex_print_stats __P((ENV *, u_int32_t));
static void __mutex_print_summary __P((ENV *));
static int __mutex_stat __P((ENV *, DB_MUTEX_STAT **, u_int32_t));

/*
 * __mutex_stat_pp --
 *	ENV->mutex_stat pre/post processing.
 *
 * PUBLIC: int __mutex_stat_pp __P((DB_ENV *, DB_MUTEX_STAT **, u_int32_t));
 */
int
__mutex_stat_pp(dbenv, statp, flags)
	DB_ENV *dbenv;
	DB_MUTEX_STAT **statp;
	u_int32_t flags;
{
	DB_THREAD_INFO *ip;
	ENV *env;
	int ret;

	env = dbenv->env;

	if ((ret = __db_fchk(env,
	    "DB_ENV->mutex_stat", flags, DB_STAT_CLEAR)) != 0)
		return (ret);

	ENV_ENTER(env, ip);
	REPLICATION_WRAP(env, (__mutex_stat(env, statp, flags)), 0, ret);
	ENV_LEAVE(env, ip);
	return (ret);
}

/*
 * __mutex_stat --
 *	ENV->mutex_stat.
 */
static int
__mutex_stat(env, statp, flags)
	ENV *env;
	DB_MUTEX_STAT **statp;
	u_int32_t flags;
{
	DB_MUTEXMGR *mtxmgr;
	DB_MUTEXREGION *mtxregion;
	DB_MUTEX_STAT *stats;
	int ret;

	*statp = NULL;
	mtxmgr = env->mutex_handle;
	mtxregion = mtxmgr->reginfo.primary;

	if ((ret = __os_umalloc(env, sizeof(DB_MUTEX_STAT), &stats)) != 0)
		return (ret);

	MUTEX_SYSTEM_LOCK(env);

	/*
	 * Most fields are maintained in the underlying region structure.
	 * Region size and region mutex are not.
	 */
	*stats = mtxregion->stat;
	stats->st_regsize = mtxmgr->reginfo.rp->size;
	__mutex_set_wait_info(env, mtxregion->mtx_region,
	    &stats->st_region_wait, &stats->st_region_nowait);
	if (LF_ISSET(DB_STAT_CLEAR))
		__mutex_clear(env, mtxregion->mtx_region);

	MUTEX_SYSTEM_UNLOCK(env);

	*statp = stats;
	return (0);
}

/*
 * __mutex_stat_print_pp --
 *	ENV->mutex_stat_print pre/post processing.
 *
 * PUBLIC: int __mutex_stat_print_pp __P((DB_ENV *, u_int32_t));
 */
int
__mutex_stat_print_pp(dbenv, flags)
	DB_ENV *dbenv;
	u_int32_t flags;
{
	DB_THREAD_INFO *ip;
	ENV *env;
	int ret;

	env = dbenv->env;

	if ((ret = __db_fchk(env, "DB_ENV->mutex_stat_print",
	    flags, DB_STAT_ALL | DB_STAT_CLEAR)) != 0)
		return (ret);

	ENV_ENTER(env, ip);
	REPLICATION_WRAP(env, (__mutex_stat_print(env, flags)), 0, ret);
	ENV_LEAVE(env, ip);
	return (ret);
}

/*
 * __mutex_stat_print
 *	ENV->mutex_stat_print method.
 *
 * PUBLIC: int __mutex_stat_print __P((ENV *, u_int32_t));
 */
int
__mutex_stat_print(env, flags)
	ENV *env;
	u_int32_t flags;
{
	u_int32_t orig_flags;
	int ret;

	orig_flags = flags;
	LF_CLR(DB_STAT_CLEAR | DB_STAT_SUBSYSTEM);
	if (flags == 0 || LF_ISSET(DB_STAT_ALL)) {
		ret = __mutex_print_stats(env, orig_flags);
		__mutex_print_summary(env);
		if (flags == 0 || ret != 0)
			return (ret);
	}

	if (LF_ISSET(DB_STAT_ALL))
		ret = __mutex_print_all(env, orig_flags);

	return (0);
}

static void
__mutex_print_summary(env)
	ENV *env;
{
	DB_MUTEX *mutexp;
	DB_MUTEXMGR *mtxmgr;
	DB_MUTEXREGION *mtxregion;
	db_mutex_t i;
	u_int32_t counts[MTX_MAX_ENTRY + 2];
	int alloc_id;

	mtxmgr = env->mutex_handle;
	mtxregion = mtxmgr->reginfo.primary;
	memset(counts, 0, sizeof(counts));

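	/*
	 * counts[] has two extra slots: slot 0 totals mutexes that are not
	 * currently allocated, and the slot past MTX_MAX_ENTRY catches any
	 * allocation id we don't recognize.
	 */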
	for (i = 1; i <= mtxregion->stat.st_mutex_cnt; ++i, ++mutexp) {
		mutexp = MUTEXP_SET(i);

		if (!F_ISSET(mutexp, DB_MUTEX_ALLOCATED))
			counts[0]++;
		else if (mutexp->alloc_id > MTX_MAX_ENTRY)
			counts[MTX_MAX_ENTRY + 1]++;
		else
			counts[mutexp->alloc_id]++;
	}
	__db_msg(env, "Mutex counts");
	__db_msg(env, "%d\tUnallocated", counts[0]);
	for (alloc_id = 1; alloc_id <= MTX_TXN_REGION + 1; alloc_id++)
		if (counts[alloc_id] != 0)
			__db_msg(env, "%lu\t%s",
			    (u_long)counts[alloc_id],
			    __mutex_print_id(alloc_id));

}

/*
 * __mutex_print_stats --
 *	Display default mutex region statistics.
 */
static int
__mutex_print_stats(env, flags)
	ENV *env;
	u_int32_t flags;
{
	DB_MUTEX_STAT *sp;
	int ret;

	if ((ret = __mutex_stat(env, &sp, LF_ISSET(DB_STAT_CLEAR))) != 0)
		return (ret);

	if (LF_ISSET(DB_STAT_ALL))
		__db_msg(env, "Default mutex region information:");

	__db_dlbytes(env, "Mutex region size",
	    (u_long)0, (u_long)0, (u_long)sp->st_regsize);
	__db_dl_pct(env,
	    "The number of region locks that required waiting",
	    (u_long)sp->st_region_wait, DB_PCT(sp->st_region_wait,
	    sp->st_region_wait + sp->st_region_nowait), NULL);
	STAT_ULONG("Mutex alignment", sp->st_mutex_align);
	STAT_ULONG("Mutex test-and-set spins", sp->st_mutex_tas_spins);
	STAT_ULONG("Mutex total count", sp->st_mutex_cnt);
	STAT_ULONG("Mutex free count", sp->st_mutex_free);
	STAT_ULONG("Mutex in-use count", sp->st_mutex_inuse);
	STAT_ULONG("Mutex maximum in-use count", sp->st_mutex_inuse_max);

	__os_ufree(env, sp);

	return (0);
}

/*
 * __mutex_print_all --
 *	Display debugging mutex region statistics.
 */
static int
__mutex_print_all(env, flags)
	ENV *env;
	u_int32_t flags;
{
	static const FN fn[] = {
		{ DB_MUTEX_ALLOCATED,		"alloc" },
		{ DB_MUTEX_LOCKED,		"locked" },
		{ DB_MUTEX_LOGICAL_LOCK,	"logical" },
		{ DB_MUTEX_PROCESS_ONLY,	"process-private" },
		{ DB_MUTEX_SELF_BLOCK,		"self-block" },
		{ 0,				NULL }
	};
	DB_MSGBUF mb, *mbp;
	DB_MUTEX *mutexp;
	DB_MUTEXMGR *mtxmgr;
	DB_MUTEXREGION *mtxregion;
	db_mutex_t i;

	DB_MSGBUF_INIT(&mb);
	mbp = &mb;

	mtxmgr = env->mutex_handle;
	mtxregion = mtxmgr->reginfo.primary;

	__db_print_reginfo(env, &mtxmgr->reginfo, "Mutex", flags);
	__db_msg(env, "%s", DB_GLOBAL(db_line));

	__db_msg(env, "DB_MUTEXREGION structure:");
	__mutex_print_debug_single(env,
	    "DB_MUTEXREGION region mutex", mtxregion->mtx_region, flags);
	STAT_ULONG("Size of the aligned mutex", mtxregion->mutex_size);
	STAT_ULONG("Next free mutex", mtxregion->mutex_next);

	/*
	 * The OOB mutex (MUTEX_INVALID) is 0, skip it.
	 *
	 * We're not holding the mutex region lock, so we're racing threads of
	 * control allocating mutexes.  That's OK, it just means we display or
	 * clear statistics while mutexes are moving.
	 */
	__db_msg(env, "%s", DB_GLOBAL(db_line));
	__db_msg(env, "mutex\twait/nowait, pct wait, holder, flags");
	for (i = 1; i <= mtxregion->stat.st_mutex_cnt; ++i, ++mutexp) {
		mutexp = MUTEXP_SET(i);

		if (!F_ISSET(mutexp, DB_MUTEX_ALLOCATED))
			continue;

		__db_msgadd(env, mbp, "%5lu\t", (u_long)i);

		__mutex_print_debug_stats(env, mbp, i, flags);

		if (mutexp->alloc_id != 0)
			__db_msgadd(env,
			    mbp, ", %s", __mutex_print_id(mutexp->alloc_id));

		__db_prflags(env, mbp, mutexp->flags, fn, " (", ")");

		DB_MSGBUF_FLUSH(env, mbp);
	}

	return (0);
}

/*
 * __mutex_print_debug_single --
 *	Print mutex internal debugging statistics for a single mutex on a
 *	single output line.
 *
 * PUBLIC: void __mutex_print_debug_single
 * PUBLIC:	__P((ENV *, const char *, db_mutex_t, u_int32_t));
 */
void
__mutex_print_debug_single(env, tag, mutex, flags)
	ENV *env;
	const char *tag;
	db_mutex_t mutex;
	u_int32_t flags;
{
	DB_MSGBUF mb, *mbp;

	DB_MSGBUF_INIT(&mb);
	mbp = &mb;

	if (LF_ISSET(DB_STAT_SUBSYSTEM))
		LF_CLR(DB_STAT_CLEAR);
	__db_msgadd(env, mbp, "%lu\t%s ", (u_long)mutex, tag);
	__mutex_print_debug_stats(env, mbp, mutex, flags);
	DB_MSGBUF_FLUSH(env, mbp);
}

/*
 * __mutex_print_debug_stats --
 *	Print mutex internal debugging statistics, that is, the statistics
 *	in the [] square brackets.
 *
 * PUBLIC: void __mutex_print_debug_stats
 * PUBLIC:	__P((ENV *, DB_MSGBUF *, db_mutex_t, u_int32_t));
 */
void
__mutex_print_debug_stats(env, mbp, mutex, flags)
	ENV *env;
	DB_MSGBUF *mbp;
	db_mutex_t mutex;
	u_int32_t flags;
{
	DB_ENV *dbenv;
	DB_MUTEX *mutexp;
	DB_MUTEXMGR *mtxmgr;
	DB_MUTEXREGION *mtxregion;
	u_long value;
	char buf[DB_THREADID_STRLEN];

	if (mutex == MUTEX_INVALID) {
		__db_msgadd(env, mbp, "[!Set]");
		return;
	}

	dbenv = env->dbenv;
	mtxmgr = env->mutex_handle;
	mtxregion = mtxmgr->reginfo.primary;
	mutexp = MUTEXP_SET(mutex);

	__db_msgadd(env, mbp, "[");
	if ((value = mutexp->mutex_set_wait) < 10000000)
		__db_msgadd(env, mbp, "%lu", value);
	else
		__db_msgadd(env, mbp, "%luM", value / 1000000);
	if ((value = mutexp->mutex_set_nowait) < 10000000)
		__db_msgadd(env, mbp, "/%lu", value);
	else
		__db_msgadd(env, mbp, "/%luM", value / 1000000);
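	/*
	 * Counts of ten million or more are abbreviated above with an "M"
	 * (millions) suffix, keeping the wait/nowait column short.
	 */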

	__db_msgadd(env, mbp, " %d%%",
	    DB_PCT(mutexp->mutex_set_wait,
	    mutexp->mutex_set_wait + mutexp->mutex_set_nowait));

	if (F_ISSET(mutexp, DB_MUTEX_LOCKED))
		__db_msgadd(env, mbp, " %s]",
		    dbenv->thread_id_string(dbenv,
		    mutexp->pid, mutexp->tid, buf));
	else
		__db_msgadd(env, mbp, " !Own]");

	if (LF_ISSET(DB_STAT_CLEAR))
		__mutex_clear(env, mutex);
}

static const char *
__mutex_print_id(alloc_id)
	int alloc_id;
{
	switch (alloc_id) {
	case MTX_APPLICATION:		return ("application allocated");
	case MTX_DB_HANDLE:		return ("db handle");
	case MTX_ENV_DBLIST:		return ("env dblist");
	case MTX_ENV_HANDLE:		return ("env handle");
	case MTX_ENV_REGION:		return ("env region");
	case MTX_LOCK_REGION:		return ("lock region");
	case MTX_LOGICAL_LOCK:		return ("logical lock");
	case MTX_LOG_FILENAME:		return ("log filename");
	case MTX_LOG_FLUSH:		return ("log flush");
	case MTX_LOG_HANDLE:		return ("log handle");
	case MTX_LOG_REGION:		return ("log region");
	case MTX_MPOOLFILE_HANDLE:	return ("mpoolfile handle");
	case MTX_MPOOL_FH:		return ("mpool filehandle");
	case MTX_MPOOL_FILE_BUCKET:	return ("mpool file bucket");
	case MTX_MPOOL_HANDLE:		return ("mpool handle");
	case MTX_MPOOL_HASH_BUCKET:	return ("mpool hash bucket");
	case MTX_MPOOL_IO:		return ("mpool buffer I/O");
	case MTX_MPOOL_REGION:		return ("mpool region");
	case MTX_MUTEX_REGION:		return ("mutex region");
	case MTX_MUTEX_TEST:		return ("mutex test");
	case MTX_REP_CHKPT:		return ("replication checkpoint");
	case MTX_REP_DATABASE:		return ("replication database");
	case MTX_REP_EVENT:		return ("replication event");
	case MTX_REP_REGION:		return ("replication region");
	case MTX_SEQUENCE:		return ("sequence");
	case MTX_TWISTER:		return ("twister");
	case MTX_TXN_ACTIVE:		return ("txn active list");
	case MTX_TXN_CHKPT:		return ("transaction checkpoint");
	case MTX_TXN_COMMIT:		return ("txn commit");
	case MTX_TXN_MVCC:		return ("txn mvcc");
	case MTX_TXN_REGION:		return ("txn region");
	default:			return ("unknown mutex type");
	/* NOTREACHED */
	}
}

/*
 * __mutex_set_wait_info --
 *	Return mutex statistics.
 *
 * PUBLIC: void __mutex_set_wait_info
 * PUBLIC:	__P((ENV *, db_mutex_t, u_int32_t *, u_int32_t *));
 */
void
__mutex_set_wait_info(env, mutex, waitp, nowaitp)
	ENV *env;
	db_mutex_t mutex;
	u_int32_t *waitp, *nowaitp;
{
	DB_MUTEX *mutexp;
	DB_MUTEXMGR *mtxmgr;
	DB_MUTEXREGION *mtxregion;

	mtxmgr = env->mutex_handle;
	mtxregion = mtxmgr->reginfo.primary;
	mutexp = MUTEXP_SET(mutex);

	*waitp = mutexp->mutex_set_wait;
	*nowaitp = mutexp->mutex_set_nowait;
}

/*
 * __mutex_clear --
 *	Clear mutex statistics.
 *
 * PUBLIC: void __mutex_clear __P((ENV *, db_mutex_t));
 */
void
__mutex_clear(env, mutex)
	ENV *env;
	db_mutex_t mutex;
{
	DB_MUTEX *mutexp;
	DB_MUTEXMGR *mtxmgr;
	DB_MUTEXREGION *mtxregion;

	mtxmgr = env->mutex_handle;
	mtxregion = mtxmgr->reginfo.primary;
	mutexp = MUTEXP_SET(mutex);

	mutexp->mutex_set_wait = mutexp->mutex_set_nowait = 0;
}

#else /* !HAVE_STATISTICS */

int
__mutex_stat_pp(dbenv, statp, flags)
	DB_ENV *dbenv;
	DB_MUTEX_STAT **statp;
	u_int32_t flags;
{
	COMPQUIET(statp, NULL);
	COMPQUIET(flags, 0);

	return (__db_stat_not_built(dbenv->env));
}

int
__mutex_stat_print_pp(dbenv, flags)
	DB_ENV *dbenv;
	u_int32_t flags;
{
	COMPQUIET(flags, 0);

	return (__db_stat_not_built(dbenv->env));
}
#endif
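The statistics code above is reached through the DB_ENV mutex-statistics
methods.  The following is only an illustrative caller, not part of the
import: environment setup and error reporting are omitted, and releasing
the snapshot with free() follows the usual Berkeley DB statistics
convention.

#include <stdio.h>
#include <stdlib.h>
#include <db.h>

/* Print a few mutex counters, then let the library print everything. */
static int
dump_mutex_stats(DB_ENV *dbenv)
{
	DB_MUTEX_STAT *sp;
	int ret;

	/* Fetch a snapshot; DB_STAT_CLEAR would also reset the counters. */
	if ((ret = dbenv->mutex_stat(dbenv, &sp, 0)) != 0)
		return (ret);
	printf("mutexes: %lu total, %lu in use (max %lu)\n",
	    (unsigned long)sp->st_mutex_cnt,
	    (unsigned long)sp->st_mutex_inuse,
	    (unsigned long)sp->st_mutex_inuse_max);
	free(sp);

	/* The formatted report, as produced by __mutex_stat_print above. */
	return (dbenv->mutex_stat_print(dbenv, DB_STAT_ALL));
}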
233
mutex/mut_stub.c
Normal file
233
mutex/mut_stub.c
Normal file
@@ -0,0 +1,233 @@
/*-
 * See the file LICENSE for redistribution information.
 *
 * Copyright (c) 1996,2008 Oracle.  All rights reserved.
 *
 * $Id: mut_stub.c 63573 2008-05-23 21:43:21Z trent.nelson $
 */

#ifndef HAVE_MUTEX_SUPPORT
#include "db_config.h"

#include "db_int.h"
#include "dbinc/db_page.h"
#include "dbinc/db_am.h"

/*
 * If the library wasn't compiled with mutex support, various routines
 * aren't available.  Stub them here, returning an appropriate error.
 */
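/*
 * Note that the internal __mutex_alloc and __mutex_free stubs below succeed
 * and hand back MUTEX_INVALID, so internal callers need no special casing;
 * only the user-visible pre/post-processing entry points return DB_OPNOTSUP.
 */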
static int __db_nomutex __P((ENV *));

/*
 * __db_nomutex --
 *	Error when a Berkeley DB build doesn't include mutexes.
 */
static int
__db_nomutex(env)
	ENV *env;
{
	__db_errx(env, "library build did not include support for mutexes");
	return (DB_OPNOTSUP);
}

int
__mutex_alloc_pp(dbenv, flags, indxp)
	DB_ENV *dbenv;
	u_int32_t flags;
	db_mutex_t *indxp;
{
	COMPQUIET(flags, 0);
	COMPQUIET(indxp, NULL);
	return (__db_nomutex(dbenv->env));
}

int
__mutex_alloc(env, alloc_id, flags, indxp)
	ENV *env;
	int alloc_id;
	u_int32_t flags;
	db_mutex_t *indxp;
{
	COMPQUIET(env, NULL);
	COMPQUIET(alloc_id, 0);
	COMPQUIET(flags, 0);
	*indxp = MUTEX_INVALID;
	return (0);
}

void
__mutex_clear(env, mutex)
	ENV *env;
	db_mutex_t mutex;
{
	COMPQUIET(env, NULL);
	COMPQUIET(mutex, MUTEX_INVALID);
}

int
__mutex_free_pp(dbenv, indx)
	DB_ENV *dbenv;
	db_mutex_t indx;
{
	COMPQUIET(indx, 0);
	return (__db_nomutex(dbenv->env));
}

int
__mutex_free(env, indxp)
	ENV *env;
	db_mutex_t *indxp;
{
	COMPQUIET(env, NULL);
	*indxp = MUTEX_INVALID;
	return (0);
}

int
__mutex_get_align(dbenv, alignp)
	DB_ENV *dbenv;
	u_int32_t *alignp;
{
	COMPQUIET(alignp, NULL);
	return (__db_nomutex(dbenv->env));
}

int
__mutex_get_increment(dbenv, incrementp)
	DB_ENV *dbenv;
	u_int32_t *incrementp;
{
	COMPQUIET(incrementp, NULL);
	return (__db_nomutex(dbenv->env));
}

int
__mutex_get_max(dbenv, maxp)
	DB_ENV *dbenv;
	u_int32_t *maxp;
{
	COMPQUIET(maxp, NULL);
	return (__db_nomutex(dbenv->env));
}

int
__mutex_get_tas_spins(dbenv, tas_spinsp)
	DB_ENV *dbenv;
	u_int32_t *tas_spinsp;
{
	COMPQUIET(tas_spinsp, NULL);
	return (__db_nomutex(dbenv->env));
}

int
__mutex_lock_pp(dbenv, indx)
	DB_ENV *dbenv;
	db_mutex_t indx;
{
	COMPQUIET(indx, 0);
	return (__db_nomutex(dbenv->env));
}

void
__mutex_print_debug_single(env, tag, mutex, flags)
	ENV *env;
	const char *tag;
	db_mutex_t mutex;
	u_int32_t flags;
{
	COMPQUIET(env, NULL);
	COMPQUIET(tag, NULL);
	COMPQUIET(mutex, MUTEX_INVALID);
	COMPQUIET(flags, 0);
}

void
__mutex_print_debug_stats(env, mbp, mutex, flags)
	ENV *env;
	DB_MSGBUF *mbp;
	db_mutex_t mutex;
	u_int32_t flags;
{
	COMPQUIET(env, NULL);
	COMPQUIET(mbp, NULL);
	COMPQUIET(mutex, MUTEX_INVALID);
	COMPQUIET(flags, 0);
}

int
__mutex_set_align(dbenv, align)
	DB_ENV *dbenv;
	u_int32_t align;
{
	COMPQUIET(align, 0);
	return (__db_nomutex(dbenv->env));
}

int
__mutex_set_increment(dbenv, increment)
	DB_ENV *dbenv;
	u_int32_t increment;
{
	COMPQUIET(increment, 0);
	return (__db_nomutex(dbenv->env));
}

int
__mutex_set_max(dbenv, max)
	DB_ENV *dbenv;
	u_int32_t max;
{
	COMPQUIET(max, 0);
	return (__db_nomutex(dbenv->env));
}

int
__mutex_set_tas_spins(dbenv, tas_spins)
	DB_ENV *dbenv;
	u_int32_t tas_spins;
{
	COMPQUIET(tas_spins, 0);
	return (__db_nomutex(dbenv->env));
}

void
__mutex_set_wait_info(env, mutex, waitp, nowaitp)
	ENV *env;
	db_mutex_t mutex;
	u_int32_t *waitp, *nowaitp;
{
	COMPQUIET(env, NULL);
	COMPQUIET(mutex, MUTEX_INVALID);
	*waitp = *nowaitp = 0;
}

int
__mutex_stat_pp(dbenv, statp, flags)
	DB_ENV *dbenv;
	DB_MUTEX_STAT **statp;
	u_int32_t flags;
{
	COMPQUIET(statp, NULL);
	COMPQUIET(flags, 0);
	return (__db_nomutex(dbenv->env));
}

int
__mutex_stat_print_pp(dbenv, flags)
	DB_ENV *dbenv;
	u_int32_t flags;
{
	COMPQUIET(flags, 0);
	return (__db_nomutex(dbenv->env));
}

int
__mutex_unlock_pp(dbenv, indx)
	DB_ENV *dbenv;
	db_mutex_t indx;
{
	COMPQUIET(indx, 0);
	return (__db_nomutex(dbenv->env));
}
#endif /* !HAVE_MUTEX_SUPPORT */
293
mutex/mut_tas.c
Normal file
293
mutex/mut_tas.c
Normal file
@@ -0,0 +1,293 @@
/*-
 * See the file LICENSE for redistribution information.
 *
 * Copyright (c) 1996,2008 Oracle.  All rights reserved.
 *
 * $Id: mut_tas.c 63573 2008-05-23 21:43:21Z trent.nelson $
 */

#include "db_config.h"

#include "db_int.h"

/*
 * This is where we load in architecture/compiler specific mutex code.
 */
#define	LOAD_ACTUAL_MUTEX_CODE
#include "dbinc/mutex_int.h"

/*
 * __db_tas_mutex_init --
 *	Initialize a test-and-set mutex.
 *
 * PUBLIC: int __db_tas_mutex_init __P((ENV *, db_mutex_t, u_int32_t));
 */
int
__db_tas_mutex_init(env, mutex, flags)
	ENV *env;
	db_mutex_t mutex;
	u_int32_t flags;
{
	DB_ENV *dbenv;
	DB_MUTEX *mutexp;
	DB_MUTEXMGR *mtxmgr;
	DB_MUTEXREGION *mtxregion;
	int ret;

	COMPQUIET(flags, 0);

	dbenv = env->dbenv;
	mtxmgr = env->mutex_handle;
	mtxregion = mtxmgr->reginfo.primary;
	mutexp = MUTEXP_SET(mutex);

	/* Check alignment. */
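	/*
	 * The mask test below assumes dbenv->mutex_align is a power of two,
	 * which DB_ENV->mutex_set_align requires.
	 */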
	if (((uintptr_t)mutexp & (dbenv->mutex_align - 1)) != 0) {
		__db_errx(env, "TAS: mutex not appropriately aligned");
		return (EINVAL);
	}

	if (MUTEX_INIT(&mutexp->tas)) {
		ret = __os_get_syserr();
		__db_syserr(env, ret, "TAS: mutex initialize");
		return (__os_posix_err(ret));
	}
#ifdef HAVE_MUTEX_HYBRID
	if ((ret = __db_pthread_mutex_init(env,
	    mutex, flags | DB_MUTEX_SELF_BLOCK)) != 0)
		return (ret);
#endif
	return (0);
}

/*
 * __db_tas_mutex_lock
 *	Lock on a mutex, blocking if necessary.
 *
 * PUBLIC: int __db_tas_mutex_lock __P((ENV *, db_mutex_t));
 */
int
__db_tas_mutex_lock(env, mutex)
	ENV *env;
	db_mutex_t mutex;
{
	DB_ENV *dbenv;
	DB_MUTEX *mutexp;
	DB_MUTEXMGR *mtxmgr;
	DB_MUTEXREGION *mtxregion;
	u_int32_t nspins;
#ifdef HAVE_MUTEX_HYBRID
	int ret;
#else
	u_long ms, max_ms;
#endif
	dbenv = env->dbenv;

	if (!MUTEX_ON(env) || F_ISSET(dbenv, DB_ENV_NOLOCKING))
		return (0);

	mtxmgr = env->mutex_handle;
	mtxregion = mtxmgr->reginfo.primary;
	mutexp = MUTEXP_SET(mutex);

	CHECK_MTX_THREAD(env, mutexp);

#ifdef HAVE_STATISTICS
	if (F_ISSET(mutexp, DB_MUTEX_LOCKED))
		++mutexp->mutex_set_wait;
	else
		++mutexp->mutex_set_nowait;
#endif

#ifndef HAVE_MUTEX_HYBRID
	/*
	 * Wait 1ms initially, up to 10ms for mutexes backing logical database
	 * locks, and up to 25 ms for mutual exclusion data structure mutexes.
	 * SR: #7675
	 */
	ms = 1;
	max_ms = F_ISSET(mutexp, DB_MUTEX_LOGICAL_LOCK) ? 10 : 25;
#endif

loop:	/* Attempt to acquire the resource for N spins. */
	for (nspins =
	    mtxregion->stat.st_mutex_tas_spins; nspins > 0; --nspins) {
#ifdef HAVE_MUTEX_HPPA_MSEM_INIT
relock:
#endif
#ifdef HAVE_MUTEX_S390_CC_ASSEMBLY
		tsl_t zero = 0;
#endif
		/*
		 * Avoid interlocked instructions until they're likely to
		 * succeed.
		 */
		if (F_ISSET(mutexp, DB_MUTEX_LOCKED) ||
		    !MUTEX_SET(&mutexp->tas)) {
			/*
			 * Some systems (notably those with newer Intel CPUs)
			 * need a small pause here.  [#6975]
			 */
#ifdef MUTEX_PAUSE
			MUTEX_PAUSE
#endif
			continue;
		}

#ifdef HAVE_MUTEX_HPPA_MSEM_INIT
		/*
		 * HP semaphores are unlocked automatically when a holding
		 * process exits.  If the mutex appears to be locked
		 * (F_ISSET(DB_MUTEX_LOCKED)) but we got here, assume this
		 * has happened.  Set the pid and tid into the mutex and
		 * lock again.  (The default state of the mutexes used to
		 * block in __lock_get_internal is locked, so exiting with
		 * a locked mutex is reasonable behavior for a process that
		 * happened to initialize or use one of them.)
		 */
		if (F_ISSET(mutexp, DB_MUTEX_LOCKED)) {
			F_SET(mutexp, DB_MUTEX_LOCKED);
			dbenv->thread_id(dbenv, &mutexp->pid, &mutexp->tid);
			goto relock;
		}
		/*
		 * If we make it here, the mutex isn't locked, the diagnostic
		 * won't fire, and we were really unlocked by someone calling
		 * the DB mutex unlock function.
		 */
#endif
#ifdef DIAGNOSTIC
		if (F_ISSET(mutexp, DB_MUTEX_LOCKED)) {
			char buf[DB_THREADID_STRLEN];
			__db_errx(env,
			    "TAS lock failed: lock currently in use: ID: %s",
			    dbenv->thread_id_string(dbenv,
			    mutexp->pid, mutexp->tid, buf));
			return (__env_panic(env, EACCES));
		}
#endif
		F_SET(mutexp, DB_MUTEX_LOCKED);
		dbenv->thread_id(dbenv, &mutexp->pid, &mutexp->tid);

#ifdef DIAGNOSTIC
		/*
		 * We want to switch threads as often as possible.  Yield
		 * every time we get a mutex to ensure contention.
		 */
		if (F_ISSET(dbenv, DB_ENV_YIELDCPU))
			__os_yield(env, 0, 0);
#endif
		return (0);
	}

	/* Wait for the lock to become available. */
#ifdef HAVE_MUTEX_HYBRID
	/*
	 * By yielding here we can get the other thread to give up the
	 * mutex before calling the more expensive library mutex call.
	 * Tests have shown this to be a big win when there is contention.
	 */
	__os_yield(env, 0, 0);
	if (!F_ISSET(mutexp, DB_MUTEX_LOCKED))
		goto loop;
	if ((ret = __db_pthread_mutex_lock(env, mutex)) != 0)
		return (ret);
#else
	__os_yield(env, 0, ms * US_PER_MS);
	if ((ms <<= 1) > max_ms)
		ms = max_ms;
#endif

	/*
	 * We're spinning.  The environment might be hung, and somebody else
	 * has already recovered it.  The first thing recovery does is panic
	 * the environment.  Check to see if we're never going to get this
	 * mutex.
	 */
	PANIC_CHECK(env);

	goto loop;
}

/*
 * __db_tas_mutex_unlock --
 *	Release a mutex.
 *
 * PUBLIC: int __db_tas_mutex_unlock __P((ENV *, db_mutex_t));
 */
int
__db_tas_mutex_unlock(env, mutex)
	ENV *env;
	db_mutex_t mutex;
{
	DB_ENV *dbenv;
	DB_MUTEX *mutexp;
	DB_MUTEXMGR *mtxmgr;
	DB_MUTEXREGION *mtxregion;
#ifdef HAVE_MUTEX_HYBRID
	int ret;
#endif
	dbenv = env->dbenv;

	if (!MUTEX_ON(env) || F_ISSET(dbenv, DB_ENV_NOLOCKING))
		return (0);

	mtxmgr = env->mutex_handle;
	mtxregion = mtxmgr->reginfo.primary;
	mutexp = MUTEXP_SET(mutex);

#ifdef DIAGNOSTIC
	if (!F_ISSET(mutexp, DB_MUTEX_LOCKED)) {
		__db_errx(env, "TAS unlock failed: lock already unlocked");
		return (__env_panic(env, EACCES));
	}
#endif

	F_CLR(mutexp, DB_MUTEX_LOCKED);
#ifdef HAVE_MUTEX_HYBRID
	MUTEX_MEMBAR(mutexp->flags);
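	/*
	 * The barrier makes the cleared DB_MUTEX_LOCKED flag visible before
	 * we look at the waiter count and wake any blocked threads.
	 */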

	if (mutexp->wait &&
	    (ret = __db_pthread_mutex_unlock(env, mutex)) != 0)
		return (ret);
#endif
	MUTEX_UNSET(&mutexp->tas);

	return (0);
}

/*
 * __db_tas_mutex_destroy --
 *	Destroy a mutex.
 *
 * PUBLIC: int __db_tas_mutex_destroy __P((ENV *, db_mutex_t));
 */
int
__db_tas_mutex_destroy(env, mutex)
	ENV *env;
	db_mutex_t mutex;
{
	DB_MUTEX *mutexp;
	DB_MUTEXMGR *mtxmgr;
	DB_MUTEXREGION *mtxregion;
#ifdef HAVE_MUTEX_HYBRID
	int ret;
#endif

	if (!MUTEX_ON(env))
		return (0);

	mtxmgr = env->mutex_handle;
	mtxregion = mtxmgr->reginfo.primary;
	mutexp = MUTEXP_SET(mutex);

	MUTEX_DESTROY(&mutexp->tas);

#ifdef HAVE_MUTEX_HYBRID
	if ((ret = __db_pthread_mutex_destroy(env, mutex)) != 0)
		return (ret);
#endif

	COMPQUIET(mutexp, NULL);	/* MUTEX_DESTROY may not be defined. */
	return (0);
}
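The locking loop above combines bounded spinning with a sleeping backoff.
The following is only a minimal, self-contained sketch of that shape, using
C11 atomics and usleep() instead of the portable tsl_t/MUTEX_SET machinery
and __os_yield(); the spin count is an illustrative stand-in for
st_mutex_tas_spins.

#include <stdatomic.h>
#include <unistd.h>

#define	SKETCH_SPINS	50	/* stand-in for st_mutex_tas_spins */

/* Acquire: spin, then sleep 1ms, 2ms, 4ms, ... capped at 25ms, and retry. */
static void
sketch_lock(volatile atomic_flag *tas)
{
	unsigned long ms, max_ms;
	int nspins;

	ms = 1;
	max_ms = 25;
	for (;;) {
		for (nspins = SKETCH_SPINS; nspins > 0; --nspins)
			if (!atomic_flag_test_and_set_explicit(
			    tas, memory_order_acquire))
				return;		/* Acquired. */
		usleep(ms * 1000);
		if ((ms <<= 1) > max_ms)
			ms = max_ms;
	}
}

/*
 * Release: clear the flag with release ordering so the next owner sees all
 * writes made while the lock was held.
 */
static void
sketch_unlock(volatile atomic_flag *tas)
{
	atomic_flag_clear_explicit(tas, memory_order_release);
}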
314
mutex/mut_win32.c
Normal file
314
mutex/mut_win32.c
Normal file
@@ -0,0 +1,314 @@
/*
 * See the file LICENSE for redistribution information.
 *
 * Copyright (c) 2002,2008 Oracle.  All rights reserved.
 *
 * $Id: mut_win32.c 63573 2008-05-23 21:43:21Z trent.nelson $
 */

#include "db_config.h"

#include "db_int.h"

/*
 * This is where we load in the actual test-and-set mutex code.
 */
#define	LOAD_ACTUAL_MUTEX_CODE
#include "dbinc/mutex_int.h"

/* We don't want to run this code even in "ordinary" diagnostic mode. */
#undef MUTEX_DIAG

/*
 * Common code to get an event handle.  This is executed whenever a mutex
 * blocks, or when unlocking a mutex that a thread is waiting on.  We can't
 * keep these handles around, since the mutex structure is in shared memory,
 * and each process gets its own handle value.
 *
 * We pass security attributes so that the created event is accessible by all
 * users, in case a Windows service is sharing an environment with a local
 * process run as a different user.
 */
static _TCHAR hex_digits[] = _T("0123456789abcdef");
static SECURITY_DESCRIPTOR null_sd;
static SECURITY_ATTRIBUTES all_sa;
static int security_initialized = 0;

static __inline int get_handle(env, mutexp, eventp)
	ENV *env;
	DB_MUTEX *mutexp;
	HANDLE *eventp;
{
	_TCHAR idbuf[] = _T("db.m00000000");
	_TCHAR *p = idbuf + 12;
	int ret = 0;
	u_int32_t id;

	for (id = (mutexp)->id; id != 0; id >>= 4)
		*--p = hex_digits[id & 0xf];
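	/*
	 * The loop above writes the mutex's id backwards into the
	 * "db.m00000000" template as hex digits, so every process mapping
	 * this environment derives the same event name for a given mutex.
	 */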

#ifndef DB_WINCE
	if (!security_initialized) {
		InitializeSecurityDescriptor(&null_sd,
		    SECURITY_DESCRIPTOR_REVISION);
		SetSecurityDescriptorDacl(&null_sd, TRUE, 0, FALSE);
		all_sa.nLength = sizeof(SECURITY_ATTRIBUTES);
		all_sa.bInheritHandle = FALSE;
		all_sa.lpSecurityDescriptor = &null_sd;
		security_initialized = 1;
	}
#endif

	if ((*eventp = CreateEvent(&all_sa, FALSE, FALSE, idbuf)) == NULL) {
		ret = __os_get_syserr();
		__db_syserr(env, ret, "Win32 create event failed");
	}

	return (ret);
}

/*
 * __db_win32_mutex_init --
 *	Initialize a Win32 mutex.
 *
 * PUBLIC: int __db_win32_mutex_init __P((ENV *, db_mutex_t, u_int32_t));
 */
int
__db_win32_mutex_init(env, mutex, flags)
	ENV *env;
	db_mutex_t mutex;
	u_int32_t flags;
{
	DB_MUTEX *mutexp;
	DB_MUTEXMGR *mtxmgr;
	DB_MUTEXREGION *mtxregion;

	mtxmgr = env->mutex_handle;
	mtxregion = mtxmgr->reginfo.primary;
	mutexp = MUTEXP_SET(mutex);

	mutexp->id = ((getpid() & 0xffff) << 16) ^ P_TO_UINT32(mutexp);

	return (0);
}

/*
 * __db_win32_mutex_lock
 *	Lock on a mutex, blocking if necessary.
 *
 * PUBLIC: int __db_win32_mutex_lock __P((ENV *, db_mutex_t));
 */
int
__db_win32_mutex_lock(env, mutex)
	ENV *env;
	db_mutex_t mutex;
{
	DB_ENV *dbenv;
	DB_MUTEX *mutexp;
	DB_MUTEXMGR *mtxmgr;
	DB_MUTEXREGION *mtxregion;
	HANDLE event;
	u_int32_t nspins;
	int ms, ret;
#ifdef MUTEX_DIAG
	LARGE_INTEGER now;
#endif
#ifdef DB_WINCE
	volatile db_threadid_t tmp_tid;
#endif
	dbenv = env->dbenv;

	if (!MUTEX_ON(env) || F_ISSET(dbenv, DB_ENV_NOLOCKING))
		return (0);

	mtxmgr = env->mutex_handle;
	mtxregion = mtxmgr->reginfo.primary;
	mutexp = MUTEXP_SET(mutex);

	CHECK_MTX_THREAD(env, mutexp);

	event = NULL;
	ms = 50;
	ret = 0;

loop:	/* Attempt to acquire the resource for N spins. */
	for (nspins =
	    mtxregion->stat.st_mutex_tas_spins; nspins > 0; --nspins) {
		/*
		 * We can avoid the (expensive) interlocked instructions if
		 * the mutex is already "set".
		 */
#ifdef DB_WINCE
		/*
		 * Memory mapped regions on Windows CE cause problems with
		 * InterlockedExchange calls.  Each page in a mapped region
		 * needs to have been written to prior to an
		 * InterlockedExchange call, or the InterlockedExchange call
		 * hangs.  This does not seem to be documented anywhere.  For
		 * now, read/write a non-critical piece of memory from the
		 * shared region prior to attempting an InterlockedExchange
		 * operation.
		 */
		tmp_tid = mutexp->tid;
		mutexp->tid = tmp_tid;
#endif
		if (mutexp->tas || !MUTEX_SET(&mutexp->tas)) {
			/*
			 * Some systems (notably those with newer Intel CPUs)
			 * need a small pause here.  [#6975]
			 */
#ifdef MUTEX_PAUSE
			MUTEX_PAUSE
#endif
			continue;
		}

#ifdef DIAGNOSTIC
		if (F_ISSET(mutexp, DB_MUTEX_LOCKED)) {
			char buf[DB_THREADID_STRLEN];
			__db_errx(env,
			    "Win32 lock failed: mutex already locked by %s",
			    dbenv->thread_id_string(dbenv,
			    mutexp->pid, mutexp->tid, buf));
			return (__env_panic(env, EACCES));
		}
#endif
		F_SET(mutexp, DB_MUTEX_LOCKED);
		dbenv->thread_id(dbenv, &mutexp->pid, &mutexp->tid);

#ifdef HAVE_STATISTICS
		if (event == NULL)
			++mutexp->mutex_set_nowait;
		else
			++mutexp->mutex_set_wait;
#endif
		if (event != NULL) {
			CloseHandle(event);
			InterlockedDecrement(&mutexp->nwaiters);
#ifdef MUTEX_DIAG
			if (ret != WAIT_OBJECT_0) {
				QueryPerformanceCounter(&now);
				printf("[%I64d]: Lost signal on mutex %p, "
				    "id %d, ms %d\n",
				    now.QuadPart, mutexp, mutexp->id, ms);
			}
#endif
		}

#ifdef DIAGNOSTIC
		/*
		 * We want to switch threads as often as possible.  Yield
		 * every time we get a mutex to ensure contention.
		 */
		if (F_ISSET(dbenv, DB_ENV_YIELDCPU))
			__os_yield(env, 0, 0);
#endif

		return (0);
	}

	/*
	 * Yield the processor; wait 50 ms initially, up to 1 second.  This
	 * loop is needed to work around a race where the signal from the
	 * unlocking thread gets lost.  We start at 50 ms because it's unlikely
	 * to happen often and we want to avoid wasting CPU.
	 */
	if (event == NULL) {
#ifdef MUTEX_DIAG
		QueryPerformanceCounter(&now);
		printf("[%I64d]: Waiting on mutex %p, id %d\n",
		    now.QuadPart, mutexp, mutexp->id);
#endif
		InterlockedIncrement(&mutexp->nwaiters);
		if ((ret = get_handle(env, mutexp, &event)) != 0)
			goto err;
	}
	if ((ret = WaitForSingleObject(event, ms)) == WAIT_FAILED) {
		ret = __os_get_syserr();
		goto err;
	}
	if ((ms <<= 1) > MS_PER_SEC)
		ms = MS_PER_SEC;

	PANIC_CHECK(env);
	goto loop;

err:	__db_syserr(env, ret, "Win32 lock failed");
	return (__env_panic(env, __os_posix_err(ret)));
}

/*
 * __db_win32_mutex_unlock --
 *	Release a mutex.
 *
 * PUBLIC: int __db_win32_mutex_unlock __P((ENV *, db_mutex_t));
 */
int
__db_win32_mutex_unlock(env, mutex)
	ENV *env;
	db_mutex_t mutex;
{
	DB_ENV *dbenv;
	DB_MUTEX *mutexp;
	DB_MUTEXMGR *mtxmgr;
	DB_MUTEXREGION *mtxregion;
	HANDLE event;
	int ret;
#ifdef MUTEX_DIAG
	LARGE_INTEGER now;
#endif
	dbenv = env->dbenv;

	if (!MUTEX_ON(env) || F_ISSET(dbenv, DB_ENV_NOLOCKING))
		return (0);

	mtxmgr = env->mutex_handle;
	mtxregion = mtxmgr->reginfo.primary;
	mutexp = MUTEXP_SET(mutex);

#ifdef DIAGNOSTIC
	if (!mutexp->tas || !F_ISSET(mutexp, DB_MUTEX_LOCKED)) {
		__db_errx(env, "Win32 unlock failed: lock already unlocked");
		return (__env_panic(env, EACCES));
	}
#endif
	F_CLR(mutexp, DB_MUTEX_LOCKED);
	MUTEX_UNSET(&mutexp->tas);

	if (mutexp->nwaiters > 0) {
		if ((ret = get_handle(env, mutexp, &event)) != 0)
			goto err;

#ifdef MUTEX_DIAG
		QueryPerformanceCounter(&now);
		printf("[%I64d]: Signalling mutex %p, id %d\n",
		    now.QuadPart, mutexp, mutexp->id);
#endif
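		/*
		 * PulseEvent only wakes threads already blocked on the
		 * event; a waiter that hasn't reached WaitForSingleObject
		 * yet misses the signal, which is why the lock side above
		 * retries with a bounded, doubling timeout.
		 */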
		if (!PulseEvent(event)) {
			ret = __os_get_syserr();
			CloseHandle(event);
			goto err;
		}

		CloseHandle(event);
	}

	return (0);

err:	__db_syserr(env, ret, "Win32 unlock failed");
	return (__env_panic(env, __os_posix_err(ret)));
}

/*
 * __db_win32_mutex_destroy --
 *	Destroy a mutex.
 *
 * PUBLIC: int __db_win32_mutex_destroy __P((ENV *, db_mutex_t));
 */
int
__db_win32_mutex_destroy(env, mutex)
	ENV *env;
	db_mutex_t mutex;
{
	return (0);
}
1042
mutex/test_mutex.c
Normal file
1042
mutex/test_mutex.c
Normal file
File diff suppressed because it is too large
26
mutex/uts4_cc.s
Normal file
26
mutex/uts4_cc.s
Normal file
@@ -0,0 +1,26 @@
/ See the file LICENSE for redistribution information.
/
/ Copyright (c) 1997,2008 Oracle.  All rights reserved.
/
/ $Id: uts4_cc.s,v 12.6 2008/01/08 20:58:43 bostic Exp $
/
/ int uts_lock ( int *p, int i );
/	Update the lock word pointed to by p with the
/	value i, using compare-and-swap.
/	Returns 0 if update was successful.
/	Returns 1 if update failed.
/
	entry	uts_lock
uts_lock:
	using	.,r15
	st	r2,8(sp)	/ Save R2
	l	r2,64+0(sp)	/ R2 -> word to update
	slr	r0, r0		/ R0 = current lock value must be 0
	l	r1,64+4(sp)	/ R1 = new lock value
	cs	r0,r1,0(r2)	/ Try the update ...
	be	x		/ ... Success.  Return 0
	la	r0,1		/ ... Failure.  Return 1
x:	/
	l	r2,8(sp)	/ Restore R2
	b	2(,r14)		/ Return to caller
	drop	r15