mirror of
https://gitee.com/bianbu-linux/linux-6.6
synced 2025-04-24 14:07:52 -04:00
atomic_add_negative() does not provide the relaxed/acquire/release
variants.

Provide them in preparation for a new scalable reference count algorithm.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20230323102800.101763813@linutronix.de
15 lines · 435 B · Text · Executable file
cat <<EOF
/**
 * arch_${atomic}_add_negative${order} - Add and test if negative
 * @i: integer value to add
 * @v: pointer of type ${atomic}_t
 *
 * Atomically adds @i to @v and returns true if the result is negative,
 * or false when the result is greater than or equal to zero.
 */
static __always_inline bool
arch_${atomic}_add_negative${order}(${int} i, ${atomic}_t *v)
{
	return arch_${atomic}_add_return${order}(i, v) < 0;
}
EOF