
spin_lock_irqsave+sched_lock #14578

Open · hujun260 wants to merge 3 commits into master from apache_3
Conversation

@hujun260 hujun260 (Contributor) commented Oct 31, 2024

Summary

1. Accelerate the implementation of sched_lock: remove enter_critical_section from sched_lock and
only enter the critical section when task scheduling is actually required (a sketch of this idea follows below).
2. Add sched_lock_wo_note/sched_unlock_wo_note, which behave the same but skip the instrumentation logic.
3. We aim to replace big locks with smaller ones, so spin_lock_irqsave will be used extensively to
replace enter_critical_section in subsequent work. Following the Linux implementation, sched_lock is
added to spin_lock_irqsave in order to address scenarios where sem_post occurs while a spinlock is
held, which can otherwise lead to spinlock failures and deadlocks.
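
To make items 1 and 2 concrete, here is a minimal sketch of the idea, not the actual patch: the demo_* names and the exact unlock path are assumptions; only tcb->lockcount, this_task(), up_interrupt_context() and enter/leave_critical_section() are existing NuttX interfaces.

#include <nuttx/irq.h>
#include <nuttx/sched.h>

void demo_sched_lock(void)
{
  FAR struct tcb_s *tcb = this_task();

  if (tcb != NULL && !up_interrupt_context())
    {
      tcb->lockcount++;               /* Fast path: no critical section */
    }
}

void demo_sched_unlock(void)
{
  FAR struct tcb_s *tcb = this_task();
  irqstate_t flags;

  if (tcb == NULL || up_interrupt_context())
    {
      return;
    }

  if (--tcb->lockcount == 0)
    {
      /* Only now is the critical section taken, so that any preemption
       * deferred while the scheduler was locked can be handled.  A real
       * implementation would also check whether a reschedule is pending.
       */

      flags = enter_critical_section();
      /* ...resume RR/sporadic scheduling and reschedule if needed... */
      leave_critical_section(flags);
    }
}

/* The sched_lock_wo_note()/sched_unlock_wo_note() variants from item 2 would
 * be identical except that they skip the critmon/instrumentation hooks.
 */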

The entire implementation consists of three steps:

1. spin_lock_irqsave + sched_lock
2. spin_lock/rw/spin_trylock + sched_lock
3. enter_critical_section + sched_lock

This PR implements the first step; a sketch of the step 1 idea is shown below.
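
A minimal sketch of step 1 (illustrative only, not the actual patch): the demo_* wrapper names are hypothetical stand-ins for the modified spin_lock_irqsave()/spin_unlock_irqrestore(), and they use the *_wo_note helpers introduced by this PR. The point is that a thread holding a spinlock also holds the scheduler lock, so a sem_post() issued while the lock is held cannot suspend the holder and leave other CPUs spinning forever.

#include <nuttx/irq.h>
#include <nuttx/spinlock.h>

static inline irqstate_t demo_spin_lock_irqsave(FAR volatile spinlock_t *lock)
{
  irqstate_t flags = up_irq_save();  /* Mask interrupts on this CPU */

  sched_lock_wo_note();              /* Holder must not be suspended */
  spin_lock(lock);                   /* Busy-wait for the inter-CPU lock */
  return flags;
}

static inline void demo_spin_unlock_irqrestore(FAR volatile spinlock_t *lock,
                                               irqstate_t flags)
{
  spin_unlock(lock);                 /* Release the inter-CPU lock */
  sched_unlock_wo_note();            /* Any deferred reschedule happens here */
  up_irq_restore(flags);             /* Restore this CPU's interrupt state */
}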

Impact

spinlock and sched_lock

Testing

Build Host:

  • OS: Ubuntu 20.04
  • CPU: x86_64
  • Compiler: GCC 9.4.0

Configure NuttX and compile:
$ ./tools/configure.sh -l qemu-armv8a:nsh_smp
$ make

Run with QEMU:
$ qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic \
    -machine virt,virtualization=on,gic-version=3 \
    -net none -chardev stdio,id=con,mux=on -serial chardev:con \
    -mon chardev=con,mode=readline -kernel ./nuttx

@github-actions bot added the 'Area: OS Components' and 'Size: L' labels Oct 31, 2024
@github-actions bot added the 'Arch: xtensa' label Oct 31, 2024
Resolved review threads: include/nuttx/spinlock.h, sched/sched/sched.h, include/sched.h, sched/sched/sched_lock.c, sched/sched/sched_unlock.c
@xiaoxiang781216 xiaoxiang781216 linked an issue Nov 18, 2024 that may be closed by this pull request
@xiaoxiang781216 (Contributor)

@patacongo please review this patch, which fixes the long-standing issue with sched_lock.

Resolved review threads: sched/sched/sched_lock.c
@hujun260 hujun260 force-pushed the apache_3 branch 2 times, most recently from 87f9b88 to cde599d Compare November 20, 2024 01:38
@github-actions bot added the 'Arch: risc-v' label Nov 20, 2024
@hujun260 hujun260 force-pushed the apache_3 branch 4 times, most recently from 72f29e9 to c2c8fb5 Compare November 25, 2024 23:33
@github-actions bot added the 'Arch: risc-v' label Nov 25, 2024
@hujun260 hujun260 force-pushed the apache_3 branch 2 times, most recently from ffa975d to 2f8fec9 Compare November 26, 2024 01:28
@@ -41,10 +41,12 @@ SYSCALL_LOOKUP(sched_getparam, 2)
SYSCALL_LOOKUP(sched_getscheduler, 1)
SYSCALL_LOOKUP(sched_lock, 0)
SYSCALL_LOOKUP(sched_lockcount, 0)
SYSCALL_LOOKUP(sched_lock_wo_note, 0)
Contributor
remove no trace version from syscall

Contributor Author
sched_lock_wo_note may be called in userspace


nxsched_sporadic_lowpriority(rtcb);
void sched_unlock(void)
Contributor
void sched_unlock(void)
{
  FAR struct tcb_s *tcb;

  /* sched_unlock() should have no effect if called from the interrupt
   * level.
   */

  if (up_interrupt_context())
    {
      return;
    }

  tcb = this_task();
  if (tcb != NULL)
    {
    ...
    }
}

Contributor Author
ok

****************************************************************************/

#if defined(CONFIG_SCHED_TICKLESS) && \
(CONFIG_RR_INTERVAL > 0 || defined(CONFIG_SCHED_SPORADIC))
static int preempt_schedule_nestcount = 0;
Contributor
could we remove this global variable?

Contributor Author (@hujun260, Nov 26, 2024)

nxsched_reassess_timer may also trigger sched_preempt_schedule;
preempt_schedule_nestcount avoids the recursive call.

Contributor

The logic before this optimization does not have this issue? Do you have a backtrace of the recursive call?

Contributor Author
Because we have placed sched_lock inside spin_lock_irqsave,
and spin_lock_irqsave is also extensively used to replace critical sections,
the usage of sched_lock has become very widespread, which makes recursion highly likely to occur.

Contributor

OK, how about decreasing lockcount after calling sched_preempt_schedule()? That would ensure the unlock operation is not recursively processed in the context-switch flow.

Contributor Author
The scheduling logic in sched_preempt_schedule relies on checking lockcount to determine the state;
if lockcount is still greater than 0, we cannot tell whether the scheduler lock has actually been released or not.

Contributor

Or should we move this variable into the tcb? I don't agree with a global variable that is constantly updated by the scheduler; that is bad design.

Contributor Author
tcb->flags should be able to solve this problem.
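
For illustration, a rough sketch of what the tcb->flags based guard could look like (the flag name, its bit position, and the function body are assumptions, not code from this PR):

#include <nuttx/sched.h>

#define TCB_FLAG_PREEMPT_SCHED  (1 << 15)   /* hypothetical, assumed-free bit */

static void demo_sched_preempt_schedule(FAR struct tcb_s *tcb)
{
  if ((tcb->flags & TCB_FLAG_PREEMPT_SCHED) != 0)
    {
      return;                               /* Already inside: do not recurse */
    }

  tcb->flags |= TCB_FLAG_PREEMPT_SCHED;

  /* ...RR/sporadic timer reassessment; if nxsched_reassess_timer() calls
   * back into this function, the flag makes the nested call a no-op...
   */

  tcb->flags &= ~TCB_FLAG_PREEMPT_SCHED;
}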

@hujun260 hujun260 force-pushed the apache_3 branch 2 times, most recently from d5b0088 to 178dc49 Compare November 26, 2024 12:57
@github-actions bot removed the 'Arch: risc-v' label Nov 26, 2024
@patacongo (Contributor) commented Nov 26, 2024

[Note: my original quick email response had a few errors. I hope those are corrected here.]

The spinlock has always been a very lightweight way of waiting. In SMP, sched_lock() is heavyweight and also calls enter_critical_section(), which is even heavier. This will affect performance.

General mutual exclusion is not required. All that is required is that the waiting thread not be suspended. You should consider a redesign to keep this as lightweight as possible. Consider this:

  • Since general mutual exclusion is not required, it is not necessary to interact with the other CPUs at all.
  • A single bit of information in the TCB will do the job: if the thread holds the spinlock, that bit is set and, in that case, the thread is never suspended. The bit is cleared after the spinlock is released.

This is a very simple change, similar to the way sched_lock() works in non-SMP mode: if the lockcount is greater than zero, then the thread cannot be suspended. It would return the spinlock to being a simple, efficient interface, and it would only affect the spinlock and the scheduler; it would not affect the other CPUs or the release of the spinlock.
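
A hedged sketch of the lightweight alternative described above (TCB_FLAG_SPINLOCK_HELD and demo_nxsched_can_suspend() are assumed names, not existing NuttX symbols):

#include <stdbool.h>
#include <nuttx/sched.h>

#define TCB_FLAG_SPINLOCK_HELD  (1 << 14)   /* hypothetical, assumed-free bit */

/* Set by the spinlock acquire path, cleared after release:
 *   tcb->flags |= TCB_FLAG_SPINLOCK_HELD;  ...  spin_lock(lock);
 *   spin_unlock(lock);  tcb->flags &= ~TCB_FLAG_SPINLOCK_HELD;
 */

static inline bool demo_nxsched_can_suspend(FAR struct tcb_s *tcb)
{
  /* Only the local scheduler reads the bit and only the owning thread
   * writes it, so no interaction with the other CPUs is required.
   */

  return (tcb->flags & TCB_FLAG_SPINLOCK_HELD) == 0;
}

/* The suspension paths in the scheduler would then defer suspending the
 * thread until the bit is clear, analogous to the non-SMP lockcount check.
 */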

@anchao anchao (Contributor) left a comment
Have you tested this code with SCHED_INSTRUMENTATION_CSECTION?

Resolved review threads: sched/irq/irq_csection.c
@hujun260 (Contributor Author)
Have you tested this code with SCHED_INSTRUMENTATION_CSECTION?

yes

Comment on lines 451 to 467

      rtcb = this_task();
      if (rtcb->irqcount == 1 && !up_interrupt_context())
        {
# if CONFIG_SCHED_CRITMONITOR_MAXTIME_CSECTION >= 0
          nxsched_critmon_csection(rtcb, false, return_address(0));
# endif
# ifdef CONFIG_SCHED_INSTRUMENTATION_CSECTION
          sched_note_csection(rtcb, false);
# endif
        }

Contributor

Suggested change:

      if (!up_interrupt_context())
        {
          rtcb = this_task();
          if (rtcb->irqcount == 1)
            {
# if CONFIG_SCHED_CRITMONITOR_MAXTIME_CSECTION >= 0
              nxsched_critmon_csection(rtcb, false, return_address(0));
# endif
# ifdef CONFIG_SCHED_INSTRUMENTATION_CSECTION
              sched_note_csection(rtcb, false);
# endif
            }
        }

…ed_[un]lock

reason:
1 Accelerate the implementation of sched_lock: remove enter_critical_section from sched_lock and
  only enter the critical section when task scheduling is required.
2 Add sched_lock_wo_note/sched_unlock_wo_note, which do not perform the instrumentation logic.

Signed-off-by: hujun5 <[email protected]>
reason:
We aim to replace big locks with smaller ones, so we will use spin_lock_irqsave extensively to
replace enter_critical_section in subsequent work. We imitate the Linux implementation
by adding sched_lock to spin_lock_irqsave in order to address scenarios where sem_post occurs
within spin_lock_irqsave, which can lead to spinlock failures and deadlocks.

Signed-off-by: hujun5 <[email protected]>
Labels
Arch: xtensa · Area: OS Components · Size: L
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Thread holding spinlock can be blocked. Thread can be unexpectedly suspended within a critical section.
5 participants