spin_lock_irqsave+sched_lock #14578
base: master
Conversation
@patacongo please review this patch, which fixes the long-standing issue with sched lock.
@@ -41,10 +41,12 @@ SYSCALL_LOOKUP(sched_getparam, 2)
  SYSCALL_LOOKUP(sched_getscheduler, 1)
  SYSCALL_LOOKUP(sched_lock, 0)
  SYSCALL_LOOKUP(sched_lockcount, 0)
  SYSCALL_LOOKUP(sched_lock_wo_note, 0)
Remove the no-trace version from the syscalls.
nxsched_sporadic_lowpriority(rtcb);
void sched_unlock(void)
void sched_unlock(void)
{
  FAR struct tcb_s *tcb;

  if (!up_interrupt_context())
    {
      return;
    }

  tcb = this_task();
  if (tcb != NULL)
    {
      ...
    }
}
OK.
sched/sched/sched_unlock.c
Outdated
****************************************************************************/

#if defined(CONFIG_SCHED_TICKLESS) && \
    (CONFIG_RR_INTERVAL > 0 || defined(CONFIG_SCHED_SPORADIC))
static int preempt_schedule_nestcount = 0;
Could we remove this global variable?
nxsched_reassess_timer may also trigger sched_preempt_schedule; preempt_schedule_nestcount avoids the recursive call.
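To make that concrete, here is a minimal sketch of a nesting counter used as a re-entry guard. The counter name matches the patch, but the function body is simplified pseudologic for illustration, not the actual NuttX code.

static int preempt_schedule_nestcount = 0;

static void sched_preempt_schedule(void)
{
  /* Already inside this function (for example, re-entered through
   * nxsched_reassess_timer)?  Let the outer invocation finish the work
   * instead of recursing.
   */

  if (preempt_schedule_nestcount > 0)
    {
      return;
    }

  preempt_schedule_nestcount++;

  /* ... timer reassessment / preemption handling that may call back
   * into this function ...
   */

  preempt_schedule_nestcount--;
}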
The logic before this optimization did not have this issue, did it? Do you have a backtrace of the recursive call?
Because we have placed sched_lock inside spin_lock_irqsave, and spin_lock_irqsave is also used extensively to replace critical sections, the use of sched_lock has become very widespread, which makes recursion highly likely to occur.
Because the scheduling logic in sched_preempt_schedule relies on checking the lockcount to determine the state, if the lockcount is still greater than 0 we cannot distinguish whether the scheduler lock has been released or not.
Or should we move this variable into the tcb? I don't agree with a global variable that is constantly updated by the scheduler; that is bad design.
tcb->flags should be able to solve this problem.
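For illustration, a hedged sketch of that per-task approach follows. TCB_FLAG_PREEMPT_SCHED is a hypothetical bit invented for this sketch (it is not one of the existing TCB_FLAG_* definitions), and the function body is simplified.

#include <nuttx/config.h>
#include <nuttx/sched.h>

#define TCB_FLAG_PREEMPT_SCHED (1 << 15)  /* Hypothetical re-entry guard bit */

static void sched_preempt_schedule(FAR struct tcb_s *tcb)
{
  if ((tcb->flags & TCB_FLAG_PREEMPT_SCHED) != 0)
    {
      return;                             /* Nested call: do nothing */
    }

  tcb->flags |= TCB_FLAG_PREEMPT_SCHED;   /* Mark this task as already here */

  /* ... preemption handling that may re-enter this function ... */

  tcb->flags &= ~TCB_FLAG_PREEMPT_SCHED;  /* Clear the guard on the way out */
}

This keeps the guard state in the task's own TCB, so no global is touched on every scheduler pass.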
[Note: my original quick email response had a few errors. I hope those are corrected here.] The spinlock has always been a very lightweight way of waiting. In SMP, sched_lock() is heavyweight and also calls enter_critical_section(), which is even heavier. This will affect performance. General mutual exclusion is not required; all that is required is that the waiting thread not be suspended. You should consider a redesign to keep this as lightweight as possible. Consider this:
This is a very simple change, similar to the way sched_lock() works in non-SMP mode: if the lockcount is greater than zero, then the thread cannot be suspended. It would return the spinlock to being a simple, efficient interface, and it would only affect the spinlock and the scheduler. It would not affect other CPUs or release the spinlock.
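A rough sketch of that suggestion, assuming a hypothetical helper in the suspend path (nxsched_can_suspend is invented for this illustration and is not an existing NuttX function):

#include <nuttx/config.h>
#include <stdbool.h>
#include <nuttx/sched.h>

/* If the running task holds the scheduler lock, it must not be suspended;
 * the waiting thread keeps spinning instead of blocking, just as
 * sched_lock() prevents suspension in non-SMP mode.
 */

static bool nxsched_can_suspend(FAR struct tcb_s *tcb)
{
  return tcb->lockcount == 0;
}

The scheduler's suspend path would consult such a check and simply leave the lock holder running, keeping the spinlock itself a plain, lightweight spin.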
Have you tested this code with SCHED_INSTRUMENTATION_CSECTION?
Yes.
sched/irq/irq_csection.c
Outdated
rtcb = this_task();
if (rtcb->irqcount == 1 && !up_interrupt_context())
  {
# if CONFIG_SCHED_CRITMONITOR_MAXTIME_CSECTION >= 0
    nxsched_critmon_csection(rtcb, false, return_address(0));
# endif
# ifdef CONFIG_SCHED_INSTRUMENTATION_CSECTION
    sched_note_csection(rtcb, false);
# endif
  }
Suggested change:

-  rtcb = this_task();
-  if (rtcb->irqcount == 1 && !up_interrupt_context())
-    {
-# if CONFIG_SCHED_CRITMONITOR_MAXTIME_CSECTION >= 0
-      nxsched_critmon_csection(rtcb, false, return_address(0));
-# endif
-# ifdef CONFIG_SCHED_INSTRUMENTATION_CSECTION
-      sched_note_csection(rtcb, false);
-# endif
-    }
+  if (!up_interrupt_context())
+    {
+      rtcb = this_task();
+      if (rtcb->irqcount == 1)
+        {
+# if CONFIG_SCHED_CRITMONITOR_MAXTIME_CSECTION >= 0
+          nxsched_critmon_csection(rtcb, false, return_address(0));
+# endif
+# ifdef CONFIG_SCHED_INSTRUMENTATION_CSECTION
+          sched_note_csection(rtcb, false);
+# endif
+        }
+    }
Signed-off-by: hujun5 <[email protected]>
…ed_[un]lock reason: 1. Accelerate the implementation of sched_lock: remove enter_critical_section from sched_lock and only enter_critical_section when task scheduling is required. 2. Add sched_lock_wo_note/sched_unlock_wo_note, which do not perform the instrumentation logic. Signed-off-by: hujun5 <[email protected]>
reason: We aim to replace big locks with smaller ones, so we will use spin_lock_irqsave extensively to replace enter_critical_section in subsequent work. We imitate the Linux implementation by adding sched_lock to spin_lock_irqsave in order to address scenarios where sem_post occurs inside a spin_lock_irqsave section, which can lead to spinlock failures and deadlocks. Signed-off-by: hujun5 <[email protected]>
Summary
1. Accelerate the implementation of sched_lock: remove enter_critical_section from sched_lock and only enter_critical_section when task scheduling is required.
2. Add sched_lock_wo_note/sched_unlock_wo_note, which do not perform the instrumentation logic.
3. We aim to replace big locks with smaller ones, so we will use spin_lock_irqsave extensively to replace enter_critical_section in subsequent work. We imitate the Linux implementation by adding sched_lock to spin_lock_irqsave in order to address scenarios where sem_post occurs inside a spin_lock_irqsave section, which can lead to spinlock failures and deadlocks.
The entire implementation process includes:
1. spin_lock_irqsave + sched_lock
2. spin_lock/rw/spin_trylock + sched_lock
3. enter_critical_section + sched_lock
We are currently implementing the first step; a conceptual sketch of it follows below.
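The sketch below shows what step 1 means conceptually. The wrapper names are hypothetical and the bodies are simplified; the real implementation lives in include/nuttx/spinlock.h and handles SMP and instrumentation details differently.

#include <nuttx/config.h>
#include <nuttx/arch.h>
#include <nuttx/irq.h>
#include <nuttx/sched.h>
#include <nuttx/spinlock.h>

static inline irqstate_t my_spin_lock_irqsave(FAR volatile spinlock_t *lock)
{
  irqstate_t flags = up_irq_save(); /* Disable local interrupts */

  sched_lock_wo_note();             /* Holder must stay runnable ...    */
  spin_lock(lock);                  /* ... while other CPUs spin on it  */
  return flags;
}

static inline void my_spin_unlock_irqrestore(FAR volatile spinlock_t *lock,
                                             irqstate_t flags)
{
  spin_unlock(lock);                /* Release the spinlock first       */
  sched_unlock_wo_note();           /* Then allow preemption again      */
  up_irq_restore(flags);            /* And restore the interrupt state  */
}

The _wo_note variants are used here on the assumption that skipping instrumentation keeps the note logic from recursing back into the spinlock path.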
Impact
spinlock and sched_lock
Testing
Build Host:
Configuring NuttX and compiling:
$ ./tools/configure.sh -l qemu-armv8a:nsh_smp
$ make
Running with qemu:
$ qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic \
    -machine virt,virtualization=on,gic-version=3 \
    -net none -chardev stdio,id=con,mux=on -serial chardev:con \
    -mon chardev=con,mode=readline -kernel ./nuttx