forked from thesofproject/linux
audio mediator enabling - new topology mode support #2
libinyang pushed a commit that referenced this issue on May 13, 2019
On ESP output, sk_wmem_alloc is incremented for the added padding if a socket is associated to the skb. When replying with TCP SYNACKs over IPsec, the associated sk is only a cast request socket. Increasing sk_wmem_alloc on a request socket results in a write at an arbitrary struct offset. In the best case, this produces the following WARNING:

WARNING: CPU: 1 PID: 0 at lib/refcount.c:102 esp_output_head+0x2e4/0x308 [esp4]
refcount_t: addition on 0; use-after-free.
CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.0.0-rc3 #2
Hardware name: Marvell Armada 380/385 (Device Tree)
[...]
[<bf0ff354>] (esp_output_head [esp4]) from [<bf1006a4>] (esp_output+0xb8/0x180 [esp4])
[<bf1006a4>] (esp_output [esp4]) from [<c05dee64>] (xfrm_output_resume+0x558/0x664)
[<c05dee64>] (xfrm_output_resume) from [<c05d07b0>] (xfrm4_output+0x44/0xc4)
[<c05d07b0>] (xfrm4_output) from [<c05956bc>] (tcp_v4_send_synack+0xa8/0xe8)
[<c05956bc>] (tcp_v4_send_synack) from [<c0586ad8>] (tcp_conn_request+0x7f4/0x948)
[<c0586ad8>] (tcp_conn_request) from [<c058c404>] (tcp_rcv_state_process+0x2a0/0xe64)
[<c058c404>] (tcp_rcv_state_process) from [<c05958ac>] (tcp_v4_do_rcv+0xf0/0x1f4)
[<c05958ac>] (tcp_v4_do_rcv) from [<c0598a4c>] (tcp_v4_rcv+0xdb8/0xe20)
[<c0598a4c>] (tcp_v4_rcv) from [<c056eb74>] (ip_protocol_deliver_rcu+0x2c/0x2dc)
[<c056eb74>] (ip_protocol_deliver_rcu) from [<c056ee6c>] (ip_local_deliver_finish+0x48/0x54)
[<c056ee6c>] (ip_local_deliver_finish) from [<c056eecc>] (ip_local_deliver+0x54/0xec)
[<c056eecc>] (ip_local_deliver) from [<c056efac>] (ip_rcv+0x48/0xb8)
[<c056efac>] (ip_rcv) from [<c0519c2c>] (__netif_receive_skb_one_core+0x50/0x6c)
[...]

The issue triggers only when not using TCP syncookies, as for syncookies no socket is associated.

Fixes: cac2661 ("esp4: Avoid skb_cow_data whenever possible")
Fixes: 03e2a30 ("esp6: Avoid skb_cow_data whenever possible")
Signed-off-by: Martin Willi <[email protected]>
Signed-off-by: Steffen Klassert <[email protected]>
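A minimal sketch of the guard this fix implies, assuming an esp_output_head()-style context where `tailen` trailer bytes are being accounted; illustrative, not the verbatim patch:

```c
/*
 * Only charge the ESP trailer to sk_wmem_alloc for full sockets.
 * For SYNACKs sent without syncookies, skb->sk is a cast request
 * socket, and refcount_add() on it writes at an arbitrary offset.
 */
struct sock *sk = skb->sk;

if (sk && sk_fullsock(sk))
	refcount_add(tailen, &sk->sk_wmem_alloc);
```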
libinyang pushed a commit that referenced this issue on May 13, 2019
It can be reproduced by the following steps:

1. virtio_net NIC is configured with gso/tso on
2. configure nginx as http server with an index file bigger than 1M bytes
3. use tc netem to produce duplicate packets and delay:
   tc qdisc add dev eth0 root netem delay 100ms 10ms 30% duplicate 90%
4. continually curl the nginx http server to get index file on client
5. BUG_ON is seen quickly

[10258690.371129] kernel BUG at net/core/skbuff.c:4028!
[10258690.371748] invalid opcode: 0000 [#1] SMP PTI
[10258690.372094] CPU: 5 PID: 0 Comm: swapper/5 Tainted: G W 5.0.0-rc6 #2
[10258690.372094] RSP: 0018:ffffa05797b43da0 EFLAGS: 00010202
[10258690.372094] RBP: 00000000000005ea R08: 0000000000000000 R09: 00000000000005ea
[10258690.372094] R10: ffffa0579334d800 R11: 00000000000002c0 R12: 0000000000000002
[10258690.372094] R13: 0000000000000000 R14: ffffa05793122900 R15: ffffa0578f7cb028
[10258690.372094] FS: 0000000000000000(0000) GS:ffffa05797b40000(0000) knlGS:0000000000000000
[10258690.372094] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[10258690.372094] CR2: 00007f1a6dc00868 CR3: 000000001000e000 CR4: 00000000000006e0
[10258690.372094] Call Trace:
[10258690.372094] <IRQ>
[10258690.372094] skb_to_sgvec+0x11/0x40
[10258690.372094] start_xmit+0x38c/0x520 [virtio_net]
[10258690.372094] dev_hard_start_xmit+0x9b/0x200
[10258690.372094] sch_direct_xmit+0xff/0x260
[10258690.372094] __qdisc_run+0x15e/0x4e0
[10258690.372094] net_tx_action+0x137/0x210
[10258690.372094] __do_softirq+0xd6/0x2a9
[10258690.372094] irq_exit+0xde/0xf0
[10258690.372094] smp_apic_timer_interrupt+0x74/0x140
[10258690.372094] apic_timer_interrupt+0xf/0x20
[10258690.372094] </IRQ>

In __skb_to_sgvec(), the skb->len is not equal to the sum of the skb's linear data size and nonlinear data size, so the BUG_ON triggers. This is because the skb has been cloned and part of its nonlinear data has been split off.

The duplicate packet is cloned in netem_enqueue() and may be delayed some time in the qdisc. When the qdisc length reaches the limit and NET_XMIT_DROP is returned, the skb is retransmitted later from the write queue. The skb is then fragmented by tso_fragment() as the limit size, which depends on cwnd and mss, decreases, and the skb's nonlinear data is split off. The length of the skb cloned by netem is not updated, so when the virtio_net NIC invokes skb_to_sgvec(), the BUG_ON triggers.

To fix it, netem returns NET_XMIT_SUCCESS to the upper stack when it clones a duplicate packet.

Fixes: 35d889d ("sch_netem: fix skb leak in netem_enqueue()")
Signed-off-by: Sheng Lan <[email protected]>
Reported-by: Qin Ji <[email protected]>
Suggested-by: Eric Dumazet <[email protected]>
Signed-off-by: Eric Dumazet <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
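A hedged sketch of the return-value change in a netem_enqueue()-like context; the variable names (`count`, `rc_drop`) are illustrative:

```c
/*
 * If a duplicate was cloned, the qdisc already owns data aliasing
 * this skb, so report success upward even when this skb is dropped;
 * NET_XMIT_DROP would make TCP refragment the skb in the write queue
 * while the delayed clone still carries the old length bookkeeping.
 */
int rc_drop = NET_XMIT_DROP;

if (count > 1)			/* a duplicate packet was created */
	rc_drop = NET_XMIT_SUCCESS;

if (unlikely(sch->q.qlen >= sch->limit)) {
	qdisc_drop_all(skb, sch, to_free);
	return rc_drop;
}
```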
libinyang pushed a commit that referenced this issue on Apr 7, 2020
Sigh, this is mostly my fault for not giving commit cd82d82 ("drm/dp_mst: Add branch bandwidth validation to MST atomic check") enough scrutiny during review. The way we're checking bandwidth limitations here is mostly wrong:

For starters, drm_dp_mst_atomic_check_bw_limit() determines the pbn_limit of a branch by simply scanning each port on the current branch device, then uses the last non-zero full_pbn value that it finds. It then counts the sum of the PBN used on each branch device for that level, and compares against the full_pbn value it found before. This is wrong because ports can and will have different PBN limitations on many hubs, especially since a number of DisplayPort hubs out there will be clever and only use the smallest link rate required for each downstream sink - potentially giving every port a different full_pbn value depending on what link rate it's trained at. This means that with our current code, which max PBN value we end up with is not well defined.

Additionally, we also need to remember when checking bandwidth limitations that the top-most device in any MST topology is a branch device, not a port. This means that the first level of a topology doesn't technically have a full_pbn value that needs to be checked. Instead, we should assume that so long as our VCPI allocations fit we're within the bandwidth limitations of the primary MSTB.

We do, however, want to check full_pbn on every port, including those of the primary MSTB. However, it's important to keep in mind that this value represents the minimum link rate /between a port's sink or mstb, and the mstb itself/. A quick diagram to explain:

                     MSTB #1
                    /       \
                   /         \
               Port #1     Port #2
  full_pbn for → |             | ← full_pbn for
       Port #1   |             |   Port #2
              Sink #1       MSTB #2
                               |
                             etc...

Note that in the above diagram, the combined PBN from all VCPI allocations on said hub should not exceed the full_pbn value of port #2, and the display configuration on sink #1 should not exceed the full_pbn value of port #1. However, port #1 and port #2 can otherwise consume as much bandwidth as they want so long as their VCPI allocations still fit.

And finally - our current bandwidth checking code also makes the mistake of not checking whether something is an end device or not before trying to traverse down it.

So, let's fix it by rewriting our bandwidth checking helpers. We split the function into one part for handling branches, which simply adds up the total PBN on each branch and returns it, and one for checking each port to ensure we're not going over its PBN limit. Phew.

This should fix regressions seen where we erroneously reject display configurations due to thinking they're going over our bandwidth limits when they're not.

Changes since v1:
* Took an even closer look at how PBN limitations are supposed to be handled, and did some experimenting with Sean Paul. Ended up rewriting these helpers again, but this time they should actually be correct!

Changes since v2:
* Small indenting fix
* Fix pbn_used check in drm_dp_mst_atomic_check_port_bw_limit()

Signed-off-by: Lyude Paul <[email protected]>
Fixes: cd82d82 ("drm/dp_mst: Add branch bandwidth validation to MST atomic check")
Cc: Sean Paul <[email protected]>
Acked-by: Alex Deucher <[email protected]>
Reviewed-by: Mikita Lipski <[email protected]>
Tested-by: Hans de Goede <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
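A hedged sketch of the split helpers described above; the helper names follow the commit text, but the bodies are simplified and the VCPI lookup is a hypothetical placeholder, not the verbatim patch:

```c
static int
drm_dp_mst_atomic_check_port_bw_limit(struct drm_dp_mst_port *port,
				      struct drm_dp_mst_topology_state *state);

/* Branch level: no full_pbn to check, just sum up the ports' PBN. */
static int
drm_dp_mst_atomic_check_mstb_bw_limit(struct drm_dp_mst_branch *mstb,
				      struct drm_dp_mst_topology_state *state)
{
	struct drm_dp_mst_port *port;
	int pbn_used = 0, ret;

	list_for_each_entry(port, &mstb->ports, next) {
		ret = drm_dp_mst_atomic_check_port_bw_limit(port, state);
		if (ret < 0)
			return ret;
		pbn_used += ret;
	}
	return pbn_used;
}

/*
 * Port level: full_pbn limits everything downstream of this port,
 * whether that is a single sink or another branch device.
 */
static int
drm_dp_mst_atomic_check_port_bw_limit(struct drm_dp_mst_port *port,
				      struct drm_dp_mst_topology_state *state)
{
	int pbn_used;

	if (port->mstb)		/* branch device: recurse and add up */
		pbn_used = drm_dp_mst_atomic_check_mstb_bw_limit(port->mstb,
								 state);
	else			/* end device: its own allocation */
		pbn_used = port_vcpi_pbn(port, state);	/* hypothetical */

	if (pbn_used < 0)
		return pbn_used;
	if (port->full_pbn && pbn_used > port->full_pbn)
		return -ENOSPC;
	return pbn_used;
}
```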
libinyang pushed a commit that referenced this issue on Apr 7, 2020
The vector management code assumes that managed interrupts cannot be migrated away from an online CPU. free_moved_vector() has a WARN_ON_ONCE() which triggers when a managed interrupt vector association on an online CPU is cleared. The CPU offline code uses a different mechanism which cannot trigger this.

This assumption is no longer correct because the new CPU isolation feature which affects the placement of managed interrupts must be able to move a managed interrupt away from an online CPU. There are two reasons why this can happen:

1) When the interrupt is activated, the affinity mask which was established in irq_create_affinity_masks() is handed in to the vector allocation code. This mask contains all CPUs to which the interrupt can be made affine, but it does not take the CPU isolation 'managed_irq' mask into account. When the interrupt is finally requested by the device driver, the affinity is checked again and the CPU isolation 'managed_irq' mask is taken into account, which moves the interrupt to a non-isolated CPU if possible.

2) The interrupt can be affine to an isolated CPU because the non-isolated CPUs in the calculated affinity mask are not online. Once a non-isolated CPU which is in the mask comes online, the interrupt is migrated to this non-isolated CPU.

In both cases the regular online migration mechanism is used, which triggers the WARN_ON_ONCE() in free_moved_vector().

Case #1 could have been addressed by taking the isolation mask into account, but that would require a massive code change in the activation logic, and the eventual migration event was accepted as a reasonable tradeoff when the isolation feature was developed. But even if #1 were addressed, #2 would still trigger the warning. Of course, the warning in free_moved_vector() was overlooked at that time, and the above two cases, which were discussed during patch review, were obviously never tested before the final submission.

So keep it simple and remove the warning.

[ tglx: Rewrote changelog and added a comment to free_moved_vector() ]

Fixes: 11ea68f ("genirq, sched/isolation: Isolate from handling managed interrupts")
Signed-off-by: Peter Xu <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Ming Lei <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
libinyang pushed a commit that referenced this issue on Apr 14, 2020
It is safe to traverse mm->notifier_subscriptions->list either under SRCU read lock or mm->notifier_subscriptions->lock using hlist_for_each_entry_rcu(). Silence the PROVE_RCU_LIST false positives, for example,

WARNING: suspicious RCU usage
-----------------------------
mm/mmu_notifier.c:484 RCU-list traversed in non-reader section!!

other info that might help us debug this:

rcu_scheduler_active = 2, debug_locks = 1
3 locks held by libvirtd/802:
 #0: ffff9321e3f58148 (&mm->mmap_sem#2){++++}, at: do_mprotect_pkey+0xe1/0x3e0
 #1: ffffffff91ae6160 (mmu_notifier_invalidate_range_start){+.+.}, at: change_p4d_range+0x5fa/0x800
 #2: ffffffff91ae6e08 (srcu){....}, at: __mmu_notifier_invalidate_range_start+0x178/0x460

stack backtrace:
CPU: 7 PID: 802 Comm: libvirtd Tainted: G I 5.6.0-rc6-next-20200317+ #2
Hardware name: HP ProLiant BL460c Gen8, BIOS I31 11/02/2014
Call Trace:
 dump_stack+0xa4/0xfe
 lockdep_rcu_suspicious+0xeb/0xf5
 __mmu_notifier_invalidate_range_start+0x3ff/0x460
 change_p4d_range+0x746/0x800
 change_protection+0x1df/0x300
 mprotect_fixup+0x245/0x3e0
 do_mprotect_pkey+0x23b/0x3e0
 __x64_sys_mprotect+0x51/0x70
 do_syscall_64+0x91/0xae8
 entry_SYSCALL_64_after_hwframe+0x49/0xb3

Signed-off-by: Qian Cai <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Reviewed-by: Paul E. McKenney <[email protected]>
Reviewed-by: Jason Gunthorpe <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
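A sketch of the silencing pattern this change applies; `subscription` and the lockdep expression follow the splat above, but this is not the verbatim patch:

```c
/*
 * hlist_for_each_entry_rcu() accepts an optional lockdep condition:
 * passing srcu_read_lock_held(&srcu) tells PROVE_RCU_LIST that the
 * SRCU read lock is a legitimate way to traverse this list, so the
 * false positive above is no longer reported.
 */
hlist_for_each_entry_rcu(subscription,
			 &mm->notifier_subscriptions->list, hlist,
			 srcu_read_lock_held(&srcu)) {
	/* invoke the notifier callbacks as before */
}
```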
libinyang pushed a commit that referenced this issue on Apr 7, 2021
I got several memory leak reports from Asan with a simple command. It was because VDSO is not released due to the refcount. Like in __dsos__addnew_id(), it should put the refcount after adding to the list.

$ perf record true
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.030 MB perf.data (10 samples) ]

=================================================================
==692599==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 439 byte(s) in 1 object(s) allocated from:
 #0 0x7fea52341037 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:154
 #1 0x559bce4aa8ee in dso__new_id util/dso.c:1256
 #2 0x559bce59245a in __machine__addnew_vdso util/vdso.c:132
 #3 0x559bce59245a in machine__findnew_vdso util/vdso.c:347
 #4 0x559bce50826c in map__new util/map.c:175
 #5 0x559bce503c92 in machine__process_mmap2_event util/machine.c:1787
 #6 0x559bce512f6b in machines__deliver_event util/session.c:1481
 #7 0x559bce515107 in perf_session__deliver_event util/session.c:1551
 #8 0x559bce51d4d2 in do_flush util/ordered-events.c:244
 #9 0x559bce51d4d2 in __ordered_events__flush util/ordered-events.c:323
 #10 0x559bce519bea in __perf_session__process_events util/session.c:2268
 #11 0x559bce519bea in perf_session__process_events util/session.c:2297
 #12 0x559bce2e7a52 in process_buildids /home/namhyung/project/linux/tools/perf/builtin-record.c:1017
 #13 0x559bce2e7a52 in record__finish_output /home/namhyung/project/linux/tools/perf/builtin-record.c:1234
 #14 0x559bce2ed4f6 in __cmd_record /home/namhyung/project/linux/tools/perf/builtin-record.c:2026
 #15 0x559bce2ed4f6 in cmd_record /home/namhyung/project/linux/tools/perf/builtin-record.c:2858
 #16 0x559bce422db4 in run_builtin /home/namhyung/project/linux/tools/perf/perf.c:313
 #17 0x559bce2acac8 in handle_internal_command /home/namhyung/project/linux/tools/perf/perf.c:365
 #18 0x559bce2acac8 in run_argv /home/namhyung/project/linux/tools/perf/perf.c:409
 #19 0x559bce2acac8 in main /home/namhyung/project/linux/tools/perf/perf.c:539
 #20 0x7fea51e76d09 in __libc_start_main ../csu/libc-start.c:308

Indirect leak of 32 byte(s) in 1 object(s) allocated from:
 #0 0x7fea52341037 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:154
 #1 0x559bce520907 in nsinfo__copy util/namespaces.c:169
 #2 0x559bce50821b in map__new util/map.c:168
 #3 0x559bce503c92 in machine__process_mmap2_event util/machine.c:1787
 #4 0x559bce512f6b in machines__deliver_event util/session.c:1481
 #5 0x559bce515107 in perf_session__deliver_event util/session.c:1551
 #6 0x559bce51d4d2 in do_flush util/ordered-events.c:244
 #7 0x559bce51d4d2 in __ordered_events__flush util/ordered-events.c:323
 #8 0x559bce519bea in __perf_session__process_events util/session.c:2268
 #9 0x559bce519bea in perf_session__process_events util/session.c:2297
 #10 0x559bce2e7a52 in process_buildids /home/namhyung/project/linux/tools/perf/builtin-record.c:1017
 #11 0x559bce2e7a52 in record__finish_output /home/namhyung/project/linux/tools/perf/builtin-record.c:1234
 #12 0x559bce2ed4f6 in __cmd_record /home/namhyung/project/linux/tools/perf/builtin-record.c:2026
 #13 0x559bce2ed4f6 in cmd_record /home/namhyung/project/linux/tools/perf/builtin-record.c:2858
 #14 0x559bce422db4 in run_builtin /home/namhyung/project/linux/tools/perf/perf.c:313
 #15 0x559bce2acac8 in handle_internal_command /home/namhyung/project/linux/tools/perf/perf.c:365
 #16 0x559bce2acac8 in run_argv /home/namhyung/project/linux/tools/perf/perf.c:409
 #17 0x559bce2acac8 in main /home/namhyung/project/linux/tools/perf/perf.c:539
 #18 0x7fea51e76d09 in __libc_start_main ../csu/libc-start.c:308

SUMMARY: AddressSanitizer: 471 byte(s) leaked in 2 allocation(s).

Signed-off-by: Namhyung Kim <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Ian Rogers <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
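A minimal sketch of the fix in __machine__addnew_vdso()-style code; names are illustrative, not the verbatim patch:

```c
/*
 * The dsos list takes its own reference when the dso is added, so
 * drop the creation reference afterwards, as __dsos__addnew_id()
 * already does - otherwise the VDSO dso is never freed.
 */
dso = dso__new_id(DSO__NAME_VDSO, &dso_id);
if (dso != NULL) {
	__dsos__add(&machine->dsos, dso);	/* list holds a ref now */
	dso__set_long_name(dso, long_name, false);
	dso__put(dso);				/* drop the creation ref */
}
```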
libinyang pushed a commit that referenced this issue on Apr 28, 2021
Ido Schimmel says:

====================
mlxsw: spectrum: Fix ECN marking in tunnel decapsulation

Patch #1 fixes a discrepancy between the software and hardware data paths with regards to ECN marking after decapsulation. See the changelog for a detailed description.

Patch #2 extends the ECN decap test to cover all possible combinations of inner and outer ECN markings. The test passes over both data paths.

v2:
* Only set ECT(1) if inner is ECT(0)
* Introduce a new helper to determine inner ECN. Share it between NVE and IP-in-IP tunnels
* Extend the selftest
====================

Signed-off-by: David S. Miller <[email protected]>
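A hedged sketch of the shared inner-ECN helper the cover letter describes (the function name is illustrative), following RFC 6040 decapsulation and the v2 note that ECT(1) is only set when the inner is ECT(0):

```c
static u8 tunnel_decap_new_inner_ecn(u8 outer_ecn, u8 inner_ecn)
{
	/* CE on the outer header marks an ECN-capable inner packet CE;
	 * a not-ECT inner packet is left unchanged here.
	 */
	if (INET_ECN_is_ce(outer_ecn))
		return inner_ecn == INET_ECN_NOT_ECT ? inner_ecn
						     : INET_ECN_CE;
	/* ECT(1) outer only upgrades an ECT(0) inner. */
	if (outer_ecn == INET_ECN_ECT_1 && inner_ecn == INET_ECN_ECT_0)
		return INET_ECN_ECT_1;
	return inner_ecn;
}
```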
libinyang pushed a commit that referenced this issue on Apr 28, 2021
Danielle Ratson says:

====================
Fix link_mode derived params functionality

Currently, the link_mode parameter derives 3 other link parameters: speed, lanes and duplex. The derived information is sent to user space.

A few bugs were found in that functionality. First, some drivers clear the 'ethtool_link_ksettings' struct in their get_link_ksettings() callback, which results in wrong link mode information being received in user space. Also, some drivers can report random values in the 'link_mode' field and cause a general protection fault. Second, the link parameters are only derived in the netlink path, so in the ioctl path we don't get any reasonable values. Third, setting 'speed 10000 lanes 1' fails since the lanes parameter wasn't set for ETHTOOL_LINK_MODE_10000baseR_FEC_BIT.

Patch #1 solves the first two problems by removing the link_mode parameter and deriving the link parameters in the driver instead of ethtool. Patch #2 solves the third one, by setting the lanes parameter for the link_mode.

v3:
* Remove the link_mode parameter in the first patch to solve both issues from patch #1 and patch #2.
* Add the second patch to solve the third issue.

v2:
* Add patch #2.
* Introduce 'cap_link_mode_supported' instead of adding a validity field to 'ethtool_link_ksettings' struct in patch #1.
====================

Signed-off-by: David S. Miller <[email protected]>
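A hedged sketch of the kind of table entry patch #2 adds, assuming the link_mode_params[] layout in net/ethtool/common.c; not the verbatim patch:

```c
/* one lane carries the full rate for the 10000baseR_FEC mode */
[ETHTOOL_LINK_MODE_10000baseR_FEC_BIT] = {
	.speed	= SPEED_10000,
	.lanes	= 1,
	.duplex	= DUPLEX_FULL,
},
```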
libinyang pushed a commit that referenced this issue on Apr 28, 2021
The following deadlock is detected:

The truncate -> setattr path is waiting for pending direct IO to be done (inode->i_dio_count becomes zero) with inode->i_rwsem held (down_write).

PID: 14827 TASK: ffff881686a9af80 CPU: 20 COMMAND: "ora_p005_hrltd9"
 #0 __schedule at ffffffff818667cc
 #1 schedule at ffffffff81866de6
 #2 inode_dio_wait at ffffffff812a2d04
 #3 ocfs2_setattr at ffffffffc05f322e [ocfs2]
 #4 notify_change at ffffffff812a5a09
 #5 do_truncate at ffffffff812808f5
 #6 do_sys_ftruncate.constprop.18 at ffffffff81280cf2
 #7 sys_ftruncate at ffffffff81280d8e
 #8 do_syscall_64 at ffffffff81003949
 #9 entry_SYSCALL_64_after_hwframe at ffffffff81a001ad

The dio completion path is going to complete one direct IO (decrement inode->i_dio_count), but before that it hung at locking inode->i_rwsem:

 #0 __schedule+700 at ffffffff818667cc
 #1 schedule+54 at ffffffff81866de6
 #2 rwsem_down_write_failed+536 at ffffffff8186aa28
 #3 call_rwsem_down_write_failed+23 at ffffffff8185a1b7
 #4 down_write+45 at ffffffff81869c9d
 #5 ocfs2_dio_end_io_write+180 at ffffffffc05d5444 [ocfs2]
 #6 ocfs2_dio_end_io+85 at ffffffffc05d5a85 [ocfs2]
 #7 dio_complete+140 at ffffffff812c873c
 #8 dio_aio_complete_work+25 at ffffffff812c89f9
 #9 process_one_work+361 at ffffffff810b1889
 #10 worker_thread+77 at ffffffff810b233d
 #11 kthread+261 at ffffffff810b7fd5
 #12 ret_from_fork+62 at ffffffff81a0035e

Thus the above forms an ABBA deadlock. The same deadlock was mentioned in upstream commit 28f5a8a ("ocfs2: should wait dio before inode lock in ocfs2_setattr()"). It seems that that commit only removed the cluster lock (the victim of the above deadlock) from the ABBA deadlock party.

End-user visible effects: processes hang in the truncate -> ocfs2_setattr path and other processes hang in the ocfs2_dio_end_io_write path.

This fixes the deadlock itself: it removes the inode_lock() call from the dio completion path to break the deadlock, and adds the ip_alloc_sem lock in the setattr path to synchronize the inode modifications.

[[email protected]: remove the "had_alloc_lock" as suggested]
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Wengang Wang <[email protected]>
Reviewed-by: Joseph Qi <[email protected]>
Cc: Mark Fasheh <[email protected]>
Cc: Joel Becker <[email protected]>
Cc: Junxiao Bi <[email protected]>
Cc: Changwei Ge <[email protected]>
Cc: Gang He <[email protected]>
Cc: Jun Piao <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
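A minimal sketch of the reordered truncate path in ocfs2_setattr(), per the description above (illustrative, not the verbatim patch):

```c
/*
 * Wait for in-flight direct IO first, then serialize the inode
 * modification against dio completion with ip_alloc_sem; the dio
 * completion side no longer takes inode_lock() at all, which breaks
 * the ABBA cycle shown above.
 */
inode_dio_wait(inode);

down_write(&OCFS2_I(inode)->ip_alloc_sem);
status = ocfs2_truncate_file(inode, bh, attr->ia_size);
up_write(&OCFS2_I(inode)->ip_alloc_sem);
```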
libinyang pushed a commit that referenced this issue on Sep 15, 2021
Add the following Telit FD980 composition 0x1056:

Cfg #1: mass storage
Cfg #2: rndis, tty, adb, tty, tty, tty, tty

Signed-off-by: Daniele Palmas <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Cc: [email protected]
Signed-off-by: Johan Hovold <[email protected]>
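A sketch of the corresponding match entry in drivers/usb/serial/option.c; the NCTRL()/RSVD() reservation flags here are illustrative assumptions, not copied from the actual patch:

```c
{ USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x1056, 0xff, 0, 0),	/* Telit FD980 */
  .driver_info = NCTRL(2) | RSVD(3) },
```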
libinyang pushed a commit that referenced this issue on Sep 15, 2021
Hayes Wang says:

====================
r8169: adjust the setting for RTL8106e

These patches are used to avoid the delay of the link-up interrupt when enabling ASPM for RTL8106e. Patch #1 is used to enable ASPM if it is possible, and patch #2 is used to modify the entrance latencies of L0 and L1.
====================

Signed-off-by: David S. Miller <[email protected]>
libinyang pushed a commit that referenced this issue on Sep 15, 2021
…/kernel/git/kvmarm/kvmarm into HEAD

KVM/arm64 fixes for 5.14, take #2

- Plug race between enabling MTE and creating vcpus
- Fix off-by-one bug when checking whether an address range is RAM
libinyang pushed a commit that referenced this issue on Sep 15, 2021
…lock

Add yet another spinlock for the TDP MMU and take it when marking indirect shadow pages unsync. When using the TDP MMU and L1 is running L2(s) with nested TDP, KVM may encounter shadow pages for the TDP entries managed by L1 (controlling L2) when handling a TDP MMU page fault. The unsync logic is not thread safe, e.g. the kvm_mmu_page fields are not atomic, and misbehaves when a shadow page is marked unsync via a TDP MMU page fault, which runs with mmu_lock held for read, not write.

Lack of a critical section manifests most visibly as an underflow of unsync_children in clear_unsync_child_bit() due to unsync_children being corrupted when multiple CPUs write it without a critical section and without atomic operations. But underflow is the best case scenario. The worst case scenario is that unsync_children prematurely hits '0' and leads to guest memory corruption due to KVM neglecting to properly sync shadow pages.

Use an entirely new spinlock even though piggybacking tdp_mmu_pages_lock would functionally be ok. Usurping the lock could degrade performance when building upper level page tables on different vCPUs, especially since the unsync flow could hold the lock for a comparatively long time depending on the number of indirect shadow pages and the depth of the paging tree.

For simplicity, take the lock for all MMUs, even though KVM could fairly easily know that mmu_lock is held for write. If mmu_lock is held for write, there cannot be contention for the inner spinlock, and marking shadow pages unsync across multiple vCPUs will be slow enough that bouncing the kvm_arch cacheline should be in the noise.

Note, even though L2 could theoretically be given access to its own EPT entries, a nested MMU must hold mmu_lock for write and thus cannot race against a TDP MMU page fault. I.e. the additional spinlock only _needs_ to be taken by the TDP MMU, as opposed to being taken by any MMU for a VM that is running with the TDP MMU enabled. Holding mmu_lock for read also prevents the indirect shadow page from being freed. But as above, keep it simple and always take the lock.

Alternative #1, the TDP MMU could simply pass "false" for can_unsync and effectively disable unsync behavior for nested TDP. Write protecting leaf shadow pages is unlikely to noticeably impact traditional L1 VMMs, as such VMMs typically don't modify TDP entries, but the same may not hold true for non-standard use cases and/or VMMs that are migrating physical pages (from L1's perspective).

Alternative #2, the unsync logic could be made thread safe. In theory, simply converting all relevant kvm_mmu_page fields to atomics and using atomic bitops for the bitmap would suffice. However, (a) an in-depth audit would be required, (b) the code churn would be substantial, and (c) legacy shadow paging would incur additional atomic operations in performance sensitive paths for no benefit (to legacy shadow paging).

Fixes: a2855af ("KVM: x86/mmu: Allow parallel page faults for the TDP MMU")
Cc: [email protected]
Cc: Ben Gardon <[email protected]>
Signed-off-by: Sean Christopherson <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
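A hedged sketch of the locking pattern described above; the lock and field names follow the commit's design, but the surrounding code is simplified:

```c
/*
 * Serialize unsync marking with a dedicated spinlock so a TDP MMU
 * page fault (mmu_lock held for read) cannot race other writers of
 * the non-atomic kvm_mmu_page fields. Taken unconditionally for
 * simplicity, even when mmu_lock is held for write.
 */
spin_lock(&kvm->arch.mmu_unsync_pages_lock);

if (!sp->unsync) {
	sp->unsync = 1;
	kvm_mmu_mark_parents_unsync(sp);
}

spin_unlock(&kvm->arch.mmu_unsync_pages_lock);
```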
libinyang pushed a commit that referenced this issue on Jun 22, 2022
The debug window for IPC4-compatible firmware is always window #2; set the debug_box.offset accordingly.

Signed-off-by: Peter Ujfalusi <[email protected]>
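An illustrative sketch of the setting, assuming a get_window_offset-style helper; the macro name here is hypothetical:

```c
/* hypothetical macro: IPC4 firmware exposes debug in window 2 */
#define SOF_IPC4_DEBUG_WINDOW_IDX	2

sdev->debug_box.offset =
	snd_sof_dsp_get_window_offset(sdev, SOF_IPC4_DEBUG_WINDOW_IDX);
```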
libinyang pushed a commit that referenced this issue on Jul 7, 2022
While randstruct was satisfied with using an open-coded "void *" offset cast for the netfs_i_context <-> inode casting, __builtin_object_size() as used by FORTIFY_SOURCE was not as easily fooled. This was causing the following complaint[1] from gcc v12:

In file included from include/linux/string.h:253,
                 from include/linux/ceph/ceph_debug.h:7,
                 from fs/ceph/inode.c:2:
In function 'fortify_memset_chk',
    inlined from 'netfs_i_context_init' at include/linux/netfs.h:326:2,
    inlined from 'ceph_alloc_inode' at fs/ceph/inode.c:463:2:
include/linux/fortify-string.h:242:25: warning: call to '__write_overflow_field' declared with attribute warning: detected write beyond size of field (1st parameter); maybe use struct_group()? [-Wattribute-warning]
  242 |                         __write_overflow_field(p_size_field, size);
      |                         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Fix this by embedding a struct inode into struct netfs_i_context (which should perhaps be renamed to struct netfs_inode). The struct inode vfs_inode fields are then removed from the 9p, afs, ceph and cifs inode structs and vfs_inode is then simply changed to "netfs.inode" in those filesystems.

Further, rename netfs_i_context to netfs_inode, get rid of the netfs_inode() function that converted a netfs_i_context pointer to an inode pointer (that can now be done with &ctx->inode) and rename the netfs_i_context() function to netfs_inode() (which is now a wrapper around container_of()).

Most of the changes were done with:

perl -p -i -e 's/vfs_inode/netfs.inode/'g \
        `git grep -l 'vfs_inode' -- fs/{9p,afs,ceph,cifs}/*.[ch]`

Kees suggested doing it with a pair structure[2] and a special declarator to insert that into the network filesystem's inode wrapper[3], but I think it's cleaner to embed it - and then it doesn't matter if struct randomisation reorders things.

Dave Chinner suggested using a filesystem-specific VFS_I() function in each filesystem to convert that filesystem's own inode wrapper struct into the VFS inode struct[4].

Version #2:
- Fix a couple of missed name changes due to a disabled cifs option.
- Rename nfs_i_context to nfs_inode
- Use "netfs" instead of "nic" as the member name in per-fs inode wrapper structs.

[ This also undoes commit 507160f ("netfs: gcc-12: temporarily disable '-Wattribute-warning' for now") that is no longer needed ]

Fixes: bc899ee ("netfs: Add a netfs inode context")
Reported-by: Jeff Layton <[email protected]>
Signed-off-by: David Howells <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Reviewed-by: Xiubo Li <[email protected]>
cc: Jonathan Corbet <[email protected]>
cc: Eric Van Hensbergen <[email protected]>
cc: Latchesar Ionkov <[email protected]>
cc: Dominique Martinet <[email protected]>
cc: Christian Schoenebeck <[email protected]>
cc: Marc Dionne <[email protected]>
cc: Ilya Dryomov <[email protected]>
cc: Steve French <[email protected]>
cc: William Kucharski <[email protected]>
cc: "Matthew Wilcox (Oracle)" <[email protected]>
cc: Dave Chinner <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
Link: https://lore.kernel.org/r/[email protected]/ [1]
Link: https://lore.kernel.org/r/[email protected]/ [2]
Link: https://lore.kernel.org/r/[email protected]/ [3]
Link: https://lore.kernel.org/r/[email protected]/ [4]
Link: https://lore.kernel.org/r/165296786831.3591209.12111293034669289733.stgit@warthog.procyon.org.uk/ # v1
Link: https://lore.kernel.org/r/165305805651.4094995.7763502506786714216.stgit@warthog.procyon.org.uk # v2
Signed-off-by: Linus Torvalds <[email protected]>
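A minimal sketch of the embedding the commit describes; fields beyond the inode are elided:

```c
#include <linux/fs.h>

/* The VFS inode lives inside the netfs wrapper, so the conversion is
 * a plain container_of() instead of an offset cast, and struct
 * randomisation may reorder fields without breaking it.
 */
struct netfs_inode {
	struct inode inode;	/* embedded VFS inode */
	/* netfs context fields elided */
};

static inline struct netfs_inode *netfs_inode(struct inode *inode)
{
	return container_of(inode, struct netfs_inode, inode);
}
```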
libinyang pushed a commit that referenced this issue on Jul 7, 2022
Merge series from Javier Martinez Canillas <[email protected]>:

This series contains fixes for a few issues found while testing the recent support for drivers to define bulk read/write callbacks in regmap_config. I tested this with drivers/gpu/drm/solomon/ssd130x-spi.c, by converting it to use this new API instead of defining its own regmap bus for bulk write.

Patch #1 and patch #2 are fixes for regressions introduced by that commit, and patch #3 adds regmap_config-provided bulk write support to the functions regmap_noinc_write() and regmap_bulk_write(), which had been missed.
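An illustrative sketch of a regmap_config supplying the bulk I/O callbacks this series exercises; my_dev_read()/my_dev_write() are hypothetical driver callbacks:

```c
static int my_dev_read(void *context, const void *reg, size_t reg_size,
		       void *val, size_t val_size);
static int my_dev_write(void *context, const void *data, size_t count);

static const struct regmap_config my_regmap_config = {
	.reg_bits = 8,
	.val_bits = 8,
	.read  = my_dev_read,	/* bulk read provided by the driver */
	.write = my_dev_write,	/* bulk write provided by the driver */
};
```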
libinyang pushed a commit that referenced this issue on Jul 7, 2022
The debug window for IPC4-compatible firmware is always window #2; set the debug_box.offset accordingly.

Signed-off-by: Peter Ujfalusi <[email protected]>