hugepage_reset: Test compatible with different NUMA topologies #4237

Open · wants to merge 1 commit into base: master
6 changes: 4 additions & 2 deletions qemu/tests/cfg/hugepage_reset.cfg
@@ -4,8 +4,10 @@
 pre_command = 'echo 3 > /proc/sys/vm/drop_caches && echo 1 > /proc/sys/vm/compact_memory'
 mem = 4096
 origin_nr = 8
-# Please set hugepage in kernel command line before this test:
-# default_hugepagesz=1G hugepagesz=1G hugepages=8
+# Please allocate enough hugepages at boot time for this test.
+# IMPORTANT! Keep in mind the system's memory and number of NUMA nodes.
+# Also check that the hugepage size has changed to 1G.
+# Example: default_hugepagesz=1G hugepagesz=1G hugepages=24
 expected_hugepage_size = 1048576
 Windows:
     x86_64:
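The cfg comment above asks the user to verify that the boot command line actually switched the default hugepage size to 1G before running the test. As a standalone sketch (the helper name and sample text are hypothetical, not part of the PR), this is what such a pre-flight check of `/proc/meminfo` could look like, matching `expected_hugepage_size = 1048576` (kB):

```python
# Hypothetical pre-flight check: parse /proc/meminfo-style text and confirm
# the default hugepage size matches the cfg's expected_hugepage_size (in kB).

def default_hugepage_size_kb(meminfo_text):
    """Return the Hugepagesize value from /proc/meminfo contents, in kB."""
    for line in meminfo_text.splitlines():
        if line.startswith("Hugepagesize:"):
            # Line looks like "Hugepagesize:    1048576 kB"
            return int(line.split()[1])
    raise ValueError("Hugepagesize not found in meminfo text")

# Sample /proc/meminfo excerpt from a host booted with
# default_hugepagesz=1G hugepagesz=1G hugepages=24 (values are illustrative).
sample = """MemTotal:       263649384 kB
MemFree:        255136828 kB
Hugepagesize:     1048576 kB
HugePages_Total:       24
"""

size_kb = default_hugepage_size_kb(sample)
assert size_kb == 1048576, "boot cmdline did not switch to 1G hugepages"
print(size_kb)  # → 1048576
```

On a real host one would read `open("/proc/meminfo").read()` instead of the sample string; the parsing logic is the same.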
25 changes: 23 additions & 2 deletions qemu/tests/hugepage_reset.py
@@ -21,6 +21,27 @@ def run(test, params, env):
     :param env: Dictionary with the test environment.
     """
 
+    def allocate_largepages_per_node():
+        """
+        This function is intended to set 1G hugepages per NUMA
+        node when the system has four or more such nodes.
+        For this function to work, the hugepage size must be 1G.
+        This way a QEMU failure can be avoided if it is unable
+        to allocate memory.
+        """
+        node_list = host_numa_node.online_nodes_withcpumem
+        if len(node_list) >= 4:
+            try:
+                for node in node_list:
+                    node_mem_free = int(
+                        host_numa_node.read_from_node_meminfo(node, "MemFree")
+                    )
+                    mem_kb = mem * 1024
+                    if node_mem_free > mem_kb:
+                        hp_config.set_node_num_huge_pages(4, node, "1048576")
+            except ValueError as e:
+                test.cancel(e)

     def set_hugepage():
         """Set nr_hugepages"""
         try:
@@ -107,9 +128,9 @@ def heavyload_install():
             "No node on your host has sufficient free memory for " "this test."
         )
     hp_config = test_setup.HugePageConfig(params)
+    if params.get("on_numa_node"):
+        allocate_largepages_per_node()
PaulYuuu (Contributor) commented:

@mcasquer, code LGTM. I just want to confirm with you: if a node's memory is not enough, should the setup still proceed, or would it be better to raise an error or skip the test?

mcasquer (Contributor Author) replied:

Mmmm @PaulYuuu, good point. I think that situation should be handled, perhaps with a try block; I'll send an update to this.

mcasquer (Contributor Author) replied:

@PaulYuuu added a try block that will cancel the case if there is not enough memory; faked example:

 (2/2) Host_RHEL.m9.u6.ovmf.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.9.6.0.x86_64.io-github-autotest-qemu.hugepage_reset.on_numa_node.q35: STARTED
 (2/2) Host_RHEL.m9.u6.ovmf.qcow2.virtio_scsi.up.virtio_net.Guest.RHEL.9.6.0.x86_64.io-github-autotest-qemu.hugepage_reset.on_numa_node.q35: CANCEL: 18 (expecting 400) hugepages is set on the node 0, please check if the node has enough memory (12.51 s)

     hp_config.target_hugepages = origin_nr
     test.log.info("Setup hugepage number to %s", origin_nr)
     hp_config.setup()
     hugepage_size = utils_memory.get_huge_page_size()
     params["hugepage_path"] = hp_config.hugepage_path
     params["start_vm"] = "yes"
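The decision made by the patch's `allocate_largepages_per_node()` can be sketched as a pure function: every online NUMA node whose `MemFree` strictly exceeds the test's `mem` parameter (converted to kB) receives a fixed number of 1G hugepages, and nodes below the threshold are skipped. The names below are hypothetical; the real test reads node memory through `host_numa_node.read_from_node_meminfo()` and allocates via `test_setup.HugePageConfig`, and it also requires at least four eligible nodes before doing any per-node allocation.

```python
# Standalone sketch (hypothetical names) of the per-node eligibility check in
# allocate_largepages_per_node(): a node qualifies when MemFree > mem * 1024.

GUEST_MEM_MB = 4096  # the cfg's "mem = 4096" parameter

def eligible_nodes(mem_free_kb_by_node, guest_mem_mb=GUEST_MEM_MB):
    """Return the NUMA nodes that would receive 1G hugepages, mirroring
    the strict MemFree > mem * 1024 comparison in the patch."""
    mem_kb = guest_mem_mb * 1024
    return [node for node, free_kb in sorted(mem_free_kb_by_node.items())
            if free_kb > mem_kb]

# Example: node 2 is below the 4 GiB threshold, so it is skipped.
free_kb = {0: 64_000_000, 1: 64_000_000, 2: 3_000_000, 3: 64_000_000}
print(eligible_nodes(free_kb))  # → [0, 1, 3]
```

This framing also makes the reviewed edge case explicit: when a node fails the check it simply gets no pages, and it is the later `set_node_num_huge_pages()` call raising `ValueError` that turns a short allocation into the `test.cancel()` shown in the faked log above.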