
sled-agent needs to deal with SSD overprovisioning #5158

Merged: 8 commits, Mar 11, 2024

Conversation

@papertigers (Contributor) commented Feb 28, 2024:

This introduces the ability for omicron's storage manager to resize and format NVMe disks, based on a lookup table mapping model numbers to preferred settings.

Fixes #4619
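For a sense of the mechanism, here is a minimal standalone sketch of the lookup-table approach (the `NvmeDeviceSettings` shape and the WUS4C6432DSP3X3 entry mirror code excerpted later in this review; the printed actions are illustrative only):

```rust
use std::collections::HashMap;

/// Preferred settings for a given NVMe disk model (sketch).
struct NvmeDeviceSettings {
    /// Desired disk size in GB, to deal with overprovisioning.
    size: u32,
    /// An override for the default 4k LBA formatting.
    lba_data_size_override: Option<u64>,
}

fn main() {
    // The storage manager keys a table like this off the NVMe model number.
    let preferred: HashMap<&str, NvmeDeviceSettings> = HashMap::from([(
        "WUS4C6432DSP3X3",
        NvmeDeviceSettings { size: 3200, lba_data_size_override: None },
    )]);

    // On finding a new, unformatted disk, look up its model.
    match preferred.get("WUS4C6432DSP3X3") {
        Some(s) => println!("resize to {} GB, then format to a 4k LBA", s.size),
        None => println!("no preferred settings; nothing to do"),
    }
}
```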

@papertigers marked this pull request as ready for review on March 5, 2024.
@papertigers added the "Sled Agent" label (Related to the Per-Sled Configuration and Management) on Mar 5, 2024.
```rust
/// The desired disk size for dealing with overprovisioning.
size: Option<u32>,
/// An override for the default 4k LBA formatting.
lba_data_size_override: Option<u64>,
```
papertigers (Contributor, Author):
Nothing uses this today, so I am open to removing it from this PR in favor of not having dead code.

@papertigers (Contributor, Author) commented:

I have tested this on my bench gimlet and observed the proper behavior. I will run through that test one more time and paste the results on this PR.

Currently the blocker for this is updating the buildomat image (cc @jclulow), which means we will introduce a new omicron dependency on libnvme.

@jgallagher (Contributor) left a comment:

LGTM, but I think I'd vote for a review from @smklein too since he's currently down in the weeds of physical disk management from Nexus.


```rust
fn preferred_nvme_device_settings(
) -> &'static HashMap<&'static str, NvmeDeviceSettings> {
    PREFERRED_NVME_DEVICE_SETTINGS.get_or_init(|| {
```
jgallagher (Contributor):
Tiny nit: if having to do this dance is annoying, you could use https://docs.rs/once_cell/latest/once_cell/sync/struct.Lazy.html instead, so you could use PREFERRED_NVME_DEVICE_SETTINGS directly instead of having to go through this helper. We use it elsewhere, and eventually it will land in the std lib.
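For reference, the suggested `Lazy` version would look roughly like this (a sketch; the struct body and map contents are abbreviated to the one entry quoted below):

```rust
use once_cell::sync::Lazy;
use std::collections::HashMap;

struct NvmeDeviceSettings {
    size: u32,
    lba_data_size_override: Option<u64>,
}

// With `Lazy`, callers use the static directly; no get_or_init helper needed.
static PREFERRED_NVME_DEVICE_SETTINGS: Lazy<HashMap<&'static str, NvmeDeviceSettings>> =
    Lazy::new(|| {
        HashMap::from([(
            "WUS4C6432DSP3X3",
            NvmeDeviceSettings { size: 3200, lba_data_size_override: None },
        )])
    });

fn main() {
    assert!(PREFERRED_NVME_DEVICE_SETTINGS.contains_key("WUS4C6432DSP3X3"));
}
```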

papertigers (Contributor, Author):

I don't feel strongly about this one. I am okay leaving it as is if that works for you.

jgallagher (Contributor):
Yep, totally fine.

```rust
HashMap::from([
    (
        "WUS4C6432DSP3X3",
        NvmeDeviceSettings { size: 3200, lba_data_size_override: None },
```
jgallagher (Contributor):
Two thoughts, neither of which is necessarily relevant to landing this PR (and both are kinda half-baked; maybe @smklein has more thoughts here?):

  • This list being built into sled-agent means adding support for more disks requires an OS update. Is that okay?
  • If being baked into sled-agent is okay, do we want to define this in code, or would it make sense to put it in the sled-agent config.toml (which would require some work to pass it down into sled-hardware, admittedly)?

rmustacc:

Those are good questions, @jgallagher.

Just to clarify: I think this means that if we want to support adopting a disk that needs non-default changes (i.e., something other than the 4K sector size), then we will need to do something. If we're going to move it at all, we should probably make nexus own this list and tell sled-agent (not unreasonable per se); I think that will fit better with the general firmware-update and related flows that we've been discussing for disks. What I'd suggest is that for the moment we keep this here, and then work to move it into nexus as part of the disk adoption path, making it something we can send down.

What I'd expect in terms of user interface flow over time is that we'd see something like:

  • sled-agent detects a new disk and tells nexus.
  • nexus will prompt the operator, and if this is not a known/expected disk, we will make the prompt to adopt very clear.
  • At adoption time, nexus will send down instructions on how to transform the disk; things like resizing are something we may want to do in the future if, for example, we feel good about undoing the overprovisioning.
  • This may mean that a nexus update is required for us not to show the scarier unsupported-disk warning, and it may still require an OS update to fully update sled-agent on the transformations required.

I'm not sure it's worth an intermediate step to put it in the toml file.

jgallagher (Contributor):

FWIW, I much prefer the nexus path laid out here to using the toml file. Thanks for the details @rmustacc.

papertigers (Contributor, Author):

Thanks for the helpful writeup @rmustacc. What you outlined here makes sense to me and likely ties into some of the same work that's going to happen with NVMe firmware upgrades. I can keep this in the back of my mind as I start to figure out what that's going to look like.

smklein (Collaborator):

After #5172 merges, I'm hoping this pathway will get much easier, but I totally agree with Robert here. Expect that Nexus will end up sending a struct to Sled Agent, basically saying "Please use (and potentially format) these disks" -- and Sled Agent won't try writing to storage until that happens.

I'm okay with this being an explicit lookup table in Sled Agent until we get to that point. I don't think this PR needs to block on #5172.
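To make the shape of that concrete, a purely hypothetical sketch (none of these names exist in omicron today; they only illustrate "please use, and potentially format, these disks"):

```rust
/// Hypothetical instruction Nexus could send to Sled Agent at adoption time.
#[derive(Debug)]
struct DiskAdoptionRequest {
    vendor: String,
    model: String,
    serial: String,
    /// If set, resize the disk to this many GB before use.
    resize_to_gb: Option<u32>,
    /// If set, format the disk to this LBA data size (in bytes) before use.
    lba_data_size: Option<u64>,
}

fn main() {
    // Sled Agent would hold off writing to storage until such a request
    // arrives, then apply the requested transformations.
    let req = DiskAdoptionRequest {
        vendor: "NVMe".to_string(),
        model: "WUS4C6432DSP3X3".to_string(),
        serial: "A079E3AF".to_string(),
        resize_to_gb: Some(3200),
        lba_data_size: Some(4096),
    };
    println!("adopt: {req:?}");
}
```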

```
@@ -119,6 +191,11 @@ fn internal_ensure_partition_layout<GPT: gpt::LibEfiGpt>(
    };
    match variant {
        DiskVariant::U2 => {
            // First we need to check that this disk is of the proper
            // size and correct logical block address formatting.
            ensure_size_and_formatting(log, identity)?;
```
Contributor:

Mostly for my own education: what does this do with disks that already have data on them?

papertigers (Contributor, Author):

We don't touch a disk with a label/zpool, as changing the LBA format erases all data on the disk. This should only be performed when the StorageManager finds a new disk that has not been set up yet, i.e., first boot, adding a disk, etc.
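In sketch form (hypothetical names; the real code keys this off the absence of a GPT, as a later comment notes):

```rust
struct Disk {
    has_gpt: bool, // stand-in for "a label/zpool may already be present"
}

fn ensure_size_and_formatting(_disk: &Disk) {
    // stand-in for the real resize + format path
}

fn maybe_transform(disk: &Disk) {
    // Changing the LBA format erases all data, so only disks that have
    // never been set up (first boot, newly added, etc.) are touched.
    if disk.has_gpt {
        return;
    }
    ensure_size_and_formatting(disk);
}

fn main() {
    maybe_transform(&Disk { has_gpt: true }); // left alone
    maybe_transform(&Disk { has_gpt: false }); // resized/formatted
}
```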

rmustacc:

And I expect that when we transform to an explicit adoption model then we'll want this to not occur until that adoption process is initiated.

Contributor:

This only gets run if there is no GPT table; it's hidden by GitHub above the diff.

```
@@ -154,13 +231,124 @@ fn internal_ensure_partition_layout<GPT: gpt::LibEfiGpt>(
    }
}

fn ensure_size_and_formatting(
```
Contributor:

Is there any problem running this again after a failure, on a reboot or restart of sled-agent?

papertigers (Contributor, Author):

So while testing I found that if a disk is left in the unattached state, then sled-agent will ignore it. That's due to these lines. So if we were to detach blkdev from the disk and then take a sled-agent reboot, I believe we wouldn't be notified of that disk again.

Now if we were in the process of formatting a disk and the box panicked or rebooted for some reason, I don't know what would happen. Perhaps @rmustacc could shed some light on that failure scenario.

Regardless, I don't know what the best course of action is for the first failure mode; maybe we need an enhancement in how we detect disks from devinfo?

rmustacc:

> Now if we were in the process of formatting a disk and the box panicked or rebooted for some reason, I don't know what would happen. Perhaps @rmustacc could shed some light on that failure scenario.

There is nothing persistent about a blkdev attach/detach, so the kernel will attempt to attach it if it can. Because we don't support namespace management and aren't doing anything here, this should likely be okay, though I can't comment on what happens if a device fails in the middle of, say, a Format NVM command.

> Regardless, I don't know what the best course of action is for the first failure mode; maybe we need an enhancement in how we detect disks from devinfo?

I would suggest we file a follow-up bug that moves this to looking for the NVMe node and not the blkdev node. This is for a few reasons:

  • An NVMe device may not have any blkdev instances attached, as mentioned at the start of this.
  • An NVMe device may have the wrong number of namespaces created if it came from another system. This would show up as two distinct block devices or possibly no block devices, which would likely be confusing.

@papertigers (Contributor, Author) commented Mar 11, 2024:

Filed this as #5241

@andrewjstone (Contributor) left a comment:

@papertigers Nice work. This looks good to me, although I'm mostly trusting you (and @rmustacc) on how the actual NVMe stuff works. I think that's probably a safe bet :)

As long as the answer to my question is "it's fine", this LGTM.

I don't think this will affect what @smklein is doing, as I believe this is at a much lower level. But it would be good to get a quick check.

@papertigers (Contributor, Author) commented Mar 8, 2024:

Just wanted to dump some testing notes.

Setup

Two of the new disks that Robert added to my bench gimlet for me.

```
BRM42220027 # diskinfo | grep -vE 'Micron|WUS4C6'
TYPE    DISK                    VID      PID              SIZE          RMV SSD
NVME    c2t0014EE84015DD700d0   NVMe     WUS5EA138ESP7E3  3576.98 GiB   no  yes
NVME    c4t0014EE84015DD100d0   NVMe     WUS5EA138ESP7E3  3576.98 GiB   no  yes
```

I looped over them and configured them how we will likely see them in the wild from the manufacturer:

```
BRM42220027 # for i in $(nvmeadm list | grep WUS5E | awk 'BEGIN {FS=":"}; {print $1}'); do controller="$i/1";  echo resetting "$controller";  nvmeadm detach "$controller";  nvmeadm wdc/resize -s 3840 "$controller"; nvmeadm format "$controller" 0; nvmeadm attach "$controller"; done
resetting nvme1/1
nvme1/1 resized to 3840 GB
resetting nvme3/1
nvme3/1 resized to 3840 GB
```

Where LBA format 0 looks like:

```
    LBA Format 0
      Metadata Size:                        0 bytes
      LBA Data Size:                        512 bytes
      Relative Performance:                 Best
```

While I am here, let's reset one of the deployed WUS4C6432DSP3X3 drives with a 512-byte LBA to match what's in the field today:

```
BRM42220027 # nvmeadm detach nvme10/1
BRM42220027 # nvmeadm format nvme10/1 0
BRM42220027 # nvmeadm attach nvme10/1
```

Where LBA format 0 looks like:

```
    LBA Format 0
      Metadata Size:                        0 bytes
      LBA Data Size:                        512 bytes
      Relative Performance:                 Best
```

1st pass sled-agent

When we install omicron for the first time, we expect the WUS4C6432DSP3X3 device to update to use a 4K LBA, and the two WUS5EA138ESP7E3 devices to resize from 3840 down to 3200 and also be formatted using a 4K logical block address.

Install:

```
BRM42220027 # ./omicron-package -t bench install
```

Check sled-agent's logs:

```
BRM42220027 # cat $(svcs -L sled-agent) | grep -iE 'formatted|resized' | looker
02:56:51.459Z INFO SledAgent (StorageManager): Formatted disk with serial A079E3AF to an LBA with data size 4096
    file = sled-hardware/src/illumos/partitions.rs:316
02:56:57.009Z INFO SledAgent (StorageManager): Resized 23181L900156 from 3840 to 3200
    file = sled-hardware/src/illumos/partitions.rs:280
02:57:01.361Z INFO SledAgent (StorageManager): Formatted disk with serial 23181L900156 to an LBA with data size 4096
    file = sled-hardware/src/illumos/partitions.rs:316
02:57:09.718Z INFO SledAgent (StorageManager): Resized 23181L900102 from 3840 to 3200
    file = sled-hardware/src/illumos/partitions.rs:280
02:57:14.063Z INFO SledAgent (StorageManager): Formatted disk with serial 23181L900102 to an LBA with data size 4096
    file = sled-hardware/src/illumos/partitions.rs:316
```

2nd pass sled-agent

Remove omicron but don't modify the disks

```
BRM42220027 # ./omicron-package -t bench uninstall
Logging to: /tmp/omicron/out/LOG
About to delete the following datasets: [
    "oxi_b6014b47-c0cc-481f-8bdb-bbd543071616/cluster",
    "oxi_b6014b47-c0cc-481f-8bdb-bbd543071616/config",
    "oxi_b6014b47-c0cc-481f-8bdb-bbd543071616/debug",
    "oxi_b6014b47-c0cc-481f-8bdb-bbd543071616/install",
    "oxi_f369eded-643f-4a47-857f-e44c2a973c1f/cluster",
    "oxi_f369eded-643f-4a47-857f-e44c2a973c1f/config",
    "oxi_f369eded-643f-4a47-857f-e44c2a973c1f/debug",
    "oxi_f369eded-643f-4a47-857f-e44c2a973c1f/install",
    "oxp_1db0d263-c5e7-46ef-bf81-015f3a4af100/crypt",
    "oxp_22aa8eef-15f4-4769-8503-d25ede198c7d/crypt",
    "oxp_24df79f9-2aed-4b7a-93f8-047fbb1f2514/crypt",
    "oxp_410bc7c9-a23d-4a50-b537-9a80f4319b50/crypt",
    "oxp_4cd8e627-3a0a-4b35-a287-9ef51f91ec78/crypt",
    "oxp_67f9b83a-7ccb-4e50-b5ad-309128bdf014/crypt",
    "oxp_cfdc3eb7-6f02-410c-8daa-346868387c7e/crypt",
    "oxp_d17de75f-c329-417e-9cb1-10fc5d17ab14/crypt",
    "oxp_ed1efcc5-0c03-4b7b-9fbc-b5598ab7ea7a/crypt",
]
[yY to confirm] >> y
BRM42220027 # rm /var/svc/log/oxide-sled-agent:default.log
BRM42220027 # svcadm disable fmd
BRM42220027 # swap -l
swapfile             dev    swaplo   blocks     free
/dev/zvol/dsk/oxi_f369eded-643f-4a47-857f-e44c2a973c1f/swap 170,4         8 536870904 536870904
BRM42220027 # swap -d /dev/zvol/dsk/oxi_f369eded-643f-4a47-857f-e44c2a973c1f/swap
BRM42220027 # for zpool in $(zpool list -Hp | grep ox | awk '{print $1}'); do         zpool destroy "$zpool"; done
BRM42220027 # zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  3.94G  1.31G  2.62G        -         -     5%    33%  1.00x    ONLINE  -
BRM42220027 # svcs sled-agent
svcs: Pattern 'sled-agent' doesn't match any instances
STATE          STIME    FMRI
```

Install omicron again and verify none of the disks are touched outside of zpool creation.

```
BRM42220027 # ./omicron-package -t bench install
... wait a bit ...
BRM42220027 # cat $(svcs -L sled-agent) | grep -iE 'formatted|resized' | looker
BRM42220027 #
BRM42220027 # zpool list
NAME                                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
oxi_61762044-15db-411c-8ffc-df3b9ea8eb7c   748G  1.43M   748G        -         -     0%     0%  1.00x    ONLINE  -
oxi_ebd965ff-a438-45cb-8f1b-cd6f0527a602   748G  2.36M   748G        -         -     0%     0%  1.00x    ONLINE  -
oxp_3c7bf7b8-1b8c-4d78-b478-35eb2097e8cf  2.91T  1.39M  2.91T        -         -     0%     0%  1.00x    ONLINE  -
oxp_40c404c2-5429-4b0f-88db-a99e5d0a923d  2.91T  1.42M  2.91T        -         -     0%     0%  1.00x    ONLINE  -
oxp_42f191fd-aa0e-43b7-b1bc-a960db7dca59  2.91T  1.42M  2.91T        -         -     0%     0%  1.00x    ONLINE  -
oxp_44972f33-648d-4b90-8e02-e1b94aea20a9  2.91T  1.42M  2.91T        -         -     0%     0%  1.00x    ONLINE  -
oxp_71cbbfee-4ea9-4a15-9acb-4015a156ce47  2.91T  1.70M  2.91T        -         -     0%     0%  1.00x    ONLINE  -
oxp_797f7bcf-387f-4f44-ac04-6396030f9e8a  2.91T  1.39M  2.91T        -         -     0%     0%  1.00x    ONLINE  -
oxp_91c965c6-1e7b-4c72-853c-81a806100ca0  2.91T  1.66M  2.91T        -         -     0%     0%  1.00x    ONLINE  -
oxp_a2a5efee-d300-4e05-8696-93fcbfbe68f5  2.91T  1.67M  2.91T        -         -     0%     0%  1.00x    ONLINE  -
oxp_b70628e2-86cf-4302-a64d-f20e46f06292  2.91T  1.40M  2.91T        -         -     0%     0%  1.00x    ONLINE  -
rpool                                     3.94G  1.31G  2.62G        -         -     5%    33%  1.00x    ONLINE  -
```

Ensure disks with existing pools are not touched

Follow the same cleanup steps from above.

Then:

```
BRM42220027 # nvmeadm detach nvme10/1
BRM42220027 # nvmeadm format nvme10/1 0
BRM42220027 # nvmeadm attach nvme10/1
BRM42220027 # nvmeadm list nvme10
nvme10: model: WUS4C6432DSP3X3, serial: A079E3AF, FW rev: R2210000, NVMe v1.3, Capacity = 3052360 MB
  nvme10/1 (c11t0014EE81000BC523d0): Size = 2.91 TB, Capacity = 2.91 TB, Used = 2.91 TB
BRM42220027 # zpool create important-data c11t0014EE81000BC523d0
BRM42220027 # zpool list
NAME             SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
important-data  2.91T   111K  2.91T        -         -     0%     0%  1.00x    ONLINE  -
rpool           3.94G  1.31G  2.62G        -         -     6%    33%  1.00x    ONLINE  -
BRM42220027 # touch /important-data/oxide
BRM42220027 # echo hello > /important-data/oxide
BRM42220027 # cat /important-data/oxide
hello
BRM42220027 # ./omicron-package -t bench install
Logging to: /tmp/omicron/out/LOG

... wait some time ...

BRM42220027 # zpool list
NAME                                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
important-data                            2.91T   130K  2.91T        -         -     0%     0%  1.00x    ONLINE  -
oxi_42b721ab-bcc4-41b8-be68-b5eabf89acbc   748G  1.60M   748G        -         -     0%     0%  1.00x    ONLINE  -
oxi_85b0774a-9ace-4ed7-acf4-9d56ba665384   748G  2.53M   748G        -         -     0%     0%  1.00x    ONLINE  -
oxp_039cf6fc-49d9-40ca-a8b7-bfe7f0497fc9  2.91T     2M  2.91T        -         -     0%     0%  1.00x    ONLINE  -
oxp_359d6b72-7c60-47e1-adbb-a4c6e897f546  2.91T  1.41M  2.91T        -         -     0%     0%  1.00x    ONLINE  -
oxp_47a77a27-10a8-4c3b-a1fb-5547be43f71c  2.91T  1.42M  2.91T        -         -     0%     0%  1.00x    ONLINE  -
oxp_5aa0de14-cadf-44d1-87b0-45e7e3e71318  2.91T  1.96M  2.91T        -         -     0%     0%  1.00x    ONLINE  -
oxp_8ad31463-fe8e-4498-ba80-87419627e325  2.91T  1.39M  2.91T        -         -     0%     0%  1.00x    ONLINE  -
oxp_c2ae34e2-e6bc-4298-a344-cb9e19fbd6c5  2.91T  1.45M  2.91T        -         -     0%     0%  1.00x    ONLINE  -
oxp_f71530fa-e6d0-4819-8cd8-a19e881c46c8  2.91T  1.42M  2.91T        -         -     0%     0%  1.00x    ONLINE  -
oxp_ff493800-36a0-418d-b29e-e924f6bcfb23  2.91T  1.98M  2.91T        -         -     0%     0%  1.00x    ONLINE  -
rpool                                     3.94G  1.32G  2.62G        -         -     7%    33%  1.00x    ONLINE  -
BRM42220027 # cat /important-data/oxide
hello
```

@smklein (Collaborator) left a comment:

LGTM, with the caveats about how we can move some of this into Nexus -- but that doesn't need to be a blocker.


```rust
/// Returns a DiskIdentity that can be passed to ensure_partition_layout when
/// not operating on a real disk.
pub fn mock_device_identity() -> DiskIdentity {
```
smklein (Collaborator):

Nit: disk_identity - there are devices other than disks?

papertigers (Contributor, Author):

Fixed in ea09e03

Comment on lines +327 to +334
```rust
} else {
    info!(
        log,
        "There are no preferred NVMe settings for disk model {}; nothing to\
        do for disk with serial {}",
        identity.model,
        identity.serial
    );
```
smklein (Collaborator):

Thanks for handling this case -- this means that all the file-backed vdevs in testing will still work.

@papertigers merged commit 72cdbd7 into main on Mar 11, 2024; 22 checks passed.
@papertigers deleted the mike/nvme-overprovisioning branch on March 11, 2024 at 20:07.
jgallagher added a commit that referenced this pull request Mar 13, 2024
On main as of #5158, we unexpectedly get some binaries depending on
`libnvme` that shouldn't. In release builds:

```
installinator
omicron-dev
omicron-package
services-ledger-check-migrate
sled-agent
sled-agent-sim
wicketd
```

and in debug builds, all of the above plus `omdb`. We don't really care
about this for binaries that don't run on the rack, so stripping the
list down to binaries we do care about:

```
installinator
omdb (debug only)
sled-agent
wicketd
```

It's correct and expected that installinator and sled-agent depend on
libnvme, but omdb shouldn't (and doesn't in release), and wicketd _must
not_, as libnvme isn't available in the switch zone.

This PR fixes that incorrect dependency by splitting the parts of the
`sled-hardware` crate that wicketd and omdb depended on (directly or
transitively) into a new `sled-hardware-types` crate. On this PR, we are
left with only the following needing libnvme, in both debug and release:

```
installinator
omicron-dev
omicron-package
services-ledger-check-migrate
sled-agent
sled-agent-sim
```

I assume with a bit more work we could trim out everything but
`installinator` and `sled-agent`, and that might be worth doing, but
we'd like to land this ASAP so as to not break any updates performed off
of `main`. Separately, we could also imagine a CI check that we don't
have unexpected library dependencies present in the final binaries;
@papertigers is going to work on that.
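The shape of that split is straightforward: move pure data types into a crate with no native-library linkage. A hypothetical sketch (derives and field names are illustrative; see the actual `sled-hardware-types` crate for the real definitions):

```rust
// sled-hardware-types (sketch): plain data types with no libnvme (or other
// native-library) dependency, safe for wicketd and omdb to depend on.
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct DiskIdentity {
    pub vendor: String,
    pub model: String,
    pub serial: String,
}
```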
papertigers added a commit that referenced this pull request Mar 22, 2024
After #5158 was integrated,
@Rain noticed that attempting to run a build of `omdb` in the switch
zone suddenly stopped working and filed
oxidecomputer/helios-omicron-brand#15.
@jgallagher ended up fixing this by splitting out the sled-hardware
types into their own crate in
#5245.

We decided it would be good if we added some sort of CI check to omicron
to catch these library leakages earlier. This PR introduces that check
and adds it to the helios build and test buildomat job. I have also
added some notes to the readme for others that may end up adding a new
library dependency.

Locally I modified the allow list so that it would produce errors; those
errors end up looking like:
```
$ cargo xtask verify-libraries
    Finished dev [unoptimized + debuginfo] target(s) in 0.42s
     Running `target/debug/xtask verify-libraries`
    Finished dev [unoptimized + debuginfo] target(s) in 4.11s
Error: Found library issues with the following:
installinator
	UNEXPECTED dependency on libipcc.so.1
omicron-dev
	UNEXPECTED dependency on libipcc.so.1
	UNEXPECTED dependency on libresolv.so.2
sp-sim
	UNEXPECTED dependency on libipcc.so.1
	UNEXPECTED dependency on libresolv.so.2
sled-agent
	NEEDS libnvme.so.1 but is not allowed
mgs
	UNEXPECTED dependency on libipcc.so.1
	UNEXPECTED dependency on libresolv.so.2


If depending on a new library was intended please add it to xtask.toml
```