
Offline image configuration #534

Closed
SteveL-MSFT opened this issue Aug 29, 2024 · 13 comments · Fixed by #589

@SteveL-MSFT
Member

Summary of the new feature / enhancement

Support offline image configuration by having a group resource handle the mounting/unmounting of an image and passing the mounted path to resources that can support it.

Proposed technical implementation details (optional)

Introduce a tag offline that indicates resources support being given a mounted path to work against instead of the current live system. This would include the registry resource and others like file, windowsservice, etc...

How to pass the mounted path to resources? Some options:

  • A well known property _mountedPath
  • Metadata that the resource looks for mountedPath
@SteveL-MSFT SteveL-MSFT added Issue-Enhancement The issue is a feature or idea Needs Triage labels Aug 29, 2024
@SteveL-MSFT SteveL-MSFT added this to the Post-3.0 milestone Aug 29, 2024
@citelao

citelao commented Sep 5, 2024

Besides figuring out how to pass the mount path:

  • What are the limitations of an offline resource?
  • Can services (like sshd) be started offline? Can group policies (registry/non-registry) be configured offline?
  • Will all resources need to support online & offline configuration?

@SteveL-MSFT
Member Author

I think resources that support offline would need such a tag in the manifest to make them easier to discover (and the group resource described below could validate that the provided resources support it). Services can't be started offline, so any configuration of them would need to be via file or registry (on Windows).

I think we would want a different model that requires starting the image, configuring it, and shutting it down, and not mix that with actual offline configuration.

My initial thinking is that DSC would include a OfflineGroup resource that has the properties:

type: Microsoft.DSC/OfflineGroup
properties:
  imagePath: <path to vhd/vhdx to mount>
  resources: <array of resources that support `Offline` mode>

This group resource would be responsible for:

  • verify that every resource specified supports Offline mode; otherwise it's an error
  • mount the specified image and pass its path to each resource as metadata under Microsoft.DSC/mountedPath
  • resources that support Offline would use this metadata (if present) to read the filesystem (like the registry hive or sshd_config) and write changes to that mounted filesystem on set
  • unmount the image (on error or after the final resource completes)

For Windows, %windir% doesn't have to be on the C: drive, so there might need to be a way to pass some image environment variables also as metadata that resources can use.
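Putting the pieces above together, a configuration using the proposed group might look something like this sketch. Note that Microsoft.DSC/OfflineGroup, the Microsoft.DSC/mountedPath metadata key, and the registry properties shown here are names from this proposal, not an implemented API:

```yaml
# Hypothetical sketch of the proposed OfflineGroup; nothing here is
# implemented yet. The group would mount imagePath, pass the mount point
# to each nested resource as Microsoft.DSC/mountedPath metadata, and
# unmount when the last resource completes (or on error).
$schema: https://raw.githubusercontent.com/PowerShell/DSC/main/schemas/2024/04/config/document.json
resources:
- name: offline image
  type: Microsoft.DSC/OfflineGroup
  properties:
    imagePath: C:\images\base.vhdx
    resources:
    # Nested resources must advertise the `Offline` tag in their
    # manifest; otherwise the group raises an error.
    - name: windows product name
      type: Microsoft.Windows/Registry
      properties:
        keyPath: HKLM\Software\Microsoft\Windows NT\CurrentVersion
        valueName: ProductName
```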

@SteveL-MSFT
Member Author

Thinking about this further, one of the problems is that resource authors won't think in terms of online/offline, but I suspect many resources can work against a file path. So a simpler proposal would be a DiskImage resource:

type: Microsoft.Windows/DiskImage
properties:
  action: <mount|unmount>
  imagePath: <file path to vhd/vhdx/wim>
  mountPath: <mounted path>

So this resource simply takes care of mounting or unmounting an image (basically replicating the Mount-DiskImage cmdlet), so it may need properties that map to the relevant parameters.

Then resources like registry and sshdconfig would use the reference() function to get the mounted path. However, there's still the problem of env vars within the vhd that may be needed like systemroot, systemdrive, windir, programfiles, etc... so it would make sense that the DiskImage resource returns those values as properties that other parts of the configuration can pass to other resources.

The main downside to this approach is that the configuration needs an explicit unmount, but it would be easier to implement.
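A sketch of what this DiskImage approach could look like, including the explicit unmount. The Microsoft.Windows/DiskImage resource and its mountPath output are proposed here, not implemented; reference() and resourceId() are the existing DSC configuration functions:

```yaml
# Hypothetical sketch: DiskImage mount -> registry edit -> explicit unmount.
$schema: https://raw.githubusercontent.com/PowerShell/DSC/main/schemas/2024/04/config/document.json
resources:
- name: mount
  type: Microsoft.Windows/DiskImage
  properties:
    action: mount
    imagePath: C:\images\base.vhdx
- name: windows product name
  type: Microsoft.Windows/Registry
  properties:
    # Build the hive path from the mountPath output of the DiskImage resource.
    hivePath: "[concat(reference(resourceId('Microsoft.Windows/DiskImage', 'mount')).mountPath, '\\Windows\\System32\\Config\\SOFTWARE')]"
    keyPath: HKLM\Software\Microsoft\Windows NT\CurrentVersion
    valueName: ProductName
  dependsOn:
  - "[resourceId('Microsoft.Windows/DiskImage', 'mount')]"
- name: unmount
  type: Microsoft.Windows/DiskImage
  properties:
    action: unmount
    imagePath: C:\images\base.vhdx
  dependsOn:
  - "[resourceId('Microsoft.Windows/Registry', 'windows product name')]"
```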

@citelao

citelao commented Sep 13, 2024

My teammates and I talked yesterday---let me see if I can capture what we discussed & my thoughts.

Our problem

We often debug kernel-level issues that prevent the system from booting, so we need to ensure that KD is configured before the device ever boots.

We currently do that (in IXPTools - internal only) with a bevy of PowerShell scripts that mount our VHDs, mount their registries, change some files, and then boot them (and then run some subsequent offline scripts after unattend OOBE finishes). There are a few other benefits to offline configuration---it's static & easy to verify, it's much faster than waiting for OOBE to finish, it can access stuff on the host machine (especially resources that live on our VPN)---but the main requirement is to have tools like KD and SSH guaranteed running before the device boots.

For physical devices, we then choose to boot from VHD.

Also: bootstrapping

One other frustration is that Windows automation is not self-bootstrapping: there is no way to configure a Windows installation to be connectable (at least publicly) at first boot. With TShell, we can install a .cab offline, but there's no equivalent for SSH (even though I can deploy the certs offline, I can't Start-Service offline).

For example, Docker setups rely on substantial amounts of scripting and RunOnce keys to configure firewalls, start communication methods, etc. And all of them require an online device.

Also: device-agnosticity

We'd also like any solution to be device-agnostic: we'd love to be able to run configuration on existing devices and offline ones.

For example, we have many scripts that do something like this:

  1. Offline: copy some files to the VHD
  2. Online: run an installer

Since we haven't generalized the offline step, if someone wants to run that script on an existing machine, our answer is basically, "too bad. Create a new VM." Which sucks.

A new option for offline provisioning

You proposed these options in previous comments:

  1. OfflineGroup: your proposal to add a new group that verifies its children support offline mode & mounts the VHD.
  2. DiskImage mount/unmount: your proposal to instead add a new action/resource that handles mounting the VHD & redirecting subsequent resources.

We want to propose a 3rd option:

  1. Offline mode for dsc: e.g. dsc --to-vhd <.vhd> or dsc --to-mounted-vhd </my/vhd/path/>

This keeps the YAML files device-agnostic: you can run the same provisioning on an online or offline device.

This is in keeping with the overall DSC v3 philosophy: nominally a firewall resource should be able to run on any machine regardless of OS, so why not be able to run on any machine regardless of VHD vs live? Ansible, Chef, Puppet, etc all use configuration files that are device-agnostic, too (yes, there are win-*-specific tasks for each, but the philosophy exists at least).

Exploring offline mode (--to-vhd & --to-mounted-vhd)

Would all resources need to support offline mode?

At least some. The --to-vhd approach would mean that at least some resources would have to support offline mode explicitly. For example, the registry resource would need to understand how to work on mounted VHDs and the current machine.

One opportunity would be to invest further into meta-resources (#102)---resources that other resources can harness. For example, on Windows, the firewall resource could delegate all its work to the registry resource; if the registry resource supported offline mode, firewall would, too, automatically.

Luckily, @jazzdelightsme did an investigation into our offline resources and found that we only rely on 3 offline "things":

  1. Deploy files to arbitrary VHD locations
  2. Mount & modify the registry on the VHD
  3. Run dism commands to install cabs.
  4. (plus, as you mentioned @SteveL-MSFT, we may want to support environment variable redirection etc)

What about resources that can't support offline mode?

We could throw an error (or other options). A lot of stuff on Windows cannot be done statically (at least, according to public APIs):

  • Setting group policy
  • Starting services (especially sshd)
  • Some SPI calls?
  • Some Windows settings (e.g. those backed by CDS?)

I'd be quite fine throwing an error if users try to run dsc --to-vhd with one of those actions, at least for now.

We could also include a dsc validate that could check that your YAML files are offline-only. Or we could even create some sort of .nupkg-esque bundle of the remaining actions and send them over to the device to be run "later" (after OOBE, via a RunOnce script?).

Does this generalize to remote devices?

Yes? Interestingly, introducing this offline mode corresponds quite well to a remote mode ("agentless"). If I mount a remote machine's filesystem & pass that to --to-mounted-vhd, a remote file provider would basically function perfectly. If I mount a remote machine's registry, ditto registry.

But at that point, I'm worried we're reinventing a worse version of Ansible or Bolt. Or even Nix?

Is there prior art for offline provisioning?

No? There might be, but everything I can find in this space is online.

Is NixOS close?

I also have a hunch that Packer could be augmented to have an offline builder...

Should dsc mount the VHD or does someone else?

Someone else is easier. Patching a mounted VHD has some gotchas:

  • VHDs have multiple volumes, identified only by GUID. IXPTools does some inspection to figure out which one "looks like" a C drive.
  • Loading hives is a little complicated, since there are multiple files.
  • dism offline is extremely slow if the affected registry hive is mounted. IXPTools unmounts the registry before running dism commands.

The code to handle these cases is not particularly complicated, from what I've heard, but these nuances do exist.

Alternatives

  • Could rely on dism & generate .cab files for all of our tools instead? dism supports online & offline out-of-the-box (and rollback!), but the cabinet format is very hard to use, from what I've heard.
  • Group Policy? WMI (CIM)? MDM? PS Workflows? Maybe there's something there that could be used. I'm not an expert in the space; I only know what we've done :).

Anyway, this turned out pretty long.

tldr:

We want to be able to provision machines that are running KD and SSHD on first boot.

Can we support dsc --to-mounted-vhd /path/to/vhd/? We could augment "basic" providers like files, registry, & dism to support online & offline modes, and we could harness those in more advanced providers like firewall or group_policy.

Thoughts?

@SteveL-MSFT
Member Author

SteveL-MSFT commented Sep 13, 2024

@citelao some good thinking there. I like the dsc --to-image type of parameter with dsc handling mounting/unmounting of the image (this solves the explicit-unmount problem with the Image resource), and I like that you can use the same config for both online and offline (assuming the resources support it, as you noted). With offline mode as a top-level feature, it should be easier to encourage resource authors to make their resources offline-capable, and a new capability in the resource manifest could advertise compatible resources.

As you mention, I don't think we want to turn dsc itself into a full configuration solution, including remote configuration directly, unless there are compelling scenarios that make sense. My current preference is for higher-level solutions to use dsc with ssh for remote configuration/audit.

The only other major point we should consider before settling on a design (even if we don't implement it now) is non-Windows. For example, would it be useful to use dsc to configure offline container images? I would want to avoid a Windows-specific solution unless this really is a Windows-specific problem.

@citelao

citelao commented Sep 16, 2024

I'm glad you resonate with the --to-image parameter idea!

My current preference is for higher level solutions to use dsc with ssh for remote configuration/audit.

The only reason I'd push back on this is bootstrapping: if dsc requires me to deploy resources & files to the remote machine, I will need automation beyond dsc to provision those machines.

Compare this to Ansible: yes, I need to set up SSH on my remote machines first, but I can run everything else through Ansible itself. Versus the current approach of dsc: I'll need to set up SSH on my remote machine, then do more work to deploy all the DSC resources to the remote machine (and since DSC resources can now be written in any language, that gets inordinately more complicated).

The only other major point we should consider before closing towards a design (even if we don't implement it now) is for non-Windows. For example, would it be useful to use dsc to configure offline container images? I would want to avoid a Windows specific solution unless this really is a Windows specific problem.

Agreed. Offline provisioning seems like an obvious benefit to all platforms.

But it strikes me that I couldn't find any real competitive art here. Is it because Linux boots so quickly that online provisioning (boot VM, make changes, shut down) is fast enough? TBH, I'm not sure how systemd developers do their jobs 😄.


Also, I'm sure you knew this, but I looked into the servicing .cab format (e.g. can we get dism to do our offline provisioning?), and I'm not certain that's an easy/generalizable way forward. The .cab format seems proprietary & includes quite a few complicated manifests. .appx files are simpler & well-documented, but are constrained in what they can modify & generally require signing. I'm still intrigued by this approach, though: if we could harness dism to add our tools & shortcuts, we'd be using the same tool that Windows uses for their updates. & we'd get rollbacks etc.

@michaeltlombardi
Collaborator

@citelao - just a quick note on terminology we often use that may not be clear, but when we're referencing "higher level solutions" or "higher order tools," we're talking about things like Ansible (and Machine Configuration, Puppet, Chef, Salt, etc) - DSCv3 is designed, like PSDSC before it, to provide a highly effective way to design, implement, and call resources that those tools can readily integrate with.

There's an exponential complexity problem when designing configuration management tooling, and that problem grows large enough that there are multiple companies with hundreds of people trying to solve it.

That's not to say that DSC shouldn't be bootstrap-aware, or that remote execution is strictly out-of-bounds, but that DSC isn't trying to replicate Machine Configuration or the enterprise offerings of other configuration management tools.

But there are some domains where investing in them in DSC is more costly for less benefit than ensuring higher order tools can integrate and use their own models for calling DSC itself or resources that adhere to the DSC contract and publish a manifest.

@SteveL-MSFT
Member Author

@citelao was talking to another internal partner regarding the need for init, so I created this issue to cover that topic with regards to bootstrapping #557

For the actual mounting/unmounting of various file types (vhd, vhdx, wim, cab, etc.), I wonder whether it might be better for DSC to just provide an extension mechanism: a different tool handles the mounting/unmounting and simply returns the path to the mounted image.

For example, we would have an extension manifest, similar to a resource manifest, that advertises the mount/unmount capability with a well-defined JSON output format (initially we just need the path, I think) returned via stdout. The manifest would declare what file formats the tool supports.

In the case of VHD, it could literally just be a .ps1 script that calls Mount-VHD with a manifest file.
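To make the idea concrete, such an extension manifest might look something like the sketch below. Every field name here is invented for illustration; no such schema exists yet:

```json
{
  "manifestVersion": "0.1",
  "type": "Example.Extensions/VhdMount",
  "version": "0.1.0",
  "capabilities": ["mount", "unmount"],
  "fileFormats": ["vhd", "vhdx"],
  "mount": {
    "executable": "pwsh",
    "args": ["-NoProfile", "-File", "mount-image.ps1", "-ImagePath"]
  },
  "unmount": {
    "executable": "pwsh",
    "args": ["-NoProfile", "-File", "unmount-image.ps1", "-ImagePath"]
  }
}
```

Here the hypothetical mount-image.ps1 would call Mount-VHD and print a small JSON object such as {"mountedPath": "D:\\"} to stdout for dsc to consume.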

@citelao

citelao commented Sep 30, 2024

@citelao - just a quick note on terminology we often use that may not be clear, but when we're referencing "higher level solutions" or "higher order tools," we're talking about things like Ansible (and Machine Configuration, Puppet, Chef, Salt, etc) - DSCv3 is designed, like PSDSC before it, to provide a highly effective way to design, implement, and call resources that those tools can readily integrate with.

That's not to say that DSC shouldn't be bootstrap-aware, or that remote execution is strictly out-of-bounds, but that DSC isn't trying to replicate Machine Configuration or the enterprise offerings of other configuration management tools.

But there are some domains where investing in them in DSC is more costly for less benefit than ensuring higher order tools can integrate and use their own models for calling DSC itself or resources that adhere to the DSC contract and publish a manifest.

@michaeltlombardi Absolutely---I have no intent to (a) step on anyone's toes or (b) reimplement something that another tool does better.

But my understanding of the state of things is that there does not exist a good solution for pre-provisioning VHDs (etc) in situations like this that can also be run on live machines. In other words, I can either:

  1. Write a script that mounts a VHD & modifies registry keys/files/etc
  2. Write a script that changes keys on a running machine (perhaps via DSC, and perhaps via an unattend.xml)

But not both---which seems like something that would be useful: "get this machine into this state," regardless of whether it's online or offline.

Perhaps our scenario is uncommon, but our (internal) customers keep asking if our provisioning scripts can run on already-running devices. We've had to tell them "no" each time because they're designed to run on offline VMs only. That said---I'm beginning to wonder if I misunderstand our requirements. Some example scenarios we'd want to run online:

  • Install a public tool & configure it to auto-launch - simple enough
  • ⚠️ Install MSVSMON & add firewall exceptions - our firewall exceptions are for the VM's host IP---how would we obtain that on the device?
  • Deploy internal versions of a tool - would require VPN access. How would we get the installation files to the device?
  • Bootstrap sshd - how do we deploy our host's public key to authorized_keys?

I wonder if our aspirational tool must be dual-mode "offline, mounted" or "online, over SSH", since our provisioning steps largely require info from a host machine.


@citelao was talking to another internal partner regarding the need for init, so I created this issue to cover that topic with regards to bootstrapping #557

I'm not certain I understand issue 557---is it about downloading artifacts? We obtain artifacts in many ways, plus we cache them and do a bunch of other complicated stuff. I'd rather let our tool obtain the files, then use dsc to get them onto the device somehow.

Or is 557 more about creating a "nupkg" equivalent file?

I wonder for the actual mount/unmounting of various types of files: vhd, vhdx, wim, cab, etc... it might be better for DSC to just provide an extension mechanism that a different tool just handles the mounting/unmounting and simply returns the path to the mounted image.

For example, we would have an extension manifest that is similar to a resource manifest that advertises the mount/unmount capability with a well defined JSON output format (initially just need the path I think) to be returned via stdout. The manifest would declare what file formats the tool supports.

In the case of VHD, it could literally just be a .ps1 script that calls Mount-VHD with a manifest file.

Yes, @jazzdelightsme was the initial person to point out the complexity here. His proposal was to have dsc accept a mounted path (--to-mounted-vhd <path>). No need for complicated schemas or scripts---let the script's caller manage mounting the VHD/ISO/whatever?

@michaeltlombardi
Collaborator

michaeltlombardi commented Oct 1, 2024

Perhaps our scenario is uncommon, but our (internal) customers keep asking if our provisioning scripts can run on already-running devices. We've had to tell them "no" each time because they're designed to run on offline VMs only. That said---I'm beginning to wonder if I misunderstand our requirements. Some example scenarios we'd want to run online:

@citelao, I'm not sure if this helps, but back when I was doing infrastructure engineering, we used a combination of packer (which Azure VM Image Builder is built on) and our configuration management tool - at that time, Chef - to pre-configure everything we could do without the machine being joined and networked up in its eventual spot, then finishing the configuration at deployment time.

We were migrating into blue/green deployment, but when I left that role we had something close to 80/20 for pre-deploy configuration and post-deploy, with continuous enforcement of post-deploy.

I don't remember finding any cases that we couldn't solve that way, and it helped us move the vast majority of slow/intensive processing into image preparation. We kept a small library of versioned images and a much more heterogeneous and sprawling set of deployment configs for our various app teams.

One thing to highlight is that our post-deployment configurations were a superset of pre-deployment; we just didn't do upgrades to installed software (because we were migrating apps to blue/green as we onboarded them to this model). With DSC, that would look something like having a pre-deploy configuration document, using the Microsoft.DSC/Include resource in the post-deploy configuration, and having the other instances depend on it.

@SteveL-MSFT
Member Author

I'm not certain I understand issue 557---is it about downloading artifacts? We obtain artifacts many ways, plus we cache them and a bunch of other complicated stuff. I'd rather let our tool obtain the files, then use dsc to get them onto the device somehow.

Or is 557 more about creating a "nupkg" equivalent file?

No, #463 is the "nupkg" type solution

557 is a potential additional approach. For this one, imagine you want to install software locally and don't (or can't, due to disk space) have it as a zip package. I would agree 463 is a more common scenario than 557 and, for me, it would take priority.

In the case of VHD, it could literally just be a .ps1 script that calls Mount-VHD with a manifest file.

Yes, @jazzdelightsme was the initial person to point out the complexity here. His proposal was to have dsc accept a mounted path (--to-mounted-vhd <path>). No need for complicated schemas or scripts---let the script's caller manage mounting the VHD/ISO/whatever?

That would certainly simplify things and make it generally useful without tying it to any specific tools/technology. I think my preference right now is this approach, although I'd call it --to-mounted-image rather than making it specific to VHD.
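Under that caller-manages-mounting model, the end-to-end workflow could be a small script like the sketch below. Note that --to-mounted-image is the flag proposed in this thread, not a shipped dsc option; Mount-VHD/Dismount-VHD are the standard Hyper-V module cmdlets:

```powershell
# Sketch only: --to-mounted-image is hypothetical. Requires the Hyper-V
# PowerShell module and elevation.
$disk = Mount-VHD -Path C:\images\base.vhdx -Passthru | Get-Disk
try {
    # Real images may expose several volumes; production code would need
    # to pick the one that "looks like" the Windows volume.
    $vol = $disk | Get-Partition | Get-Volume |
        Where-Object DriveLetter | Select-Object -First 1
    dsc config set --path .\config.yaml --to-mounted-image "$($vol.DriveLetter):\"
}
finally {
    Dismount-VHD -Path C:\images\base.vhdx
}
```

This keeps dsc free of any mounting logic, while the same config.yaml remains usable online simply by omitting the flag.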

@kilasuit

kilasuit commented Oct 2, 2024

adding to this

But my understanding of the state of things is that there does not exist a good solution for pre-provisioning VHDs (etc) in situations like this that can also be run on live machines.
Perhaps our scenario is uncommon, but our (internal) customers keep asking if our provisioning scripts can run on already-running devices. We've had to tell them "no" each time because they're designed to run on offline VMs only.

No, it's not, but like any configuration process you need to account for handoffs between each step of the process, especially between teams. You only make it more complex by trying to have an all-in-one solution for both the online and offline scenarios, but it is doable.

This is why you can build your end-to-end process to do checks at each stage and, depending on which stage it is, either skip a series of steps or run them. I used Composite Resources and Script Resources for these, along with WaitFor resources; I believe replication and expansion of these are part of the plans for v3, but I've been somewhat out of the loop with the development and plans for v3.

My experience comes from using Lability, not only in lab environments (where I could build and deploy machines while totally disconnected from the internet or any other network) but also, similarly to Packer, to build and package an image/image layer that could be used elsewhere as needed (much like how containers are built today). I even told Jeffrey Snover that this is how I would configure containers: start with the base OS, apply a DSC config for the next layer, and so on until the full layering is completed and a device (or complex group of devices) is in the overall desired state.

However, I not only needed lab deployments to local machines (or other machines under my control on the local network); I also needed to do so with Azure VMs, and I utilised Azure Lab Services (and before it, Azure DevTest Labs) when spinning up repeatable labs for delivering training in this area.

@rikhepworth wrote this great article - Define Once, Deploy Everywhere (Sort of...) about how he's used bits of this & may have some thoughts on this thread.

✅ Install a public tool & configure it to auto-launch - simple enough
⚠️ Install MSVSMON & add firewall exceptions - our firewall exceptions are for the VM's host IP---how would we obtain that on the device?
❌ Deploy internal versions of a tool - would require VPN access. How would we get the installation files to the device?
  ❌ Bootstrap sshd - how do we deploy our host's public key to authorized_keys?

The 2nd is doable if you also have access to jump from the VM up to its host via the network layer, e.g. via WinRM or SSH:

  • however, that's not a config step that the VM's configuration should be managing; the host of the VM should manage it

For the 3rd, nothing stops you from encoding the installer file as a string and setting that on the mounted disk. I have done this using segment-based configuration to deploy parts of a file and then stitch them together into the final file inside the VM's disk, in order to deploy a VHDX into a nested VM where I could not give the VM any network access. That's not a great process, but it's doable.
For the 4th, I've raised an issue in the Computer Management DSC resource module to make SSH config options manageable via that resource.

I personally would rather not see dsc add a --to-vhd (or --to-mounted-image) option, as that's not granular enough for me: offline interaction may need to happen on a device that is not totally disconnected, or against a filesystem type that isn't supported by the host OS. I'd also want to be able to remotely attach a disk from another device with this method, and even for virtual hard drive files there are many other file types than just vhd/vhdx. Instead, I'd have this passed in the configuration itself, using a "simple" Offline resource in the configuration, and then pass that resource any configuration via the command line and its additional parameters, much like how ARM does this today.

1..10 | foreach { dsc config set --path Myconfig.yml --offline-settings Device$_.yml --background}

That then would allow you to build as many devices as you need with the same config

that would look like this

# example.dsc.config.yaml
$schema: https://raw.githubusercontent.com/PowerShell/DSC/main/schemas/2024/04/config/document.json
resources:
- name: Offline
  type: Microsoft.DSC/OfflineConfiguration
- name: Current user registry example
  type: DSCCommunity.NetworkingDSC/HostsFile
  properties:
    HostName: google.com
    ipaddress: 127.0.0.1
    ensure: present
  dependsOn:
    - "[resourceId('Microsoft.DSC/OfflineConfiguration', 'Offline')]"

with this path.yml containing any details for the offline resource and how to interact with it

# example.offline.config.yaml
$schema: https://raw.githubusercontent.com/PowerShell/DSC/main/schemas/2024/04/config/offline.json
OfflineConfig:
- type: VHD
  Path: C:\myvhd.vhdx
  MountTo: Any DriveLetter

or

# example.offline.config.yaml
$schema: https://raw.githubusercontent.com/PowerShell/DSC/main/schemas/2024/04/config/offline.json
OfflineConfig:
- type: Device
  DeviceID: <As Returned by Get-PNPDeviceID>
  Mount: Locally
  MountTo: Any DriveLetter

or

# example.offline.config.yaml
$schema: https://raw.githubusercontent.com/PowerShell/DSC/main/schemas/2024/04/config/offline.json
OfflineConfig:
- type: Device
  DeviceID: <As Returned by Get-PNPDeviceID>
  Mount: VM
  MountTo: <VMID>

1..10 | foreach { dsc config set --path Myconfig.yml --off-device-settings Device$_.yml --background}

I know this is beyond "Offline", but it would also be worth considering for the wider scenario of "configuring a device that is not where I am running dsc from".

# example.dsc.config.yaml
$schema: https://raw.githubusercontent.com/PowerShell/DSC/main/schemas/2024/04/config/document.json
resources:
- name: OffDevice
  type: Microsoft.DSC/OffDevice
- name: Current user registry example
  type: DSCCommunity.NetworkingDSC/HostsFile
  properties:
    HostName: google.com
    ipaddress: 127.0.0.1
    ensure: present
  dependsOn:
    - "[resourceId('Microsoft.DSC/OffDevice', 'OffDevice')]"
# example.dsc.config.yaml
$schema: https://raw.githubusercontent.com/PowerShell/DSC/main/schemas/2024/04/config/document.json
resources:
- name: AWSDisk
  type: Microsoft.DSC/OffDevice
  provider: AWS
  path: <AWS Disk accessible URI>

With this, you pass the connection element off to the AWS provider, which deals with auth, securing the connection to that disk, and making any required edits to it as needed. This could also optionally mount it as if it were local, using SMB over QUIC, but that's an implementation detail that is out of scope for this conversation.


I say this especially because there would be a need to mount or interact with other disk types, which may include filesystems that aren't usually visible to the host OS, or that aren't even connected to the host but can be coerced into being interacted with: by attaching the device to a VM instead, by installing other software (including drivers) on the host OS, or via other remote methods.

Examples of that include filesystems used on devices like Games Consoles, phones etc.

I'd prefer this over explicit passing of a mounted path, both for flexibility and to reduce the potential of leaking a sensitive item (like the image name, disk name, or remote drive) into any logging on the device where the dsc executable runs.

It would also enable invoking parallel configurations against similar types of devices across different environments (think Azure, AWS, GCP, and private clouds), and even potentially in customer environments where dsc is available too.

@SteveL-MSFT
Member Author

SteveL-MSFT commented Oct 7, 2024

@kilasuit --to-mounted-path is general enough that it could work against a remote filesystem, vhd, etc. It simply requires a file system path, as the actual mounting/connecting/etc. is expected to happen beforehand.

With --to-mounted-path, I think the simplest design (for both config authors and users) is to have a getMountedPath() function that is used in the config; if there is no mounted path, it simply returns an empty string. In this case, assume the registry resource is updated to support loading a hive from disk:

$schema: https://raw.githubusercontent.com/PowerShell/DSC/main/schemas/2024/04/config/document.json
metadata:
  Microsoft.DSC:
    deploymentSupport:
      - online
      - offline
resources:
- name: windows product name
  type: Microsoft.Windows/Registry
  properties:
    hivePath: "[concat(getMountedPath(), '\\Windows\\System32\\Config\\SOFTWARE')]"
    keyPath: HKLM\Software\Microsoft\Windows NT\CurrentVersion
    valueName: ProductName

The deploymentSupport property (we can debate the name) advertises that this config has been tested to work for both online and offline scenarios. In the future, this could be extended to include things like remote.
