
Configure DNS and NTP in machine allocation #571

Merged (24 commits) on Nov 8, 2024

Conversation

@majst01 (Contributor) commented Oct 19, 2024

We should discuss whether we also put these two options into the Partition and take them from there when a machine is allocated.

The other option is to put the DNS and NTP configuration into the CloudProfile and adapt machine creation in GEPM.

@robertvolkmann commented Oct 21, 2024

It should be a feature of metal-stack and not reliant on Gardener-specific features like CloudProfile. For one customer, having the ability to configure it directly within the Partition would be advantageous. Additionally, for isolated clusters, the DNS servers and NTP servers in the CloudProfile would overwrite the values provided in the Partition. I also tend to require configured DNS and NTP servers for every partition in the future to eliminate any default values in the metal images.

@majst01 (Contributor) commented Oct 21, 2024

> It should be a feature of metal-stack and not reliant on Gardener-specific features like CloudProfile. For one customer, having the ability to configure it directly within the Partition would be advantageous. Additionally, for isolated clusters, the DNS servers and NTP servers in the CloudProfile would overwrite the values provided in the Partition. I also tend to require configured DNS and NTP servers for every partition in the future to eliminate any default values in the metal images.

On the other hand, properties like image or firewall-image are not defaulted and must be provided during allocation by the GEPM.

To me, inheriting from the partition feels a bit opinionated.

@robertvolkmann commented

How do you plan to provide the NTP configuration for metal-hammer if the NTP servers cannot be configured in the Partition?

@majst01 (Contributor) commented Oct 21, 2024

> How do you plan to provide the NTP configuration for metal-hammer if the NTP servers cannot be configured in the Partition?

This is currently done in the pixie-core deployment, but it would also be easier if the Partition contained this configuration.

@robertvolkmann commented

> How do you plan to provide the NTP configuration for metal-hammer if the NTP servers cannot be configured in the Partition?
>
> This is currently done in the pixie-core deployment, but it would also be easier if the Partition contained this configuration.

It seems a bit inconsistent to me that we have to configure the kernel and image URL for metal-hammer directly in the Partition, but not the NTP servers that metal-hammer will use.

@Gerrit91 (Contributor) commented

Regarding defaulting through the partition entity: I am actually open to offering this defaulting layer when it makes configuration easier for certain environments. There can be partitions that cannot reach the internet at all, where no machine can be provisioned without passing custom DNS and NTP servers. For these scenarios it is cumbersome to pass these settings on every allocation. As long as providing them is optional, it does not hurt to add this.

@majst01 (Contributor) commented Oct 21, 2024

@simcod please add matching fields to the Partition entity (v1 and database), check in the service whether they are set, and use the partition config for defaults.
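The requested defaulting could be sketched roughly as follows. This is a minimal illustration, not the actual metal-api code: `defaultIfEmpty`, the plain string slices, and the variable names are hypothetical stand-ins for the real `metal.DNSServers`/`metal.NTPServers` types and the allocation service logic.

```go
package main

import "fmt"

// defaultIfEmpty returns the servers provided in the allocation request,
// falling back to the partition's configured defaults when none were given.
// Names and types here are illustrative only.
func defaultIfEmpty(requested, partitionDefaults []string) []string {
	if len(requested) > 0 {
		return requested
	}
	return partitionDefaults
}

func main() {
	partitionDNS := []string{"10.0.0.53"}

	// No servers in the request: the partition default applies.
	fmt.Println(defaultIfEmpty(nil, partitionDNS))

	// Servers in the request always win over the partition default.
	fmt.Println(defaultIfEmpty([]string{"9.9.9.9"}, partitionDNS))
}
```

Because the fallback only triggers when the request omits the servers entirely, existing callers that always pass explicit values are unaffected.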

@simcod simcod marked this pull request as ready for review October 28, 2024 15:06
@simcod simcod requested a review from a team as a code owner October 28, 2024 15:06
@simcod simcod requested a review from Gerrit91 October 28, 2024 15:06
Review threads (outdated, resolved):
- cmd/metal-api/internal/service/machine-service.go (3 threads)
- cmd/metal-api/internal/metal/machine.go (2 threads)
Comment on lines 1155 to 1156:

	dnsServers metal.DNSServers
	ntpServers metal.NTPServers

Contributor comment: This would render the else branches unnecessary.

Suggested change:

	-	dnsServers metal.DNSServers
	-	ntpServers metal.NTPServers
	+	dnsServers = partition.DNSServers
	+	ntpServers = partition.NTPServers

Comment on lines 208 to 216:

	var dnsServers metal.DNSServers
	if len(requestPayload.DNSServers) > 0 {
		dnsServers = requestPayload.DNSServers
	}

	var ntpServers metal.NTPServers
	if len(requestPayload.NTPServers) > 0 {
		ntpServers = requestPayload.NTPServers
	}

@Gerrit91 (Contributor) commented Nov 7, 2024:

The same validations as in the machine spec should be implemented here. Otherwise machines might default to invalid partition defaults.

Maybe you can factor out the validation into a dedicated func?

Diff under review:

	-	if len(requestPayload.DNSServers) > 0 {
	-		dnsServers = requestPayload.DNSServers
	+	reqDNSServers := requestPayload.DNSServers
	+	if len(reqDNSServers) > 0 {

I think this check can be omitted, because the validate func iterates over the DNS servers and otherwise returns nil.
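The point above rests on a Go property: ranging over a nil or empty slice simply runs zero iterations, so a validator that only loops over its input already handles the empty case and the caller's length guard is redundant. A minimal sketch (the `validate` func here is illustrative, not the actual metal-api one):

```go
package main

import "fmt"

// validate loops over the given servers. For a nil or empty slice the
// range clause executes zero times and the func returns nil, so callers
// do not need a len(servers) > 0 guard before invoking it.
func validate(servers []string) error {
	for _, s := range servers {
		if s == "" {
			return fmt.Errorf("empty server entry")
		}
	}
	return nil
}

func main() {
	fmt.Println(validate(nil))                // nil slice: passes without a guard
	fmt.Println(validate([]string{"valid"}))  // non-empty valid input: passes
	fmt.Println(validate([]string{""}))       // invalid entry: error
}
```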

Diff under review:

	-	if len(requestPayload.NTPServers) > 0 {
	-		ntpServers = requestPayload.NTPServers
	+	reqNtpServers := requestPayload.NTPServers
	+	if len(ntpServers) > 0 {

Same here.
