Unable to attach static IP (non-FQDN) node to a pool #1020
Hi @asmajlovic, I tested the following with success:

```hcl
resource "bigip_ltm_pool_attachment" "attach_node" {
  for_each = {
    for entry in local.pool_attachments : "${entry.pool_name} : ${entry.node_name} : ${entry.node_fqdn_autopopulate} : ${entry.node_priority_group} : ${entry.node_ratio} : ${entry.node_state}" => entry
  }

  node           = "/${var.f5_partition}/${each.value.node_name}"
  pool           = "/${var.f5_partition}/${each.value.pool_name}"
  state          = each.value.node_state
  ratio          = each.value.node_ratio
  priority_group = each.value.node_priority_group

  depends_on = [bigip_ltm_pool.pool, bigip_ltm_node.node]
}
```

```console
$ terraform plan -out poolattach

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # bigip_ltm_node.node["cerastes"] will be created
  + resource "bigip_ltm_node" "node" {
      + address          = "172.20.1.10"
      + connection_limit = (known after apply)
      + description      = "Static IP test node #1"
      + dynamic_ratio    = (known after apply)
      + id               = (known after apply)
      + monitor          = "default"
      + name             = "/Common/cerastes"
      + rate_limit       = (known after apply)
      + ratio            = (known after apply)
      + session          = (known after apply)
      + state            = (known after apply)
    }

  # bigip_ltm_pool.pool["myth"] will be created
  + resource "bigip_ltm_pool" "pool" {
      + allow_nat              = "yes"
      + allow_snat             = "yes"
      + id                     = (known after apply)
      + load_balancing_mode    = "round-robin"
      + minimum_active_members = (known after apply)
      + monitors               = [
          + "/Common/tcp",
        ]
      + name                   = "/Common/myth"
      + reselect_tries         = (known after apply)
      + service_down_action    = (known after apply)
      + slow_ramp_time         = (known after apply)
    }

  # bigip_ltm_pool_attachment.attach_node["myth : cerastes:5000 : enabled : 0 : 1 : enabled"] will be created
  + resource "bigip_ltm_pool_attachment" "attach_node" {
      + connection_limit      = (known after apply)
      + connection_rate_limit = (known after apply)
      + dynamic_ratio         = (known after apply)
      + id                    = (known after apply)
      + monitor               = (known after apply)
      + node                  = "/Common/cerastes:5000"
      + pool                  = "/Common/myth"
      + priority_group        = 0
      + ratio                 = 1
      + state                 = "enabled"
    }

Plan: 3 to add, 0 to change, 0 to destroy.

Saved the plan to: poolattach

To perform exactly these actions, run the following command to apply:
    terraform apply "poolattach"
```

```console
$ terraform apply "poolattach"

bigip_ltm_node.node["cerastes"]: Creating...
bigip_ltm_pool.pool["myth"]: Creating...
bigip_ltm_node.node["cerastes"]: Creation complete after 0s [id=/Common/cerastes]
bigip_ltm_pool.pool["myth"]: Creation complete after 0s [id=/Common/myth]
bigip_ltm_pool_attachment.attach_node["myth : cerastes:5000 : enabled : 0 : 1 : enabled"]: Creating...
bigip_ltm_pool_attachment.attach_node["myth : cerastes:5000 : enabled : 0 : 1 : enabled"]: Creation complete after 0s [id=/Common/myth-/Common/cerastes:5000]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
```
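The snippet above iterates over `local.pool_attachments`, which is not shown in the comment. A minimal definition consistent with the `for_each` key visible in the plan output (`"myth : cerastes:5000 : enabled : 0 : 1 : enabled"`) might look like the following; the structure and values are illustrative assumptions, not the maintainer's actual code:

```hcl
# Illustrative only: the actual locals block was not included in the comment.
locals {
  pool_attachments = [
    {
      pool_name              = "myth"
      node_name              = "cerastes:5000" # node name plus service port
      node_fqdn_autopopulate = "enabled"
      node_priority_group    = 0
      node_ratio             = 1
      node_state             = "enabled"
    },
  ]
}
```

Note that the attachment's `node` argument is built from the node *name* with its service port (`/Common/cerastes:5000`), not from the node's IP address.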
Now I feel a bit stupid for raising the "issue" in the first place 😄 #rtfm Thank you for your prompt response and code debugging - much obliged. I'll happily finance a coffee or beer for you (just send a suitable, non-crypto, link). Other than that, we can close this off.
Hi @asmajlovic, no problem.
Environment
Summary
Pool attachments appear to always be treated as FQDN nodes, even when the node definition uses a static IP address. As a result, any attempt to attach a static IP node to a pool fails with the error: "FQDN property cannot be used with static IP nodes".
Steps To Reproduce
Steps to reproduce the behavior:
variables.tf
node.tf
pool.tf
misc.auto.tfvars
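The individual file contents are collapsed in this capture. A minimal static-IP node and pool definition that exercises the reported code path might look like the following sketch; the arguments are inferred from the plan output elsewhere in this thread, and the names are illustrative:

```hcl
# node.tf -- a static IP node with an arbitrary (non-FQDN) name
resource "bigip_ltm_node" "node" {
  name    = "/Common/cerastes"
  address = "172.20.1.10"
  monitor = "default"
}

# pool.tf -- the pool the node will be attached to
resource "bigip_ltm_pool" "pool" {
  name                = "/Common/myth"
  load_balancing_mode = "round-robin"
  monitors            = ["/Common/tcp"]
}
```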
Terraform plan
The plan for the above resources is generated without any issues:
Attempting to apply the changes results in a pool attachment error:
Expected Behavior
The static IP address node is identified as an existing resource and attached to the pool without any FQDN handling. The API / UI permits creating an arbitrarily named node with a preferred IPv4 / IPv6 address, so there is no obvious reason why the Terraform provider would not provide the same functionality.
Actual Behavior
Static IP address node pool attachment fails with an FQDN property error (see error output above).
Current workaround
For the time being, we have updated our tfvars source file to reference an IPv4 address for both the node name and address values. This feels like unnecessary duplication of the node name and address, and the resulting resource carries no obvious reference in its name (we leave a comment in the variables source file to indicate its purpose), but it works as expected:
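The workaround snippet itself is collapsed in this capture; concretely, it amounts to something like the following tfvars fragment, where the variable structure is an illustrative assumption rather than the reporter's actual file:

```hcl
# misc.auto.tfvars -- workaround: the node name duplicates the IP address
# so the provider treats the pool attachment as a static IP node.
nodes = {
  "172.20.1.10" = {   # name is the IP itself, not a descriptive name
    address = "172.20.1.10"
    monitor = "default"
  }
}
```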
Additional information
I have noted several previous GitHub issues that appear to be similar or related to what has been outlined above:
The last entry in the list is the closest duplicate I have been able to find. It seems that this was functional in version 1.6.0 of the F5 `bigip` Terraform provider, and I'm afraid I do not know exactly when the behaviour changed. It is also unclear why there have been occasional statements that this is "legacy" pool attachment behaviour. As already mentioned above, the API / UI supports the feature; while that remains true, I see no reason why the Terraform provider should not as well.

Any input, clarification, or suggestions (should we be doing something incorrectly) on the above would be very much appreciated. Thank you.