VNF on OneKE appliance doesn't NAT (or get any NAT related info) #89
I confirm the issue.
Hi, the one-apps repo is the correct place to report VR- and OneKE-related issues. ☝️ 😌
Thanks, I've corrected the command in the docs; it was a simple typo. In general, your OneFlow configuration looks OK, and something similar to it seems to be working in my environments. When the one-failover service "fails", it always tries to bring down every VR module possible, hence NAT (and everything else) is disabled. There must be a reason keepalived returned the FAULT state through the VRRP fifo. If you could examine 🤔
Hi there @sk4zuzu, thank you for your answer, mate. First, sorry for asking in the wrong repo, it won't happen again. Answering you:
vrouter:~# cat /var/log/messages | grep keep
Apr 29 23:34:41 vrouter local3.debug one-contextd: Script loc-15-keepalived: Starting ...
Apr 29 23:34:41 vrouter local3.debug one-contextd: Script loc-15-keepalived: Finished with exit code 0
Apr 29 23:34:44 vrouter daemon.info Keepalived[2704]: WARNING - keepalived was built for newer Linux 6.3.0, running on Linux 6.1.78-0-virt OpenNebula/one#1-Alpine SMP PREEMPT_DYNAMIC Wed, 21 Feb 2024 08:19:22 +0000
Apr 29 23:34:44 vrouter daemon.info Keepalived[2704]: Command line: '/usr/sbin/keepalived' '--dont-fork' '--use-file=/etc/keepalived/keepalived.conf'
Apr 29 23:34:44 vrouter daemon.info Keepalived[2704]: Configuration file /etc/keepalived/keepalived.conf
Apr 29 23:34:44 vrouter daemon.info Keepalived[2704]: Script user 'keepalived_script' does not exist
Apr 29 23:34:44 vrouter daemon.info Keepalived[2704]: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Apr 29 23:34:44 vrouter daemon.info Keepalived[2704]: Configuration file /etc/keepalived/keepalived.conf
Apr 29 23:34:44 vrouter daemon.info Keepalived[2704]: Script user 'keepalived_script' does not exist
Apr 29 23:34:44 vrouter daemon.info Keepalived_vrrp[2818]: Script user 'keepalived_script' does not exist
It keeps telling me that the user 'keepalived_script' does not exist. But I guess this is not needed, because as I understand it, the 'keepalived_script' user is only needed for scripts run by the pre/post keepalived hooks.
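For reference, the script user only matters for configurations that actually define check scripts. A hypothetical keepalived fragment that would require the user looks like this (the `chk_sshd` name and script path are made up for illustration; `script_user` and `enable_script_security` are real keepalived options):

```
global_defs {
    enable_script_security
    script_user keepalived_script
}
vrrp_script chk_sshd {
    script "/usr/bin/pgrep sshd"
    interval 2
}
```

Without any `vrrp_script` blocks in the VR config, the warning should be benign.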
cat /etc/keepalived/conf.d/vrrp.conf
vrrp_sync_group VRouter {
group {
ETH1
}
}
vrrp_instance ETH1 {
state BACKUP
interface eth1
virtual_router_id 17
priority 100
advert_int 1
virtual_ipaddress {
10.1.0.11/26 dev eth0
10.1.0.10/24 dev eth1
}
virtual_routes {
}
}
Which differs from yours in the CIDR part:
virtual_ipaddress {
10.1.0.11/26 dev eth0
10.1.0.10/24 dev eth1
}
In yours both NICs use /32, while in my case it's /26 for eth0 and /24 for eth1.
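The prefix length matters because a /26 or /24 VIP places keepalived's address inside the same subnet as the node addresses, while a /32 does not. A small POSIX sh sketch (using the addresses from this thread) that computes which network a VIP falls into, so it can be checked against the AR:

```shell
# Compute the network a VIP belongs to, to check whether it overlaps the
# address range (AR) the cluster nodes are deployed from. Plain POSIX sh,
# no ipcalc required. The address and prefix are the ones from this thread.
ip=10.1.0.11 prefix=26

oldIFS=$IFS; IFS=.
set -- $ip                       # split the dotted quad into $1..$4
IFS=$oldIFS

addr=$(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
net=$(( addr & mask ))

net_cidr="$(( (net >> 24) & 255 )).$(( (net >> 16) & 255 )).$(( (net >> 8) & 255 )).$(( net & 255 ))/$prefix"
echo "$net_cidr"                 # prints 10.1.0.0/26
```

So 10.1.0.11/26 sits in 10.1.0.0/26, i.e. the same block the node IPs come from, which is exactly the kind of overlap a /32 VIP avoids.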
Hi there @sk4zuzu, I think I have found the problem (not the solution, though, sorry). At the time we instantiate the cluster we define the
If I leave those variables blank (empty), the cluster runs without a problem, regardless of the network I use. I.e.:
Let me explain myself:
AUTOMATIC_VLAN_ID = "YES"
CLUSTER_IDS = "100"
PHYDEV = "bond0"
VN_MAD = "802.1Q"
The already-created private network:
Am I doing it wrong?
Hi @kCyborg
It's fine, man :) @rsmontero already saved us. ☝️ 😌 As for the example: the first VIP address should preferably be from the public VNET (it should work with the private VNET as well), but it has to be from outside the AR you use to deploy the cluster nodes, so there is no conflict at the IP protocol level.
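To illustrate the advice above, here is a hypothetical private VNET template; all values (name, start IP, size) are made up for the example. The AR starts at 10.1.0.20, so a VIP such as 10.1.0.10 lies outside the AR and cannot collide with any node address:

```
# Hypothetical private VNET: nodes deploy from the AR 10.1.0.20-10.1.0.59,
# leaving addresses like 10.1.0.10 free to serve as keepalived VIPs.
NAME              = "private"
VN_MAD            = "802.1Q"
PHYDEV            = "bond0"
AUTOMATIC_VLAN_ID = "YES"
CLUSTER_IDS       = "100"
AR = [ TYPE = "IP4", IP = "10.1.0.20", SIZE = "40" ]
```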
Description
Once our team tries to instantiate the OneKE appliance (both the normal and the air-gapped version) available from the public OpenNebula marketplace, the VNF doesn't get any NAT rules, thus making communication from the public network to the VNF, and then on to the private k8s cluster, unavailable :-(
To Reproduce
Expected behavior
A working k8s cluster
Details
If we go into the VNF via SSH and check the logs in
/var/log/one-appliance/one-failover.log
we get output telling us that the VRouter failed, but it doesn't say why :-(
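When the log only records the state change, grepping for the VRRP state keywords at least pins down when the router dropped to FAULT. A sketch using a simulated log excerpt (the line format here is invented, not the real one-failover format):

```shell
# Simulated one-failover log lines (hypothetical format); grepping for the
# VRRP state keywords shows when the router transitioned to FAULT, even when
# the log doesn't explain why.
log='2024-04-29 23:35:01 transitioned to state MASTER
2024-04-29 23:36:12 transitioned to state FAULT'

states=$(printf '%s\n' "$log" | grep -o 'state [A-Z]*')
printf '%s\n' "$states"          # prints: state MASTER / state FAULT
```

On the real VNF one would feed the actual log file into the same grep instead of the simulated variable.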
But if we check
/var/log/one-appliance/configure.log
we get output informing us that the
/etc/iptables/rules-save
file was created. However, if we try to open the file, it is indeed empty. And if we check the iptables:
And:
And if we try with the recommended command:
We get nothing :-(
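A quick way to confirm the symptom is to look for MASQUERADE or SNAT entries in the nat table (e.g. from `iptables-save -t nat`). This sketch runs the check against a captured empty ruleset like the one described above, rather than against a live firewall:

```shell
# Check an iptables-save style dump for NAT rules. A working VNF setup
# should contain MASQUERADE or SNAT entries; the sample below reproduces
# the empty ruleset reported in this issue.
rules='-P PREROUTING ACCEPT
-P POSTROUTING ACCEPT'

case "$rules" in
  *MASQUERADE*|*SNAT*) nat_status="NAT configured" ;;
  *)                   nat_status="no NAT rules"   ;;
esac
echo "$nat_status"               # prints: no NAT rules
```

On the VNF itself, substituting `rules=$(iptables-save -t nat)` would run the same check against the live nat table.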
If we try to reach the public network from the master, storage, or slave nodes (which have the DNS server in
/etc/resolv.conf
pointing to the private IP of the VNF node), we get no answer, meaning those k8s nodes can't reach anything on the internet.
Additional context
We don't really know if the problem is indeed in the VNF or if we are using a wrong configuration, as the documentation doesn't say much :-(
Progress Status