Thanks for creating this :) cannot get past this error... #7
That looks like your new support node is unreachable. After the terraform fails, is the node up? Can you ssh into it?
Hey, thanks for taking the time to reply :) Some progress after creating a new base image... I now get:
I can ssh into the first support node using my private key, but I am unclear how your plan knows where the private key is? I also note
For testing, I am using the base vlan (192.168.1.x) and I can at least connect to the node.
Hey @mannp, my pleasure! Thanks for taking an interest! Let me see if I can clear up some of your questions:
The module loads your public ssh keys through the
VLANs are not something I have personally worked with extensively, but from what you described it should still work. As long as your node is routable, has the correct public key set in
This is not something that is part of the terraform code in this repo, but I do think it is important. It is also something I struggled a bit with myself. Building custom cloud-init VM templates is a little tricky sometimes. If I have time, I might write up a guide on setting one up in the documentation section. Until then, Google is your best friend here, I am afraid. Hope this helps, and let me know if you have any additional questions!
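As a rough sketch of that workflow (assuming the provisioners authenticate with whatever private key your local ssh setup provides; the key path and the use of ssh-agent here are assumptions, not taken from the module's code):

eval "$(ssh-agent -s)"      # start an ssh agent for this shell
ssh-add ~/.ssh/id_rsa       # load the private key that matches the public key you configured for the module
terraform apply             # provisioning can now ssh into the nodes with that key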
Hi @fvumbaca Oh dear, I was looking for;
It's obvious now, but might be worth stating so, for other noobs :) wasting a good few hours :-/ (my excuse is other similar things create the keys, or ask for the private key lol)

It's busy provisioning on 192.168.1.x, so I will try other subnets if this completes successfully.

So I am getting

So, I created a new template with Debian 11 instead of Ubuntu and things look much better! Currently applying with no errors. For simplicity, I used this for my template -> https://github.com/oytal/proxmox-ubuntu-cloudinit-bash and used a Debian 11 image.

Is it possible to specify / change the start id/number used for the VMs, so they are all kept together in the GUI?

So this error, which appears to be to do with where terraform expects to see the ssh binary, which is hardcoded? Not there in NixOS :)
I sshed into master-01 and got the kubeconfig, and it loads successfully in Lens 👍🏻

It seems 'support' is for MariaDB and an nginx load balancer? Not sure of the terminology used, and 'default' are the worker nodes? [Edit: yes, I changed the name to worker and recreated the cluster]

Will do some more investigation, thanks for your help, got there!

Finally, it doesn't work for vlans, I believe due to the tag not being set;
I have tried changing a git-pulled version of your repo, changing the tag to 10, but it doesn't work... I guess it's pulling your git repo every time. The only subnet that works for me is 192.168.1.x; the others fail.
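A hedged sketch of one way to test local edits like the vlan tag (assuming the root configuration references the module by a git source; the URL and paths below are placeholders):

git clone <module-repo-url> ./modules/k3s   # local copy you can edit (e.g. set the network tag)
# in your .tf file, point the module at the local copy:  source = "./modules/k3s"
terraform get -update                       # refresh the module cache so terraform stops using the cached git copy
terraform init
terraform plan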
No worries! Happens to me all the time, I'm glad you got it working!

I can see the issue you might be having with the

Nginx I think is the OK choice to hardcode, since it is only being used as a means to balance across master nodes for the API. Personally, once the cluster is up I use metallb for everything else. Because of the way the module sets up your API server, I would recommend you use the

I honestly did not put as much thought into MariaDB other than it being easy to back up, running externally from the cluster, and being (from my non-scientific knowledge) the most efficient supported DB for this size of cluster. I found it would also be difficult to support many different databases, especially for newer users.

As for your issue with changing the terraform module, first be sure to re-run

I think the improvements I can take away from this are:
Recently I have become really busy, so I am not sure when I will have the time to sit down and work on these things. If you're interested I would be more than happy to make the time to review any MRs! In any case I really appreciate you reaching out and using this little module of mine! Hope it helped you out some :) (I'm going to keep this issue open until I get around to making an issue for each improvement here so I don't forget about them)
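For context on the support-node setup described above, a rough sketch of how k3s servers typically point at an external MariaDB datastore and a shared API address (the addresses and credentials below are placeholders and are not taken from this module's code):

k3s server \
  --datastore-endpoint="mysql://k3s:change-me@tcp(192.168.1.10:3306)/kubernetes" \
  --tls-san 192.168.1.11      # address of the nginx load balancer that fronts the API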
I've tried to add support for control over VM ID numbers. I'd added VLANs but not submitted a PR in time :), as well as a couple of other updates I found useful.
@mannp thanks for #13! I have left a comment on it as I think I understand where you are going with it, but I don't think the interface being introduced will have the desired effect. After looking it over myself, I think controlling the VM IDs can get very tricky, especially when there are node pool rollovers to consider. In any case, we can continue the specific conversation for this control on that MR thread.
I had quite similar issues as the OP (same errors, etc.) and most of it boiled down to setting up cloud-init and the VM template correctly. Here's what worked for me (a modified version of this guide):

Don't try to do this from the installer ISO; you will almost certainly run into issues (for example, the ISO disables cloud-init networking configuration in the image). Use the cloud images specifically.

First, go to the console of one of your Proxmox nodes.

Get the tooling to install packages into the image:

apt install libguestfs-tools

Download the .img file (amd64 in my case, usually is) from https://cloud-images.ubuntu.com/ For example:

wget https://cloud-images.ubuntu.com/jammy/20220708/jammy-server-cloudimg-amd64.img

Create the bootable template using cloud-init:

export STORAGE_POOL="local-lvm"
export VM_ID="9000"
export VM_NAME="jammy-server-cloudimg-amd64.img"
virt-customize -a $VM_NAME --install qemu-guest-agent # installs qemu guest agent into the template
# virt-customize -a $VM_NAME --install nfs-common # optional - if you are going to use nfs on the host from within k3s
qm create $VM_ID --name ubuntu --memory 2048 --net0 virtio,bridge=vmbr0   # create an empty VM with a virtio NIC on vmbr0
qm importdisk $VM_ID $VM_NAME $STORAGE_POOL -format qcow2   # import the cloud image as a disk on the storage pool
qm set $VM_ID --scsihw virtio-scsi-pci --scsi0 $STORAGE_POOL:9000/vm-9000-disk-0.qcow2   # attach the imported disk (the volume name may differ; check the importdisk output)
qm set $VM_ID --ide2 $STORAGE_POOL:cloudinit   # add the cloud-init drive
qm set $VM_ID --boot c --bootdisk scsi0   # boot from the imported disk
qm set $VM_ID --serial0 socket --vga serial0   # serial console, which cloud images expect
Enable the guest agent in UI by going to VM -> Options -> Enable QEMU Guest Agent
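If you prefer the CLI, the same setting (and, assuming the module clones from a Proxmox template rather than a plain VM, the final conversion) can be done with:

qm set $VM_ID --agent enabled=1   # CLI equivalent of the UI checkbox
qm template $VM_ID                # convert the VM into a template for cloning (assumption about what the module expects)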
Ensure the private key that matches the public key you specified in the module configuration is available, then run terraform. I'm happy to bundle this up into a docs pull request if it makes sense.
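For completeness, the "then run terraform" step is just the usual workflow (nothing module-specific assumed here):

terraform init    # install the provider and fetch the module
terraform plan    # review the VMs and resources that will be created
terraform apply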
Not sure what I am doing wrong, but I have tried a number of reconfigures and always get back to this error.
Thanks for any pointers.