Proxy Devices does not work with some OCI Images #1508

Open
neubi4 opened this issue Dec 13, 2024 · 0 comments

Labels: Bug (Confirmed to be a bug)
Milestone: incus-6.9

neubi4 commented Dec 13, 2024

Required information

  • Distribution: Debian
  • Distribution version: 12
  • The output of "incus info":
config:
  core.https_address: :8777
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- network_sriov
- console
- restrict_dev_incus
- migration_pre_copy
- infiniband
- dev_incus_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- dev_incus_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- backup_compression
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- snapshot_schedule_aliases
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
- container_syscall_intercept_bpf_devices
- network_type_ovn
- projects_networks
- projects_networks_restricted_uplinks
- custom_volume_backup
- backup_override_name
- storage_rsync_compression
- network_type_physical
- network_ovn_external_subnets
- network_ovn_nat
- network_ovn_external_routes_remove
- tpm_device_type
- storage_zfs_clone_copy_rebase
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_physical_ovn_ingress_mode
- network_ovn_dhcp
- network_physical_routes_anycast
- projects_limits_instances
- network_state_vlan
- instance_nic_bridged_port_isolation
- instance_bulk_state_change
- network_gvrp
- instance_pool_move
- gpu_sriov
- pci_device_type
- storage_volume_state
- network_acl
- migration_stateful
- disk_state_quota
- storage_ceph_features
- projects_compression
- projects_images_remote_cache_expiry
- certificate_project
- network_ovn_acl
- projects_images_auto_update
- projects_restricted_cluster_target
- images_default_architecture
- network_ovn_acl_defaults
- gpu_mig
- project_usage
- network_bridge_acl
- warnings
- projects_restricted_backups_and_snapshots
- clustering_join_token
- clustering_description
- server_trusted_proxy
- clustering_update_cert
- storage_api_project
- server_instance_driver_operational
- server_supported_storage_drivers
- event_lifecycle_requestor_address
- resources_gpu_usb
- clustering_evacuation
- network_ovn_nat_address
- network_bgp
- network_forward
- custom_volume_refresh
- network_counters_errors_dropped
- metrics
- image_source_project
- clustering_config
- network_peer
- linux_sysctl
- network_dns
- ovn_nic_acceleration
- certificate_self_renewal
- instance_project_move
- storage_volume_project_move
- cloud_init
- network_dns_nat
- database_leader
- instance_all_projects
- clustering_groups
- ceph_rbd_du
- instance_get_full
- qemu_metrics
- gpu_mig_uuid
- event_project
- clustering_evacuation_live
- instance_allow_inconsistent_copy
- network_state_ovn
- storage_volume_api_filtering
- image_restrictions
- storage_zfs_export
- network_dns_records
- storage_zfs_reserve_space
- network_acl_log
- storage_zfs_blocksize
- metrics_cpu_seconds
- instance_snapshot_never
- certificate_token
- instance_nic_routed_neighbor_probe
- event_hub
- agent_nic_config
- projects_restricted_intercept
- metrics_authentication
- images_target_project
- images_all_projects
- cluster_migration_inconsistent_copy
- cluster_ovn_chassis
- container_syscall_intercept_sched_setscheduler
- storage_lvm_thinpool_metadata_size
- storage_volume_state_total
- instance_file_head
- instances_nic_host_name
- image_copy_profile
- container_syscall_intercept_sysinfo
- clustering_evacuation_mode
- resources_pci_vpd
- qemu_raw_conf
- storage_cephfs_fscache
- network_load_balancer
- vsock_api
- instance_ready_state
- network_bgp_holdtime
- storage_volumes_all_projects
- metrics_memory_oom_total
- storage_buckets
- storage_buckets_create_credentials
- metrics_cpu_effective_total
- projects_networks_restricted_access
- storage_buckets_local
- loki
- acme
- internal_metrics
- cluster_join_token_expiry
- remote_token_expiry
- init_preseed
- storage_volumes_created_at
- cpu_hotplug
- projects_networks_zones
- network_txqueuelen
- cluster_member_state
- instances_placement_scriptlet
- storage_pool_source_wipe
- zfs_block_mode
- instance_generation_id
- disk_io_cache
- amd_sev
- storage_pool_loop_resize
- migration_vm_live
- ovn_nic_nesting
- oidc
- network_ovn_l3only
- ovn_nic_acceleration_vdpa
- cluster_healing
- instances_state_total
- auth_user
- security_csm
- instances_rebuild
- numa_cpu_placement
- custom_volume_iso
- network_allocations
- zfs_delegate
- storage_api_remote_volume_snapshot_copy
- operations_get_query_all_projects
- metadata_configuration
- syslog_socket
- event_lifecycle_name_and_project
- instances_nic_limits_priority
- disk_initial_volume_configuration
- operation_wait
- image_restriction_privileged
- cluster_internal_custom_volume_copy
- disk_io_bus
- storage_cephfs_create_missing
- instance_move_config
- ovn_ssl_config
- certificate_description
- disk_io_bus_virtio_blk
- loki_config_instance
- instance_create_start
- clustering_evacuation_stop_options
- boot_host_shutdown_action
- agent_config_drive
- network_state_ovn_lr
- image_template_permissions
- storage_bucket_backup
- storage_lvm_cluster
- shared_custom_block_volumes
- auth_tls_jwt
- oidc_claim
- device_usb_serial
- numa_cpu_balanced
- image_restriction_nesting
- network_integrations
- instance_memory_swap_bytes
- network_bridge_external_create
- network_zones_all_projects
- storage_zfs_vdev
- container_migration_stateful
- profiles_all_projects
- instances_scriptlet_get_instances
- instances_scriptlet_get_cluster_members
- instances_scriptlet_get_project
- network_acl_stateless
- instance_state_started_at
- networks_all_projects
- network_acls_all_projects
- storage_buckets_all_projects
- resources_load
- instance_access
- project_access
- projects_force_delete
- resources_cpu_flags
- disk_io_bus_cache_filesystem
- instance_oci
- clustering_groups_config
- instances_lxcfs_per_instance
- clustering_groups_vm_cpu_definition
- disk_volume_subpath
- projects_limits_disk_pool
- network_ovn_isolated
- qemu_raw_qmp
- network_load_balancer_health_check
- oidc_scopes
- network_integrations_peer_name
- qemu_scriptlet
- instance_auto_restart
- storage_lvm_metadatasize
- ovn_nic_promiscuous
- ovn_nic_ip_address_none
- instances_state_os_info
- network_load_balancer_state
- instance_nic_macvlan_mode
- storage_lvm_cluster_create
- network_ovn_external_interfaces
- instances_scriptlet_get_instances_count
- cluster_rebalance
- custom_volume_refresh_exclude_older_snapshots
- storage_initial_owner
- storage_live_migration
- instance_console_screenshot
- image_import_alias
- authorization_scriptlet
- console_force
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
auth_user_name: root
auth_user_method: unix
environment:
  addresses:
  - 192.168.1.10:8777
  - '[fde7:434c:ad1a:1:aaa1:59ff:fe67:a908]:8777'
  - '[2003:cc:4f17:d101:aaa1:59ff:fe67:a908]:8777'
  - 10.89.2.1:8777
  - 10.89.1.1:8777
  - 10.255.255.1:8777
  - 192.168.42.1:8777
  - 10.61.210.1:8777
  - 10.89.4.1:8777
  - 10.89.0.1:8777
  - 10.89.7.1:8777
  - 10.89.3.1:8777
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIIB/jCCAYOgAwIBAgIQOrT/3udhxkFdQ4JhpXcdkjAKBggqhkjOPQQDAzAxMRkw
    FwYDVQQKExBMaW51eCBDb250YWluZXJzMRQwEgYDVQQDDAtyb290QGx1bWlrYTAe
    Fw0yNDExMzAxMTQ1MThaFw0zNDExMjgxMTQ1MThaMDExGTAXBgNVBAoTEExpbnV4
    IENvbnRhaW5lcnMxFDASBgNVBAMMC3Jvb3RAbHVtaWthMHYwEAYHKoZIzj0CAQYF
    K4EEACIDYgAEWbeBqcH/+QocA2HC0JI/CDOwj1nPwVPPyyfl2NSNpll6465bijEy
    8um1n/CNB4wVI55uUXkZ2XJPf7rZ5FShcDS5O2skAReTgUIRtv19Q4DzpwB5e0rm
    AVC+L5eybDdXo2AwXjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYIKwYBBQUH
    AwEwDAYDVR0TAQH/BAIwADApBgNVHREEIjAgggZsdW1pa2GHBH8AAAGHEAAAAAAA
    AAAAAAAAAAAAAAEwCgYIKoZIzj0EAwMDaQAwZgIxAMg/gjKF9k5CBl9trR3H08YX
    zzviKuanfO4c/LcEfbtowcrB/cBW/PetSipbTc6RQwIxAJTPSWfyCbDSbLJG4sKM
    Ioi+ky7p5m8UF+MspuEBmW5eSKhMdmE+5hojPp0hKLe6Fg==
    -----END CERTIFICATE-----
  certificate_fingerprint: b9ea16b71782fff5eee1e987d086362ad7a956319789975abd3bb10357bb88a2
  driver: lxc | qemu
  driver_version: 6.0.2 | 9.0.4
  firewall: nftables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    idmapped_mounts: "true"
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    uevent_injection: "true"
    unpriv_binfmt: "true"
    unpriv_fscaps: "true"
  kernel_version: 6.11.5+bpo-amd64
  lxc_features:
    cgroup2: "true"
    core_scheduling: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Debian GNU/Linux
  os_version: "12"
  project: default
  server: incus
  server_clustered: false
  server_event_mode: full-mesh
  server_name: lumika
  server_pid: 3690897
  server_version: "6.8"
  storage: zfs
  storage_version: 2.2.6-1~bpo12+3
  storage_supported_drivers:
  - name: lvm
    version: 2.03.16(2) (2022-05-18) / 1.02.185 (2022-05-18) / 4.48.0
    remote: false
  - name: lvmcluster
    version: 2.03.16(2) (2022-05-18) / 1.02.185 (2022-05-18) / 4.48.0
    remote: true
  - name: zfs
    version: 2.2.6-1~bpo12+3
    remote: false
  - name: btrfs
    version: "6.2"
    remote: false
  - name: dir
    version: "1"
    remote: false

Issue description

With some OCI images I cannot use proxy devices. Trying to start a container after adding a proxy device results in this error message:
Error: Error occurred when starting proxy device: Error: Permission denied - Failed setns to connector network namespace

This happens at least with the official Grafana Docker image docker:grafana/grafana:latest and the Prometheus Docker image docker:prom/prometheus:latest.

Steps to reproduce

root@lumika:~# incus remote list
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
|      NAME       |                URL                 |   PROTOCOL    |  AUTH TYPE  | PUBLIC | STATIC | GLOBAL |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| docker          | https://docker.io                  | oci           | none        | YES    | NO     | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| ghcr            | https://ghcr.io                    | oci           | none        | YES    | NO     | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| images          | https://images.linuxcontainers.org | simplestreams | none        | YES    | NO     | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| local (current) | unix://                            | incus         | file access | NO     | YES    | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
root@lumika:~# incus init docker:grafana/grafana:latest grafana
Creating grafana
root@lumika:~# incus config device add grafana port3031 proxy listen=tcp:0.0.0.0:3031 connect=tcp:127.0.0.1:3000
Device port3031 added to grafana
root@lumika:~# incus start grafana
Error: Error occurred when starting proxy device: Error: Permission denied - Failed setns to connector network namespace
Try `incus info --show-log grafana` for more info
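
A possible test to sidestep forkproxy entirely is the proxy device's NAT mode (the proxy_nat API extension is listed above): in NAT mode the forward is implemented with firewall rules rather than a forkproxy process, so the failing setns path should never be taken. A sketch, assuming a static NIC address is acceptable, since NAT mode requires one (the 10.61.210.20 address is hypothetical and must come from incusbr0's subnet):

# Give the NIC a static address, which NAT mode requires
# (use "device set" instead of "override" if eth0 is defined on the
# instance itself rather than inherited from the default profile)
incus config device override grafana eth0 ipv4.address=10.61.210.20
# Point the proxy at the container's static IP and switch to NAT mode
incus config device set grafana port3031 connect=tcp:10.61.210.20:3000 nat=true
incus start grafana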

Other images, for example the official nginx image, work without any problem:

root@lumika:~# incus init docker:nginx nginx
Creating nginx
root@lumika:~# incus config device add nginx port3031 proxy listen=tcp:0.0.0.0:3031 connect=tcp:127.0.0.1:80
Device port3031 added to nginx
root@lumika:~# incus start nginx
root@lumika:~#
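
One way to narrow down what the affected images do differently is to diff the expanded configuration of a failing and a working container (a diagnostic sketch, not part of the original report; needs bash for process substitution):

# Show which OCI-derived settings differ between the two containers
diff <(incus config show grafana --expanded) <(incus config show nginx --expanded)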

Information to attach

  • Any relevant kernel output (dmesg)
[Fri Dec 13 16:52:27 2024] audit: type=1400 audit(1734105152.923:367): apparmor="STATUS" operation="profile_load" profile="unconfined" name="incus_forkproxy-port3031_grafana_</var/lib/incus>" pid=3703708 comm="apparmor_parser"
[Fri Dec 13 16:52:27 2024] audit: type=1400 audit(1734105152.939:368): apparmor="DENIED" operation="open" class="file" profile="incus_forkproxy-port3031_grafana_</var/lib/incus>" name="/etc/resolv.conf" pid=3703723 comm="incusd" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
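
The DENIED line suggests the AppArmor profile generated for the forkproxy process is blocking a file open, which may be related to the setns failure. To capture any further denials live while reproducing (a sketch using standard dmesg/grep options):

# In one terminal, follow kernel audit output for AppArmor messages
dmesg --follow | grep --line-buffered apparmor
# In another terminal, trigger the failure
incus start grafana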
  • Container log (incus info NAME --show-log)
root@lumika:~# incus info --show-log grafana
Name: grafana
Status: STOPPED
Type: container (application)
Architecture: x86_64
Created: 2024/12/13 16:56 CET
Last Used: 2024/12/13 16:56 CET

Log:

lxc grafana 20241213155658.626 WARN     cgfsng - ../src/lxc/cgroups/cgfsng.c:__cgroup_tree_create:747 - File exists - Creating the final cgroup 10(lxc.monitor.grafana) failed
lxc grafana 20241213155658.626 WARN     cgfsng - ../src/lxc/cgroups/cgfsng.c:cgroup_tree_create:807 - File exists - Failed to create monitor cgroup 10(lxc.monitor.grafana)
lxc grafana 20241213155658.626 WARN     cgfsng - ../src/lxc/cgroups/cgfsng.c:__cgroup_tree_create:747 - File exists - Creating the final cgroup 10(lxc.monitor.grafana-1) failed
lxc grafana 20241213155658.626 WARN     cgfsng - ../src/lxc/cgroups/cgfsng.c:cgroup_tree_create:807 - File exists - Failed to create monitor cgroup 10(lxc.monitor.grafana-1)
lxc grafana 20241213155658.627 WARN     cgfsng - ../src/lxc/cgroups/cgfsng.c:__cgroup_tree_create:747 - File exists - Creating the final cgroup 10(lxc.monitor.grafana-2) failed
lxc grafana 20241213155658.627 WARN     cgfsng - ../src/lxc/cgroups/cgfsng.c:cgroup_tree_create:807 - File exists - Failed to create monitor cgroup 10(lxc.monitor.grafana-2)
lxc grafana 20241213155659.434 WARN     cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_monitor_destroy:925 - Device or resource busy - Failed to destroy 10(lxc.monitor.grafana-3)
  • Container configuration (incus config show NAME --expanded)
root@lumika:~# incus config show grafana --expanded
architecture: x86_64
config:
  environment.GF_PATHS_CONFIG: /etc/grafana/grafana.ini
  environment.GF_PATHS_DATA: /var/lib/grafana
  environment.GF_PATHS_HOME: /usr/share/grafana
  environment.GF_PATHS_LOGS: /var/log/grafana
  environment.GF_PATHS_PLUGINS: /var/lib/grafana/plugins
  environment.GF_PATHS_PROVISIONING: /etc/grafana/provisioning
  environment.HOME: /home/grafana
  environment.PATH: /usr/share/grafana/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  environment.TERM: xterm
  image.architecture: x86_64
  image.description: docker.io/grafana/grafana (OCI)
  image.id: grafana/grafana:latest
  image.type: oci
  volatile.base_image: d8ea37798ccc41061a62ab080f2676dda6bf7815558499f901bdb0f533a456fb
  volatile.cloud-init.instance-id: 65257e84-38a7-4b68-b6eb-3858fd8189e3
  volatile.container.oci: "true"
  volatile.eth0.hwaddr: 00:16:3e:54:76:43
  volatile.eth0.name: eth0
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1541793,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1541793,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1541793,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1541793,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: STOPPED
  volatile.last_state.ready: "false"
  volatile.uuid: a5c73b68-af4c-47a3-9747-b4634117f1f4
  volatile.uuid.generation: a5c73b68-af4c-47a3-9747-b4634117f1f4
devices:
  eth0:
    network: incusbr0
    type: nic
  port3031:
    connect: tcp:127.0.0.1:3000
    listen: tcp:0.0.0.0:3031
    type: proxy
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
  • Main daemon log (at /var/log/incus/incusd.log)
time="2024-12-13T16:27:20+01:00" level=warning msg="IPv4 bridge netfilter not enabled. Instances using the bridge will not be able to connect to the forward listen IPs" driver=bridge err="br_netfilter kernel module not loaded" network=incusbr0 project=default
time="2024-12-13T16:29:20+01:00" level=error msg="Error getting disk usage" err="Failed to run: zfs get -H -p -o value used ssd/incus/containers/monitoring_graf: exit status 1 (cannot open 'ssd/incus/containers/monitoring_graf': dataset does not exist)" instance=graf instanceType=container project=monitoring
time="2024-12-13T16:38:43+01:00" level=warning msg="Failed getting exec control websocket reader, killing command" PID=0 err="websocket: close 1005 (no status)" instance=hassio interactive=true project=default
  • Output of the client with --debug
  • Output of the daemon with --debug (alternatively output of incus monitor --pretty while reproducing the issue)
    incus_monitor.txt
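
For reference, the attached monitor output can be captured with a sequence along these lines (a sketch; the exact invocation used to produce incus_monitor.txt is not shown in the report):

# Start capturing daemon events in the background
incus monitor --pretty > incus_monitor.txt 2>&1 &
# Reproduce the failure, then stop the capture
incus start grafana
kill %1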
@stgraber stgraber added the Bug Confirmed to be a bug label Dec 13, 2024
@stgraber stgraber self-assigned this Dec 13, 2024
@stgraber stgraber added this to the incus-6.9 milestone Dec 13, 2024