
Avahi dbus #4

Open
AlexZigante opened this issue May 13, 2021 · 5 comments

@AlexZigante

Hi,
I was just wondering if there's any way of enabling D-Bus internally, i.e. making it run on the container side and not on the host? I'm running Rancher and I can't seem to find a way to run Avahi with D-Bus.

@ghost

ghost commented Dec 31, 2021

Heh, no. There's a built-in DNS server that you get when you create a container on a bridge other than the default bridge, but it only listens inside the container on 127.0.0.11 and it doesn't respond authoritatively, so good luck setting up a forwarder between it and an interface shared by the host and the guest; you can't really, and it's ugly anyway. They refuse to expand on the idea that it might actually serve people's needs better to have it optionally listen for queries on the host side as well. This is really old news, and you can rest assured that they have no intention of improving it, so you may as well look into writing your own network driver to do the job:

https://github.com/clandestinenetworks/python-docker-openvswitch-plugin
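
For illustration, the embedded resolver is easy to see from inside a container on a user-defined bridge; roughly like this, where the network, image, and container names are just placeholders:

docker network create demo-net                         # any user-defined bridge
docker run --rm --network demo-net alpine cat /etc/resolv.conf
# prints "nameserver 127.0.0.11" -- the embedded DNS, reachable only inside the container
docker run --network demo-net --name web -d alpine sleep 300
docker run --rm --network demo-net alpine nslookup web 127.0.0.11   # resolves container names
# the same query from the host fails: nothing listens on 127.0.0.11 there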

Extensibility really is the one fine point of using Docker; the rest is pretty disappointing. "What do you want for free?" Honestly, there's a lot I wouldn't do to address the problem for free, but there's also a lot that I've already done to some extent. That driver already uses iproute2 extensively for the setup; the only part that is OVS is the bridge, so it's only like 5 lines of code. So then I guess the question is:

  • Would this solution speak to your or somebody else's interest? I don't find it particularly interesting, and it's a tired enough subject that it legit pisses me off and it's hard to have a serious discussion about it with anybody.
  • I could do this a couple of different ways. The first that comes to mind is DNS-SD; the other is to simply add a DNS server to the driver (https://gist.github.com/pklaus/b5a7876d4d2cf7271873), which is more or less what the current bridge's embedded DNS server is, except that I can implement it to listen on the host side as well, so you can forward requests to it from whatever else you have for DNS or just use it directly (e.g. via the search order in resolv.conf). A sketch of that forwarding idea follows below.
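
To give a concrete idea of the host-side forwarding, this is roughly what it could look like with dnsmasq as the host resolver; the .docker domain, the listener address, and the port are placeholders, not anything the driver actually does yet:

# /etc/dnsmasq.d/docker-forward.conf (hypothetical)
# forward queries for *.docker to a DNS listener on the bridge address
server=/docker/172.17.0.1#5353
# everything else goes to the normal upstream resolver
server=1.1.1.1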

I guess let me know. I have to do it for my own purposes sooner or later; whether it's sooner or later depends on who else is interested.

@AlexZigante

AlexZigante commented Dec 31, 2021

Sorry if I brought up a topic that pisses you off; I honestly didn't mean to. I was trying to get a few tools up and running. On Arch Linux I use Avahi with a janky old Python tool that announces CNAMEs over mDNS. I tried to replicate the same thing in two containers, but somehow it didn't work, and I came to the conclusion it must be D-Bus.

@ghost

ghost commented Dec 31, 2021

@Alexr00t nah, you're fine, it's not you at all. There are a ton of people who have this problem, and I can't imagine Docker's developers are strangers to it. Eventually I figured out on my own that the right way to fix the problem is to write a driver, but if you look at Stack Overflow there are years' worth of the same questions about Docker and DNS with the answer "you can't". Frankly, it has turned a lot of people off from Docker, and it has become a contentious topic like systemd, which ironically does everything I ever wanted it to do better than anything else, but people in my groups are hard-headed.

It seems very unlikely to me that trying to develop anything directly for Docker's default driver (improving what's already there for DNS) would be very productive, if it's even on the table. But it turns out that running a custom network driver is actually really easy if it works correctly (if it's broken you won't want to use it at all). When I wrote my OVS driver I couldn't find anything around that was also written in Python, so I had to figure out everything from the docs, but it really wasn't that much. I might take another stab at making something with good DNS support; the hard part is just making sure it's not buggy (for this, it's hardly useful if it is).

@AlexZigante

@clandestinenetworks thanks for your replies, all understood. I wish you a wonderful rest of the day, a good rest of 2021, and a happy 2022.

@bmartino1

bmartino1 commented Dec 16, 2024

I was able to get D-Bus running on my fork. The fork was changed to use Debian over Alpine to fix some things.

https://github.com/bmartino1/avahi

I originally meant this for Unraid, but I can see other projects using it as well.

D-Bus needs some work on Alpine, but for Debian:

# Ensure D-Bus is running
DBUS_PID_FILE="/run/dbus/pid"
if [ -f "${DBUS_PID_FILE}" ]; then
  DBUS_PID=$(cat "${DBUS_PID_FILE}")
  if [ -z "${DBUS_PID}" ] || ! kill -0 "${DBUS_PID}" 2>/dev/null; then
    echo "Cleaning up stale D-Bus PID file."
    rm -f "${DBUS_PID_FILE}"
  fi
fi

# Start D-Bus if not already running
if ! pgrep -x "dbus-daemon" >/dev/null; then
  echo "Starting D-Bus service..."
  mkdir -p /run/dbus && chmod 755 /run/dbus
  dbus-daemon --system --address=unix:path=/run/dbus/system_bus_socket &
  export DBUS_SYSTEM_BUS_ADDRESS=unix:path=/run/dbus/system_bus_socket
fi
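
For reference, a minimal Debian-based image that this snippet can run in looks roughly like the following; the package names are the stock Debian ones, and the entrypoint file name is just an assumption, not necessarily what the fork uses:

FROM debian:bookworm-slim
# dbus + avahi-daemon pull in everything the snippet above needs
RUN apt-get update && \
    apt-get install -y --no-install-recommends dbus avahi-daemon avahi-utils && \
    rm -rf /var/lib/apt/lists/*
# entrypoint containing the D-Bus checks and the avahi-daemon startup
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]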

example log:

No service file found in /etc/avahi/services.
Joining mDNS multicast group on interface veth.IPv6 with address ipv6:15d.
New relevant interface veth.IPv6 for mDNS.
Joining mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
New relevant interface docker0.IPv4 for mDNS.
Joining mDNS multicast group on interface vhost0.IPv6 with address ipv6:4672.
New relevant interface vhost0.IPv6 for mDNS.
Joining mDNS multicast group on interface vhost0.IPv4 with address 192.168.2.248.
New relevant interface vhost0.IPv4 for mDNS.
Joining mDNS multicast group on interface eth0.IPv6 with address ipv6:9573.
New relevant interface eth0.IPv6 for mDNS.
Joining mDNS multicast group on interface eth0.IPv4 with address 192.168.2.254.
New relevant interface eth0.IPv4 for mDNS.
Joining mDNS multicast group on interface lo.IPv6 with address ::1.
New relevant interface lo.IPv6 for mDNS.
Joining mDNS multicast group on interface lo.IPv4 with address 127.0.0.1.
New relevant interface lo.IPv4 for mDNS.
Network interface enumeration completed.
Registering new address record for ipv6:15d on veth3618c2f.*.
Registering new address record for 172.17.0.1 on docker0.IPv4.
Registering new address record for ipv6:4672 on vhost0.*.
Registering new address record for 192.168.2.248 on vhost0.IPv4.
Registering new address record for ipv6:9573 on eth0.*.
Registering new address record for 192.168.2.254 on eth0.IPv4.
Registering new address record for ::1 on lo.*.
Registering new address record for 127.0.0.1 on lo.IPv4.
sendmsg() to ff02::fb failed: Network is unreachable
sendmsg() to ff02::fb failed: Network is unreachable
sendmsg() to ff02::fb failed: Network is unreachable
Server startup complete. Host name is hostname.local. Local service cookie is ###.
sendmsg() to ff02::fb failed: Network is unreachable
...
sendmsg() to ff02::fb failed: Network is unreachable
Starting D-Bus service...
Generating Avahi configuration file from Docker environment variables...
Prepared wide-area/enable-wide-area=yes for wide-area section
Prepared server/use-ipv4=yes for server section
Prepared reflector/enable-reflector=yes for reflector section
Wrote section [wide-area] to config file.
Final Avahi Configuration:
# Auto-generated Avahi Configuration
[wide-area]

enable-wide-area=yes
Wrote section [server] to config file.
Final Avahi Configuration:
# Auto-generated Avahi Configuration
[wide-area]

enable-wide-area=yes
[server]

use-ipv4=yes
Wrote section [reflector] to config file.
Final Avahi Configuration:
# Auto-generated Avahi Configuration
[wide-area]

enable-wide-area=yes
[server]

use-ipv4=yes
[reflector]

enable-reflector=yes
Avahi Daemon started with Docker environment variables.
Avahi Daemon started successfully with PID 18.

example docker run:
docker run -d --name='Avahi' --net='host' -e TZ="America/Chicago" -e 'REFLECTOR_ENABLE_REFLECTOR'='yes' -e 'SERVER_USE_IPV4'='yes' -e 'WIDE_AREA_ENABLE_WIDE_AREA'='yes' 'avahi:latest'

https://hub.docker.com/r/bmmbmm01/avahi
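
For anyone wondering how the REFLECTOR_*/SERVER_*/WIDE_AREA_* variables turn into the [reflector]/[server]/[wide-area] sections shown in the log, the generation step amounts to something like this; a simplified sketch, not the exact script from the repository:

# Sketch: map SECTION_KEY=value environment variables to avahi-daemon.conf entries,
# e.g. REFLECTOR_ENABLE_REFLECTOR=yes -> [reflector] enable-reflector=yes
CONF=/etc/avahi/avahi-daemon.conf
echo "# Auto-generated Avahi Configuration" > "${CONF}"
for section in server wide-area publish reflector rlimits; do
  prefix=$(echo "${section}" | tr 'a-z-' 'A-Z_')          # wide-area -> WIDE_AREA
  vars=$(env | grep "^${prefix}_" || true)
  [ -z "${vars}" ] && continue
  echo "[${section}]" >> "${CONF}"
  echo "${vars}" | while IFS='=' read -r name value; do
    key=$(echo "${name#${prefix}_}" | tr 'A-Z_' 'a-z-')    # ENABLE_REFLECTOR -> enable-reflector
    echo "${key}=${value}" >> "${CONF}"
  done
done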
