Phorus sound bar detected but doesn't play audio #53
Comments
Hi, 3 things:
This does not stop the music coming from the PC speakers, so I use pavucontrol again to route the audio output to a device that is not connected (line out in my case) to silence the speakers. If audio is being captured you can see this in swyh-rs ("The wave_reader is now receiving samples") and in the RMS monitor.
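For reference, a minimal sketch (assuming the cpal crate, which swyh-rs is built on) that lists the capture devices visible to a program; on PulseAudio the "Monitor of ..." sources are the ones carrying the audio that is being played back:

```rust
use cpal::traits::{DeviceTrait, HostTrait};

fn main() {
    let host = cpal::default_host();
    // Enumerate capture devices; the PulseAudio "monitor" sources are what
    // a system-audio streamer like swyh-rs actually reads from.
    for device in host.input_devices().expect("cannot enumerate input devices") {
        let name = device.name().unwrap_or_else(|_| "<unknown>".to_string());
        if name.to_lowercase().contains("monitor") {
            println!("monitor source: {name}");
        } else {
            println!("input device:   {name}");
        }
    }
}
```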
After a look at the Phorus website and some googling I found that naked PCM streams are not supported, so only the WAV option could possibly work with swyh-rs; this is also true for Sonos speakers (https://phorus.com/blog/high-resolution-audio). That is possibly the reason why the Phorus accepts all the dlna/avtransport commands without error but in the end decides that it can't handle the PCM stream, so it never sends a streaming request. Could you give it a try with autoresume/autoreconnect/24 bit disabled and the WAV header and RMS monitor enabled?
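To make the WAV option concrete: a WAV stream is just the raw PCM samples preceded by a small RIFF header announcing the format. A minimal sketch of such a header (only an illustration of the idea, not swyh-rs's actual code; for a live stream the size fields are commonly set to 0xFFFFFFFF because the total length is unknown):

```rust
/// Build a minimal 44-byte WAV (RIFF) header for a PCM stream.
fn wav_header(sample_rate: u32, bits_per_sample: u16, channels: u16) -> Vec<u8> {
    let byte_rate = sample_rate * u32::from(channels) * u32::from(bits_per_sample) / 8;
    let block_align = channels * bits_per_sample / 8;
    let mut h = Vec::with_capacity(44);
    h.extend_from_slice(b"RIFF");
    h.extend_from_slice(&u32::MAX.to_le_bytes()); // RIFF chunk size: unknown for a live stream
    h.extend_from_slice(b"WAVE");
    h.extend_from_slice(b"fmt ");
    h.extend_from_slice(&16u32.to_le_bytes());    // fmt chunk size
    h.extend_from_slice(&1u16.to_le_bytes());     // audio format 1 = PCM
    h.extend_from_slice(&channels.to_le_bytes());
    h.extend_from_slice(&sample_rate.to_le_bytes());
    h.extend_from_slice(&byte_rate.to_le_bytes());
    h.extend_from_slice(&block_align.to_le_bytes());
    h.extend_from_slice(&bits_per_sample.to_le_bytes());
    h.extend_from_slice(b"data");
    h.extend_from_slice(&u32::MAX.to_le_bytes()); // data chunk size: unknown for a live stream
    h
}

fn main() {
    let header = wav_header(44_100, 16, 2);
    assert_eq!(header.len(), 44);
    // The header is sent once, followed by the raw PCM samples.
}
```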
Could you try the latest 1.3.23 release? It fixes a problem with the AV transport DIDL-Lite protocolinfo.
Hi! Thanks for the tips. I will look at it this Friday :).
Okay, I tried with the settings you suggested. I think I configured it right and I do see "the wave_reader is now receiving samples" and the green bars move. The pavucontrol monitor for the stereo is also moving. Unfortunately there still isn't sound coming from my sound bar. I tried turning the volume up a lot to be sure it wasn't due to a silly thing like that. Thanks for your help. Are there more things we could try or things I could do to debug?
The sound bar does not send a streaming request so streaming never starts. I suppose that there is something in the AV SetTransportUri metadata that it does not understand or like; this should be visible in the log (an HTTP 500 response instead of a 200 response on receiving the SetTransportUri command). Unfortunately I have no idea what it is that it does not like...
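For context, a rough sketch of what such a SetAVTransportURI call looks like on the wire and how acceptance (200) versus refusal (500) shows up. It assumes the ureq HTTP crate; the URLs are placeholders and the metadata is left empty here, whereas the real call carries an XML-escaped DIDL-Lite document whose `<res>` element has a protocolInfo attribute (something like `http-get:*:audio/wav:*` for the WAV option):

```rust
fn main() {
    // Placeholders: the renderer's AVTransport control URL and the URL of the
    // stream served by the sender's built-in HTTP server.
    let control_url = "http://192.168.1.50:49152/AVTransport/control";
    let stream_url = "http://192.168.1.10:5901/stream/swyh.wav";
    let body = format!(
        r#"<?xml version="1.0" encoding="utf-8"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:SetAVTransportURI xmlns:u="urn:schemas-upnp-org:service:AVTransport:1">
      <InstanceID>0</InstanceID>
      <CurrentURI>{stream_url}</CurrentURI>
      <CurrentURIMetaData></CurrentURIMetaData>
    </u:SetAVTransportURI>
  </s:Body>
</s:Envelope>"#
    );
    let result = ureq::post(control_url)
        .set("Content-Type", "text/xml; charset=\"utf-8\"")
        .set(
            "SOAPAction",
            "\"urn:schemas-upnp-org:service:AVTransport:1#SetAVTransportURI\"",
        )
        .send_string(&body);
    match result {
        // A 2xx answer means the renderer accepted the URI and metadata.
        Ok(resp) => println!("accepted: HTTP {}", resp.status()),
        // ureq reports 4xx/5xx as Error::Status; a 500 here is the rejection
        // described above.
        Err(ureq::Error::Status(code, _resp)) => println!("refused: HTTP {code}"),
        Err(e) => println!("transport error: {e}"),
    }
}
```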
Hmm. I looked through the logs and didn't see a 500 response. What's interesting is that when I was first using Spotify itself to stream to it and then tried swyh-rs, the sound bar stopped the Spotify stream. So apparently it does realize a new stream is coming in, or something. Since I have a few software packages that do manage to connect and stream (Spotify itself and the earlier mentioned mkchromecast), I could analyze and compare the network traffic of these and see if I can find where the difference is. Do you have some tips on how to do this? How did you debug connecting to your own DLNA device? I realize you're doing this in your free time, so again, thanks for helping and providing this for free. EDIT: Adding the log for completeness' sake.
You're right, the device accepts everything with a 200 OK response, yet it does not initiate the streaming when it acknowledges the Play command. I use Wireshark to debug dlna sessions when I have a problem. I have used BubbleUPnP and Foobar2000 in the past to record a successful dlna streaming session, so that I could compare with what I was sending. But the sniffer traces are a lot of data to wade through... If you could attach a trace of a working dlna AV-transport streaming session here there's a possibility that it gives me a clue why my code doesn't work. So far it has never been necessary to write device-dependent AV transport or Openhome transport code; devices either work with what is generally considered "standard" commands or don't (refusing commands with a 500 internal server error).
I fiddled around a bit and made Wireshark recordings (pcapng) for both Spotify and swyh-rs. Is this what you meant?
Not much of a standard then, haha. I hope Phorus doesn't stray from it (much).
Unfortunately Spotify does not use dlna. I think BubbleUPnP server is your best bet. Or any other dlna controller...
Okay, attempt 2! It contains 2 files. One is the logs of swyh-rs directly to the sound bar, the other is swyh-rs redirecting to BubbleUPnP and then to the sound bar. This last setup works, confirming that the capturing indeed works correctly.
Thanks! I'll let you know if I find anything useful.
In the swyh-rs log I see that the Phorus actually starts streaming:
After receiving the first 8 KB block of WAV data from swyh-rs it resets the connection so streaming is immediately aborted:
This is normal behaviour except that it really should be a HEAD request instead of a GET (it is used to check what is coming). Immediately after that it does the "real" GET request:
and then streaming starts for real, just like it should; the only problem is that you get no sound... The soundbar uses GStreamer internally, so it is possible that it misunderstands the WAV format. Could you check again without the WAV option (and perhaps without the chunked option, although I don't think this is relevant as libcurl is used for HTTP)? It also looks like Bubble is not using DLNA but the Chromecast protocol to send sound to the Phorus, so no luck there...
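The probe-then-stream pattern described above boils down to something like the following on the server side (only a sketch with std::net, not the actual swyh-rs server; port and content type are placeholders): the first request only needs the headers to learn what is coming, the second request is answered with the WAV header followed by the PCM data.

```rust
use std::io::{BufRead, BufReader, Write};
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("0.0.0.0:5901")?; // placeholder port
    for incoming in listener.incoming() {
        let mut stream = incoming?;
        let mut reader = BufReader::new(stream.try_clone()?);
        let mut request_line = String::new();
        reader.read_line(&mut request_line)?;
        // A well-behaved renderer probes with HEAD; the Phorus probes with a GET
        // that it resets after the first block, then issues the "real" GET.
        let is_head = request_line.starts_with("HEAD ");

        // Both requests get the same headers announcing the stream format.
        let headers = "HTTP/1.1 200 OK\r\n\
                       Content-Type: audio/wav\r\n\
                       Connection: close\r\n\r\n";
        stream.write_all(headers.as_bytes())?;

        if !is_head {
            // On the streaming GET the WAV header and PCM samples would follow;
            // zero bytes stand in for the audio data in this sketch.
            stream.write_all(&[0u8; 8192])?;
        }
    }
    Ok(())
}
```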
I tried again without WAV and it made no difference :\ So strange. I hadn't fully realized that Chromecast and DLNA are different things. So far the Chromecast outputs have worked. Maybe that's relevant somehow. Does this sound bar maybe not correctly implement/follow/support DLNA? Weird.
There have been dlna problem reports in the past: philippe44/AirConnect#178. I'm afraid that I don't know how to make this work, sorry... Thanks for your efforts trying to help me!
In the past (before I wrote swyh-rs) I have used pulseaudio-dlna to stream Linux system audio to Chromecast and dlna renderers (see the old pulseaudio-dlna article for Ubuntu). You can find a recent and maintained fork of the original pulseaudio-dlna.
Ah thanks! I hadn't found the fork yet. I had found another repo / copr which was outdated. I'm going to try it! In general, though, the issue I keep running into is that there is a delay/latency of several seconds (3 to 8) when streaming from my computer to the sound bar. Do you think this is inherent and unsolvable? I have the feeling that transmitting a WAV over a wireless network should be doable in (near) real time, so if I were to video call with someone the sound would correspond to their lip movements. I was hoping swyh-rs, being a Rust program, would do that. What is your expectation / experience?
A certain delay is unavoidable: the OS sound system does a certain amount of buffering for capturing audio, swyh-rs uses the same buffer size as the OS sound system, then you have the HTTP transport that also uses buffering, and finally the rendering device too has to use a certain amount of buffering to prevent drops due to network delays. The delay added by swyh-rs itself is minimal, typically around 8 KB (44.1 kHz 16 bit stereo PCM audio is 176 KB per second).
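Those numbers can be checked with a quick calculation (a sketch, assuming the 8 KB block size mentioned above): 44.1 kHz × 2 bytes × 2 channels gives the data rate, and one 8 KB buffer then accounts for roughly 46 ms, so a multi-second delay has to come from the other buffers in the chain.

```rust
fn main() {
    let sample_rate = 44_100u32;  // samples per second
    let bytes_per_sample = 2u32;  // 16 bit
    let channels = 2u32;          // stereo
    let bytes_per_second = sample_rate * bytes_per_sample * channels; // 176_400

    let buffer_bytes = 8 * 1024u32; // one ~8 KB block
    let buffer_delay_ms = f64::from(buffer_bytes) / f64::from(bytes_per_second) * 1000.0;

    println!("data rate: {bytes_per_second} bytes/s (~176 KB/s)");
    println!("one 8 KB buffer adds about {buffer_delay_ms:.0} ms of delay");
}
```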
Would you be prepared to test the latest 1.4.1 release with the new FLAC support option? |
Sure! What part do I need to test?
Select FLAC format in the new FMT dropdown and try to stream to the Phorus. And let me know if it works or not, of course :)
Sure thing. I didn't get around to it last weekend, but I'll try to get to it somewhere in the coming week.
Okay, finally found the time. Unfortunately it did not work :\ I did see the soundbar respond, however: it says "Play-Fi" or shows the volume level, but as before no sound came out. I tested with 1.4.3, FLAC and WAV.
Thanks for testing! I'm sorry that it doesn't work. But I notice that you enabled "autoreconnect", and it looks like the "Play" command is being sent before the built-in HTTP server is ready. Could you give it one more try with "autoreconnect" disabled? |
I tried, but it makes no difference unfortunately. Strangely enough today I don't even see the bar respond like it did yesterday. Any clues? |
All the upnp commands/responses look OK but the soundbar does not send a streaming request when asked to play. No clues unfortunately. |
Hi! This looks like a great utility allowing me to wirelessly use my soundbar as my laptop's sound output. Very cool.
Unfortunately it doesn't seem to work. Sound still keeps coming from my laptop's speakers instead of the soundbar. I have compiled from source and am running on Fedora 35. I see a message in the logs saying "Sorry, you don't have permissions to raise priority....". Any idea what could be going on?
I'm assuming it is not due to incompatibility, because a similar project did work, albeit with quite some latency. I have also disabled SELinux. The soundbar ("Woonkamer") is correctly detected.
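As an aside, the "don't have permissions to raise priority" message most likely just means that an unprivileged process asked the kernel for a higher scheduling priority and was refused; capture keeps working regardless. A hypothetical illustration of that refusal (using the libc crate; this is not what swyh-rs actually does internally):

```rust
fn main() {
    // Asking for a higher (negative) nice value is only allowed for privileged
    // processes; an ordinary user gets EPERM and the program simply carries on.
    unsafe {
        *libc::__errno_location() = 0; // clear errno, since -1 is also a valid nice value
        let new_nice = libc::nice(-10);
        if new_nice == -1 && *libc::__errno_location() == libc::EPERM {
            println!("not allowed to raise priority (EPERM), continuing normally");
        } else {
            println!("now running at nice value {new_nice}");
        }
    }
}
```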
Debug logs: