Real-time transcription #4
I'd also like to see an SFSpeechRecognizer-like API, so it could serve as a drop-in replacement for SFSpeechRecognizer.
+1 for this feature
Yes, this would be a great feature.
Right now, I'm setting a timer to start and stop the transcription every 2 seconds. However, it's not very accurate: if a word is cut off, Whisper tries to improvise, and the text often contains hallucinations.
How would Whisper officially support real time? The cut-off issue is the same for the official library, correct? @fakerybakery
The whisper.cpp repo has examples of how to implement real-time transcription.
I think the whisper.cpp library stores some of the previous recording history and uses that to fix the cut-off issue, but I'm not sure.
Thanks for pointing that out. Just curious, why is it in Obj-C but not in a Swift version?
I don't know why, but if someone could port the example to Swift, I would really appreciate that (I'm really bad at Obj-C). |
Yep, I believe it does too – see this line (and line 245) |
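If it helps, the carry-over idea being described can be sketched in Swift. This is a minimal illustration of the technique, not code from whisper.cpp or whisper.objc; the type and parameter names (`SlidingAudioWindow`, `keepSeconds`) are made up for the example:

```swift
import Foundation

/// Sliding-window accumulator mimicking whisper.cpp's "keep" trick:
/// the last `keepSeconds` of the previous window are prepended to the
/// next one, so a word cut at a chunk boundary is still heard whole
/// in the following transcription pass.
struct SlidingAudioWindow {
    let sampleRate: Int
    let keepSeconds: Double
    private var carried: [Float] = []

    mutating func nextWindow(newSamples: [Float]) -> [Float] {
        let window = carried + newSamples
        let keepCount = Int(Double(sampleRate) * keepSeconds)
        carried = Array(window.suffix(keepCount))
        return window
    }
}

// Usage sketch: feed ~2 s chunks of 16 kHz mono samples; each window
// passed to the transcriber overlaps the previous one by keepSeconds.
// var win = SlidingAudioWindow(sampleRate: 16_000, keepSeconds: 0.2)
// let frames = win.nextWindow(newSamples: chunk)
```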
I don't have a great understanding, but to me it looks like whisper.objc is storing the contents of a buffer when it fills up, then calling its transcribe function against what it just stored, while clearing the buffer and re-enqueuing it. I don't know a ton about AVFAudio, but does anyone know if you could use AVAudioEngine and AVAudioPCMBuffer to create similar functionality? I'm thinking you could call Whisper.transcribe here with the buffer data, if you can get that buffer data back from AVAudioEngine. Does anyone know if that would work?
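A rough, untested sketch of that AVAudioEngine idea follows. It assumes a transcriber that accepts 16 kHz mono Float32 frames (as whisper.cpp expects); `MicStreamer` and `onChunk` are invented names, and the 2-second chunk size is arbitrary:

```swift
import AVFAudio

/// Sketch: tap the mic via AVAudioEngine, convert to 16 kHz mono
/// Float32, accumulate ~2 s of samples, then hand the chunk to a
/// transcription callback (where you'd call Whisper.transcribe).
final class MicStreamer {
    private let engine = AVAudioEngine()
    private var pending: [Float] = []
    var onChunk: (([Float]) -> Void)?

    func start() throws {
        let input = engine.inputNode
        let inFormat = input.outputFormat(forBus: 0)
        let outFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                      sampleRate: 16_000, channels: 1,
                                      interleaved: false)!
        let converter = AVAudioConverter(from: inFormat, to: outFormat)!

        input.installTap(onBus: 0, bufferSize: 4096, format: inFormat) { [weak self] buffer, _ in
            guard let self else { return }
            let ratio = outFormat.sampleRate / inFormat.sampleRate
            let capacity = AVAudioFrameCount(Double(buffer.frameLength) * ratio)
            guard let outBuffer = AVAudioPCMBuffer(pcmFormat: outFormat,
                                                   frameCapacity: capacity) else { return }
            // Supply the tapped buffer exactly once per convert call.
            var supplied = false
            var error: NSError?
            converter.convert(to: outBuffer, error: &error) { _, status in
                if supplied { status.pointee = .noDataNow; return nil }
                supplied = true
                status.pointee = .haveData
                return buffer
            }
            guard error == nil, let data = outBuffer.floatChannelData else { return }
            self.pending.append(contentsOf:
                UnsafeBufferPointer(start: data[0], count: Int(outBuffer.frameLength)))
            if self.pending.count >= 32_000 {          // ~2 s at 16 kHz
                self.onChunk?(self.pending)
                self.pending.removeAll(keepingCapacity: true)
            }
        }
        try engine.start()
    }
}
```

Note the tap callback runs on an audio thread, so a real implementation would hop to another queue before transcribing rather than blocking here.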
@barkb have you ever found a solution to this real-time idea? |
+1 |
I found this Swift implementation of streaming: https://github.com/leetcode-mafia/cheetah/blob/b7e301c0ae16df5c597b564b2126e10e532871b2/LibWhisper/stream.cpp with a Swift file inside a Swift project. It's CC0 licensed. I couldn't tell if it uses the right config to benefit from the latest Metal/CoreML performance-oriented setup, and it uses some tool that requires a brew install, so I don't know how sandbox-friendly it is.
The linked app is an AI interview ... er, assistant? It listens to your audio and tries to respond with GPT-4 (it doesn't use SwiftWhisper). It uses the
I haven't extensively researched this subject, but my interpretation is that this allows the app to listen to your system audio and transcribe it, so you don't have to install external software such as BlackHole. This leads me to believe that the library may not be necessary if the objective is to listen from the microphone, which may mean that it can be run on other devices, such as iOS.
@fakerybakery it looks to me like https://github.com/leetcode-mafia/cheetah/blob/b7e301c0ae16df5c597b564b2126e10e532871b2/LibWhisper/WhisperStream.swift has similarities to https://github.com/exPHAT/SwiftWhisper/blob/master/Sources/SwiftWhisper/Whisper.swift and the latter could be extended with that logic with some effort |
I've ported it into SwiftWhisper here: dougzilla32/SwiftWhisper@master...lake-of-fire:SwiftWhisper:master#diff-bc90b919aba349b74638614ff99f2c0581ae2bcd8b4c2c816a9c9d93969853d0 (still untested, though). Looks like SDL can run on iOS.
Wow, thank you so much! Might it be possible to update the README to add documentation? |
No plans, but I'll update here if I test it and it works |
Hi @aehlke, were you able to get it to work? |
Haven't tried yet. I will within a week or two probably |
I have created a very poor man's version of the streaming here. It works, but the reading from the buffer queue needs quite a bit of improvement.
What's the downside to your queue implementation? Like, what's the cost or risk of the technical debt as you implemented it? Thanks.
@aehlke Lost fidelity, if you test it against ggerganov's implementation. I haven't had time to invest in making it a truer buffer that puts dropped audio back into the array; this is more of a FILO queue.
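For anyone following the FILO-queue point: the fidelity loss comes from samples being dropped or consumed out of recording order when the consumer falls behind, whereas a ring buffer overwrites oldest-first and keeps reads chronological. A purely illustrative Swift ring buffer (not code from either fork):

```swift
/// Illustrative fixed-capacity ring buffer for audio samples. When full,
/// the oldest samples are overwritten first, so drained audio always
/// comes out in recording order with at most the oldest part missing.
struct AudioRingBuffer {
    private var storage: [Float]
    private var head = 0      // next write index
    private var count = 0
    init(capacity: Int) { storage = Array(repeating: 0, count: capacity) }

    mutating func write(_ samples: [Float]) {
        for s in samples {
            storage[head] = s
            head = (head + 1) % storage.count
            count = min(count + 1, storage.count)
        }
    }

    /// Returns buffered samples in chronological order and empties the buffer.
    mutating func drain() -> [Float] {
        let start = (head - count + storage.count) % storage.count
        var out: [Float] = []
        out.reserveCapacity(count)
        for i in 0..<count { out.append(storage[(start + i) % storage.count]) }
        count = 0
        return out
    }
}

// Usage: overflow discards only the oldest sample, never reorders.
// var rb = AudioRingBuffer(capacity: 4)
// rb.write([1, 2, 3, 4, 5])
// rb.drain()   // [2, 3, 4, 5] — the oldest sample (1) was overwritten
```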
I tested and fixed the one I linked above. I don't have a test implementation to share but it works. |
@aehlke mind sharing a code example? |
cheetah-main-2.zip |
@aehlke this is pretty amazing. Are you using a more recent version of the code? When I try to add SwiftWhisper as a dependency from github.com/lake-of-fire/SwiftWhisper.git, I get an error that SwiftWhisperStream cannot be found.
https://github.com/lake-of-fire/SwiftWhisper/blob/master/Package.swift#L20 it's here... By the way, this appears to work on both iOS and macOS, though I only really tested macOS. Licensing of the dependencies involved is all properly open, e.g. MIT, no GPL. My SwiftWhisper fork is messy and could be simplified for sure, either merged into SwiftWhisper or split out as a separate thing.
@aehlke My mistake, looks like I didn't actually add it to the target. Amazing work.
Personally, I think that Metal > CoreML so far, but I mostly dissected your project and haven't pulled it into mine yet. CoreML seems to spike the CPU, but maybe I'm doing something wrong. I also completely gutted the Cheetah project and created a simpler example here for anyone else following along. One thing to note: at least on my end, the downloader isn't updating the state correctly when it finishes, but it could just be a me thing. You have to restart, and then it recognizes the file.
@cgfarmer4 when I run your |
This sounds like a critical feature. I'll also look into it...
Looks like the Swift SDL calls needed are:
|
Hi @aehlke @cgfarmer4, might it be possible to run this on iOS? |
I think it works on iOS. Haven't fully tested yet.
Yeah, I can't figure it out @aehlke. Don't have enough From
|
Should be able to call them from Swift via the SDL module.
I've cleaned up my fork here https://github.com/lake-of-fire/SwiftWhisperStream |
Updates:
|
@aehlke nice job! Do you have an example project set up anywhere? Looking to stream, but not from the device microphone (from live video).
You can select/specify a device via CaptureDevice. I don't have any open source demo currently, sorry; it's going into my iOS/macOS app ChatOnMac.com, which isn't fully open source currently. I'm almost done with my fork and will update here once it's working.
Got it. It looks like your fork only supports input from a device. Is there a way to support input from raw buffers (e.g. streaming audio, audio from video) that doesn't come from a device microphone?
It's possible, but I haven't implemented that yet. I'd like to eventually. Another option is to create a virtual device. Edit: one more discovery: ensure Xcode's Thread Sanitizer is OFF in debug builds, otherwise accuracy plummets and CPU usage shoots up.
@cerupcat if you're using this for macOS, you can use a loopback device: https://github.com/ExistentialAudio/BlackHole
Thanks @cgfarmer4. Looking to use this for iOS though so need a way to pass in an audio buffer. |
My fork isn't working on iOS 😞 Trying to understand why - everything appears to work except that there's no actual incoming audio signal, despite gaining mic permission etc. EDIT: Never mind, it works on iOS ~ |
@aehlke thanks for your work! I'm looking to add live transcriptions to my own iOS app. Are you planning to create a PR into the SwiftWhisper repo? Or will you be keeping it as a separate fork? |
I currently don't plan to spend more time packaging it for reuse or submitting a PR, sorry. Just trying to get it working for my own purposes, and wanted to share my work openly while I'm at it. |
Hi @aehlke! Thanks for your fork! Are you planning to add some documentation to your fork? |
@fakerybakery Hi! I have no plans to improve it for reuse, sorry. It works, but it's a mess, which serves my needs. You may consider it unmaintained...
+1, need this feature also!
@aehlke looks like the package Getting errors:
Anyone have any ideas?
It was building for me in my project as of a few weeks ago but I will take another look within a month or two |
This is the way ... https://github.com/argmaxinc/WhisperKit |
@cgfarmer4 it uses CoreML instead of Metal? Is that actually better now? It used to be much worse. By the way, my fork supports iOS 16 and macOS 13 (maybe earlier as well). Edit: also worth noting that one uses the Hugging Face lib, while mine/this one uses llama.cpp. It would be cool if someone wants to package up my work better, but I can't afford time to work on it for a while. I will revisit its production use within weeks or months and might be able to open source more then.
@aehlke it uses GPU + Neural Engine. It's a win that yours supports macOS 13; however, this one has a full test suite and the models are further optimized. https://www.takeargmax.com/blog/whisperkit They are also working on a Metal version:
|
Hey, awesome package!
I wanted to ask how one could use this for on-device real-time transcription with microphone audio, similar to the Obj-C example from the whisper.cpp package.