
Change in readme.md #39

Open · wants to merge 26 commits into base: main
Conversation

ArqiesAr

gi to git

@what-the-diff

what-the-diff bot commented Mar 19, 2023

  • Renamed the project to auto-subtitle-plus
  • Added support for multiple languages (see --language parameter)
  • Fixed audio out of sync issue by using ffmpeg instead of pydub and audiosegment
  • Added wildcard support for filenames, so you can run auto_subtitle *.mp4 or even pass a folder path as input: auto_subtitle /path/to/folder/*.*
  • Convert audio to subtitles (output .srt files), useful if you want to generate .srt files without generating videos at all. Just run something like auto_subtitles video1*.avi -o outputdir --output-srt; this creates an .srt file for each matching video in the specified output directory (-o). You can also specify a single filename.
  • Added a new function to check whether the input file is an audio file
  • Added a new function to extract audio from video using ffmpeg
  • Changed the main() method in cli.py:
    • Extract audio first, then generate subtitles with Whisper, and finally add them to the videos (if output_video=True)
  • Added a new file, utils.py
  • Updated requirements to include psutil and youtube-dl
  • Changed the package name from auto_subtitle to auto_subtitle_plus, updated the author information, and added a description for the console script entry point in setup.py
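The audio-check and ffmpeg-extraction helpers mentioned above could look roughly like this; the function names, the extension set, and the exact ffmpeg flags are illustrative assumptions, not the PR's actual code.

```python
import pathlib
import subprocess

# Assumed set of extensions; the PR may recognize a different list.
AUDIO_EXTENSIONS = {".mp3", ".wav", ".flac", ".m4a", ".ogg", ".aac"}

def is_audio_file(path):
    """Return True if the path looks like an audio file by extension."""
    return pathlib.Path(path).suffix.lower() in AUDIO_EXTENSIONS

def build_extract_cmd(video_path, audio_path):
    """Build an ffmpeg command that drops the video stream (-vn) and
    writes 16 kHz mono 16-bit PCM WAV, a format Whisper handles well."""
    return [
        "ffmpeg", "-y", "-i", str(video_path),
        "-vn", "-acodec", "pcm_s16le", "-ar", "16000", "-ac", "1",
        str(audio_path),
    ]

def extract_audio(video_path, audio_path):
    """Run ffmpeg to pull the audio track out of a video file."""
    subprocess.run(build_extract_cmd(video_path, audio_path), check=True)
```

Splitting command construction from execution keeps the ffmpeg invocation easy to inspect and test without actually shelling out.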

@thebetauser

thebetauser commented Jul 14, 2023

Hello, can you please add a check for Apple silicon (M1/M2/M3)?

if torch.cuda.is_available():
    default_device = "cuda"
elif hasattr(torch.backends, 'mps') and torch.backends.mps.is_available():
    default_device = "mps"
else:
    default_device = "cpu"
parser.add_argument("--device", default=default_device, help="device to use for PyTorch inference")

Edit:
Whisper is still in the process of adding support for this, so as of 7/14 it will not work.
