Comic Translate Source Code Analysis

English | 한국어 | Français | 简体中文 | 日本語 | Português Brasileiro

Intro

Many Automatic Manga Translators exist. Very few properly support comics of other kinds in other languages. This project was created to leverage the abilities of State of the Art (SOTA) Large Language Models (LLMs) like GPT-4 to translate comics from all over the world. Currently, it supports translating to and from English, Korean, Japanese, French, Simplified Chinese, Traditional Chinese, Russian, German, Dutch, Spanish and Italian. It can translate to (but not from) Turkish, Polish, Portuguese and Brazilian Portuguese.

The State of Machine Translation

For a couple dozen languages, the best Machine Translator is not Google Translate, Papago or even DeepL, but a SOTA LLM like GPT-4o, and by a wide margin. This is very apparent for distant language pairs (Korean<->English, Japanese<->English etc.), where other translators still often devolve into gibberish.

Comparison excerpt: a passage from "The Walking Practice" (보행 연습) by Dolki Min (돌기민), translated by each model.

Comic Samples

GPT-4 as Translator. Note: Some of these also have Official English Translations

The Wretched of the High Seas

Journey to the West

The Wormworld Saga

Frieren: Beyond Journey's End

Days of Sand

Player (OH Hyeon-Jun)

Carbon & Silicon

Installation

Python

Install Python (<=3.10). Tick "Add python.exe to PATH" during the setup.

https://www.python.org/downloads/

Clone the repo (or download the folder) and navigate to it

git clone https://github.com/ogkalu2/comic-translate
cd comic-translate

and install the requirements

pip install -r requirements.txt

If you run into any issues, you can try running it in a virtual environment. Open the terminal/cmd in whatever directory you want the virtual environment installed (or cd 'path/to/virtual environment/folder'). Create your virtual environment with:

python -m venv comic-translate-venv

Now activate the virtual environment. On Windows:

comic-translate-venv\Scripts\activate

On Mac and Linux:

source comic-translate-venv/bin/activate

Now you can run the Installation Commands again. When you are finished using the app, you can deactivate the virtual environment with:

deactivate

To re-activate, run the same activation command from a terminal opened in the folder that contains your virtual environment.

If you have an NVIDIA GPU, then it is recommended to run

pip uninstall torch torchvision
pip install torch==2.1.0+cu121 -f https://download.pytorch.org/whl/torch_stable.html
pip install torchvision==0.16.0+cu121 -f https://download.pytorch.org/whl/torch_stable.html

Note: The 121 in +cu121 represents the CUDA version (12.1). Replace 121 with your CUDA version, e.g. 118 if you are running CUDA 11.8.
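After swapping in the CUDA build, it is worth confirming that PyTorch can actually see the GPU. A minimal check (standard PyTorch calls only, nothing project-specific):

import torch

# Print the installed build and whether CUDA is usable from this environment.
print("torch:", torch.__version__)              # e.g. 2.1.0+cu121
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))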

Usage

In the comic-translate directory, run

python comic.py

This will launch the GUI

Tips

  • If you have a CBR file, you'll need to install WinRAR or 7-Zip and then add the folder it's installed to (e.g. "C:\Program Files\WinRAR" on Windows) to PATH. If it's installed but not added to PATH, you may get the error

raise RarCannotExec("Cannot find working tool")

In that case, follow the instructions for Windows, Linux or Mac (a programmatic sketch follows this list).

  • Make sure the selected Font supports characters of the target language
  • v2.0 introduces a Manual Mode. When you run into issues with Automatic Mode (No text detected, Incorrect OCR, Insufficient Cleaning etc), you are now able to make corrections. Simply Undo the Image and toggle Manual Mode.
  • In Automatic Mode, once an Image has been processed, it is loaded in the Viewer (or stored to be loaded when you switch to it), so you can keep reading in the app while the other Images are being translated.
  • Ctrl + Mouse Wheel to Zoom; the Mouse Wheel alone scrolls vertically
  • The Usual Trackpad Gestures work for viewing the Image
  • Right, Left Keys to Navigate Between Images
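The PATH requirement for CBR files simply means the process extracting the archive must be able to find the WinRAR/7-Zip executable. As an illustrative workaround (not part of the app, and assuming the default Windows install location), the folder can also be added to PATH for the current process only:

import os

# Hypothetical workaround: make the WinRAR folder visible to this Python
# process so the RAR extraction tool can be found without editing system PATH.
os.environ["PATH"] += os.pathsep + r"C:\Program Files\WinRAR"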

API Keys

The following selections require access to closed resources and, consequently, API Keys:

  • GPT-4o or 4o-mini for Translation (Paid, about $0.01 USD/Page for 4o)
  • DeepL Translator (Free for 500,000 characters/month)
  • GPT-4o for OCR (Default Option for French, Russian, German, Dutch, Spanish, Italian) (Paid, about $0.02 USD/Page)
  • Microsoft Azure Vision for OCR (Free for 5000 images/month)
  • Google Cloud Vision for OCR (Free for 1000 images/month)

You can set your API Keys by going to Settings > Credentials.

Getting API Keys

OpenAI (GPT)

  • Go to OpenAI's Platform website at platform.openai.com and sign in with (or create) an OpenAI account.
  • Hover your Mouse over the right taskbar of the page and select "API Keys."
  • Click "Create New Secret Key" to generate a new API key. Copy and store it.

Google Cloud Vision

  • Sign in/Create a Google Cloud account. Go to Cloud Resource Manager and click "Create Project". Set your project name.
  • Select your project here, then select "Billing" and "Create Account". In the pop-up, click "Enable billing account" and accept the offer of a free trial account. Your "Account type" should be Individual. Fill in a valid credit card.
  • Enable Google Cloud Vision for your project here
  • In the Google Cloud Credentials page, click "Create Credentials" then API Key. Copy and store it.

How it works

Speech Bubble Detection and Text Segmentation

speech-bubble-detector and text-segmenter: two YOLOv8m models trained on 8k and 3k images of comics (Manga, Webtoons, Western) respectively.
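As a rough sketch of how such a detector is typically invoked with the ultralytics API (the checkpoint path and confidence threshold below are illustrative assumptions, not the project's actual file names):

from ultralytics import YOLO

# Hypothetical checkpoint path; the project ships its own trained weights.
detector = YOLO("models/speech-bubble-detector.pt")

# Detect bubbles on a page and collect bounding boxes as (x1, y1, x2, y2).
results = detector("page_01.png", conf=0.25)
bubble_boxes = results[0].boxes.xyxy.tolist()
print(bubble_boxes)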

OCR

By default, the OCR engine depends on the source language; GPT-4o is the default for French, Russian, German, Dutch, Spanish and Italian (see API Keys above).

Optionally, Microsoft Azure Vision or Google Cloud Vision can be used for any of the supported languages. An API Key is required.

Inpainting

A LaMa checkpoint finetuned on Manga/Anime is used to remove the text detected by the segmenter. Implementation courtesy of lama-cleaner.
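The LaMa checkpoint itself is driven through lama-cleaner, but the core idea of mask-based text removal can be illustrated with OpenCV's classical inpainting as a stand-in (a sketch only, not the project's actual pipeline; file names are assumptions):

import cv2

# Load the page and a binary mask where white pixels mark the detected text.
page = cv2.imread("page_01.png")
mask = cv2.imread("text_mask.png", cv2.IMREAD_GRAYSCALE)

# Classical inpainting as an illustration; the app uses a LaMa model instead.
cleaned = cv2.inpaint(page, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("page_01_clean.png", cleaned)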

Translation

Currently, this supports using GPT-4o, GPT-4o mini, DeepL, Claude-3-Opus, Claude-3.5-Sonnet, Claude-3-Haiku, Gemini-1.5-Flash, Gemini-1.5-Pro, Yandex, Google Translate and Microsoft Translator.

All LLMs are fed the entire page text to aid translation. There is also the option to provide the image itself for further context.
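A minimal sketch of that idea with the OpenAI Python client (the prompt wording, model choice and file names are assumptions, not the project's actual prompt):

import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

page_text = "all detected text on the page, joined together"
with open("page_01.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

# Send the full page text plus the page image itself for extra context.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": f"Translate this comic page into English:\n{page_text}"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)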

Text Rendering

PIL is used to render the wrapped, translated text inside the bounding boxes obtained from the detected bubbles and free text.
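A minimal sketch of that step with PIL (the font file, box coordinates and wrap width are illustrative assumptions):

import textwrap
from PIL import Image, ImageDraw, ImageFont

img = Image.open("page_01_clean.png")
draw = ImageDraw.Draw(img)
font = ImageFont.truetype("fonts/ComicNeue-Bold.ttf", 24)  # hypothetical font file

# Bubble bounding box (x1, y1, x2, y2) from the detector, plus translated text.
x1, y1, x2, y2 = 120, 80, 360, 200
text = "What on earth is that?!"

# Naive wrap by character count; the app fits text to the box more carefully.
wrapped = textwrap.fill(text, width=16)
draw.multiline_text((x1 + 10, y1 + 10), wrapped, font=font, fill="black")
img.save("page_01_translated.png")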

Acknowledgements