Information needed to run hentAI.
--Prereqs
First, install DeepCreamPy (referred to below as DCP). You can install the exe (preferred) or the source code.
Check the github page to see if there is a newer weights model available. This is the .h5 file.
The model packaged here is model 161. The weights are used to run the detections.
!!!!! IMAGES should be in .png FORMAT !!!!!
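If some of your images are in another format (.jpg etc.), a small script can batch-convert them.
This is only a minimal sketch assuming the Pillow library (pip install Pillow); the folder name
is an example, so point it at your own input folder:
    # convert_to_png.py - minimal sketch, assumes Pillow is installed
    import os
    from PIL import Image
    folder = "input_images"  # example path
    for name in os.listdir(folder):
        if name.lower().endswith((".jpg", ".jpeg", ".bmp")):
            path = os.path.join(folder, name)
            Image.open(path).convert("RGB").save(os.path.splitext(path)[0] + ".png")
            os.remove(path)  # drop the original so only .png files remain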
--Installation for hentAI Executable:
1. Unzip the downloaded folder, then run main.exe.
2. Make sure you have downloaded the 4x_FatalPixels_340000_G.pth model and placed it next to main.exe.
Link here: https://de-next.owncube.com/index.php/s/mDGmi7NgdyyQRXL
3. A black terminal window will appear while the libraries are loaded. Wait until some
status text shows up; the main GUI will appear shortly after. This should not take more than a minute.
--- Tutorial for hentAI executable
1. Place the images you want to decensor in the provided input_images folder. Remember that you must
decensor content with bar censors and mosaic censors separately. Images must be in .png format.
Also, make sure to clear out the input folder between runs so you don't decensor the same things twice.
2. Use my Screentone Remover (on my github) if you want to decensor non-colored doujins with screentones.
3. Run main.exe. A terminal should open and may remain blank for a minute. Just let it load; the program will start soon.
4. Select the kind of censor you have in your input folder: bar or mosaic.
5. For "Your own input image folder", select the folder with your images in it.
For "DCP install directory", select the parent directory of DeepCreamPy (usually called dist 1).
New for 1.6.7: the dilation amount will expand the mask by any positive number of pixels (I recommend small integers).
6. Now you can hit the Go button. Loading the neural network will take some time.
The image detections will also take some time, up to a minute per image once it gets rolling, depending on your computer.
7. The images with the detected censors should have been placed automatically into the
corresponding folders in your DeepCreamPy directory.
8. Now run DeepCreamPy, and you can close hentAI. Be sure to select the appropriate censor type in DCP.
--- hentAI video detection (Mosaic only)
1. Place the input .mp4 into its own folder.
Keep the video as SHORT as possible. You will need to do some video editing.
2. Open hentAI, enter the video detection, then select the source video folder and the DCP folder.
3. First, run Begin Detection.
hentAI will automatically place frames into the DCP directories.
It will take a long time, regardless of your setup. If you have a compatible Nvidia GPU,
you can install tensorflow-gpu 1.9.0 and it should use your GPU to run detections.
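For reference, that install would be something like the following (this assumes pip inside
your environment; check requirements-gpu.txt for the exact pinned version):
    pip install tensorflow-gpu==1.9.0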
4. After the process ends and you get the popup, run DCP. It will take some time to load its neural network, but it is faster per image.
5. After DCP finishes, go back to hentAI and run the Video Maker.
6. This will take the output of DCP and piece it together as a video.
You will need ffmpeg to properly add sound back in. On Windows, place ffmpeg.exe into the main directory,
or have its path defined as a system variable.
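As an example, a typical ffmpeg command to copy the audio track from the original video onto
the rebuilt one looks like this (the file names here are placeholders, not the tool's actual
output names):
    ffmpeg -i rebuilt.mp4 -i original.mp4 -map 0:v -map 1:a -c copy output_with_audio.mp4
This takes the video stream from the first input and the audio stream from the second,
without re-encoding either.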
---hentAI Code (for fellow developers or contributors, linux users):
** If you have an NVIDIA CUDA-compatible card, there are special instructions for you below **
1. Install Anaconda3.
2. Open anaconda prompt, create virtual environment via
conda create --name hentai python=3.5.2
3. Activate that environment
conda activate hentai
4. Navigate to your hentAI install directory within conda
cd *path to hentAI folder*
5. Install requirements
pip install -r requirements-cpu.txt
OR if you have a CUDA compatible gpu with CUDA 9.0:
pip install -r requirements-gpu.txt
5.2. To get ffmpeg working on Linux after installing the requirements, run the following.
(The original commands use the notebook-style ! prefix; in a normal shell, drop the ! and use sudo.)
sudo add-apt-repository ppa:jon-severinsson/ffmpeg
sudo apt-get update
sudo apt-get install ffmpeg
Note that on recent Ubuntu releases ffmpeg is in the standard repositories, so the PPA step is usually unnecessary.
5.5. NOTE: You might not be able to install torch 0.4.1 from pip, like on Windows.
In that case, run the following for CPU
conda install pytorch=0.4.1 -c pytorch
or if you have a CUDA compatible card:
conda install pytorch=0.4.1 cuda90 -c pytorch
6. Install Mask R-CNN:
python setup.py install
7. Optionally, create an input folder.
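7.5. Optionally, sanity-check the environment before moving on. These one-liners assume the
usual module names (mrcnn is the import name in the matterport Mask R-CNN codebase this is
based on; adjust if this repo differs):
    python -c "import tensorflow as tf; print(tf.__version__)"
    python -c "import mrcnn"
If both run without an ImportError, the install worked.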
---Tutorial for code:
For Screentone Remover, instructions are in its own file. Use this if you are using
non-colored images with the printed effect.
NOTE: If the doujin was tagged as [Digital] and does not have the screentone effect, you
probably do not need to use Screentone Remover.
You will also need the compiled ffmpeg binary in the same directory, or its location added to your PATH environment variable.
1. First, have an input folder ready with whatever images you want to decensor.
Do not have bar censors and mosaic censors in the same folder.
2. To detect, simply run the main script and follow tutorial for the exe above.
python main.py
2a. NOTE: The code and Colab versions include ESRGAN as well. Detection and decensoring are done together, so DeepCreamPy won't be needed.
The output will be in the ESR_output folder, but videos will be written to the main directory.
3. Training:
**If you are interested in training, please contact me and I may provide you with
the current dataset. All I ask is that you also send me your trained model should
you improve on the latest detections. An NVIDIA CUDA compatible card is required**
4. Add to the dataset with VGG annotator. Class definitions are on the readme / github page.
5. Place the dataset into the dataset_img folder. Please have separate train and val folders.
The dataset annotation format should be on the Readme or the hentAI git page (a rough sketch follows below).
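For orientation only, a single polygon region in a VGG Image Annotator (VIA) JSON export
looks roughly like the sketch below. The attribute name and class value ('censor', 'bar')
are placeholders, not the project's real class definitions, so follow the readme for those:
    {
      "image1.png123456": {
        "filename": "image1.png",
        "size": 123456,
        "regions": [
          { "shape_attributes": { "name": "polygon",
                                  "all_points_x": [10, 60, 60, 10],
                                  "all_points_y": [20, 20, 80, 80] },
            "region_attributes": { "censor": "bar" } }
        ]
      }
    }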
6. Optionally, you can inspect the dataset with samples\hentai\inspect_h_data.ipynb.
This is meant to be run in jupyter notebook.
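To open it, from the hentAI directory run something like:
    jupyter notebook samples\hentai\inspect_h_data.ipynb
(this assumes jupyter is installed in the same environment; pip install jupyter if it is not).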
7. Follow the special NVIDIA card instructions below to install tensorflow-gpu and CUDA 9.0 properly
8. Run the training!
python samples\hentai\hentai.py train --dataset=dataset_img/ --weights=weights.h5
(can also use --weights=last if you are resuming training.)
9. There will be a lot of compatibility warnings and such, but the training will still run.
The trained model will be saved in a logs folder outside the hentAI folder.
You can use tensorboard to inspect the loss graphs.
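For example, if the logs folder sits next to the hentAI folder, something like this should
work (the path is an assumption; point --logdir at wherever your logs folder actually is):
    tensorboard --logdir=../logs
Then open the URL it prints (usually http://localhost:6006) in a browser.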
10. Inspect the model detections with either hentAI or samples\hentai\inspect_h_model.ipynb
You may modify the jupyter notebook to test the current model and your new model
simultaneously. Again, if you procure better results, you may create a pull request or just
send it to me!
--------NVIDIA users-----------
You may encounter an error like this: ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory
This means your GPU is being detected and tensorflow wants to use it. With the GPU you could get 6-30x better performance.
1. Install CUDA 9.0 here: https://developer.nvidia.com/cuda-90-download-archive?target_os=Windows&target_arch=x86_64
Get the right one for your computer
2. You also need the cuDNN v7.6.4 runtime for CUDA 9.0. You need an Nvidia account to access the download, but that is free.
Get it from here: https://developer.nvidia.com/cudnn
3. Make sure to install the GPU requirements, not the CPU ones.
pip install -r requirements-gpu.txt
4. NOTE: It should be possible to have CUDA 9.0 and CUDA 10 coexist.
5. NOTE FOR RTX owners: There may be an issue with RTX cards and the torch model in ESRGAN. I don't think it has been resolved yet,
so for now, if you encounter errors such as CUDNN_STATUS_SUCCESS or CUDNN_STATUS_ALLOC_FAILED, change line 95 in detector.py
to say 'cpu' instead of 'cuda'. This will force only ESRGAN to use the CPU. This will be slow, so at that point I'd recommend using colab instead.
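The edit itself is a one-liner. Assuming detector.py selects the torch device in the usual
way, the change would look roughly like this (the exact code on line 95 may differ between
versions):
    device = torch.device('cpu')  # was: torch.device('cuda')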