docs: spell check (#1463)
Quicksilver151 authored Dec 10, 2024
1 parent 1b9e9bb commit 2595908
Showing 3 changed files with 9 additions and 9 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -22,7 +22,7 @@
</p>

<h3 align="center">
-A cli to browse and watch anime (alone AND with friends). This tool scrapes the site <a href="https://allanime.to/">allanime.</a>
+A cli to browse and watch anime (alone AND with friends). This tool scrapes the site <a href="https://allmanga.to/">allmanga.</a>
</h3>

<h1 align="center">
@@ -562,7 +562,7 @@ Ani-skip uses the external lua script function of mpv and as such – for now

## Homies

-* [animdl](https://github.com/justfoolingaround/animdl): Ridiculously efficient, fast and light-weight (supports most sources: allanime, zoro ... (Python)
+* [animdl](https://github.com/justfoolingaround/animdl): Ridiculously efficient, fast and light-weight (supports most sources: allmanga, zoro ... (Python)
* [jerry](https://github.com/justchokingaround/jerry): stream anime with anilist tracking and syncing, with discord presence (Shell)
* [anipy-cli](https://github.com/sdaqo/anipy-cli): ani-cli rewritten in python (Python)
* [Dantotsu](https://github.com/rebelonion/Dantotsu): Rebirth of Saikou, Best android app for anime/manga/LN with anilist integration (Kotlin)
6 changes: 3 additions & 3 deletions ani-cli
@@ -155,7 +155,7 @@ get_links() {
[ -z "$ANI_CLI_NON_INTERACTIVE" ] && printf "\033[1;32m%s\033[0m Links Fetched\n" "$provider_name" 1>&2
}

-# innitialises provider_name and provider_id. First argument is the provider name, 2nd is the regex that matches that provider's link
+# initialises provider_name and provider_id. First argument is the provider name, 2nd is the regex that matches that provider's link
provider_init() {
provider_name=$1
provider_id=$(printf "%s" "$resp" | sed -n "$2" | head -1 | cut -d':' -f2 | sed 's/../&\
@@ -332,7 +332,7 @@ play() {
else
play_episode
fi
-# moves upto stored positon and deletes to end
+# moves up to stored position and deletes to end
[ "$player_function" != "debug" ] && [ "$player_function" != "download" ] && tput rc && tput ed
}

@@ -496,7 +496,7 @@ esac

# moves the cursor up one line and clears that line
tput cuu1 && tput el
-# stores the positon of cursor
+# stores the position of cursor
tput sc
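The `tput` calls being spell-checked here implement a neat trick: ani-cli saves the cursor position before printing transient status output, then jumps back and erases it once playback starts. A minimal standalone sketch (not ani-cli code; `TERM` is defaulted so `tput` can find a terminfo entry):

```shell
#!/bin/sh
# Sketch of the cursor bookkeeping: save the position (sc), print some
# status lines, then return to the saved position (rc) and clear from
# there to the end of the screen (ed).
export TERM="${TERM:-xterm}"

tput sc                        # store the position of the cursor
printf 'Fetching links...\n'
printf 'Links Fetched\n'
tput rc                        # move back to the stored position
tput ed                        # delete from there to end of screen
printf 'Playing episode 1...\n'
```

Run in a terminal, the two status lines flash and are replaced in place by the final message.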

# playback & loop
8 changes: 4 additions & 4 deletions hacking.md
@@ -3,7 +3,7 @@ Ani-cli is set up to scrape one platform - currently allanime. Supporting multip

However, since ani-cli is open-source and the pirate anime streaming sites are so similar, you can hack ani-cli to support any site that follows a few conventions.

-## Prequisites
+## Prerequisites
Here's the list of skills you'll need and that the guide will take for granted:
- basic shell scripting
- understanding of http(s) requests and proficiency with curl
@@ -33,7 +33,7 @@ An adblocker can help with reducing traffic from the site, but beware of extensi

Once you have the pages (urls) that you're interested in, it's easier to inspect them from less/an editor.
The debugger's inspector can help you with finding what's what but finding patterns/urls is much easier in an editor.
-Additionally the debugger doesn't always show you the html faithfully - I've experineced some escape sequences being rendered, capitalization changing - so be sure you see the response of the servers in raw format before you write your regexes.
+Additionally the debugger doesn't always show you the html faithfully - I've experienced some escape sequences being rendered, capitalization changing - so be sure you see the response of the servers in raw format before you write your regexes.

### Core concepts
If you navigate the site normally from the browser, you'll see that each anime is represented by a URL that comprises an ID (identifying a series/season of a series) and an episode number.
@@ -50,7 +50,7 @@ Just try searching for a few series and see how the URL changes (most of the tim
If the site uses a POST request or a more roundabout way, use the debugger to analyze the traffic.
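To make the "watch how the URL changes" step concrete, here is a hypothetical sketch of building a search request. The host and parameter names are invented for illustration; read the real ones off the site's URL bar or the debugger:

```shell
#!/bin/sh
# Hypothetical example: most sites expose search as a plain GET query.
# "example-site.invalid" and the "q" parameter are made up.
base="https://example-site.invalid"
query="one piece"

# URL-encode spaces; a real script may need fuller percent-encoding
encoded=$(printf '%s' "$query" | sed 's/ /+/g')
url="$base/search?q=$encoded"
printf '%s\n' "$url"
# → https://example-site.invalid/search?q=one+piece

# A GET search would then be:   curl -s "$url"
# A POST search might be:       curl -s -d "q=$encoded" "$base/search"
```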

Once you figured out how searching works, you'll have to replicate it in the `search_anime` function.
-The `curl` in this function is responsible for the search request, and the following `sed` regexes mold the respons into many lines of `id\ttitle` format.
+The `curl` in this function is responsible for the search request, and the following `sed` regexes mold the response into many lines of `id\ttitle` format.
The reason for this is the `nth` function; see it for more details.
You'll have to change some variables in the process (eg. allanime_base) too.
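The molding step described above can be sketched as follows. The response format here is invented for illustration (a real site's search response needs its own regexes; GNU `grep`/`sed` extended-regex behavior is assumed):

```shell
#!/bin/sh
# Hypothetical: turn a JSON-ish search response into id<TAB>title lines,
# the format the `nth` function expects. The IDs and shape are made up.
resp='{"id":"ReN1v3Wa","name":"One Piece"},{"id":"dxxCl39r","name":"Naruto"}'

printf '%s\n' "$resp" |
    grep -oE '\{"id":"[^"]*","name":"[^"]*"\}' |    # one record per line
    sed -E 's/\{"id":"([^"]*)","name":"([^"]*)"\}/\1\t\2/'
```

This prints one `id<TAB>title` line per matched record, ready for selection by line number.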

@@ -83,7 +83,7 @@ From here they are separated and parsed by `provider_init` and the first half on
Some sites (like allanime) have these urls not in plaintext but "encrypted". The decrypt allanime function does this post-processing, it might need to be changed or discarded completely.

If there's only one embed source, the `generate links..` block can be reduced to a single call to `generate_link`.
-The current structure does the agregation of many providers asynchronously, but this is not needed if there's only one source.
+The current structure does the aggregation of many providers asynchronously, but this is not needed if there's only one source.
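The asynchronous aggregation pattern mentioned here can be sketched in plain POSIX sh. `fetch_provider` and the provider names below are stand-ins, not real ani-cli functions:

```shell
#!/bin/sh
# Sketch of querying several providers in the background and collecting
# their results, as the `generate links..` block does. Each job appends
# its (fake) link to a shared cache file; `wait` blocks until all finish.
cache=$(mktemp)

fetch_provider() {
    # a real implementation would curl the embed page and sed out a link
    printf '%s >video.m3u8\n' "$1" >>"$cache"
}

fetch_provider wixmp &
fetch_provider youtube &
fetch_provider sharepoint &
wait                           # collect every background job

sort "$cache"                  # aggregated links from all providers
rm -f "$cache"
```

With only one embed source, the `&`/`wait` machinery (and the cache file) can be dropped in favor of a single direct call.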

### Extracting the media links

