Shumakov Gleb AT-18 #40

Open · wants to merge 2 commits into base: main
8 changes: 8 additions & 0 deletions REPORT.md
@@ -0,0 +1,8 @@
# API
![img.png](img.png)

The largest number of edits was made on 28.11.2021, the date of Gradsky's death.
# Correlation
![img_1.png](img_1.png)

The largest number of edits coincides with the date of Belmondo's death, but such a metric still cannot be trusted: a spike in edits could just as well be caused by an entirely different event.
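
The peak day claimed above can be read off programmatically rather than from the plot. A minimal sketch, assuming the same revisions query as api_task.py below; collections.Counter tallies edits per calendar day and most_common picks the busiest one:

from urllib.request import urlopen
from json import loads
from collections import Counter

# Same revisions query as api_task.py below (Gradsky article, page ID 183903).
url = 'https://ru.wikipedia.org/w/api.php?action=query&format=json&prop=revisions&rvlimit=500&titles=%D0%93%D1%80%D0%B0%D0%B4%D1%81%D0%BA%D0%B8%D0%B9,_%D0%90%D0%BB%D0%B5%D0%BA%D1%81%D0%B0%D0%BD%D0%B4%D1%80_%D0%91%D0%BE%D1%80%D0%B8%D1%81%D0%BE%D0%B2%D0%B8%D1%87'
revs = loads(urlopen(url).read().decode('utf8'))['query']['pages']['183903']['revisions']

# Count edits per day and report the busiest date.
per_day = Counter(r['timestamp'][:10] for r in revs)
peak_date, peak_edits = per_day.most_common(1)[0]
print(peak_date, peak_edits)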
2,002 changes: 2,002 additions & 0 deletions all_news.json


11 changes: 11 additions & 0 deletions api_task.py
@@ -0,0 +1,11 @@
from urllib.request import urlopen
from json import loads
from itertools import groupby


# MediaWiki API: fetch the last 500 revisions of the Russian Wikipedia
# article "Градский, Александр Борисович" as JSON.
url = 'https://ru.wikipedia.org/w/api.php?action=query&format=json&prop=revisions&rvlimit=500&titles=%D0%93%D1%80%D0%B0%D0%B4%D1%81%D0%BA%D0%B8%D0%B9,_%D0%90%D0%BB%D0%B5%D0%BA%D1%81%D0%B0%D0%BD%D0%B4%D1%80_%D0%91%D0%BE%D1%80%D0%B8%D1%81%D0%BE%D0%B2%D0%B8%D1%87'
data = loads(urlopen(url).read().decode('utf8'))

# '183903' is the page ID of the article. Revisions arrive newest-first,
# so identical dates sit next to each other and groupby can count edits
# per day without sorting; timestamp[:10] keeps only the YYYY-MM-DD part.
group_data = groupby([i['timestamp'][:10] for i in data['query']['pages']['183903']['revisions']])

# Print each date together with its number of edits.
for date, edits in group_data:
    print(date, len(list(edits)))
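
Hardcoding the page ID ('183903') ties the script to one article. A small sketch of an alternative, assuming the query always returns exactly one page (true for a single-title request):

def revisions_of_single_page(data):
    # Take the one page object from the response regardless of its ID.
    page = next(iter(data['query']['pages'].values()))
    return page['revisions']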
11 changes: 11 additions & 0 deletions correlation_task.py
@@ -0,0 +1,11 @@
from urllib.request import urlopen
from json import loads
from itertools import groupby


# Same query as in api_task.py, but for the article "Бельмондо, Жан-Поль".
url = 'https://ru.wikipedia.org/w/api.php?action=query&format=json&prop=revisions&rvlimit=500&titles=%D0%91%D0%B5%D0%BB%D1%8C%D0%BC%D0%BE%D0%BD%D0%B4%D0%BE,_%D0%96%D0%B0%D0%BD-%D0%9F%D0%BE%D0%BB%D1%8C'
data = loads(urlopen(url).read().decode('utf8'))

# '192203' is the page ID; revisions come newest-first, so groupby counts
# consecutive edits per day (timestamp[:10] is the YYYY-MM-DD part).
group_data = groupby([i['timestamp'][:10] for i in data['query']['pages']['192203']['revisions']])

for date, edits in group_data:
    print(date, len(list(edits)))
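
The script only prints per-day counts for one article; the report's caveat about coinciding peaks could be checked numerically. A minimal sketch of a Pearson correlation between two articles' daily edit counts, assuming both revision lists were fetched as above (edits_per_day is a hypothetical helper; statistics.correlation needs Python 3.10+):

from collections import Counter
from statistics import correlation  # Python 3.10+

def edits_per_day(revisions):
    # Hypothetical helper: map YYYY-MM-DD -> number of edits.
    return Counter(r['timestamp'][:10] for r in revisions)

def daily_correlation(a, b):
    # a, b: edits_per_day(...) results for the two articles.
    days = sorted(set(a) | set(b))  # union of dates, zero-filled below
    return correlation([a.get(d, 0) for d in days],
                       [b.get(d, 0) for d in days])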
Binary file added img.png
Binary file added img_1.png
802 changes: 802 additions & 0 deletions news.json


14 changes: 14 additions & 0 deletions news_task1.py
@@ -0,0 +1,14 @@
import xml.etree.ElementTree as ET
from urllib.request import urlopen
from json import dump

# Download the Lenta.ru RSS feed and parse it as XML.
data = urlopen('https://lenta.ru/rss').read().decode('utf8')
root = ET.fromstring(data)
news = []

# Keep only the publication date and the title of each <item>.
for i in root.findall('channel/item'):
    news.append({'pubDate': i.find('pubDate').text,
                 'title': i.find('title').text})

# Write human-readable JSON, keeping Cyrillic characters unescaped.
with open('news.json', 'w', encoding='utf-8') as file:
    dump(news, file, indent=1, ensure_ascii=False)
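
A quick way to check the output is to read the file back with json.load; a minimal usage sketch:

from json import load

with open('news.json', encoding='utf-8') as file:
    for entry in load(file):
        print(entry['pubDate'], entry['title'])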
13 changes: 13 additions & 0 deletions news_task2.py
@@ -0,0 +1,13 @@
import xml.etree.ElementTree as ET
from urllib.request import urlopen
from json import dump

# Download the Lenta.ru RSS feed and parse it as XML.
data = urlopen('https://lenta.ru/rss').read().decode('utf8')
root = ET.fromstring(data)
news = []

# Keep every child tag of each <item>. Note that if a tag repeats
# (e.g. several <category> elements), the dict keeps only the last one.
for item in root.findall('channel/item'):
    news.append({i.tag: i.text for i in item})

with open('all_news.json', 'w', encoding='utf-8') as file:
    dump(news, file, indent=1, ensure_ascii=False)
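
As the comment above notes, the dict comprehension silently drops repeated tags. A sketch of one way to keep them all, collecting values into lists (the list-valued output shape is an assumption, not part of the original task):

def item_to_dict(item):
    # Collect every occurrence of each tag instead of keeping only the last.
    result = {}
    for child in item:
        result.setdefault(child.tag, []).append(child.text)
    return result

# Usage inside the loop above: news.append(item_to_dict(item))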