This project aims to save time and energy for deep learning researchers and practitioners. It monitors multiple sources for new content and surfaces the items that are important to you. Currently, the data sources include:
- Arxiv papers
- Tweets
- Reddit posts
Take a look at the public server: https://deeplearn.org
- Install postgres server
- Install Python dependencies and poppler-utils:

```bash
pip install -r requirements.txt
sudo apt-get install poppler-utils
```
- Create a `.env` file in the project root:

```
DATABASE_USER=dlmonitor
DATABASE_PASSWD=something
TWITTER_CONSUMER_KEY=something
TWITTER_CONSUMER_SECRET=something
TWITTER_ACCESS_TOKEN=something
TWITTER_ACCESS_SECRET=something
SUPERVISORD_PASSWD=something
```
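The app presumably reads these values from the environment; a minimal sketch of loading a `.env` file with the standard library only (the simple `KEY=VALUE` parsing rules here are an assumption — the project may use a helper library such as python-dotenv instead):

```python
import os

def load_dotenv(path=".env"):
    """Read simple KEY=VALUE lines into os.environ; blanks and comments are skipped."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Do not overwrite variables already set in the real environment.
            os.environ.setdefault(key.strip(), value.strip())

# Load the file if it exists (hypothetical usage; the real entry point may differ).
if os.path.exists(".env"):
    load_dotenv()
```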
- Create the database:

```bash
bash bin/create_db.sh
```
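The script above presumably provisions a Postgres database for the credentials in `.env`; the connection URL the app would assemble from them might look like this (the database name `dlmonitor` and host `localhost` are assumptions for illustration):

```python
import os

def database_url(name="dlmonitor", host="localhost"):
    """Assemble a Postgres connection URL from the .env credentials.

    The URL shape is the standard postgresql:// form; the actual name and
    host used by the project are assumptions here.
    """
    user = os.environ.get("DATABASE_USER", "dlmonitor")
    passwd = os.environ.get("DATABASE_PASSWD", "")
    return "postgresql://{}:{}@{}/{}".format(user, passwd, host, name)
```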
- Fetch Arxiv papers and tweets:

```bash
python bin/fetch_new_sources.py all
```
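The fetch step pulls new papers from arXiv, whose API returns Atom feeds; a minimal sketch of parsing one feed entry with the standard library (the sample feed is made up, and the project's real fetcher may work differently):

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

# A made-up single-entry feed in the general shape the arXiv API returns.
SAMPLE_FEED = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <entry>
    <id>http://arxiv.org/abs/0000.00000v1</id>
    <title>An Example Paper Title</title>
    <summary>A short abstract.</summary>
  </entry>
</feed>"""

def parse_entries(feed_xml):
    """Extract (id, title, summary) tuples from an arXiv-style Atom feed."""
    root = ET.fromstring(feed_xml)
    entries = []
    for entry in root.findall(ATOM + "entry"):
        entries.append((
            entry.findtext(ATOM + "id"),
            entry.findtext(ATOM + "title"),
            entry.findtext(ATOM + "summary"),
        ))
    return entries
```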
- Run the web application locally:

```bash
PYTHONPATH="." python dlmonitor/webapp/app.py
```
- Install nginx
- Copy configuration files for supervisord and nginx:

```bash
bash bin/config_server.sh
```
- Start Gunicorn processes through supervisord:

```bash
bash bin/start_supervisord.sh
```
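The configuration copied by `bin/config_server.sh` presumably includes a supervisord program section along these lines (the program name, paths, port, and worker count here are illustrative assumptions, not the project's actual config):

```ini
[program:dlmonitor]
; Hypothetical Gunicorn invocation; the real command lives in the
; templates installed by bin/config_server.sh.
command=gunicorn -w 4 -b 127.0.0.1:8000 dlmonitor.webapp.app:app
directory=/path/to/dlmonitor
autostart=true
autorestart=true
```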