
Merge pull request #2 from Rosna/master
Added new data to data folder
citronella3alain authored Jul 26, 2019
2 parents 06aab61 + 52971e2 commit f20076a
Showing 22 changed files with 2,481 additions and 175 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -2,3 +2,4 @@
.ipynb_checkpoints/
config.json
__pycache__/
.DS_Store
102 changes: 45 additions & 57 deletions data/citation.csv

Large diffs are not rendered by default.

66 changes: 61 additions & 5 deletions data/description.csv

Large diffs are not rendered by default.

291 changes: 210 additions & 81 deletions data/installation.csv

Large diffs are not rendered by default.

68 changes: 42 additions & 26 deletions data/invocation.csv

Large diffs are not rendered by default.

371 changes: 371 additions & 0 deletions data/none.csv

Large diffs are not rendered by default.

81 changes: 81 additions & 0 deletions data/repos/cltk-cltk-README.md
@@ -0,0 +1,81 @@
# The Classical Language Toolkit

[![PyPi downloads](http://img.shields.io/pypi/v/cltk.svg?style=flat)](https://pypi.python.org/pypi/cltk/) [![Documentation Status](https://readthedocs.org/projects/cltk/badge/?version=latest)](http://docs.cltk.org/en/latest/?badge=latest) [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.593336.svg)](https://doi.org/10.5281/zenodo.593336)

[![Build Status](https://travis-ci.org/cltk/cltk.svg?branch=master)](https://travis-ci.org/cltk/cltk) [![codecov.io](http://codecov.io/github/cltk/cltk/coverage.svg?branch=master)](http://codecov.io/github/cltk/cltk?branch=master)

[![Join the chat at https://gitter.im/cltk/cltk](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/cltk/cltk?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)


## About

The Classical Language Toolkit (CLTK) offers natural language processing (NLP) support for the languages of Ancient, Classical, and Medieval Eurasia. Greek, Latin, Akkadian, and the Germanic languages are currently most complete. The goals of the CLTK are to:
* compile analysis-friendly corpora;
* collect and generate linguistic data;
* act as a free and open platform for generating scientific research.


## Documentation

The docs are at [docs.cltk.org](http://docs.cltk.org).


### Installation

CLTK supports Python versions 3.6 and 3.7. The software only runs on POSIX-compliant operating systems (Linux, Mac OS X, FreeBSD, etc.).

``` bash
$ pip install cltk
```

See docs for [complete installation instructions](http://docs.cltk.org/en/latest/installation.html).

The [CLTK organization curates corpora](https://github.com/cltk) which can be downloaded directly or, better, [imported by the toolkit](http://docs.cltk.org/en/latest/importing_corpora.html).
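
As a rough sketch of what importing a corpus and basic use look like in practice (the module paths and the corpus name below follow the older CLTK 0.x API and are assumptions; consult the docs for the exact calls in your installed version):

```python
# Illustrative sketch only: module paths and the corpus name assume the CLTK 0.x API.
from cltk.corpus.utils.importer import CorpusImporter
from cltk.tokenize.word import WordTokenizer

# Download a curated Latin corpus maintained by the CLTK organization.
importer = CorpusImporter('latin')
importer.import_corpus('latin_text_latin_library')

# Tokenize a Latin sentence with the Latin-aware word tokenizer.
tokenizer = WordTokenizer('latin')
print(tokenizer.tokenize('Gallia est omnis divisa in partes tres'))
```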


### Tutorials

For interactive tutorials, in the form of Jupyter Notebooks, see <https://github.com/cltk/tutorials>.


## Contributing

See the [Quickstart for contributors](https://github.com/cltk/cltk/wiki/Quickstart-for-contributors) for an overview of the process. If you're looking to start with a small contribution, see the [Issue tracker for "easy" jobs](https://github.com/cltk/cltk/issues?q=is%3Aopen+is%3Aissue+label%3Aeasy) that still need doing. Bigger projects may be found on the [Project ideas](https://github.com/cltk/cltk/wiki/Project-ideas) page. Of course, new ideas are always welcome.


## Citation

Each major release of the CLTK is given a [DOI](http://en.wikipedia.org/wiki/Digital_object_identifier), a type of unique identifier for digital documents. This DOI should be included in your citation, as it will allow researchers to reproduce your results should the CLTK's API or codebase change. To find the CLTK's current DOI, see the blue `DOI` badge on the repository's GitHub homepage; append `DOI ` plus the current identifier to the end of your bibliographic entry. You may also add the version/release number, shown on the `pypi` badge on the same page.

Thus, please cite the core software like this:
```
Kyle P. Johnson et al. (2014-2019). CLTK: The Classical Language Toolkit. DOI 10.5281/zenodo.<current_release_id>
```

A style-neutral BibTeX entry would look like this:
```
@Misc{johnson2014,
 author = {Kyle P. Johnson et al.},
 title = {CLTK: The Classical Language Toolkit},
 howpublished = {\url{https://github.com/cltk/cltk}},
 note = {{DOI} 10.5281/zenodo.<current_release_id>},
 year = {2014--2019},
}
```


[Many contributors](https://github.com/cltk/cltk/blob/master/contributors.md) have made substantial contributions to the CLTK. For scholarship about particular code, it might be proper to cite these individuals as authors of the work under discussion.


## Gratitude

We are thankful to the following organizations for their support:

* Google Summer of Code (sponsoring two students in 2016 and 2017; three students in 2018)
* JetBrains (licenses for PyCharm)
* Google Cloud Platform (with credits for the Classical Language Archive and API)


## License

The CLTK is Copyright (c) 2014-2019 Kyle P. Johnson, under the MIT License. See [LICENSE](https://github.com/cltk/cltk/blob/master/LICENSE) for details.
81 changes: 81 additions & 0 deletions data/repos/facebookresearch-DensePose-README.md
@@ -0,0 +1,81 @@
# DensePose:
**Dense Human Pose Estimation In The Wild**

_Rıza Alp Güler, Natalia Neverova, Iasonas Kokkinos_

[[`densepose.org`](https://densepose.org)] [[`arXiv`](https://arxiv.org/abs/1802.00434)] [[`BibTeX`](#CitingDensePose)]

Dense human pose estimation aims at mapping all human pixels of an RGB image to the 3D surface of the human body.
DensePose-RCNN is implemented in the [Detectron](https://github.com/facebookresearch/Detectron) framework and is powered by [Caffe2](https://github.com/caffe2/caffe2).

<div align="center">
<img src="https://drive.google.com/uc?export=view&id=1qfSOkpueo1kVZbXOuQJJhyagKjMgepsz" width="700px" />
</div>


In this repository, we provide the code to train and evaluate DensePose-RCNN. We also provide notebooks to visualize the collected DensePose-COCO dataset and show the correspondences to the SMPL model.

## Installation

Please find installation instructions for Caffe2 and DensePose in [`INSTALL.md`](INSTALL.md), a document based on the [Detectron](https://github.com/facebookresearch/Detectron) installation instructions.

## Inference-Training-Testing

After installation, please see [`GETTING_STARTED.md`](GETTING_STARTED.md) for examples of inference, training, and testing.

## Notebooks

### Visualization of DensePose-COCO annotations:

See [`notebooks/DensePose-COCO-Visualize.ipynb`](notebooks/DensePose-COCO-Visualize.ipynb) to visualize the DensePose-COCO annotations on the images:

<div align="center">
<img src="https://drive.google.com/uc?export=view&id=1uYRJkIA24KkJU2i4sMwrKa61P0xtZzHk" width="800px" />
</div>

---

### DensePose-COCO in 3D:

See [`notebooks/DensePose-COCO-on-SMPL.ipynb`](notebooks/DensePose-COCO-on-SMPL.ipynb) to localize the DensePose-COCO annotations on the 3D template ([`SMPL`](http://smpl.is.tue.mpg.de)) model:

<div align="center">
<img src="https://drive.google.com/uc?export=view&id=1m32oyMuE7AZd3EOf9k8zHpr75C8bHlYj" width="500px" />
</div>

---
### Visualize DensePose-RCNN Results:

See [`notebooks/DensePose-RCNN-Visualize-Results.ipynb`](notebooks/DensePose-RCNN-Visualize-Results.ipynb) to visualize the inferred DensePose-RCNN Results.

<div align="center">
<img src="https://drive.google.com/uc?export=view&id=1k4HtoXpbDV9MhuyhaVcxDrXnyP_NX896" width="900px" />
</div>

---
### DensePose-RCNN Texture Transfer:

See [`notebooks/DensePose-RCNN-Texture-Transfer.ipynb`](notebooks/DensePose-RCNN-Texture-Transfer.ipynb) for texture transfer using the inferred DensePose-RCNN results:

<div align="center">
<img src="https://drive.google.com/uc?export=view&id=1r-w1oDkDHYnc1vYMbpXcYBVD1-V3B4Le" width="900px" />
</div>

## License

This source code is licensed under the license found in the [`LICENSE`](LICENSE) file in the root directory of this source tree.

## <a name="CitingDensePose"></a>Citing DensePose

If you use DensePose, please use the following BibTeX entry.

```
@InProceedings{Guler2018DensePose,
  title={DensePose: Dense Human Pose Estimation In The Wild},
  author={R{\i}za Alp G{\"u}ler and Natalia Neverova and Iasonas Kokkinos},
  booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2018}
}
```


108 changes: 108 additions & 0 deletions data/repos/facebookresearch-ResNeXt-README.md
@@ -0,0 +1,108 @@
# ResNeXt: Aggregated Residual Transformations for Deep Neural Networks

By [Saining Xie](http://vcl.ucsd.edu/~sxie), [Ross Girshick](http://www.rossgirshick.info/), [Piotr Dollár](https://pdollar.github.io/), [Zhuowen Tu](http://pages.ucsd.edu/~ztu/), [Kaiming He](http://kaiminghe.com)

UC San Diego, Facebook AI Research

### Table of Contents
0. [Introduction](#introduction)
0. [Citation](#citation)
0. [Requirements and Dependencies](#requirements-and-dependencies)
0. [Training](#training)
0. [ImageNet Pretrained Models](#imagenet-pretrained-models)
0. [Third-party re-implementations](#third-party-re-implementations)

#### News
* Congrats to the ILSVRC 2017 classification challenge winner [WMW](http://image-net.org/challenges/LSVRC/2017/results).
ResNeXt is the foundation of their new SENet architecture (a **ResNeXt-152 (64 x 4d)** with the Squeeze-and-Excitation module)!
* Check out Figure 6 in the new [Memory-Efficient Implementation of DenseNets](https://arxiv.org/pdf/1707.06990.pdf) paper for a comparison between ResNeXts and DenseNets. <sub>(*DenseNet cosine is DenseNet trained with a cosine learning rate schedule.*)</sub>
<p align="center">
<img src="http://vcl.ucsd.edu/resnext/resnextvsdensenet.png" width="480">
</p>


### Introduction

This repository contains a [Torch](http://torch.ch) implementation for the [ResNeXt](https://arxiv.org/abs/1611.05431) algorithm for image classification. The code is based on [fb.resnet.torch](https://github.com/facebook/fb.resnet.torch).

[ResNeXt](https://arxiv.org/abs/1611.05431) is a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call “cardinality” (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width.
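
The repository's implementation is in Torch (Lua), but the building block is easy to illustrate with a hedged PyTorch-style sketch that uses a grouped convolution, which is mathematically equivalent to aggregating `cardinality` branches of width `baseWidth` (an illustration of the idea, not the repository's code):

```python
import torch
import torch.nn as nn

class ResNeXtBottleneck(nn.Module):
    """Sketch of a ResNeXt bottleneck: 1x1 reduce -> grouped 3x3 -> 1x1 expand, plus a shortcut."""
    def __init__(self, in_channels, out_channels, cardinality=32, base_width=4, stride=1):
        super().__init__()
        width = cardinality * base_width  # e.g. 32 * 4 = 128 for the "32x4d" template
        self.conv_reduce = nn.Conv2d(in_channels, width, kernel_size=1, bias=False)
        self.bn_reduce = nn.BatchNorm2d(width)
        # groups=cardinality aggregates `cardinality` transformations of width `base_width`
        self.conv_grouped = nn.Conv2d(width, width, kernel_size=3, stride=stride,
                                      padding=1, groups=cardinality, bias=False)
        self.bn_grouped = nn.BatchNorm2d(width)
        self.conv_expand = nn.Conv2d(width, out_channels, kernel_size=1, bias=False)
        self.bn_expand = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        self.shortcut = nn.Sequential()
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(out_channels))

    def forward(self, x):
        out = self.relu(self.bn_reduce(self.conv_reduce(x)))
        out = self.relu(self.bn_grouped(self.conv_grouped(out)))
        out = self.bn_expand(self.conv_expand(out))
        return self.relu(out + self.shortcut(x))

# Example: one block of the "32x4d" template, as in the 1x-complexity table below.
block = ResNeXtBottleneck(in_channels=256, out_channels=256, cardinality=32, base_width=4)
y = block(torch.randn(1, 256, 56, 56))
```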


![teaser](http://vcl.ucsd.edu/resnext/teaser.png)
##### Figure: Training curves on ImageNet-1K. (Left): ResNet/ResNeXt-50 with the same complexity (~4.1 billion FLOPs, ~25 million parameters); (Right): ResNet/ResNeXt-101 with the same complexity (~7.8 billion FLOPs, ~44 million parameters).
-----

### Citation
If you use ResNeXt in your research, please cite the paper:
```
@article{Xie2016,
title={Aggregated Residual Transformations for Deep Neural Networks},
author={Saining Xie and Ross Girshick and Piotr Dollár and Zhuowen Tu and Kaiming He},
journal={arXiv preprint arXiv:1611.05431},
year={2016}
}
```

### Requirements and Dependencies
See the fb.resnet.torch [installation instructions](https://github.com/facebook/fb.resnet.torch/blob/master/INSTALL.md) for a step-by-step guide.
- Install [Torch](http://torch.ch/docs/getting-started.html) on a machine with CUDA GPU
- Install [cuDNN v4 or v5](https://developer.nvidia.com/cudnn) and the Torch [cuDNN bindings](https://github.com/soumith/cudnn.torch/tree/R4)
- Download the [ImageNet](http://image-net.org/download-images) dataset and [move validation images](https://github.com/facebook/fb.resnet.torch/blob/master/INSTALL.md#download-the-imagenet-dataset) to labeled subfolders

### Training

Please follow [fb.resnet.torch](https://github.com/facebook/fb.resnet.torch) for the general usage of the code, including [how](https://github.com/facebook/fb.resnet.torch/tree/master/pretrained) to use pretrained ResNeXt models for your own task.

There are two new hyperparameters that need to be specified to determine the bottleneck template:

**-baseWidth** and **-cardinality**

### 1x Complexity Configurations Reference Table

| baseWidth | cardinality |
|---------- | ----------- |
| 64 | 1 |
| 40 | 2 |
| 24 | 4 |
| 14 | 8 |
| 4 | 32 |


To train ResNeXt-50 (32x4d) on 8 GPUs for ImageNet:
```bash
th main.lua -dataset imagenet -bottleneckType resnext_C -depth 50 -baseWidth 4 -cardinality 32 -batchSize 256 -nGPU 8 -nThreads 8 -shareGradInput true -data [imagenet-folder]
```

To reproduce CIFAR results (e.g. ResNeXt 16x64d for cifar10) on 8 GPUs:
```bash
th main.lua -dataset cifar10 -bottleneckType resnext_C -depth 29 -baseWidth 64 -cardinality 16 -weightDecay 5e-4 -batchSize 128 -nGPU 8 -nThreads 8 -shareGradInput true
```
To get comparable results using 2/4 GPUs, you should change the batch size and the corresponding learning rate:
```bash
th main.lua -dataset cifar10 -bottleneckType resnext_C -depth 29 -baseWidth 64 -cardinality 16 -weightDecay 5e-4 -batchSize 64 -nGPU 4 -LR 0.05 -nThreads 8 -shareGradInput true
th main.lua -dataset cifar10 -bottleneckType resnext_C -depth 29 -baseWidth 64 -cardinality 16 -weightDecay 5e-4 -batchSize 32 -nGPU 2 -LR 0.025 -nThreads 8 -shareGradInput true
```
Note: the CIFAR datasets will be automatically downloaded and processed the first time the code is run. Note that the CIFAR results in the arXiv paper are based on pre-activated bottleneck blocks and a batch size of 256. We found that better CIFAR test accuracy can be achieved using the original bottleneck blocks and a batch size of 128.

### ImageNet Pretrained Models
ImageNet pretrained models are licensed under CC BY-NC 4.0.

[![CC BY-NC 4.0](https://i.creativecommons.org/l/by-nc/4.0/88x31.png)](https://creativecommons.org/licenses/by-nc/4.0/)

#### Single-crop (224x224) validation error rate
| Network | GFLOPS | Top-1 Error | Download |
| ------------------- | ------ | ----------- | ------------|
| ResNet-50 (1x64d) | ~4.1 | 23.9 | [Original ResNet-50](https://github.com/facebook/fb.resnet.torch/tree/master/pretrained) |
| ResNeXt-50 (32x4d) | ~4.1 | 22.2 | [Download (191MB)](https://dl.fbaipublicfiles.com/resnext/imagenet_models/resnext_50_32x4d.t7) |
| ResNet-101 (1x64d) | ~7.8 | 22.0 | [Original ResNet-101](https://github.com/facebook/fb.resnet.torch/tree/master/pretrained) |
| ResNeXt-101 (32x4d) | ~7.8 | 21.2 | [Download (338MB)](https://dl.fbaipublicfiles.com/resnext/imagenet_models/resnext_101_32x4d.t7) |
| ResNeXt-101 (64x4d) | ~15.6 | 20.4 | [Download (638MB)](https://dl.fbaipublicfiles.com/resnext/imagenet_models/resnext_101_64x4d.t7) |

### Third-party re-implementations

Besides our Torch implementation, we also recommend the following third-party re-implementations and extensions:

1. Training code in PyTorch [code](https://github.com/prlz77/ResNeXt.pytorch)
1. Converting ImageNet pretrained model to PyTorch model and source. [code](https://github.com/clcarwin/convert_torch_to_pytorch)
1. Training code in MXNet and pretrained ImageNet models [code](https://github.com/dmlc/mxnet/tree/master/example/image-classification#imagenet-1k)
1. Caffe prototxt, pretrained ImageNet models (with ResNeXt-152), and curves: [code](https://github.com/cypw/ResNeXt-1), [code](https://github.com/terrychenism/ResNeXt)
101 changes: 101 additions & 0 deletions data/repos/facebookresearch-pyrobot-README.md
@@ -0,0 +1,101 @@
<a href="https://www.pyrobot.org/"><img class="doc_vid" src="docs/website/website/static/img/pyrobot.svg"></a>

[PyRobot](https://www.pyrobot.org/) is a lightweight, high-level interface that provides hardware-independent APIs for robotic manipulation and navigation. This repository also contains the low-level stack for [LoCoBot](http://locobot.org), a low-cost mobile manipulator hardware platform.

- [What can you do with PyRobot?](#what-can-you-do-with-pyrobot)
- [Installation](#installation)
- [Getting Started](#getting-started)
- [The Team](#the-team)
- [Citation](#citation)
- [License](#license)
- [Future features](#Future-features)

## What can you do with PyRobot?

<p align="center">
<img src="https://thumbs.gfycat.com/FickleSpeedyChimneyswift-size_restricted.gif", height="180">
<img src="https://thumbs.gfycat.com/FinishedWeirdCockerspaniel-size_restricted.gif", height="180">
<img src="https://thumbs.gfycat.com/WeightyLeadingGrub-size_restricted.gif", height="180">
</p>

## Installation

### Installing both PyRobot and LoCoBot dependencies

* Install **Ubuntu 16.04**

* Download the installation script
```bash
sudo apt update
sudo apt-get install curl
curl 'https://raw.githubusercontent.com/facebookresearch/pyrobot/master/robots/LoCoBot/install/locobot_install_all.sh' > locobot_install_all.sh
```

* Run the script to install everything (ROS, RealSense driver, etc.). **Please connect the NUC machine to a RealSense camera before running the following commands**.
```bash
chmod +x locobot_install_all.sh
./locobot_install_all.sh
```

### Installing just PyRobot

* Install **Ubuntu 16.04**

* Install [ROS kinetic](http://wiki.ros.org/kinetic/Installation/Ubuntu)

* Install KDL

```bash
sudo apt-get -y install ros-kinetic-orocos-kdl ros-kinetic-kdl-parser-py ros-kinetic-python-orocos-kdl ros-kinetic-trac-ik
```

* Install Python virtual environment

```bash
sudo apt-get -y install python-virtualenv
virtualenv_name="pyenv_pyrobot"
VIRTUALENV_FOLDER=~/${virtualenv_name}
virtualenv --system-site-packages -p python2.7 $VIRTUALENV_FOLDER
```

* Install PyRobot

```bash
cd ~
mkdir -p low_cost_ws/src
cd ~/low_cost_ws/src
source ~/${virtualenv_name}/bin/activate
git clone --recurse-submodules https://github.com/facebookresearch/pyrobot.git
cd pyrobot/
pip install .
```

**Warning**: Because RealSense packages are updated frequently, compatibility issues can occur if you accidentally update RealSense-related packages through `Software Updater` in Ubuntu. We therefore recommend that you not update any RealSense-related libraries, and that you check the list of updates carefully whenever Ubuntu prompts you to install software updates.

## Getting Started
Please refer to [pyrobot.org](https://pyrobot.org/) and [locobot.org](http://locobot.org).
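
As a minimal, hedged sketch of driving a LoCoBot through PyRobot (method names follow the examples on pyrobot.org and may differ between releases; it assumes the LoCoBot ROS stack is already running):

```python
# Sketch only: assumes a running LoCoBot stack; API names follow pyrobot.org examples.
from pyrobot import Robot

robot = Robot('locobot')

# Read an RGB frame from the robot's camera.
rgb = robot.camera.get_rgb()

# Move the base 0.5 m forward relative to its current pose (x, y, yaw).
robot.base.go_to_relative([0.5, 0.0, 0.0])

# Send the arm to a joint-space target (radians, one value per joint).
robot.arm.set_joint_positions([0.4, 0.0, 0.0, 0.0, 0.0], plan=False)
```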

## The Team

[Adithya Murali](http://adithyamurali.com/), [Tao Chen](https://taochenshh.github.io), [Dhiraj Gandhi](http://www.cs.cmu.edu/~dgandhi/), Kalyan Vasudev, [Lerrel Pinto](http://www.cs.cmu.edu/~lerrelp/), [Saurabh Gupta](http://saurabhg.web.illinois.edu) and [Abhinav Gupta](http://www.cs.cmu.edu/~abhinavg/). We would also like to thank everyone who has helped PyRobot in any way.

## Future features

We are planning several features, namely:
* Interfacing with other simulators like [AI Habitat](https://aihabitat.org)
* Gravity compensation
* PyRobot interface for [UR5](https://www.universal-robots.com)

## Citation
```
@article{pyrobot2019,
title={PyRobot: An Open-source Robotics Framework for Research and Benchmarking},
author={Adithyavairavan Murali and Tao Chen and Kalyan Vasudev Alwala and Dhiraj Gandhi and Lerrel Pinto and Saurabh Gupta and Abhinav Gupta},
journal={arXiv preprint arXiv:1906.08236},
year={2019}
}
```
## License
PyRobot is released under the MIT license, as found in the LICENSE file.