diff --git a/docs/00-index.md b/docs/00-index.md
new file mode 100644
index 0000000..aa836ff
--- /dev/null
+++ b/docs/00-index.md
@@ -0,0 +1,80 @@
+---
+slug: /
+---
+
+# Introduction
+
+Welcome to the OpenDataCam Documentation.
+Below is a quickstart guide to get you started with OpenDataCam.
+Detailed information on installation options and configuration can be found on the respective subpages.
+
+## Quickstart
+
+The quickest way to get started with OpenDataCam is to use the existing Docker images.
+
+### Prerequisites
+
+- You will need Docker and Docker-Compose installed.
+- If you want to run OpenDataCam on an NVIDIA GPU you will additionally need
+  - [Nvidia CUDA 11 and cuDNN 8](https://developer.nvidia.com/cuda-downloads)
+  - [Nvidia Container toolkit installed](https://github.com/NVIDIA/nvidia-docker)
+  - You also need to install `nvidia-container-runtime`
+- To run OpenDataCam on an NVIDIA Jetson device you will need [Jetpack 5.x](https://developer.nvidia.com/embedded/jetpack-sdk-512).
+
+### Installation
+
+```bash
+# Download install script
+wget -N https://raw.githubusercontent.com/opendatacam/opendatacam/v3.0.2/docker/install-opendatacam.sh
+
+# Give exec permission
+chmod 777 install-opendatacam.sh
+
+# Note: You will be asked for the sudo password when installing OpenDataCam
+
+# Install command for Jetson Nano
+./install-opendatacam.sh --platform nano
+
+# Install command for Jetson Xavier / Xavier NX
+./install-opendatacam.sh --platform xavier
+
+# Install command for a Laptop, Desktop or Server with NVIDIA GPU
+./install-opendatacam.sh --platform desktop
+```
+
+This command will download and start a docker container on the machine.
+After it finishes, the docker container starts a webserver on port 8080 and runs a demo video.
+
+:::note
+
+The docker container is started in auto-restart mode, so if you reboot your machine it will automatically start OpenDataCam on startup.
+To stop it, run `docker-compose down` in the same folder as the install script.
+
+:::
+
+### Next Steps
+
+Now you can…
+
+- Drag'n'drop a video file into the browser window to have OpenDataCam analyze this file
+- Change the [video input](/docs/configuration/#video-input) to run from a USB cam or other cameras
+- Use custom [neural network weights](/docs/configuration/#use-custom-neural-network-weights)
+
+and much more. See [Configuration](/docs/configuration) for a full list of configuration options.
+
+## How accurate is OpenDataCam?
+
+Accuracy depends on which YOLO weights your hardware is capable of running.
+
+We are working on [adding a benchmark](https://github.com/opendatacam/opendatacam/issues/87) to rank OpenDataCam on the [MOT Challenge (Multiple Object Tracking Benchmark)](https://motchallenge.net/).
+
+## How fast is OpenDataCam?
+
+FPS depends on:
+
+- which hardware you are running OpenDataCam on
+- which YOLO weights you are using
+
+We chose the default settings so that OpenDataCam runs at least at 10 FPS on any Jetson.
+
+Learn more in the [Customize OpenDataCam documentation](/docs/configuration/#neural-network-weights)
\ No newline at end of file
diff --git a/docs/02-installation.md b/docs/02-installation.md
new file mode 100644
index 0000000..90027f3
--- /dev/null
+++ b/docs/02-installation.md
@@ -0,0 +1,62 @@
+# Installation
+
+- You will need Docker and Docker-Compose installed.
+- If you want to run OpenDataCam on an NVIDIA GPU you will additionally need
+  - [Nvidia CUDA 11 and cuDNN 8](https://developer.nvidia.com/cuda-downloads)
+  - [Nvidia Container toolkit installed](https://github.com/NVIDIA/nvidia-docker)
+  - You also need to install `nvidia-container-runtime`
+- To run OpenDataCam on an NVIDIA Jetson device you will need [Jetpack 5.x](https://developer.nvidia.com/embedded/jetpack-sdk-512).
+
+## As Docker Container (Recommended)
+
+This is the recommended way to install OpenDataCam.
+Follow the [Quickstart Guide](/docs/#quickstart).
+
+## Kubernetes
+
+If you prefer to deploy OpenDataCam on Kubernetes rather than with Docker Compose, use the `--orchestrator` flag to change the engine.
+
+Apart from that, a Kubernetes distribution custom-made for the embedded world is [K3s](https://k3s.io/), which can be installed in 30 seconds by running:
+
+```bash
+curl -sfL https://get.k3s.io | sh -
+```
+
+Then, to automatically download and deploy the services:
+
+```bash
+# Download install script
+wget -N https://raw.githubusercontent.com/opendatacam/opendatacam/master/docker/install-opendatacam.sh
+
+# Give exec permission
+chmod 777 install-opendatacam.sh
+
+# Install command for Jetson Nano
+./install-opendatacam.sh --platform nano --orchestrator k8s
+
+# Install command for Jetson Xavier / Xavier NX
+./install-opendatacam.sh --platform xavier --orchestrator k8s
+
+# Install command for a Desktop machine
+./install-opendatacam.sh --platform desktop --orchestrator k8s
+```
+
+:::note
+NVIDIA offers a [Kubernetes device plugin](https://github.com/NVIDIA/k8s-device-plugin) for detecting GPUs on nodes in case you are managing a heterogeneous cluster.
+Support for Jetson boards is being worked on [here](https://gitlab.com/nvidia/kubernetes/device-plugin/-/merge_requests/20).
+:::
+
+## Balena
+
+[![](https://www.balena.io/deploy.png)](https://dashboard.balena-cloud.com/deploy?repoUrl=https://github.com/balenalabs-incubator/opendatacam)
+
+If you have a fleet of one or more devices, you can use [balena](https://www.balena.io/) to streamline deployment and management of OpenDataCam.
+You can sign up for a free account [here](https://dashboard.balena-cloud.com/signup) and add up to ten devices at no charge.
+Use the button above to build OpenDataCam for a Jetson Nano, TX2, or Xavier.
+You can then download an image containing the OS, burn it to an SD card, and use balenaCloud to push OpenDataCam to your devices.
+
+You can learn more about this deployment option along with a step-by-step guide in this [recent blog post](https://www.balena.io/blog/using-opendatacam-and-balena-to-quantify-the-world-with-ai/), or [view a screencast](https://www.youtube.com/watch?v=YfRvUeSLi0M&t=44m45s) of the deployment in action.
+
+## Without Docker
+
+See [How to install OpenDataCam without docker](/docs/development/install-without-docker/)
\ No newline at end of file
diff --git a/docs/03-configuration.md b/docs/03-configuration.md
new file mode 100644
index 0000000..a994418
--- /dev/null
+++ b/docs/03-configuration.md
@@ -0,0 +1,670 @@
+# Configuration
+
+OpenDataCam offers several customization options:
+
+- **Video input:** run from a file, change webcam resolution, change camera type (Raspberry Pi cam, USB cam...)
+- **Neural network:** change YOLO weights files depending on your hardware capacity and desired FPS (tinyYOLOv4, full yolov4...)
+- **Change display classes:** we default to mobility classes (car, bus, person...), but you can change this.
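+
+All of these options live in the `config.json` file described below. For orientation, here is a trimmed sketch of such a file; the values are illustrative examples taken from this page, not the shipped defaults:
+
+```json
+{
+  "VIDEO_INPUT": "file",
+  "NEURAL_NETWORK": "yolov4-tiny",
+  "VALID_CLASSES": ["*"],
+  "DISPLAY_CLASSES": [
+    { "class": "car", "hexcode": "1F697" }
+  ]
+}
+```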
+
+## General
+
+All settings are in the [`config.json`](https://github.com/opendatacam/opendatacam/blob/master/config.json) file that you will find in the same directory you run the install script from.
+Any time the `config.json` file is changed, OpenDataCam needs to be properly restarted using `docker-compose down && docker-compose up -d` (or the corresponding `npm` commands).
+
+## Run OpenDataCam on a video file
+
+By default, OpenDataCam will run on a demo video file. If you want to change it, just drag & drop the new file onto the UI.
+
+[Learn more about the other video inputs available (IP camera, Raspberry Pi cam...) in the Advanced settings section](#video-input)
+
+### Specificities of running on a file
+
+- OpenDataCam will restart the video file when it reaches the end
+- When you click on record, OpenDataCam will reload the file to start the recording at the beginning
+- **LIMITATION: it will only record from frame nº25**
+
+## Neural network weights
+
+You can change the YOLO weights files depending on what objects you want to track and which hardware you are running OpenDataCam on.
+
+Lighter weights files result in speed improvements, but a loss in accuracy. For example, `yolov4` runs at ~1-2 FPS on Jetson Nano, ~5-6 FPS on Jetson TX2, and ~22 FPS on Jetson Xavier.
+
+In order to have good enough tracking accuracy for cars and mobility objects, our experiments showed that the sweet spot is being able to run YOLO at least at 8-9 FPS.
+
+For a standard install of OpenDataCam, these are the default weights we pick depending on your hardware:
+
+- Jetson Nano: `yolov4-tiny`
+- Jetson Xavier: `yolov4`
+- Desktop install: `yolov4`
+
+If you want to use other weights, please see [use custom weights](#use-custom-neural-network-weights).
+
+## Track only specific classes
+
+By default, OpenDataCam will track all the classes that the neural network is trained to track. In our case, YOLO is trained with the VOC dataset; here is the [complete list of classes](https://github.com/pjreddie/darknet/blob/master/data/voc.names).
+
+You can restrict OpenDataCam to some specific classes with the `VALID_CLASSES` option in the [config.json file](https://github.com/opendatacam/opendatacam/blob/master/config.json).
+
+_Find out which classes YOLO is tracking depending on the weights you are running. For example, here are the [COCO dataset classes yolov4 is trained on](https://github.com/AlexeyAB/darknet/blob/master/data/coco.names)._
+
+Here is a way to track only buses and cars:
+
+```json
+{
+  "VALID_CLASSES": ["bus","car"]
+}
+```
+
+In order to track all the classes (default value), you need to set it to:
+
+```json
+{
+  "VALID_CLASSES": ["*"]
+}
+```
+
+*Extra note: the tracking algorithm might work better when allowing all the classes. In our tests we saw that for some classes like bike/motorbike, YOLO had a hard time distinguishing them well and was switching between classes across frames for the same object. By keeping all the detection classes we saw that we can avoid losing some objects; this is [discussed here](https://github.com/opendatacam/opendatacam/issues/51#issuecomment-418019606).*
+
+## Display custom classes
+
+By default we display the mobility classes:
+
+![Display classes](https://user-images.githubusercontent.com/533590/56987855-f0101c00-6b64-11e9-8bf4-afd83a53f991.png)
+
+If you want to customize them, you should modify the `DISPLAY_CLASSES` config.
+
+```json
+"DISPLAY_CLASSES": [
+  { "class": "bicycle", "hexcode": "1F6B2"},
+  { "class": "person", "hexcode": "1F6B6"},
+  { "class": "truck", "hexcode": "1F69B"},
+  { "class": "motorbike", "hexcode": "1F6F5"},
+  { "class": "car", "hexcode": "1F697"},
+  { "class": "bus", "hexcode": "1F683"}
+]
+```
+
+You can associate any icon that is in the `public/static/icons/openmojis` folder. (They are from https://openmoji.org/; you can search the hexcode / unicode icon id directly there.)
+
+For example:
+
+```json
+"DISPLAY_CLASSES": [
+  { "class": "dog", "icon": "1F415"},
+  { "class": "cat", "icon": "1F431"}
+]
+```
+
+![Display classes custom](https://user-images.githubusercontent.com/533590/56992341-3028cc00-6b70-11e9-8fd8-d7e405fe4d54.png)
+
+*LIMITATION: You can display a maximum of 6 classes. If you add more, only the first 6 classes will be displayed.*
+
+## Customize pathfinder colors
+
+You can change the `PATHFINDER_COLORS` variable in the `config.json`. For each new tracked object, the app randomly picks a color from this list. The colors need to be in HEX format.
+
+```json
+"PATHFINDER_COLORS": [
+  "#1f77b4",
+  "#ff7f0e",
+  "#2ca02c",
+  "#d62728",
+  "#9467bd",
+  "#8c564b",
+  "#e377c2",
+  "#7f7f7f",
+  "#bcbd22",
+  "#17becf"
+]
+```
+
+For example, with only 2 colors:
+
+```json
+"PATHFINDER_COLORS": [
+  "#1f77b4",
+  "#e377c2"
+]
+```
+
+![Demo 2 colors](https://user-images.githubusercontent.com/533590/58332468-ab993880-7e11-11e9-831a-5f958442e015.jpg)
+
+## Customize Counter colors
+
+You can change the `COUNTER_COLORS` variable in the `config.json`. As you draw counter lines, the app will pick the colors in the order you specified them.
+
+You need to add "key":"value" pairs for the counter lines; the key should be the label of the color (without spaces, numbers or special characters), and the value the color in HEX.
+
+For example, you can modify the default from:
+
+```json
+"COUNTER_COLORS": {
+  "yellow": "#FFE700",
+  "turquoise": "#A3FFF4",
+  "green": "#a0f17f",
+  "purple": "#d070f0",
+  "red": "#AB4435"
+}
+```
+
+To
+
+```json
+"COUNTER_COLORS": {
+  "white": "#fff"
+}
+```
+
+And after restarting OpenDataCam you should get a white line when defining a counter area:
+
+![Screenshot 2019-05-24 at 21 03 44](https://user-images.githubusercontent.com/533590/58361790-71f31c80-7e67-11e9-8b35-ecabb4a1e78a.png)
+
+_NOTE: If you draw more lines than `COUNTER_COLORS` defines, the additional lines will be black._
+
+## Advanced settings
+
+### Video input
+
+OpenDataCam can take several kinds of video stream as input: a pre-recorded file, a USB cam, a Raspberry Pi cam, a remote IP cam, etc.
+
+This is configurable via the `VIDEO_INPUT` and `VIDEO_INPUTS_PARAMS` settings.
+
+```json
+"VIDEO_INPUTS_PARAMS": {
+  "file": "opendatacam_videos/demo.mp4",
+  "usbcam": "v4l2src device=/dev/video0 ! video/x-raw, framerate=30/1, width=640, height=360 ! videoconvert ! appsink",
+  "raspberrycam": "nvarguscamerasrc ! video/x-raw(memory:NVMM),width=1280, height=720, framerate=30/1, format=NV12 ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=360 ! videoconvert ! video/x-raw, format=BGR ! appsink",
+  "remote_cam": "YOUR IP CAM STREAM (can be .m3u8, MJPEG ...), anything supported by opencv",
+  "remote_hls_gstreamer": "souphttpsrc location=http://YOUR_HLSSTREAM_URL_HERE.m3u8 ! hlsdemux ! decodebin ! videoconvert ! videoscale ! appsink"
+}
+```
+
+With the default installation, OpenDataCam will have `VIDEO_INPUT` set to `file`, running the demo video. See below how to change this.
+
+You can add your own Gstreamer pipeline for your needs by adding an entry to `"VIDEO_INPUTS_PARAMS"`.
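+
+For instance, a custom entry could look like the sketch below (the `my_pipeline` key and the `videotestsrc` test source are illustrative, not part of the shipped config):
+
+```json
+"VIDEO_INPUTS_PARAMS": {
+  "my_pipeline": "videotestsrc ! video/x-raw, framerate=30/1, width=640, height=360 ! videoconvert ! appsink"
+}
+```
+
+You would then select it with `"VIDEO_INPUT": "my_pipeline"` and restart OpenDataCam.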
+
+_Technical note:_
+
+Under the hood, this config input becomes [the input of the darknet](https://github.com/opendatacam/opendatacam/blob/master/server/processes/YOLO.js#L32) process, which then gets [fed into OpenCV's VideoCapture()](https://github.com/AlexeyAB/darknet/blob/master/src/image_opencv.cpp#L577).
+
+As we compile OpenCV with Gstreamer support when installing OpenDataCam, we can use any [Gstreamer pipeline](http://www.einarsundgren.se/gstreamer-basic-real-time-streaming-tutorial/) as input, plus any other VideoCapture-supported format like video files or IP cam streams.
+
+#### Run from a USB cam
+
+1. Verify that you have a USB cam detected
+
+```bash
+ls /dev/video*
+# Output should be: /dev/video0
+```
+
+2. Change `VIDEO_INPUT` to `"usbcam"`
+
+```json
+"VIDEO_INPUT": "usbcam"
+```
+
+3. (Optional) If your device is on `video1` or `video2` instead of the default `video0`, change `VIDEO_INPUTS_PARAMS > usbcam` to your video device, for example for /dev/video1:
+
+```json
+"VIDEO_INPUTS_PARAMS": {
+  "usbcam": "v4l2src device=/dev/video1 ! video/x-raw, framerate=30/1, width=640, height=360 ! videoconvert ! appsink"
+}
+```
+
+#### Run from a file
+
+You have two options to run from a file:
+
+- EASY SOLUTION: Drag and drop the file onto the UI; OpenDataCam will restart on it
+
+- Read a file from the filesystem directly by setting the path in the `config.json`
+
+For example, say you have a `file.mp4` you want to run OpenDataCam on:
+
+**For a docker (standard install) of OpenDataCam:**
+
+You need to mount the file in the docker container. Copy the file into the folder containing the `docker-compose.yml` file:
+
+- create a folder called `opendatacam_videos` and put the file in it
+
+- mount the `opendatacam_videos` folder using `volumes` in the `docker-compose.yml` file
+
+```yaml
+volumes:
+  - './config.json:/var/local/opendatacam/config.json'
+  - './opendatacam_videos:/var/local/darknet/opendatacam_videos'
+```
+
+Once you have the video file inside the `opendatacam_videos` folder, you can modify the `config.json` the following way:
+
+1. Change `VIDEO_INPUT` to `"file"`
+
+```json
+"VIDEO_INPUT": "file"
+```
+
+2. Change `VIDEO_INPUTS_PARAMS > file` to the path of your file
+
+```json
+"VIDEO_INPUTS_PARAMS": {
+  "file": "opendatacam_videos/file.mp4"
+}
+```
+
+Once `config.json` is saved, you only need to restart the docker container using
+
+```
+sudo docker-compose restart
+```
+
+**For a non-docker install of OpenDataCam:**
+
+Same steps as above, but instead of mounting the `opendatacam_videos` folder you should just create it in the `/darknet` folder.
+
+#### Run from IP cam
+
+1. Change `VIDEO_INPUT` to `"remote_cam"`
+
+```json
+"VIDEO_INPUT": "remote_cam"
+```
+
+2. Change `VIDEO_INPUTS_PARAMS > remote_cam` to your IP cam stream, for example
+
+```json
+"VIDEO_INPUTS_PARAMS": {
+  "remote_cam": "http://162.143.172.100:8081/-wvhttp-01-/GetOneShot?image_size=640x480&frame_count=1000000000"
+}
+```
+
+NB: this IP cam won't work; it is just an example. Only use IP cams you own yourself.
+
+#### Run from Raspberry Pi cam (Jetson Nano)
+
+**For a docker (standard install) of OpenDataCam:**
+
+Not supported yet, follow https://github.com/opendatacam/opendatacam/issues/178 for updates
+
+**For a non-docker install of OpenDataCam:**
+
+1. Change `VIDEO_INPUT` to `"raspberrycam"`
+
+```json
+"VIDEO_INPUT": "raspberrycam"
+```
+
+2. Restart the node.js app
+
+#### Change webcam resolution
+
+As explained in the technical note above, you can modify the Gstreamer pipeline as you like. By default we use a 640x360 feed from the webcam.
+
+If you want to change this, you need to:
+
+- First know which resolutions your webcam supports; run `v4l2-ctl --list-formats-ext`.
+
+- Let's say we will use `1280x720`
+
+- Change the Gstreamer pipeline accordingly: `"v4l2src device=/dev/video0 ! video/x-raw, framerate=30/1, width=1280, height=720 ! videoconvert ! appsink"`
+
+- Restart OpenDataCam
+
+_NOTE: Increasing the webcam resolution won't increase OpenDataCam's accuracy (the input of the neural network is 400x400 max), and it might cause the UI to lag as the MJPEG stream becomes very slow at higher resolutions._
+
+### Use Custom Neural Network weights
+
+**For a docker (standard install) of OpenDataCam:**
+
+We ship these YOLO weights inside the docker container:
+
+- Jetson Nano: `yolov4-tiny`
+- Jetson Xavier: `yolov4`
+- Desktop install: `yolov4`
+
+In order to switch to other weights you need to:
+
+- mount the necessary files into the darknet folder of the docker container so OpenDataCam has access to the new weights
+
+- change the `config.json` accordingly
+
+For example, if you want to use `yolov3-tiny-prn`, you need to:
+
+- download `yolov3-tiny-prn.weights` into the same directory as the `docker-compose.yml` file
+
+- (optional) download the `.cfg`, `.data` and `.names` files if they are custom (not default darknet)
+
+- mount the weights file using `volumes` in the `docker-compose.yml` file
+
+```yaml
+volumes:
+  - './config.json:/var/local/opendatacam/config.json'
+  - './yolov3-tiny-prn.weights:/var/local/darknet/yolov3-tiny-prn.weights'
+```
+
+- (optional) if you have custom `.cfg`, `.data` and `.names` files you should mount them too
+
+```yaml
+volumes:
+  - './config.json:/var/local/opendatacam/config.json'
+  - './yolov3-tiny-prn.weights:/var/local/darknet/yolov3-tiny-prn.weights'
+  - './coco.data:/var/local/darknet/cfg/coco.data'
+  - './yolov3-tiny-prn.cfg:/var/local/darknet/cfg/yolov3-tiny-prn.cfg'
+  - './coco.names:/var/local/darknet/cfg/coco.names'
+```
+
+- change the `config.json`: add an entry to the `NEURAL_NETWORK_PARAMS` setting
+
+```json
+"yolov3-tiny-prn": {
+  "data": "cfg/coco.data",
+  "cfg": "cfg/yolov3-tiny-prn.cfg",
+  "weights": "yolov3-tiny-prn.weights"
+}
+```
+
+- change the `NEURAL_NETWORK` param to the key you defined in `NEURAL_NETWORK_PARAMS`
+
+```json
+"NEURAL_NETWORK": "yolov3-tiny-prn"
+```
+
+- If you've added new volumes to your `docker-compose.yml`, you need to update the container using:
+
+```
+sudo docker-compose up -d
+```
+
+- Otherwise, if you just updated files from existing volumes, you need to restart the container using:
+
+```
+sudo docker-compose restart
+```
+
+**For a non-docker install of OpenDataCam:**
+
+It is the same as above, but instead of mounting the files in the docker container you just need to copy them directly into the `/darknet` folder:
+
+- copy the `.weights`, `.cfg` and `.data` files into the darknet folder
+
+- same steps as above for the `config.json` changes
+
+- restart the node.js app (no need to recompile)
+
+### Tracker settings
+
+You can tweak some settings of the tracker to better optimize OpenDataCam for your needs.
+
+```json
+"TRACKER_SETTINGS": {
+  "objectMaxAreaInPercentageOfFrame": 80,
+  "confidence_threshold": 0.2,
+  "iouLimit": 0.05,
+  "unMatchedFrameTolerance": 5,
+  "fastDelete": true,
+  "matchingAlgorithm": "kdTree"
+}
+```
+
+- `objectMaxAreaInPercentageOfFrame`: Filters out objects whose area (width * height) is higher than this percentage of the total frame area.
+
+- `confidence_threshold`: Filters out objects that have less than this confidence value (value given by the neural network).
+
+- `iouLimit`: When tracking from frame to frame, excludes an object from being matched as the same object as in the previous frame (same id) if their IOU (Intersection Over Union) is lower than this. More details on how the tracker works here: https://github.com/opendatacam/node-moving-things-tracker/blob/master/README.md#how-does-it-work
+
+- `unMatchedFrameTolerance`: This is the number of frames we keep predicting the object trajectory if it is not matched by the next frame's list of detections. Setting this higher will cause fewer ID switches, but more potential false positives with an ID jumping to another object.
+
+- `fastDelete`: If `false`, detections will always be kept for `unMatchedFrameTolerance` frames in the buffer. Otherwise, detections will be dropped from the tracker buffer if they cannot be matched in the frame after they appeared. Setting this to `false` can help with tracking difficult objects, but may have side effects like more frequent object ID switches or lower FPS, as more objects will be kept in the buffer.
+
+- `matchingAlgorithm`: The algorithm used to match tracks with new detections. Can be either `kdTree` or `munkres`. See https://github.com/opendatacam/node-moving-things-tracker/pull/21 for differences in performance of the matching algorithms.
+
+### Counter settings
+
+```json
+"COUNTER_SETTINGS": {
+  "countingAreaMinFramesInsideToBeCounted": 1,
+  "countingAreaVerifyIfObjectEntersCrossingOneEdge": true,
+  "minAngleWithCountingLineThreshold": 5,
+  "computeTrajectoryBasedOnNbOfPastFrame": 5
+}
```

+
+- `countingAreaMinFramesInsideToBeCounted`: This is the minimum number of frames the object needs to remain inside the area to be counted.
+
+- `countingAreaVerifyIfObjectEntersCrossingOneEdge`: (default `true`)
+
+  - if `true`: in order to count the tracked item, the algorithm checks if the object trajectory crosses one of the edges of the polygon, otherwise it won't count it. This avoids counting ID reassignments that happen inside the polygon.
+
+  - if `false`: the counting algorithm won't check this. It will simply count the item if it remains more than `countingAreaMinFramesInsideToBeCounted` frames inside the zone, but if its ID gets reassigned inside the zone it could be counted twice.
+
+- `minAngleWithCountingLineThreshold`: Count items crossing the counting line only if the angle between their trajectory and the counting line is greater than this angle (in degrees). 90 degrees would count nothing (or only perfectly perpendicular objects) whereas 0 will count everything.
+
+![Counting line angle illustration](https://user-images.githubusercontent.com/533590/84757717-c3b39b00-afc4-11ea-8aef-e4900d7f6352.jpg)
+
+- `computeTrajectoryBasedOnNbOfPastFrame`: This tells the counting algorithm to compute the trajectory used to determine if an object crosses the line based on this number of past frames. As you can see below, in reality the trajectory of the center of the bbox given by YOLO moves a little bit from frame to frame, so this can smooth it out and make determining whether an object crosses the line, and the angle of crossing, more reliable.
+
+![CounterBuffer](https://user-images.githubusercontent.com/533590/84810794-1ebcb080-b00c-11ea-9cae-065fc066e10f.jpg)
+
+NB: if the object has changed ID in the past frames, it will take the last past frame known with the same ID.
+
+### Database
+
+The database backend can be selected by setting the `DATABASE` key.
+See below for a list of supported database backends.
+
+#### MongoDB
+
+The following configuration options exist for MongoDB:
+
+- `url`: By default OpenDataCam will use the MongoDB instance running locally under the same docker compose file. If you want to persist the data on a remote MongoDB instance, you can change the setting `url`. See the example below.
+- `persistTracker`: If `true`, the raw tracker output will be stored. This allows for in-depth analysis of trajectories.
+
+```json
+"DATABASE_PARAMS": {
+  "mongo": {
+    "url": "mongodb://my-mongo-server.domain.tld:27017",
+    "persistTracker": false
+  }
+}
+```
+
+By default the MongoDB data will be persisted in the `/data/db` directory of your host machine.
+
+### Ports
+
+You can modify the default ports used by OpenDataCam.
+
+```json
+"PORTS": {
+  "app": 8080,
+  "darknet_json_stream": 8070,
+  "darknet_mjpeg_stream": 8090
+}
+```
+
+### Tracker accuracy display
+
+The tracker accuracy layer shows a heatmap like this one:
+
+![Screenshot 2019-06-12 at 18 59 54](https://user-images.githubusercontent.com/533590/60195072-c6106880-983a-11e9-8edd-178a38d3e2a2.JPG)
+
+This heatmap highlights the areas where the tracker accuracy **isn't really good**, to help you:
+
+- Set counter lines where things are well tracked
+- Decide if you should change the camera viewpoint
+
+_Under the hood, it displays a tracker metric called "zombies", which represents the predicted bounding boxes for moments when the tracker isn't able to assign a bounding box from the YOLO detections._
+
+You can tweak all the settings of this display with the `TRACKER_ACCURACY_DISPLAY` setting.
+
+| Setting                | Description |
+| ---------------------- | ----------- |
+| nbFrameBuffer          | Number of previous frames displayed on the heatmap |
+| radius                 | Radius of the points displayed on the heatmap (in % of the width of the canvas) |
+| blur                   | Blur of the points displayed on the heatmap (in % of the width of the canvas) |
+| step                   | For each point displayed, how much the point should contribute to the increase of the heatmap value (range 0-1); increasing this will cause the heatmap to reach the higher values of the gradient faster |
+| gradient               | Color gradient; insert as many values as you like between 0-1 (hex values supported, e.g. "#fff" or "white") |
+| canvasResolutionFactor | In order to improve performance, the tracker accuracy canvas resolution is downscaled by a factor of 10 by default (set a value between 0-1) |
+
+```json
+"TRACKER_ACCURACY_DISPLAY": {
+  "nbFrameBuffer": 300,
+  "settings": {
+    "radius": 3.1,
+    "blur": 6.2,
+    "step": 0.1,
+    "gradient": {
+      "0.4":"orange",
+      "1":"red"
+    },
+    "canvasResolutionFactor": 0.1
+  }
+}
+```
+
+For example, if you change the gradient with:
+
+```json
+"gradient": {
+  "0.4":"yellow",
+  "0.6":"#fff",
+  "0.7":"red",
+  "0.8":"yellow",
+  "1":"red"
+}
+```
+
+![Other gradient](https://user-images.githubusercontent.com/533590/59389118-ec66dc00-8d43-11e9-8310-309da6ab42e1.png)
+
+### Use Environment Variables
+
+Some of the entries in `config.json` can be overwritten using environment variables. Currently this is the `PORTS` object and the setting for the `MONGODB_URL`. See the file [.env.example](../.env.example) for an example of how to set them. Make sure to use the exact same names, or OpenDataCam will fall back to `config.json` and, if that is not present, to the general defaults.
+
+#### Without Docker
+
+If you are running OpenDataCam without docker you can set these by:
+
+- adding a file called `.env` to the root of the project; these will be picked up by the [dotenv](https://www.npmjs.com/package/dotenv) package.
+- adding these variables to your `.bashrc` or `.zshrc`, depending on what shell you are using, or any other configuration file that gets loaded into your shell sessions.
+- adding them to the command you use to start OpenDataCam, for example in bash `MONGODB_URL=mongodb://mongo:27017 PORT_APP=8080 PORT_DARKNET_MJPEG_STREAM=8090 PORT_DARKNET_JSON_STREAM=8070 node server.js`. If you are on Windows, we suggest using the [`cross-env` package](https://www.npmjs.com/package/cross-env) to set these variables.
+
+#### With docker-compose
+
+If you are running OpenDataCam with `docker-compose.yml` you can set them in the [environment section](https://docs.docker.com/compose/environment-variables/) of the opendatacam service, as shown below.
+
+```yml
+service:
+  opendatacam:
+    environment:
+      - PORT_APP=8080
+```
+
+You can also declare these environment variables [in a `.env` file](https://docs.docker.com/compose/env-file/) in the folder where the `docker-compose` command is invoked. Then these will be available within the `docker-compose.yml` file and you can pass them through to the container as shown below.
+
+The `.env` file:
+
+```env
+PORT_APP=8080
+```
+
+The `docker-compose.yml` file:
+
+```yml
+service:
+  opendatacam:
+    environment:
+      - PORT_APP
+```
+
+There is also the possibility to have the `.env` file in the directory where the `docker-compose` command is executed and add the `env_file` section to the docker-compose.yml configuration.
+
+```yml
+service:
+  opendatacam:
+    env_file:
+      - ./.env
+```
+
+You can also add these variables to the invocation of the `docker-compose` command, for example like this: `docker-compose up -e PORT_APP=8080`.
+
+### GPS
+
+OpenDataCam can obtain the current position of the tracker via GPS and persist it along with other counter data.
+This is useful in situations where the OpenDataCam is mobile, e.g. used as a dashcam or mounted to a drone.
+
+#### Requirements
+
+To receive the GPS position, a GPS-enabled device must be connected to your Jetson or PC.
+See [GPSD's list of supported devices](https://gpsd.gitlab.io/gpsd/hardware.html).
+
+Additionally you will need GPSD running.
+GPSD can either run in Docker or as a system service.
+
+##### Running GPSD in Docker
+
+The easiest way to run GPSD is through docker using the [opensourcefoundries/gpsd](https://registry.hub.docker.com/r/opensourcefoundries/gpsd) image, by adding the GPSD service to your `docker-compose.yml` the following way:
+
+```yaml
+services:
+  # Add the following service to your docker compose file
+  gpsd:
+    image: opensourcefoundries/gpsd
+    # List your GPS device here and make sure that it matches the device in the entrypoint line
+    devices:
+      - /dev/ttyACM0
+    entrypoint: ["/bin/sh", "-c", "/sbin/syslogd -S -O - -n & exec /usr/sbin/gpsd -N -n -G /dev/ttyACM0", "--"]
+    ports:
+      - "2947:2947"
+    restart: always
+```
+
+If GPSD has been added to your docker compose file, please change the GPS `hostname` setting in `config.json` to the name of your GPSD service.
+In the example above this would be `"hostname": "gpsd"`.
+
+Alternatively, if you don't run OpenDataCam in docker, you can start just GPSD via the following command:
+
+```bash
+# This assumes your device is /dev/ttyACM0. Please change according to your setup.
+GPS_DEVICE=/dev/ttyACM0; docker run -d -p 2947:2947 --device=$GPS_DEVICE opensourcefoundries/gpsd $GPS_DEVICE
+```
+
+##### Running GPSD as a system service
+
+Please read your operating system's documentation.
+
+#### Configuration
+
+To enable GPS, add the following section to your `config.json`:
+
+```json
+"GPS": {
+  "enabled": true,
+  "port": 2947,
+  "hostname": "localhost",
+  "signalLossTimeoutSeconds": 60,
+  "csvExportOpenStreetMapsUrl": true
+}
+```
+
+Whereas
+
+- `enabled` is a flag to control the feature
+- `port` and `hostname`: contain the location of the GPS daemon
+- `signalLossTimeoutSeconds`: in case of temporary position loss, the old signal will remain valid for this many seconds
+- `csvExportOpenStreetMapsUrl`: besides the raw `lat` and `lon` values, a link to OpenStreetMaps may be added to the exported CSV
diff --git a/docs/04-platform-support/01-jetson-nano.md b/docs/04-platform-support/01-jetson-nano.md
new file mode 100644
index 0000000..b2c0757
--- /dev/null
+++ b/docs/04-platform-support/01-jetson-nano.md
@@ -0,0 +1,288 @@
+# Jetson Nano
+
+## Limitations
+
+The Jetson Nano has two power modes, 5W and 10W.
+
+Once OpenDataCam is installed and **running without a monitor**, it runs perfectly fine in 5W power mode _(which is nice because you can power it with a powerbank)_. If you use it with a monitor connected, the display will be a bit laggy, but it should work.
+
+We recommend doing the setup with a monitor connected and then making your Jetson Nano available as a WiFi hotspot to operate it from another device.
+
+The 10W power mode of the Jetson won't bring much performance improvement for OpenDataCam.
+
+:::info
+The 2GB Jetson Nano **must be installed and run without a monitor**!
+Otherwise the system will run out of RAM and OpenDataCam will not start.
+:::
+
+## Shopping list
+
+The minimum setup for 5W power mode is:
+
+- 1 Jetson Nano
+- 1 camera: a [USB-compatible camera](https://elinux.org/Jetson_Nano#Cameras) or the [Raspberry Pi camera module v2](https://www.raspberrypi.org/products/camera-module-v2/)
+- 1 WiFi dongle; [this one is compatible](https://www.edimax.com/edimax/merchandise/merchandise_detail/data/edimax/in/wireless_adapters_n150/ew-7811un/) out of the box, or [see the compatibility list](https://elinux.org/Jetson_Nano#Wireless)
+- 1 MicroSD card (at least 32 GB and 100 MB/s)
+- 1 power supply: either a [5V⎓2A Micro-USB adapter](https://www.adafruit.com/product/1995) or a powerbank with min 2A output
+
+For 10W power mode _(this is good for desktop use when you plug in the screen, the mouse and the keyboard, as these peripherals draw power)_:
+
+- Power supply: [5V⎓4A DC barrel jack adapter, 5.5mm OD x 2.1mm ID x 9.5mm length, center-positive](https://www.adafruit.com/product/1466)
+- 1x [2.54mm Standard Computer Jumper](https://www.amazon.com/2-54mm-Standard-Computer-Jumper-100pack/dp/B00N552DWK/). This is used on the J48 pin when supplying power from the jack entry instead of the micro-USB; it tells the Jetson to bypass the micro-USB power entry.
+
+For setup:
+
+- 1 USB mouse
+- 1 USB keyboard
+- 1 screen (HDMI or DisplayPort)
+- And, for a faster connection, an Ethernet cable to your router
+
+Learn more about the Jetson Nano ecosystem: [https://elinux.org/Jetson_Nano#Ecosystem_Products_and_Sensors](https://elinux.org/Jetson_Nano#Ecosystem_Products_and_Sensors)
+
+## Setup OpenDataCam
+
+### 1. Flash the Jetson Nano
+
+Follow the [Flashing guide](FLASH_JETSON.md#Jetson-Nano) (don't forget to verify that CUDA is in your PATH).
+
+### 2. Set the correct power mode according to your power supply
+
+#### Using micro-USB
+
+Using micro-USB with a powerbank or a 5V⎓2A power supply, you just need to plug in and the Jetson Nano will start.
+
+When started, we advise you to set the power mode of the Jetson Nano to 5W so it won't crash. To do so, open a terminal and run:
+
+```
+sudo nvpmodel -m 1
+```
+
+To switch back to 10W power mode (the default):
+
+```
+sudo nvpmodel -m 0
+```
+
+#### Using the barrel jack (5V - 4A)
+
+When working with the Jetson Nano with a monitor connected, we advise using the barrel jack power. In order to do so, you first need to put a jumper on the J48 pin (more details on the Jetson Nano power supply):
+
+![jumper](https://user-images.githubusercontent.com/533590/60701138-edca9500-9efa-11e9-8c51-6e2b421ed44b.png)
+
+By default the Jetson Nano will already run in 10W power mode, but you can make sure it does by running:
+
+```
+sudo nvpmodel -m 0
+```
+
+### 3. Setup a swap partition (Optional)
+
+In order to reduce memory pressure (and crashes), it is a good idea to set up a 6GB swap partition _(the Nano has only 4GB of RAM)_:
+
+```bash
+git clone https://github.com/JetsonHacksNano/installSwapfile
+cd installSwapfile
+chmod 777 installSwapfile.sh
+./installSwapfile.sh
+```
+
+Reboot the Jetson Nano.
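+
+After the reboot, you can verify that the swap space is active, for example with:
+
+```bash
+# Should report roughly 6G of swap next to the 4G of RAM
+free -h
+```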
+
+### 4. Install OpenDataCam
+
+You need to install [Docker compose](https://blog.hypriot.com/post/nvidia-jetson-nano-install-docker-compose/) (no official installer is available for ARM64 devices):
+
+```bash
+sudo apt install python3-pip
+
+sudo apt-get install -y libffi-dev
+sudo apt-get install -y python-openssl
+sudo apt-get install -y libssl-dev
+
+sudo -H pip3 install --upgrade pip
+sudo -H pip3 install docker-compose
+```
+
+And then install OpenDataCam:
+
+```bash
+# Download install script
+wget -N https://raw.githubusercontent.com/opendatacam/opendatacam/v3.0.2/docker/install-opendatacam.sh
+
+# Give exec permission
+chmod 777 install-opendatacam.sh
+
+# NB: You will be asked for the sudo password when installing the docker container
+
+# Install command for Jetson Nano
+./install-opendatacam.sh --platform nano
+```
+
+### 5. Run on a USB camera (Optional)
+
+By default, OpenDataCam will start on a demo file. If you want to run from a USB cam you should:
+
+- Verify that a USB camera is connected
+
+```bash
+ls /dev/video*
+# Output should be: /dev/video1
+```
+
+- Change `"VIDEO_INPUT"` in `config.json`
+
+```json
+"VIDEO_INPUT": "usbcam"
+```
+
+- Change the `"usbcam"` device in `config.json` depending on the result of `ls /dev/video*`
+
+For example:
+
+```json
+"v4l2src device=/dev/video1 ..."
+```
+
+- Restart docker
+
+```
+sudo docker-compose restart
+```
+
+_N.B.: there are some issues with running from the CSI cam (Raspberry Pi cam) out of the box (docker install); please see https://github.com/opendatacam/opendatacam/blob/master/documentation/CONFIG.md#run-from-raspberry-pi-cam-jetson-nano for more info. You need to do a manual install for this._
+
+### 6. Test OpenDataCam
+
+Open `http://localhost:8080`.
+
+This will be super slow if you are using it directly on the monitor connected to the Jetson Nano; see the next step to access OpenDataCam from an external device.
+
+### 7. Access OpenDataCam via WiFi hotspot (Optional)
+
+:::note
+You need a WiFi dongle for this.
+:::
+
+In situations where the Jetson is deployed in the field, it may come in handy to be able to connect to the device directly, without the need to bring an extra Ethernet cable or WiFi access point.
+To do so, the Jetson can be configured as a WiFi hotspot of its own, so that your laptop or phone can connect to the device directly.
+
+#### On Ubuntu 18.04 via UI
+
+##### 1. Open the network manager
+
+![step-networkconnections](https://user-images.githubusercontent.com/533590/60503905-eae46000-9cc0-11e9-8b0e-e5a1d7f7c922.png)
+
+##### 2. Create a wifi connection
+
+![step2-wifi](https://user-images.githubusercontent.com/533590/60503906-eae46000-9cc0-11e9-8648-3ccb2ebb880a.png)
+
+##### 3. Configure as a hotspot
+
+You can name the wifi as you like, and you need to select "Hotspot" for the Mode.
+
+![step3-hotspotnane](https://user-images.githubusercontent.com/533590/60503909-eae46000-9cc0-11e9-83df-48a4a57b3799.png)
+
+##### 3.a Set a wifi password (optional)
+
+In the Wi-Fi security tab, you can set up a password to protect your Jetson from being accessed by others.
+
+##### 4. Set auto-connect
+
+In the "General" tab, you need to check "Automatically connect to this network when it is available" so it will start the wifi hotspot on boot.
+
+![step4-autoconnect](https://user-images.githubusercontent.com/533590/60503910-eb7cf680-9cc0-11e9-8f97-e317fbfe8e39.png)
+
+##### 5. Reboot
+
+After rebooting your device, you should be able to connect to it via the wifi hotspot and access OpenDataCam if started.
+
+##### 6. Get the IP address of the WiFi hotspot
+
+Click on the network icon on the top right > Connection Information
+
+![connectioninformatio](https://user-images.githubusercontent.com/533590/60710337-bf58b400-9f12-11e9-8056-987f0b5ea583.png)
+
+#### Via command line
+
+This was tested on the Jetson Nano Developer Kit SD Card Image (JP 4.4, released on 2020/07/07),
+but it should work on any Linux distribution with [NetworkManager](https://en.wikipedia.org/wiki/NetworkManager) and [`nmcli`](https://developer.gnome.org/NetworkManager/stable/nmcli.html) installed.
+
+To create a hotspot or ad-hoc WiFi network, execute the following command:
+
+```bash
+nmcli device wifi hotspot ifname wlan0 ssid <ssid> password <password>
+```
+
+This will create a hotspot with SSID `<ssid>`.
+The hotspot will remain available until the device reboots or the hotspot is closed manually.
+If the hotspot should be automatically created on boot, execute the following command after creating the hotspot:
+
+```bash
+nmcli con modify Hotspot connection.autoconnect true
+```
+
+#### ⁉️ Troubleshooting
+
+1. WiFi hotspot doesn't show up on other devices
+   - go to `/etc/modprobe.d/bcmdhd.conf` and add the line `options bcmdhd op_mode=2`
+   - reboot your device
+
+### 8. Build a case (Optional)
+
+**Here are the steps to set up the Jetson Nano in the [Wildlife Cam Casing from Naturebytes](http://naturebytes.org/our-tech/).**
+
+![IMG_20190529_100541](https://user-images.githubusercontent.com/10535875/60716891-c1c20a80-9f20-11e9-99cc-0e1b3ce691e7.jpg)
+
+The casing is originally designed for the Raspberry Pi 3. The good thing is that the form factor of the Nano board is not that different, so with some simple modifications of the base plate the Jetson board will fit in without any problems.
+
+| Nano side                        | Cam side                        |
+| -------------------------------- | ------------------------------- |
+| ![Nano on Plate](./assets/4.jpg) | ![Cam on plate](./assets/3.jpg) |
+
+#### Steps
+
+##### One way to fit the board on the baseplate is
+
+- Print out the PDF file with the baseplate template
+- Attach the print to the baseplate
+- Drill the holes marked in red on the template (preferably with a 3mm bit, since the screws used are M3s)
+
+##### Another way is
+
+- Fix it with one thread screw in the existing hole (marked in blue in the baseplate template below)
+- Rotate the board to the point where it fits perfectly on the baseplate
+- Mark your own holes in the locations (also marked in red in the baseplate template below)
+- Once it fits, mark the spots for the holes and drill them with an electric drill (preferably with a 3mm bit, since the screws used are M3s)
+
+Make sure to leave some space for the power adapter, since it takes up a bit of space. ([Link to the adapter](https://www.amazon.de/gp/product/B004US2XPS/ref=ppx_yo_dt_b_asin_title_o02_s00?ie=UTF8&psc=1))
+
+| Baseplate                                 | Print version                                |
+| ----------------------------------------- | -------------------------------------------- |
+| ![Baseplate](./assets/nano_baseplate.jpg) | [Baseplate PDF](./assets/nano_baseplate.pdf) |
+
+- When installing the RaspiCam, make sure to cover the screws with some electrical tape to prevent short circuits, since the board will be mounted on the other side of the plate.
+
+| | |
+| -------------------------- | ------------------------- |
+| ![Before](./assets/8.jpg)  | ![After](./assets/7.jpg)  |
+
+- After mounting the camera on the board, you can place the board on the plate and fix it there with 4 thread screws (3mm), as seen in the pictures below
+
+| | |
+| ------------------------- | ------------------------ |
+| ![Front](./assets/5.jpg)  | ![Back](./assets/6.jpg)  |
+
+- The last step is to fix the baseplate on the casing with the included screws.
+
+![Nano on Plate](./assets/4.jpg)
+
+#### Congrats, you are finished
+
+| | |
+| ------------------------- | ------------------------ |
+| ![Front](./assets/2.jpg)  | ![Back](./assets/1.jpg)  |
+
+## Tips
+
+- You'll notice there is no power on/off button on your Jetson Nano. When you plug in the power supply it will power on immediately. If you want to restart, you can just un-plug / re-plug it if you are not connected via a monitor or SSH. There is a way to add power buttons via the J40 pins, [see the nvidia forum](https://devtalk.nvidia.com/default/topic/1050888/jetson-nano/power-and-suspend-buttons-for-jetson-nano/post/5333577/#5333577).
+- To control your Jetson device in headless mode, you can SSH into the machine using `ssh <username>@<hostname>.local`, where `<username>` and `<hostname>` are the values chosen during the headless install (e.g. `ssh me@my-jetson.local`).
diff --git a/docs/04-platform-support/_category_.json b/docs/04-platform-support/_category_.json
new file mode 100644
index 0000000..53b0acb
--- /dev/null
+++ b/docs/04-platform-support/_category_.json
@@ -0,0 +1,7 @@
+{
+  "label": "Platforms",
+  "link": {
+    "type": "generated-index",
+    "description": "Platform specific information to run OpenDataCam on the different platforms"
+  }
+}
diff --git a/docs/04-platform-support/assets/1.jpg b/docs/04-platform-support/assets/1.jpg
new file mode 100644
index 0000000..58abdda
Binary files /dev/null and b/docs/04-platform-support/assets/1.jpg differ
diff --git a/docs/04-platform-support/assets/2.jpg b/docs/04-platform-support/assets/2.jpg
new file mode 100644
index 0000000..6a2e32f
Binary files /dev/null and b/docs/04-platform-support/assets/2.jpg differ
diff --git a/docs/04-platform-support/assets/3.jpg b/docs/04-platform-support/assets/3.jpg
new file mode 100644
index 0000000..64ec641
Binary files /dev/null and b/docs/04-platform-support/assets/3.jpg differ
diff --git a/docs/04-platform-support/assets/4.jpg b/docs/04-platform-support/assets/4.jpg
new file mode 100644
index 0000000..1198fec
Binary files /dev/null and b/docs/04-platform-support/assets/4.jpg differ
diff --git a/docs/04-platform-support/assets/5.jpg b/docs/04-platform-support/assets/5.jpg
new file mode 100644
index 0000000..3a88457
Binary files /dev/null and b/docs/04-platform-support/assets/5.jpg differ
diff --git a/docs/04-platform-support/assets/6.jpg b/docs/04-platform-support/assets/6.jpg
new file mode 100644
index 0000000..ef48639
Binary files /dev/null and b/docs/04-platform-support/assets/6.jpg differ
diff --git a/docs/04-platform-support/assets/7.jpg b/docs/04-platform-support/assets/7.jpg
new file mode 100644
index 0000000..d65da57
Binary files /dev/null and b/docs/04-platform-support/assets/7.jpg differ
diff --git a/docs/04-platform-support/assets/8.jpg b/docs/04-platform-support/assets/8.jpg
new file mode 100644
index 0000000..1215d5a
Binary files /dev/null and b/docs/04-platform-support/assets/8.jpg differ
diff --git a/docs/04-platform-support/assets/nano_baseplate.jpg b/docs/04-platform-support/assets/nano_baseplate.jpg
new file mode 100644
index 0000000..556397d
Binary files /dev/null and b/docs/04-platform-support/assets/nano_baseplate.jpg differ
diff --git a/docs/04-platform-support/assets/nano_baseplate.pdf b/docs/04-platform-support/assets/nano_baseplate.pdf
new file mode 100644
index 0000000..c6d12e2
Binary files /dev/null and b/docs/04-platform-support/assets/nano_baseplate.pdf differ
diff --git a/docs/05-api.md b/docs/05-api.md
new file mode 100644
index 0000000..4990d0b
--- /dev/null
+++ b/docs/05-api.md
@@ -0,0 +1,8 @@
+# API
+
+OpenDataCam comes with a simple-to-use [REST API](https://opendatacam.github.io/opendatacam/apidoc/).
+
+## 🗃 Data export documentation
+
+- [Counter data](https://opendatacam.github.io/opendatacam/apidoc/#api-Recording-Counter_data)
+- [Tracker data](https://opendatacam.github.io/opendatacam/apidoc/#api-Recording-Tracker_data)
\ No newline at end of file
diff --git a/docs/06-development/01-create-docker-image.md b/docs/06-development/01-create-docker-image.md
new file mode 100644
index 0000000..f9cd7cc
--- /dev/null
+++ b/docs/06-development/01-create-docker-image.md
@@ -0,0 +1,144 @@
+# How to create / update a Docker Image
+
+OpenDataCam provides Docker images for the latest release as well as automatic developer previews through [OpenDataCam's Docker Hub](https://hub.docker.com/r/opendatacam/opendatacam).
+
+This document explains the generic steps to create the Docker images for the
+
+- Desktop
+- Xavier
+- Nano
+- CPU
+
+platforms.
+
+## 1. Build environment
+
+In order to build the docker images, you can either build them natively if you have access to the required hardware (e.g. a Jetson Nano device), or you can use [Docker Buildx](https://docs.docker.com/buildx/working-with-buildx/) for cross-platform builds.
+
+The build environment should have the following software installed:
+
+- [Docker installed](https://docs.docker.com/install/linux/docker-ce/ubuntu/)
+- [Docker compose installed](https://docs.docker.com/compose/install/)
+- [Nvidia drivers installed](https://developer.nvidia.com/cuda-downloads) (you don't need all of CUDA, but we didn't find an easy install process for only the drivers)
+- [Nvidia Container toolkit installed](https://github.com/NVIDIA/nvidia-docker)
+- [OpenDataCam source code](https://github.com/opendatacam/opendatacam)
+
+## 2. Build the image
+
+The following steps assume a native build.
+If you are using [Docker Buildx](https://docs.docker.com/buildx/working-with-buildx/) to do multi- or cross-platform builds, please read the Buildx documentation for the correct commands.
+
+In the examples below the following placeholders have been used:
+
+- `OPENDATACAM_PLATFORM`: either nano, xavier or desktop
+- `IMAGE_ID`: the ID of the generated docker image
+- `DOCKERHUB_USERNAME`: your Docker Hub username to tag the image
+
+```bash
+# Get the OpenDataCam source code if you don't have it already
+git clone git@github.com:opendatacam/opendatacam.git
+
+# Go to the OpenDataCam repository root
+cd opendatacam
+
+# Download the weights for your platform
+# Nano
+wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-tiny.weights
+# Desktop and Xavier
+wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights

+# Build the docker image
+docker build --file docker/ -t opendatacam .
+
+# Optional: Tag the local image
+sudo docker tag <IMAGE_ID> <DOCKERHUB_USERNAME>/opendatacam:local-<OPENDATACAM_PLATFORM>
+```
+
+### 2.1. Test The Image (Optional)
+
+Edit your `docker-compose.yml` file to use the `<DOCKERHUB_USERNAME>/opendatacam:latest-<OPENDATACAM_PLATFORM>` image.
+E.g.
+
+```yaml
+services:
+  opendatacam:
+    image: <DOCKERHUB_USERNAME>/opendatacam:latest-<OPENDATACAM_PLATFORM>
+```
+
+### 2.2. Publish the Docker image (Optional)
+
+```bash
+# Log into the Docker Hub
+docker login --username=<DOCKERHUB_USERNAME>
+
+# Check the image ID
+#
+# You should see something like:
+#
+# REPOSITORY    TAG     IMAGE ID      CREATED        SIZE
+# opendatacam   latest  023ab91c6291  3 minutes ago  1.975 GB
+docker images
+
+# Tag your image if you have not yet done so
+docker tag <IMAGE_ID> <DOCKERHUB_USERNAME>/opendatacam:latest-<OPENDATACAM_PLATFORM>
+
+# Untag an image (if you made a typo)
+docker rmi <DOCKERHUB_USERNAME>/opendatacam:latest-<OPENDATACAM_PLATFORM>
+
+# Push the image
+docker push <DOCKERHUB_USERNAME>/opendatacam:latest-<OPENDATACAM_PLATFORM>
+```
+
+## Appendix
+
+### Xavier and Nano: Note about Darknet Makefile differences for the docker build
+
+Change:
+
+```Makefile
+NVCC=nvcc
+# to
+NVCC=/usr/local/cuda-10.0/bin/nvcc
+```
+
+Change:
+
+```Makefile
+COMMON+= -DGPU -I/usr/local/cuda/include/
+# to
+COMMON+= -DGPU -I/usr/local/cuda-10.0/include/
+```
+
+Change:
+
+```Makefile
+LDFLAGS+= -L/usr/local/cuda/lib -lcuda -lcudart -lcublas -lcurand
+# to
+LDFLAGS+= -L/usr/local/cuda-10.0/lib -lcuda -lcudart -lcublas -lcurand
+```
+
+Change:
+
+```Makefile
+LDFLAGS+= -L/usr/local/cuda/lib64 -lcuda -lcudart -lcublas -lcurand
+# to
+LDFLAGS+= -L/usr/local/cuda-10.0/lib64 -lcuda -lcudart -lcublas -lcurand
+```
+
+Change:
+
+```Makefile
+CFLAGS+= -DCUDNN -I/usr/local/cuda/include
+LDFLAGS+= -L/usr/local/cuda/lib -lcudnn
+# to
+CFLAGS+= -DCUDNN -I/usr/local/cuda-10.0/include
+LDFLAGS+= -L/usr/local/cuda-10.0/lib -lcudnn
+```
+
+Change:
+
+```Makefile
+CFLAGS+= -DCUDNN -I/usr/local/cudnn/include
+LDFLAGS+= -L/usr/local/cudnn/lib64 -lcudnn
+# to
+CFLAGS+= -DCUDNN -I/usr/local/cuda-10.0/include
+LDFLAGS+= -L/usr/local/cuda-10.0/lib64 -lcudnn
+```
diff --git a/docs/06-development/02-install-without-docker.md b/docs/06-development/02-install-without-docker.md
new file mode 100644
index 0000000..f601294
--- /dev/null
+++ b/docs/06-development/02-install-without-docker.md
@@ -0,0 +1,309 @@
+# How to install OpenDataCam without docker
+
+## 1. Install dependencies
+
+**For Jetsons:** Flash the Jetson to Jetpack 5.x
+
+https://developer.nvidia.com/embedded/jetpack
+
+**For a GNU/Linux x86_64 machine with a CUDA-compatible GPU:** Install the nvidia drivers and CUDA
+
+https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
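+
+A quick sanity check that the driver is installed and the GPU is visible before building anything (assuming a standard driver install on a desktop machine; on Jetsons the driver ships with Jetpack):
+
+```bash
+# Lists the detected NVIDIA GPU(s) and the installed driver version
+nvidia-smi
+```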
+
+## 2. Install Darknet (Neural network framework running YOLO)
+
+### Get the source files
+
+_NB: Make sure you reinstall darknet entirely if you were on ODC v2.x; the version has changed for v3._
+
+```bash
+git clone --depth 1 https://github.com/opendatacam/darknet
+
+# NB: the changes from https://github.com/alexeyab/darknet are documented here: https://github.com/opendatacam/darknet/pull/6
+```
+
+### Modify the Makefile before compiling
+
+Open the `Makefile` in the darknet folder and make these changes:
+
+*For Jetson Nano*
+
+```Makefile
+# Set these variables to 1:
+GPU=1
+CUDNN=1
+OPENCV=1
+
+# Uncomment the following line
+# For Jetson TX1, Tegra X1, DRIVE CX, DRIVE PX - uncomment:
+ARCH= -gencode arch=compute_53,code=[sm_53,compute_53]
+
+# Replace the NVCC path
+NVCC=/usr/local/cuda/bin/nvcc
+```
+
+*For Jetson TX2*
+
+```Makefile
+# Set these variables to 1:
+GPU=1
+CUDNN=1
+OPENCV=1
+
+# Uncomment the following line
+# For Jetson Tx2 or Drive-PX2 uncomment
+ARCH= -gencode arch=compute_62,code=[sm_62,compute_62]
+```
+
+*For Jetson Xavier*
+
+```Makefile
+# Set these variables to 1:
+GPU=1
+CUDNN=1
+CUDNN_HALF=1
+OPENCV=1
+
+# Uncomment the following line
+# Jetson XAVIER
+ARCH= -gencode arch=compute_72,code=[sm_72,compute_72]
+```
+
+*For a generic Ubuntu machine with a CUDA GPU*
+
+Make sure you have CUDA installed:
+
+```
+# Type this command
+nvcc --version
+
+# If it returns "Command 'nvcc' not found", you need to install CUDA properly (https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#package-manager-installation) and also add CUDA to your PATH with the post-install instructions (https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions)
+```
+
+Make these changes to the Makefile:
+
+```Makefile
+# Set these variables to 1:
+GPU=1
+CUDNN=1
+OPENCV=1
+```
+
+### Compile darknet
+
+```bash
+# Go to the darknet folder
+cd darknet
+# Optional: put the Jetson in performance mode to speed things up
+sudo nvpmodel -m 0
+sudo jetson_clocks
+# Compile
+make
+```
+
+If you get an "nvcc not found" error on the Jetson, update the path to NVCC in the Makefile:
+
+```
+NVCC=/usr/local/cuda/bin/nvcc
+```
+
+### Download the weights files
+
+The `.weights` files need to be in the root of the `/darknet` folder:
+
+```bash
+cd darknet # if you are not already in the darknet folder
+
+# YOLOv4-tiny
+wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-tiny.weights --no-check-certificate
+# YOLOv4
+wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights --no-check-certificate
+```
+
+Or, if you want to copy them over SSH from your main machine with scp:
+
+```
+scp yolov4-tiny.weights nvidia@192.168.1.210:/home/nvidia/Documents/darknet
+```
+
+### (Optional) Test darknet
+
+```bash
+# Go to the darknet folder
+cd darknet
+# Run darknet (yolo) on a webcam
+./darknet detector demo cfg/coco.data cfg/yolov4-tiny.cfg yolov4-tiny.weights "v4l2src ! video/x-raw, framerate=30/1, width=640, height=360 ! videoconvert ! appsink" -ext_output -dont_show
+
+# Run darknet on a file
+./darknet detector demo cfg/coco.data cfg/yolov4-tiny.cfg yolov4-tiny.weights opendatacam_videos/demo.mp4 -ext_output -dont_show
+
+# Run darknet on the Raspberry Pi cam
+./darknet detector demo cfg/coco.data cfg/yolov4-tiny.cfg yolov4-tiny.weights "nvarguscamerasrc ! video/x-raw(memory:NVMM),width=1280, height=720, framerate=30/1, format=NV12 ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=360 ! videoconvert ! video/x-raw, format=BGR ! appsink" -ext_output -dont_show
+```
+
+## 3. Install node.js and mongodb
+
+```bash
+# Install Node.js
+sudo apt-get install curl
+curl -sL https://deb.nodesource.com/setup_18.x | sudo -E bash -
+sudo apt-get install -y nodejs
+```
+
+### Mongodb for Jetson devices (ARM64)
+
+```bash
+# Install mongodb
+
+# Detailed doc: https://computingforgeeks.com/how-to-install-latest-mongodb-on-ubuntu-18-04-ubuntu-16-04/
+# NB: at the time of writing this guide, we install the mongodb package for ubuntu 16.04 as the arm64 version of it isn't available for 18.04
+sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9DA31620334BD75D9DCB49F368818C72E52529D4
+echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list
+sudo apt-get update
+sudo apt-get install -y openssl libcurl3 mongodb-org
+
+# Start the service
+sudo systemctl start mongod
+
+# Enable the service on boot
+sudo systemctl enable mongod
+```
+
+### Mongodb for a generic Ubuntu machine with a CUDA GPU
+
+```bash
+# Install mongodb
+
+# Detailed doc: https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/
+sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9DA31620334BD75D9DCB49F368818C72E52529D4 && \
+  echo "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.0 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-4.0.list
+sudo apt-get update && apt-get install -y --no-install-recommends openssl libcurl3 mongodb-org
+
+# Start the service
+sudo systemctl start mongod
+
+# Enable the service on boot
+sudo systemctl enable mongod
+```
+
+## 4. Install OpenDataCam
+
+- Download the source
+
+```bash
+git clone --depth 1 https://github.com/opendatacam/opendatacam.git
+cd opendatacam
+```
+
+- Change `"MONGODB_URL"` in `opendatacam/config.json` (the default is the docker service mongo URL)
+
+```json
+{
+  "MONGODB_URL": "mongodb://127.0.0.1:27017"
+}
+```
+
+- Specify the **ABSOLUTE** `PATH_TO_YOLO_DARKNET` path in `opendatacam/config.json`
+
+```json
+{
+  "PATH_TO_YOLO_DARKNET" : "/home/nvidia/darknet"
+}
+```
+
+```bash
+# To get the absolute path, go to the darknet folder and type
+pwd
+```
+
+- Specify `VIDEO_INPUT` and `NEURAL_NETWORK` in `opendatacam/config.json`
+
+*For Jetson Nano (recommended)*
+
+```json
+{
+  "VIDEO_INPUT": "usbcam",
+  "NEURAL_NETWORK": "yolov4-tiny"
+}
+```
+
+*For Jetson Xavier (recommended)*
+
+```json
+{
+  "VIDEO_INPUT": "usbcam",
+  "NEURAL_NETWORK": "yolov4"
+}
+```
+
+Learn more in the [config documentation](/docs/configuration) page.
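+
+Putting these edits together, the relevant part of a `config.json` for a Jetson Nano install would look like this (the darknet path is the example used above; adjust it to your machine):
+
+```json
+{
+  "MONGODB_URL": "mongodb://127.0.0.1:27017",
+  "PATH_TO_YOLO_DARKNET": "/home/nvidia/darknet",
+  "VIDEO_INPUT": "usbcam",
+  "NEURAL_NETWORK": "yolov4-tiny"
+}
+```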
+
+- Install **OpenDataCam**:
+
+```bash
+cd <path_to_opendatacam>
+npm install
+npm run build
+```
+
+- Run **OpenDataCam**:
+
+```bash
+cd <path_to_opendatacam>
+npm run start
+```
+
+- (optional) Configure **OpenDataCam** to run on boot:
+
+```bash
+# Install pm2
+npm install pm2 -g
+
+# Go to the opendatacam folder
+cd <path_to_opendatacam>
+# This command prints instructions on how to configure pm2 to
+# start at Ubuntu startup; follow them
+sudo pm2 startup
+
+# Once pm2 is configured to start at startup,
+# configure pm2 to start the OpenDataCam app
+sudo pm2 start npm --name "opendatacam" -- start
+sudo pm2 save
+```
+
+- (optional) Open ports 8080, 8090 and 8070 to the outside world on a cloud deployment machine:
+
+```bash
+sudo ufw allow 8080
+sudo ufw allow 8090
+sudo ufw allow 8070
+```
+
+## (Optional) How to compile OpenCV with GStreamer support on desktop
+
+```bash
+sudo apt-get install -y libgstreamer1.0-0 gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav gstreamer1.0-doc gstreamer1.0-tools libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev
+
+sudo apt-get install -y pkg-config zlib1g-dev libwebp-dev libtbb2 libtbb-dev libgtk2.0-dev libavcodec-dev libavformat-dev libswscale-dev cmake libv4l-dev
+
+sudo apt-get install -y autoconf autotools-dev build-essential gcc git
+
+sudo apt-get install -y ffmpeg
+
+git clone --depth 1 -b 4.1.1 https://github.com/opencv/opencv.git
+
+cd opencv
+mkdir build
+cd build
+# Note: you need to set both FFMPEG and GSTREAMER to ON.
+# Running this command outputs a summary of the dependencies that will be built with OpenCV;
+# double-check that both GStreamer and FFmpeg are ON
+cmake -D CMAKE_INSTALL_PREFIX=/usr/local -D CMAKE_BUILD_TYPE=Release -D WITH_GSTREAMER=ON -D WITH_GSTREAMER_0_10=OFF -D WITH_CUDA=OFF -D WITH_TBB=ON -D WITH_LIBV4L=ON -D WITH_FFMPEG=ON -D OPENCV_GENERATE_PKGCONFIG=ON ..
+
+# Build and install
+make -j"$(nproc)"
+sudo make install
+
+# Make the linker aware of the new libraries (reload if OpenCV was already installed)
+sudo /bin/bash -c 'echo "/usr/local/lib" >> /etc/ld.so.conf.d/opencv.conf'
+sudo ldconfig
+```
diff --git a/docs/06-development/_category_.json b/docs/06-development/_category_.json
new file mode 100644
index 0000000..98638a3
--- /dev/null
+++ b/docs/06-development/_category_.json
@@ -0,0 +1,3 @@
+{
+  "label": "Development Guidelines"
+}
diff --git a/docs/06-development/index.md b/docs/06-development/index.md
new file mode 100644
index 0000000..bd9c2d2
--- /dev/null
+++ b/docs/06-development/index.md
@@ -0,0 +1,100 @@
+---
+slug: /development
+---
+
+# Development Guidelines
+
+Technical architecture overview:
+
+![Technical architecture](https://user-images.githubusercontent.com/533590/60489282-3f2d1700-9ca4-11e9-932c-19bf84e04f9a.png)
+
+## Run in simulation mode
+
+Simulation mode is useful for working on the UI and node.js features without having to run the neural network or a webcam.
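+
+Before diving in, a quick sanity check of the toolchain can save time; a minimal sketch (any recent Node.js LTS should do, and `mongod` only matters if you want to record data):
+
+```bash
+# Confirm Node.js and npm are available
+node --version
+npm --version
+
+# Optional: check that MongoDB is running (only needed to record data)
+systemctl is-active mongod
+```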
+
+**Dependency:** MongoDB installed _(optional, only needed to record data)_: [see tutorial](https://docs.mongodb.com/manual/installation/#mongodb-community-edition)
+
+```bash
+# Clone the repo
+git clone https://github.com/opendatacam/opendatacam.git
+# or, via SSH:
+# git clone git@github.com:opendatacam/opendatacam.git
+cd opendatacam
+# Install dependencies
+npm i
+# Run in dev mode
+npm run dev
+# Open a browser on http://localhost:8080/
+```
+
+If you get an error while running `npm install`, it is probably a problem with node-gyp; you need to install additional dependencies depending on your platform: https://github.com/nodejs/node-gyp#on-unix
+
+### Simulation Mode
+
+The new simulation mode feeds pre-recorded YOLO JSON detections into OpenDataCam. For the video it uses either pre-extracted frames or a video file whose frames will be extracted with [`ffmpeg`](https://ffmpeg.org/).
+
+The simulation can be customized in the OpenDataCam config by adding it as a new video source.
+
+```json
+"VIDEO_INPUTS_PARAMS": {
+  "simulation": "--yolo_json public/static/placeholder/alexeydetections30FPS.json --video_file_or_folder public/static/placeholder/frames --isLive true --jsonFps 20 --mjpgFps 0.2"
+}
+```
+
+Where:
+
+- `detections`: A relative or absolute path to a [MOT challenge](https://motchallenge.net/) sequence or to a JSON file with Darknet detections.
+  For relative paths, the repository root will be used as the base.
+- `video_file_or_folder`: A file or folder in which to find JPGs.
+  If `detections` points to a MOT challenge, the image folder will be taken from MOT's `seqinfo.ini`.
+  If it's a file, the images will be extracted using `ffmpeg`.
+  If it's a folder, the images are expected to be present there in MOT format or in short format (`001.jpg`, `002.jpg`, ..., `101.jpg`, `102.jpg`, ...).
+- `isLive`: Whether the simulation should behave like a live source (e.g. a webcam) or like a file.
+  If `true`, the simulation will silently loop from the beginning without killing the stream.
+  If `false`, the simulation will kill the streams at the end of the JSON file, just like Darknet.
+- `jsonFps`: Approximate frames per second for the JSON stream.
+- `mjpgFps`: **Only when using `ffmpeg`.** Approximate frames per second for the MJPG stream.
+  Setting this lower than `jsonFps` will make the video skip a few frames.
+- `darknetStdout`: Whether the simulation should mimic the output of Darknet on stdout.
+- `json_port`: The TCP port for JSON streaming.
+- `mjpg_port`: The TCP port for MJPG streaming.
+- `yolo_json`: Deprecated. Use `detections` instead.
+
+The simulation JSON and MJPG streams can also be started without OpenDataCam by invoking `node scripts/YoloSimulation.js` from the repository root folder.
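+
+To peek at what the simulation emits, you can connect to its streams directly. A sketch, assuming the default ports used elsewhere in this documentation (8070 for the JSON stream, 8090 for MJPG; check `json_port` and `mjpg_port` in your config):
+
+```bash
+# Start the stand-alone simulation from the repository root
+node scripts/YoloSimulation.js
+
+# In a second terminal, dump the raw JSON detections (Ctrl+C to stop)
+nc localhost 8070
+```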
+
+## Release checklist
+
+- For the next release only: set $VERSION instead of master for the Kubernetes install script, see: https://github.com/opendatacam/opendatacam/pull/247
+- Make sure that config.json has the TO_REPLACE_VIDEO_INPUT placeholder values that will be replaced by sed on installation
+- Search and replace OLD_VERSION with NEW_VERSION in all documentation
+- Make sure the correct version is set in config.json > OPENDATACAM_VERSION
+- Make sure the correct version is set in package.json
+- Make sure the correct version is set in the README "Install and start OpenDataCam" wget install script
+- Make sure the correct version is set in the JETSON_NANO.md "Install OpenDataCam" wget install script
+- Make sure the correct VERSION is set in /docker/install-opendatacam.sh
+- Generate up-to-date API documentation with `npm run generateapidoc` (no longer needed since https://github.com/opendatacam/opendatacam/pull/336)
+- Add the release on GitHub
+
+After you've added the release to GitHub, a GitHub Actions workflow will create the Docker images and automatically upload them to Docker Hub.
+It is no longer necessary to create a git tag or Docker images manually.
+
+## Markdown table of contents generator
+
+https://ecotrust-canada.github.io/markdown-toc/
+
+## List all cams
+
+```bash
+v4l2-ctl --list-devices
+```
+
+## Code Style
+
+OpenDataCam uses the [Airbnb JavaScript style](https://github.com/airbnb/javascript).
+You can run `npm run lint` to check the whole code base, or `npx eslint yourfile.js` to check a single file.
diff --git a/docs/intro.md b/docs/intro.md
deleted file mode 100644
index 8a2e69d..0000000
--- a/docs/intro.md
+++ /dev/null
@@ -1,47 +0,0 @@
----
-sidebar_position: 1
----
-
-# Tutorial Intro
-
-Let's discover **Docusaurus in less than 5 minutes**.
-
-## Getting Started
-
-Get started by **creating a new site**.
-
-Or **try Docusaurus immediately** with **[docusaurus.new](https://docusaurus.new)**.
-
-### What you'll need
-
-- [Node.js](https://nodejs.org/en/download/) version 16.14 or above:
-  - When installing Node.js, you are recommended to check all checkboxes related to dependencies.
-
-## Generate a new site
-
-Generate a new Docusaurus site using the **classic template**.
-
-The classic template will automatically be added to your project after you run the command:
-
-```bash
-npm init docusaurus@latest my-website classic
-```
-
-You can type this command into Command Prompt, Powershell, Terminal, or any other integrated terminal of your code editor.
-
-The command also installs all necessary dependencies you need to run Docusaurus.
-
-## Start your site
-
-Run the development server:
-
-```bash
-cd my-website
-npm run start
-```
-
-The `cd` command changes the directory you're working with. In order to work with your newly created Docusaurus site, you'll need to navigate the terminal there.
-
-The `npm run start` command builds your website locally and serves it through a development server, ready for you to view at http://localhost:3000/.
-
-Open `docs/intro.md` (this page) and edit some lines: the site **reloads automatically** and displays your changes. 
diff --git a/docs/tutorial-basics/_category_.json b/docs/tutorial-basics/_category_.json deleted file mode 100644 index 2e6db55..0000000 --- a/docs/tutorial-basics/_category_.json +++ /dev/null @@ -1,8 +0,0 @@ -{ - "label": "Tutorial - Basics", - "position": 2, - "link": { - "type": "generated-index", - "description": "5 minutes to learn the most important Docusaurus concepts." - } -} diff --git a/docs/tutorial-basics/congratulations.md b/docs/tutorial-basics/congratulations.md deleted file mode 100644 index 04771a0..0000000 --- a/docs/tutorial-basics/congratulations.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -sidebar_position: 6 ---- - -# Congratulations! - -You have just learned the **basics of Docusaurus** and made some changes to the **initial template**. - -Docusaurus has **much more to offer**! - -Have **5 more minutes**? Take a look at **[versioning](../tutorial-extras/manage-docs-versions.md)** and **[i18n](../tutorial-extras/translate-your-site.md)**. - -Anything **unclear** or **buggy** in this tutorial? [Please report it!](https://github.com/facebook/docusaurus/discussions/4610) - -## What's next? - -- Read the [official documentation](https://docusaurus.io/) -- Modify your site configuration with [`docusaurus.config.js`](https://docusaurus.io/docs/api/docusaurus-config) -- Add navbar and footer items with [`themeConfig`](https://docusaurus.io/docs/api/themes/configuration) -- Add a custom [Design and Layout](https://docusaurus.io/docs/styling-layout) -- Add a [search bar](https://docusaurus.io/docs/search) -- Find inspirations in the [Docusaurus showcase](https://docusaurus.io/showcase) -- Get involved in the [Docusaurus Community](https://docusaurus.io/community/support) diff --git a/docs/tutorial-basics/create-a-blog-post.md b/docs/tutorial-basics/create-a-blog-post.md deleted file mode 100644 index ea472bb..0000000 --- a/docs/tutorial-basics/create-a-blog-post.md +++ /dev/null @@ -1,34 +0,0 @@ ---- -sidebar_position: 3 ---- - -# Create a Blog Post - -Docusaurus creates a **page for each blog post**, but also a **blog index page**, a **tag system**, an **RSS** feed... - -## Create your first Post - -Create a file at `blog/2021-02-28-greetings.md`: - -```md title="blog/2021-02-28-greetings.md" ---- -slug: greetings -title: Greetings! -authors: - - name: Joel Marcey - title: Co-creator of Docusaurus 1 - url: https://github.com/JoelMarcey - image_url: https://github.com/JoelMarcey.png - - name: Sébastien Lorber - title: Docusaurus maintainer - url: https://sebastienlorber.com - image_url: https://github.com/slorber.png -tags: [greetings] ---- - -Congratulations, you have made your first post! - -Feel free to play around and edit this post as much you like. -``` - -A new blog post is now available at [http://localhost:3000/blog/greetings](http://localhost:3000/blog/greetings). diff --git a/docs/tutorial-basics/create-a-document.md b/docs/tutorial-basics/create-a-document.md deleted file mode 100644 index ffddfa8..0000000 --- a/docs/tutorial-basics/create-a-document.md +++ /dev/null @@ -1,57 +0,0 @@ ---- -sidebar_position: 2 ---- - -# Create a Document - -Documents are **groups of pages** connected through: - -- a **sidebar** -- **previous/next navigation** -- **versioning** - -## Create your first Doc - -Create a Markdown file at `docs/hello.md`: - -```md title="docs/hello.md" -# Hello - -This is my **first Docusaurus document**! -``` - -A new document is now available at [http://localhost:3000/docs/hello](http://localhost:3000/docs/hello). 
- -## Configure the Sidebar - -Docusaurus automatically **creates a sidebar** from the `docs` folder. - -Add metadata to customize the sidebar label and position: - -```md title="docs/hello.md" {1-4} ---- -sidebar_label: 'Hi!' -sidebar_position: 3 ---- - -# Hello - -This is my **first Docusaurus document**! -``` - -It is also possible to create your sidebar explicitly in `sidebars.js`: - -```js title="sidebars.js" -module.exports = { - tutorialSidebar: [ - 'intro', - // highlight-next-line - 'hello', - { - type: 'category', - label: 'Tutorial', - items: ['tutorial-basics/create-a-document'], - }, - ], -}; -``` diff --git a/docs/tutorial-basics/create-a-page.md b/docs/tutorial-basics/create-a-page.md deleted file mode 100644 index 20e2ac3..0000000 --- a/docs/tutorial-basics/create-a-page.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -sidebar_position: 1 ---- - -# Create a Page - -Add **Markdown or React** files to `src/pages` to create a **standalone page**: - -- `src/pages/index.js` → `localhost:3000/` -- `src/pages/foo.md` → `localhost:3000/foo` -- `src/pages/foo/bar.js` → `localhost:3000/foo/bar` - -## Create your first React Page - -Create a file at `src/pages/my-react-page.js`: - -```jsx title="src/pages/my-react-page.js" -import React from 'react'; -import Layout from '@theme/Layout'; - -export default function MyReactPage() { - return ( - -
<Layout>
-      <h1>My React page</h1>
-      <p>This is a React page</p>
-    </Layout>
- ); -} -``` - -A new page is now available at [http://localhost:3000/my-react-page](http://localhost:3000/my-react-page). - -## Create your first Markdown Page - -Create a file at `src/pages/my-markdown-page.md`: - -```mdx title="src/pages/my-markdown-page.md" -# My Markdown page - -This is a Markdown page -``` - -A new page is now available at [http://localhost:3000/my-markdown-page](http://localhost:3000/my-markdown-page). diff --git a/docs/tutorial-basics/deploy-your-site.md b/docs/tutorial-basics/deploy-your-site.md deleted file mode 100644 index 1c50ee0..0000000 --- a/docs/tutorial-basics/deploy-your-site.md +++ /dev/null @@ -1,31 +0,0 @@ ---- -sidebar_position: 5 ---- - -# Deploy your site - -Docusaurus is a **static-site-generator** (also called **[Jamstack](https://jamstack.org/)**). - -It builds your site as simple **static HTML, JavaScript and CSS files**. - -## Build your site - -Build your site **for production**: - -```bash -npm run build -``` - -The static files are generated in the `build` folder. - -## Deploy your site - -Test your production build locally: - -```bash -npm run serve -``` - -The `build` folder is now served at [http://localhost:3000/](http://localhost:3000/). - -You can now deploy the `build` folder **almost anywhere** easily, **for free** or very small cost (read the **[Deployment Guide](https://docusaurus.io/docs/deployment)**). diff --git a/docs/tutorial-basics/markdown-features.mdx b/docs/tutorial-basics/markdown-features.mdx deleted file mode 100644 index 62f92ff..0000000 --- a/docs/tutorial-basics/markdown-features.mdx +++ /dev/null @@ -1,149 +0,0 @@ ---- -sidebar_position: 4 ---- - -# Markdown Features - -Docusaurus supports **[Markdown](https://daringfireball.net/projects/markdown/syntax)** and a few **additional features**. - -## Front Matter - -Markdown documents have metadata at the top called [Front Matter](https://jekyllrb.com/docs/front-matter/): - -```text title="my-doc.md" -// highlight-start ---- -id: my-doc-id -title: My document title -description: My document description -slug: /my-custom-url ---- -// highlight-end - -## Markdown heading - -Markdown text with [links](./hello.md) -``` - -## Links - -Regular Markdown links are supported, using url paths or relative file paths. - -```md -Let's see how to [Create a page](/create-a-page). -``` - -```md -Let's see how to [Create a page](./create-a-page.md). -``` - -**Result:** Let's see how to [Create a page](./create-a-page.md). - -## Images - -Regular Markdown images are supported. - -You can use absolute paths to reference images in the static directory (`static/img/docusaurus.png`): - -```md -![Docusaurus logo](/img/docusaurus.png) -``` - - -You can reference images relative to the current file as well. This is particularly useful to colocate images close to the Markdown files using them: - -```md -![Docusaurus logo](./img/docusaurus.png) -``` - -## Code Blocks - -Markdown code blocks are supported with Syntax highlighting. - - ```jsx title="src/components/HelloDocusaurus.js" - function HelloDocusaurus() { - return ( -
<h1>Hello, Docusaurus!</h1>
- ) - } - ``` - -```jsx title="src/components/HelloDocusaurus.js" -function HelloDocusaurus() { - return
 <h1>Hello, Docusaurus!</h1>
; -} -``` - -## Admonitions - -Docusaurus has a special syntax to create admonitions and callouts: - - :::tip My tip - - Use this awesome feature option - - ::: - - :::danger Take care - - This action is dangerous - - ::: - -:::tip My tip - -Use this awesome feature option - -::: - -:::danger Take care - -This action is dangerous - -::: - -## MDX and React Components - -[MDX](https://mdxjs.com/) can make your documentation more **interactive** and allows using any **React components inside Markdown**: - -```jsx -export const Highlight = ({children, color}) => ( - { - alert(`You clicked the color ${color} with label ${children}`) - }}> - {children} - -); - -This is Docusaurus green ! - -This is Facebook blue ! -``` - -export const Highlight = ({children, color}) => ( - { - alert(`You clicked the color ${color} with label ${children}`); - }}> - {children} - -); - -This is Docusaurus green ! - -This is Facebook blue ! diff --git a/docs/tutorial-extras/_category_.json b/docs/tutorial-extras/_category_.json deleted file mode 100644 index a8ffcc1..0000000 --- a/docs/tutorial-extras/_category_.json +++ /dev/null @@ -1,7 +0,0 @@ -{ - "label": "Tutorial - Extras", - "position": 3, - "link": { - "type": "generated-index" - } -} diff --git a/docs/tutorial-extras/img/docsVersionDropdown.png b/docs/tutorial-extras/img/docsVersionDropdown.png deleted file mode 100644 index 97e4164..0000000 Binary files a/docs/tutorial-extras/img/docsVersionDropdown.png and /dev/null differ diff --git a/docs/tutorial-extras/img/localeDropdown.png b/docs/tutorial-extras/img/localeDropdown.png deleted file mode 100644 index e257edc..0000000 Binary files a/docs/tutorial-extras/img/localeDropdown.png and /dev/null differ diff --git a/docs/tutorial-extras/manage-docs-versions.md b/docs/tutorial-extras/manage-docs-versions.md deleted file mode 100644 index e12c3f3..0000000 --- a/docs/tutorial-extras/manage-docs-versions.md +++ /dev/null @@ -1,55 +0,0 @@ ---- -sidebar_position: 1 ---- - -# Manage Docs Versions - -Docusaurus can manage multiple versions of your docs. - -## Create a docs version - -Release a version 1.0 of your project: - -```bash -npm run docusaurus docs:version 1.0 -``` - -The `docs` folder is copied into `versioned_docs/version-1.0` and `versions.json` is created. - -Your docs now have 2 versions: - -- `1.0` at `http://localhost:3000/docs/` for the version 1.0 docs -- `current` at `http://localhost:3000/docs/next/` for the **upcoming, unreleased docs** - -## Add a Version Dropdown - -To navigate seamlessly across versions, add a version dropdown. - -Modify the `docusaurus.config.js` file: - -```js title="docusaurus.config.js" -module.exports = { - themeConfig: { - navbar: { - items: [ - // highlight-start - { - type: 'docsVersionDropdown', - }, - // highlight-end - ], - }, - }, -}; -``` - -The docs version dropdown appears in your navbar: - -![Docs Version Dropdown](./img/docsVersionDropdown.png) - -## Update an existing version - -It is possible to edit versioned docs in their respective folder: - -- `versioned_docs/version-1.0/hello.md` updates `http://localhost:3000/docs/hello` -- `docs/hello.md` updates `http://localhost:3000/docs/next/hello` diff --git a/docs/tutorial-extras/translate-your-site.md b/docs/tutorial-extras/translate-your-site.md deleted file mode 100644 index caeaffb..0000000 --- a/docs/tutorial-extras/translate-your-site.md +++ /dev/null @@ -1,88 +0,0 @@ ---- -sidebar_position: 2 ---- - -# Translate your site - -Let's translate `docs/intro.md` to French. 
- -## Configure i18n - -Modify `docusaurus.config.js` to add support for the `fr` locale: - -```js title="docusaurus.config.js" -module.exports = { - i18n: { - defaultLocale: 'en', - locales: ['en', 'fr'], - }, -}; -``` - -## Translate a doc - -Copy the `docs/intro.md` file to the `i18n/fr` folder: - -```bash -mkdir -p i18n/fr/docusaurus-plugin-content-docs/current/ - -cp docs/intro.md i18n/fr/docusaurus-plugin-content-docs/current/intro.md -``` - -Translate `i18n/fr/docusaurus-plugin-content-docs/current/intro.md` in French. - -## Start your localized site - -Start your site on the French locale: - -```bash -npm run start -- --locale fr -``` - -Your localized site is accessible at [http://localhost:3000/fr/](http://localhost:3000/fr/) and the `Getting Started` page is translated. - -:::caution - -In development, you can only use one locale at a same time. - -::: - -## Add a Locale Dropdown - -To navigate seamlessly across languages, add a locale dropdown. - -Modify the `docusaurus.config.js` file: - -```js title="docusaurus.config.js" -module.exports = { - themeConfig: { - navbar: { - items: [ - // highlight-start - { - type: 'localeDropdown', - }, - // highlight-end - ], - }, - }, -}; -``` - -The locale dropdown now appears in your navbar: - -![Locale Dropdown](./img/localeDropdown.png) - -## Build your localized site - -Build your site for a specific locale: - -```bash -npm run build -- --locale fr -``` - -Or build your site to include all the locales at once: - -```bash -npm run build -``` diff --git a/docusaurus.config.js b/docusaurus.config.js index a16616a..e9b0d42 100644 --- a/docusaurus.config.js +++ b/docusaurus.config.js @@ -45,7 +45,7 @@ const config = { // Please change this to your repo. // Remove this to remove the "edit this page" links. editUrl: - 'https://github.com/facebook/docusaurus/tree/main/packages/create-docusaurus/templates/shared/', + 'https://github.com/opendatacam/opendatacam-website/tree/main/', }, theme: { customCss: require.resolve('./src/css/custom.css'), @@ -53,7 +53,7 @@ const config = { sitemap: { changefreq: 'weekly', priority: 0.5, - ignorePatterns: ['/opendatacam/', '/docs/**'], + ignorePatterns: ['/opendatacam/'], filename: 'sitemap.xml', }, }), @@ -103,6 +103,11 @@ const config = { label: 'For Professionals', position: 'left', }, + { + href: '/docs', + label: 'Documentation', + position: 'left', + }, { href: 'https://github.com/opendatacam/opendatacam', label: 'GitHub', @@ -132,6 +137,10 @@ const config = { label: 'For Professionals', to: '/professionals', }, + { + label: 'Documentation', + to: '/docs', + }, ], }, { @@ -145,10 +154,6 @@ const config = { label: 'LinkedIn', href: 'https://www.linkedin.com/company/opendatacam', }, - { - label: 'Twitter', - href: 'https://twitter.com/opendatacam', - }, ], }, {