
⚙️ Customize OpenDataCam

We offer several customization options.


General

For a standard install of OpenDataCam

All settings are in the config.json file, which you will find in the same directory where you ran the install script.

When you modify a setting, you need to restart the Docker container. You can do so with:

# Go to the directory where you ran the install script (where your docker-compose.yml file is)

# Stop container
sudo docker-compose down

# Restart container (after modifying the config.json file for example)
sudo docker-compose restart

# Start container
# detached mode
sudo docker-compose up -d
# interactive mode
sudo docker-compose up

For a non-docker install of OpenDataCam

You need to modify the config.json file located in the opendatacam folder.

<PATH_TO_OPENDATACAM>/config.json

Once modified, you just need to restart the node.js app (npm run start). There is no need to re-build it, as it loads the config file at runtime.
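
A typical edit-and-restart cycle could look like this (a minimal sketch; the install path is an assumption, adjust it to your setup):

# Hypothetical install path, adjust to your setup
cd ~/opendatacam
nano config.json
# No re-build needed, config.json is loaded at runtime
npm run start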

Run opendatacam on a video file

By default, OpenDataCam runs on a demo video file. If you want to change it, just drag & drop the new file onto the UI.

Learn more about the other video inputs available (IP camera, Raspberry Pi cam) in the Advanced settings section below.

Specificities of running on a file

Neural network weights

You can change the YOLO weights files depending on which objects you want to track and which hardware you are running OpenDataCam on.

Lighter weights files result in speed improvements but a loss in accuracy. For example, yolov4 runs at ~1-2 FPS on Jetson Nano, ~5-6 FPS on Jetson TX2, and ~22 FPS on Jetson Xavier.

From our experiments, we found that the sweet spot for good enough tracking accuracy on cars and other mobility objects is to run YOLO at 8-9 FPS or more.

For a standard install of OpenDataCam, the default weights are picked depending on your hardware.

If you want to use other weights, please see Use custom weights below.

Track only specific classes

By default, OpenDataCam tracks all the classes that the neural network is trained to detect. In our case, YOLO is trained with the VOC dataset (see its complete list of classes).

You can restrict OpenDataCam to specific classes with the VALID_CLASSES option in the config.json file.

Which classes YOLO detects depends on the weights you are running; for example, yolov4 is trained on the COCO dataset classes.

Here is a way to track only buses and persons:

{
  "VALID_CLASSES": ["bus", "person"]
}

In order to track all the classes (default value), you need to set it to:

{
  "VALID_CLASSES": ["*"]
}

Extra note: the tracking algorithm might work better when all classes are allowed. In our tests we saw that for some classes, like bike/motorbike, YOLO had a hard time distinguishing them and switched between classes across frames for the same object. By keeping all detection classes we can avoid losing some objects; this is discussed here.

Display custom classes

By default we are displaying the mobility classes:

(Screenshot: default display classes)

If you want to customize them, modify the DISPLAY_CLASSES config.

"DISPLAY_CLASSES": [
  { "class": "bicycle", "hexcode": "1F6B2"},
  { "class": "person", "hexcode": "1F6B6"},
  { "class": "truck", "hexcode": "1F69B"},
  { "class": "motorbike", "hexcode": "1F6F5"},
  { "class": "car", "hexcode": "1F697"},
  { "class": "bus", "hexcode": "1F683"}
]

You can associate any icon that is in the public/static/icons/openmojis folder. (They are from https://openmoji.org/; you can search the hexcode / unicode icon id directly there.)

For example:

"DISPLAY_CLASSES": [
    { "class": "dog", "icon": "1F415"},
    { "class": "cat", "icon": "1F431"}
  ]

(Screenshot: custom display classes)

LIMITATION: You can display a maximum of 6 classes. If you add more, only the first 6 will be displayed.

Customize pathfinder colors

You can change the PATHFINDER_COLORS variable in config.json. For each new tracked object, the app randomly picks a color from this list. The colors need to be in HEX format.

"PATHFINDER_COLORS": [
  "#1f77b4",
  "#ff7f0e",
  "#2ca02c",
  "#d62728",
  "#9467bd",
  "#8c564b",
  "#e377c2",
  "#7f7f7f",
  "#bcbd22",
  "#17becf"
]

For example, with only 2 colors:

"PATHFINDER_COLORS": [
  "#1f77b4",
  "#e377c2"
]

(Screenshot: demo with 2 colors)

Customize Counter colors

You can change the COUNTER_COLORS variable in config.json. As you draw counter lines, the app picks the colors in the order you specified them.

You need to add "key": "value" pairs for the counter lines; the key should be the label of the color (without spaces, numbers or special characters), and the value the color in HEX.

For example, you can modify the default from:

"COUNTER_COLORS": {
  "yellow": "#FFE700",
  "turquoise": "#A3FFF4",
  "green": "#a0f17f",
  "purple": "#d070f0",
  "red": "#AB4435"
}

to:

"COUNTER_COLORS": {
  "white": "#fff"
}

And after restarting OpenDataCam you should get a white line when defining a counter area:

(Screenshot: white counter line)

NOTE: If you draw more lines than COUNTER_COLORS defines, the extra lines will be black.

Advanced settings

Video input

OpenDataCam can take several kinds of video input: a pre-recorded file, a USB cam, a Raspberry Pi cam, a remote IP cam, etc.

This is configurable via the VIDEO_INPUT and VIDEO_INPUTS_PARAMS settings.

"VIDEO_INPUTS_PARAMS": {
  "file": "opendatacam_videos/demo.mp4",
  "usbcam": "v4l2src device=/dev/video0 ! video/x-raw, framerate=30/1, width=640, height=360 ! videoconvert ! appsink",
  "raspberrycam": "nvarguscamerasrc ! video/x-raw(memory:NVMM),width=1280, height=720, framerate=30/1, format=NV12 ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=360 ! videoconvert ! video/x-raw, format=BGR ! appsink",
  "remote_cam": "YOUR IP CAM STREAM (can be .m3u8, MJPEG ...), anything supported by opencv",
  "remote_hls_gstreamer": "souphttpsrc location=http://YOUR_HLSSTREAM_URL_HERE.m3u8 ! hlsdemux ! decodebin ! videoconvert ! videoscale ! appsink"
}

With the default installation, OpenDataCam will have VIDEO_INPUT set to usbcam. See below for how to change this.

Technical note:

Under the hood, this config input becomes the input of the darknet process, which is then fed into OpenCV VideoCapture().

As we compile OpenCV with Gstreamer support when installing OpenDataCam, we can use any Gstreamer pipeline as input, plus the other VideoCapture-supported formats like video files / IP cam streams.

You can add your own Gstreamer pipeline for your needs by adding an entry to "VIDEO_INPUTS_PARAMS", as shown below.
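
For example, a sketch of a custom entry (the entry name and the videotestsrc test pipeline are illustrative, not shipped defaults):

"VIDEO_INPUTS_PARAMS": {
  "my_custom_input": "videotestsrc ! video/x-raw, width=640, height=360 ! videoconvert ! appsink"
}

You would then set "VIDEO_INPUT": "my_custom_input" to use it.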

Run from a USB cam
  1. Verify that a USB cam is detected
ls /dev/video*
# Output should be: /dev/video0
  2. Change VIDEO_INPUT to "usbcam"
"VIDEO_INPUT": "usbcam"
  3. (Optional) If your device is on video1 or video2 instead of the default video0, change VIDEO_INPUTS_PARAMS > usbcam to your video device, for example for /dev/video1:
"VIDEO_INPUTS_PARAMS": {
  "usbcam": "v4l2src device=/dev/video1 ! video/x-raw, framerate=30/1, width=640, height=360 ! videoconvert ! appsink"
}
Run from a file

You have two options to run from a file: drag & drop it onto the UI (see above), or point the config at the file as described below.

For example, say you have a file.mp4 you want to run OpenDataCam on:

For a docker (standard install) of OpenDataCam:

You need to mount the file into the docker container. Copy it into an opendatacam_videos folder next to your docker-compose.yml file, and make sure that folder is listed in the volumes section of docker-compose.yml:

volumes:
  - './config.json:/var/local/opendatacam/config.json'
  - './opendatacam_videos:/var/local/darknet/opendatacam_videos'

Once you have the video file inside the opendatacam_videos folder, you can modify config.json the following way:

  1. Change VIDEO_INPUT to "file"
"VIDEO_INPUT": "file"
  2. Change VIDEO_INPUTS_PARAMS > file to the path of your file
"VIDEO_INPUTS_PARAMS": {
  "file": "opendatacam_videos/file.mp4"
}

Once config.json is saved, you only need to restart the docker container using

sudo docker-compose restart

For a non-docker install of OpenDataCam:

Same steps as above, but instead of mounting the opendatacam_videos folder, you should just create it directly in the /darknet folder, as in the sketch below.
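
A minimal sketch, assuming darknet was compiled in ~/darknet and your video is file.mp4 (adjust both to your setup):

# Create the videos folder inside the darknet directory
mkdir -p ~/darknet/opendatacam_videos
# Copy your video there
cp file.mp4 ~/darknet/opendatacam_videos/

Then apply the same config.json changes as above and restart the node.js app.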

Run from IP cam
  1. Change VIDEO_INPUT to "remote_cam"
"VIDEO_INPUT": "remote_cam"
  2. Change VIDEO_INPUTS_PARAMS > remote_cam to your IP cam stream, for example:
"VIDEO_INPUTS_PARAMS": {
  "remote_cam": "http://162.143.172.100:8081/-wvhttp-01-/GetOneShot?image_size=640x480&frame_count=1000000000"
}

NB: this IP cam won’t work, it is just an example. Only use IP cams you own yourself.

Run from Raspberry Pi cam (Jetson Nano)

For a docker (standard install) of OpenDataCam:

Not supported yet, follow https://github.com/opendatacam/opendatacam/issues/178 for updates

For a non-docker install of OpenDataCam:

  1. Change VIDEO_INPUT to "raspberrycam"
"VIDEO_INPUT": "raspberrycam"
  2. Restart the node.js app

Change webcam resolution

As explained in the Technical note above, you can modify the Gstreamer pipeline as you like; by default we use a 640x360 feed from the webcam.

If you want to change this, modify the width and height values of the usbcam Gstreamer pipeline in VIDEO_INPUTS_PARAMS, as in the sketch below.
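
A sketch of a 1280x720 pipeline, assuming your webcam supports that resolution and is on /dev/video0:

"VIDEO_INPUTS_PARAMS": {
  "usbcam": "v4l2src device=/dev/video0 ! video/x-raw, framerate=30/1, width=1280, height=720 ! videoconvert ! appsink"
}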

NOTE: Increasing the webcam resolution won’t increase OpenDataCam’s accuracy: the input of the neural network is 400x400 max. It might also cause the UI to lag, as the MJPEG stream becomes very slow at higher resolutions.

Use Custom Neural Network weights

For a docker (standard install) of OpenDataCam:

We ship several YOLO weights files inside the docker container.

In order to switch to another one, for example yolov3-tiny-prn, you need to:

  1. Copy yolov3-tiny-prn.weights next to your docker-compose.yml file and mount it by adding it to the volumes section:
volumes:
  - './config.json:/var/local/opendatacam/config.json'
  - './yolov3-tiny-prn.weights:/var/local/darknet/yolov3-tiny-prn.weights'
  2. If you also use custom .data, .cfg and .names files, mount them as well:
volumes:
  - './config.json:/var/local/opendatacam/config.json'
  - './yolov3-tiny-prn.weights:/var/local/darknet/yolov3-tiny-prn.weights'
  - './coco.data:/var/local/darknet/cfg/coco.data'
  - './yolov3-tiny-prn.cfg:/var/local/darknet/cfg/yolov3-tiny-prn.cfg'
  - './coco.names:/var/local/darknet/cfg/coco.names'
  3. Declare the new network in config.json:
"yolov3-tiny-prn": {
  "data": "cfg/coco.data",
  "cfg": "cfg/yolov3-tiny-prn.cfg",
  "weights": "yolov3-tiny-prn.weights"
}
  4. Change the NEURAL_NETWORK setting to the new entry:
"NEURAL_NETWORK": "yolov3-tiny-prn"
  5. Restart the docker container:
sudo docker-compose up -d
# or, if it is already running
sudo docker-compose restart

For a non-docker install of OpenDataCam:

It is the same as above, but instead of mounting the files into the docker container, you just copy them directly into the /darknet folder.
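
A minimal sketch, assuming darknet was compiled in ~/darknet and the custom files sit in the current directory:

cp yolov3-tiny-prn.weights ~/darknet/
cp yolov3-tiny-prn.cfg ~/darknet/cfg/
cp coco.data coco.names ~/darknet/cfg/

Then declare the network and set NEURAL_NETWORK in config.json as described above, and restart the node.js app.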

Tracker settings

You can tweak some settings of the tracker to better optimize OpenDataCam for your needs.

"TRACKER_SETTINGS": {
  "objectMaxAreaInPercentageOfFrame": 80,
  "confidence_threshold": 0.2,
  "iouLimit": 0.05,
  "unMatchedFrameTolerance": 5,
  "fastDelete": true
}

Counter settings

"COUNTER_SETTINGS": {
  "countingAreaMinFramesInsideToBeCounted": 1,
  "countingAreaVerifyIfObjectEntersCrossingOneEdge": true,
  "minAngleWithCountingLineThreshold": 5,
  "computeTrajectoryBasedOnNbOfPastFrame": 5
}

(Illustrations: counting line angle, counter buffer)

NB: if the object has changed ID over the past frames, it will use the last known past frame with the same ID.

MongoDB URL

If you want to persist the data on a remote MongoDB instance, you can change the MONGODB_URL setting.

By default, the MongoDB data is persisted in the /data/db directory of your host machine.
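
For example, pointing to a remote instance (the host below is a placeholder, not a real one):

"MONGODB_URL": "mongodb://<YOUR_REMOTE_HOST>:27017"

Restart OpenDataCam for the change to take effect.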

Ports

You can modify the default ports used by OpenDataCam.

"PORTS": {
  "app": 8080,
  "darknet_json_stream": 8070,
  "darknet_mjpeg_stream": 8090
}

Tracker accuracy display

The tracker accuracy layer shows a heatmap like this one:

(Screenshot: tracker accuracy heatmap)

This heatmap highlights the areas where the tracker accuracy isn’t really good.

Under the hood, it displays a tracker metric called “zombies”, which represents the predicted bounding boxes used when the tracker isn’t able to assign a bounding box from the YOLO detections.

You can tweak all the settings of this display with the TRACKER_ACCURACY_DISPLAY setting.

nbFrameBuffer: Number of previous frames displayed on the heatmap
radius: Radius of the points displayed on the heatmap (in % of the width of the canvas)
blur: Blur of the points displayed on the heatmap (in % of the width of the canvas)
step: For each point displayed, how much the point contributes to the increase of the heatmap value (range 0-1); increasing this will cause the heatmap to reach the higher values of the gradient faster
gradient: Color gradient; insert as many stops as you like between 0-1 (hex values supported, e.g. “#fff” or “white”)
canvasResolutionFactor: In order to improve performance, the tracker accuracy canvas resolution is downscaled by a factor of 10 by default (set a value between 0-1)

"TRACKER_ACCURACY_DISPLAY": {
  "nbFrameBuffer": 300,
  "settings": {
    "radius": 3.1,
    "blur": 6.2,
    "step": 0.1,
    "gradient": {
      "0.4":"orange",
      "1":"red"
    },
    "canvasResolutionFactor": 0.1
  }
}

For example, if you change the gradient with:

"gradient": {
  "0.4":"yellow",
  "0.6":"#fff",
  "0.7":"red",
  "0.8":"yellow",
  "1":"red"
}

(Screenshot: resulting gradient)

Use Environment Variables

Some of the entries in config.json can be overwritten using environment variables. Currently these are the PORTS object and the MONGODB_URL setting. See the file .env.example for an example of how to set them. Make sure to use the exact same names, or opendatacam will fall back to config.json and, if that is not present, to the general defaults.

Without Docker

If you are running opendatacam without docker, you can set these as environment variables in the shell before starting the app.
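
A minimal sketch (PORT_APP is the same variable as in the docker-compose examples below; the value is illustrative):

# Export the variable, then start the app
export PORT_APP=8080
npm run start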

With docker-compose

If you are running opendatacam with docker-compose.yml, you can set them in an environment section of the opendatacam service, as shown below.

services:
  opendatacam:
    environment:
      - PORT_APP=8080

You can also declare these environment variables in a .env file in the folder where the docker-compose command is invoked. They will then be available within the docker-compose.yml file, and you can pass them through to the container as shown below.

The .env file.

PORT_APP=8080

The docker-compose.yml file.

services:
  opendatacam:
    environment:
      - PORT_APP

There is also the possibility to have the .env file in the directory where the docker-compose command is executed and to add an env_file section to the docker-compose.yml configuration.

services:
  opendatacam:
    env_file:
      - ./.env

You can also set these variables inline when invoking the docker-compose command, for example: PORT_APP=8080 docker-compose up.

GPS

OpenDataCam can obtain the current position of the tracker via GPS and persist it alongside the other counter data. This is useful in situations where OpenDataCam is mobile, e.g. used as a dashcam or mounted to a drone.

Requirements

To receive a GPS position, a GPS-enabled device must be connected to your Jetson or PC. See GPSD’s list of supported devices.

Additionally you will need GPSD running. GPSD can either run in Docker or as a system service.

Running GPSD in Docker

The easiest way to run GPSD is through Docker using the opensourcefoundries/gpsd image, by adding the GPSD service to your docker-compose.yml the following way.

services:
  # Add the following service to your docker compose file
  gpsd:
    image: opensourcefoundries/gpsd
    # List your GPS device here and make sure that it matches the device in the entrypoint line
    devices:
      - /dev/ttyACM0
    entrypoint: ["/bin/sh", "-c", "/sbin/syslogd -S -O - -n & exec /usr/sbin/gpsd -N -n -G /dev/ttyACM0", "--"]
    ports:
      - "2947:2947"
    restart: always

If GPSD has been added to your Docker compose file, change the GPS hostname setting in config.json to the name of your GPSD service. In the example above this would be "hostname": "gpsd".
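
A sketch of just the relevant fields (the full list of GPS options is shown in the Configuration section below):

"GPS": {
  "enabled": true,
  "hostname": "gpsd",
  "port": 2947
}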

Alternatively, if you don’t run OpenDataCam in Docker, you can just start GPSD on its own via the following command:

# This assumes your device is /dev/ttyACM0. Please change according to your setup.
GPS_DEVICE=/dev/ttyACM0; docker run -d -p 2947:2947 --device=$GPS_DEVICE opensourcefoundries/gpsd $GPS_DEVICE

Running GPSD as a system service

Please read your operating system documentation.

Configuration

To enable GPS, add the following section to your config.json:

"GPS": {
  "enabled": true,
  "port": 2947,
  "hostname": "localhost",
  "signalLossTimeoutSeconds": 60,
  "csvExportOpenStreetMapsUrl": true
}
