Commit 4f65e78

Merge branch 'release_v2.22.0.0'

themarpe committed Jun 13, 2023
2 parents c3d9d7b + 21463dd
Showing 76 changed files with 995 additions and 251 deletions.
3 changes: 3 additions & 0 deletions CMakeLists.txt
@@ -127,6 +127,8 @@ pybind11_add_module(${TARGET_NAME}
src/pipeline/node/AprilTagBindings.cpp
src/pipeline/node/DetectionParserBindings.cpp
src/pipeline/node/WarpBindings.cpp
+src/pipeline/node/UVCBindings.cpp
+src/pipeline/node/ToFBindings.cpp

src/pipeline/datatype/ADatatypeBindings.cpp
src/pipeline/datatype/AprilTagConfigBindings.cpp
@@ -135,6 +137,7 @@ pybind11_add_module(${TARGET_NAME}
src/pipeline/datatype/CameraControlBindings.cpp
src/pipeline/datatype/EdgeDetectorConfigBindings.cpp
src/pipeline/datatype/FeatureTrackerConfigBindings.cpp
+src/pipeline/datatype/ToFConfigBindings.cpp
src/pipeline/datatype/ImageManipConfigBindings.cpp
src/pipeline/datatype/ImgDetectionsBindings.cpp
src/pipeline/datatype/ImgFrameBindings.cpp
2 changes: 1 addition & 1 deletion depthai-core
Submodule depthai-core updated 103 files
4 changes: 2 additions & 2 deletions docs/source/components/nodes/color_camera.rst
@@ -87,7 +87,7 @@ Usage
pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
-cam.setBoardSocket(dai.CameraBoardSocket.RGB)
+cam.setBoardSocket(dai.CameraBoardSocket.CAM_A)
cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
cam.setInterleaved(False)
cam.setColorOrder(dai.ColorCameraProperties.ColorOrder.RGB)
@@ -97,7 +97,7 @@ Usage
dai::Pipeline pipeline;
auto cam = pipeline.create<dai::node::ColorCamera>();
cam->setPreviewSize(300, 300);
-cam->setBoardSocket(dai::CameraBoardSocket::RGB);
+cam->setBoardSocket(dai::CameraBoardSocket::CAM_A);
cam->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
cam->setInterleaved(false);
cam->setColorOrder(dai::ColorCameraProperties::ColorOrder::RGB);
4 changes: 2 additions & 2 deletions docs/source/components/nodes/mono_camera.rst
@@ -48,14 +48,14 @@ Usage

pipeline = dai.Pipeline()
mono = pipeline.create(dai.node.MonoCamera)
-mono.setBoardSocket(dai.CameraBoardSocket.RIGHT)
+mono.setCamera("right")
mono.setResolution(dai.MonoCameraProperties.SensorResolution.THE_720_P)

.. code-tab:: c++

dai::Pipeline pipeline;
auto mono = pipeline.create<dai::node::MonoCamera>();
-mono->setBoardSocket(dai::CameraBoardSocket::RIGHT);
+mono->setCamera("right");
mono->setResolution(dai::MonoCameraProperties::SensorResolution::THE_720_P);

Examples of functionality
215 changes: 215 additions & 0 deletions docs/source/components/nodes/stereo_depth.rst
@@ -288,6 +288,221 @@ as:
For the final disparity map, filtering is applied based on the confidence threshold value: pixels whose confidence score is larger than the threshold are invalidated, i.e. their disparity value is set to zero. You can set the confidence threshold with :code:`stereo.initialConfig.setConfidenceThreshold()`.
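
As a minimal sketch (node creation only; linking and the rest of the pipeline are omitted), the threshold could be set like this:

.. code-block:: python

    import depthai as dai

    pipeline = dai.Pipeline()
    stereo = pipeline.create(dai.node.StereoDepth)

    # Confidence is 0..255; pixels with a score *above* the threshold are
    # invalidated (disparity set to 0), so lower values are stricter
    stereo.initialConfig.setConfidenceThreshold(200)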

Calculate depth using disparity map
===================================

Disparity and depth are inversely related. As disparity decreases, depth increases non-linearly, depending on baseline and focal length: when the disparity is close to zero, a small change in disparity produces a large change in depth; when the disparity is large, even a large change in disparity barely affects the depth.

With this in mind, depth can be calculated using the following formula:

.. code-block:: python

    depth = focal_length_in_pixels * baseline / disparity_in_pixels

where ``baseline`` is the distance between the two mono cameras. Note that baseline and depth are expressed in the same unit.

To get focal length in pixels, you can :ref:`read camera calibration <Calibration Reader>`, as focal length in pixels is
written in camera intrinsics (``intrinsics[0][0]``):

.. code-block:: python

    import depthai as dai

    with dai.Device() as device:
        calibData = device.readCalibration()
        intrinsics = calibData.getCameraIntrinsics(dai.CameraBoardSocket.CAM_C)
        print('Right mono camera focal length in pixels:', intrinsics[0][0])

Here's the theoretical calculation of the focal length in pixels:

.. code-block:: python

    import math

    # focal_length_in_pixels = image_width_in_pixels * 0.5 / tan(HFOV * 0.5 * pi / 180)

    # With 400P mono camera resolution, where HFOV = 71.9 degrees
    focal_length_in_pixels = 640 * 0.5 / math.tan(71.9 * 0.5 * math.pi / 180)   # ~441.25

    # With 800P mono camera resolution, where HFOV = 71.9 degrees
    focal_length_in_pixels = 1280 * 0.5 / math.tan(71.9 * 0.5 * math.pi / 180)  # ~882.5

Examples of calculating the depth value, using the OAK-D (7.5cm baseline):

.. code-block:: python

    # For OAK-D @ 400P mono cameras and a disparity of e.g. 50 pixels
    depth = 441.25 * 7.5 / 50   # = 66.19 cm

    # For OAK-D @ 800P mono cameras and a disparity of e.g. 10 pixels
    depth = 882.5 * 7.5 / 10    # = 661.88 cm

Note that the disparity depth data is stored as :code:`uint16`, where 0 is a special value meaning the distance is unknown.
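
The conversion from such a disparity frame to depth can be done on the host. Below is a hedged sketch (not part of the docs above) using NumPy; ``focal_length_in_pixels`` and ``baseline_cm`` are the OAK-D 800P values from the examples above, and ``disparity_frame`` is assumed to be the ``uint16`` disparity array:

.. code-block:: python

    import numpy as np

    focal_length_in_pixels = 882.5  # OAK-D @ 800P, from the calculation above
    baseline_cm = 7.5               # OAK-D baseline

    def disparity_to_depth_cm(disparity_frame: np.ndarray) -> np.ndarray:
        disparity = disparity_frame.astype(np.float32)
        depth = np.zeros_like(disparity)
        valid = disparity > 0  # 0 is the special "distance unknown" value
        depth[valid] = focal_length_in_pixels * baseline_cm / disparity[valid]
        return depth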

Min stereo depth distance
=========================

If the depth results for close-in objects look weird, this is likely because they are below the minimum depth-perception distance of the device.

To calculate this minimum distance, use the :ref:`depth formula <Calculate depth using disparity map>` and choose the maximum value for the disparity_in_pixels parameter (keep in mind that disparity and depth are inversely related, so the maximum disparity yields the smallest depth).

For example, the OAK-D has a baseline of **7.5cm**, a focal_length_in_pixels of **882.5 pixels** (at 800P), and a default maximum disparity_in_pixels of **95**. Using the :ref:`depth formula <Calculate depth using disparity map>` we get:

.. code-block:: python

    min_distance = 882.5 * 7.5 / 95   # focal_length_in_pixels * baseline / disparity_in_pixels = 69.67 cm

or roughly 70cm.

However, this distance can be halved (to around 35cm for the OAK-D) with the following options:

1. Changing the resolution to 640x400, instead of the standard 1280x800.

2. Enabling Extended Disparity.

Extended Disparity mode increases the number of disparity levels from the standard 96 to 191, thereby halving the minimum depth. It does so by computing the 96-pixel disparity search on both the original 1280x720 image and the downscaled 640x360 image, then merging the results into a 191-level disparity. For more information see the Extended Disparity tab in :ref:`this table <Currently configurable blocks>`.

Using the previous OAK-D example, disparity_in_pixels now becomes **190** and the minimum distance is:

.. code-block:: python

    min_distance = 882.5 * 7.5 / 190   # focal_length_in_pixels * baseline / disparity_in_pixels = 34.84 cm

or roughly 35cm.

.. note::

   Both options can be applied at once, which would set the minimum depth to 1/4 of the standard settings, but at such short distances the minimum depth becomes limited by the focal range: OAK-D mono cameras have a fixed focus distance of 19.6cm - infinity, so 19.6cm is the practical lower bound.

See `these examples <https://github.com/luxonis/depthai-experiments/tree/master/gen2-camera-demo#real-time-depth-from-depthai-stereo-pair>`__ for how to enable Extended Disparity.
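
A minimal sketch of both options above (camera/stereo linking omitted):

.. code-block:: python

    import depthai as dai

    pipeline = dai.Pipeline()
    monoLeft = pipeline.create(dai.node.MonoCamera)
    monoRight = pipeline.create(dai.node.MonoCamera)
    stereo = pipeline.create(dai.node.StereoDepth)

    # Option 1: lower the resolution (640x400 instead of 1280x800)
    monoLeft.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
    monoRight.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)

    # Option 2: extended disparity (96 -> 191 levels), halving the minimum depth
    stereo.setExtendedDisparity(True)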

Disparity shift to lower min depth perception
---------------------------------------------

Another option for perceiving a closer depth range is to use disparity shift. Disparity shift shifts the starting point
of the disparity search, which significantly decreases the maximum depth (MaxZ) perception, but also decreases the minimum depth (MinZ) perception.
Disparity shift can be combined with extended/subpixel/LR-check modes.

.. image:: https://user-images.githubusercontent.com/18037362/189375017-2fa137d2-ad6b-46de-8899-6304bbc6c9d7.png

**Left graph** shows min and max disparity and depth for OAK-D (7.5cm baseline, 800P resolution, ~70° HFOV) by default (disparity shift=0). See :ref:`Calculate depth using disparity map`.
Since the hardware (stereo block) has a fixed 95-pixel disparity search, DepthAI will search from 0 pixels (depth=INF) to 95 pixels (depth=71cm).

**Right graph** shows the same, but at disparity shift set to 30 pixels. This means that disparity search will be from 30 pixels (depth=2.2m) to 125 pixels (depth=50cm).
This also means that depth will be very accurate at the short range (**theoretically** below 5mm depth error).

**Limitations**:

- Because of the inverse relationship between disparity and depth, MaxZ will decrease much faster than MinZ as the disparity shift is increased. Therefore, it is **advised not to use a larger than necessary disparity shift**.
- Tradeoff in reducing the MinZ this way is that objects at **distances farther away than MaxZ will not be seen**.
- Because of the point above, **we only recommend using disparity shift when MaxZ is known**, such as having a depth camera mounted above a table pointing down at the table surface.
- The output disparity map is not shifted back; only the depth map accounts for the shift. So if the disparity shift is set to 50 and the reported disparity value is 90, the real disparity is 140.

**Compared to Extended disparity**, disparity shift:

- **(+)** Is faster, as it doesn't require an extra computation, which means there's also no extra latency
- **(-)** Reduces the MaxZ (significantly), while extended disparity only reduces MinZ.

Disparity shift can be combined with extended disparity.

.. doxygenfunction:: dai::StereoDepthConfig::setDisparityShift
:project: depthai-core
:no-link:
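
A minimal sketch of applying a disparity shift (the rest of the stereo setup is omitted):

.. code-block:: python

    import depthai as dai

    pipeline = dai.Pipeline()
    stereo = pipeline.create(dai.node.StereoDepth)

    # Shift the 95-pixel disparity search window by 30 pixels:
    # search becomes 30..125 px, lowering MinZ at the cost of MaxZ
    stereo.initialConfig.setDisparityShift(30)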

Max stereo depth distance
=========================

The maximum depth perception distance depends on the :ref:`accuracy of the depth perception <Depth perception accuracy>`. The formula used to calculate this distance is an approximation, but is as follows:

.. code-block:: python

    Dm = (baseline/2) * tan((90 - HFOV / HPixels) * pi/180)

Using this formula, the *theoretical* max distance for existing models is:

.. code-block:: python

    import math

    # For OAK-D (7.5cm baseline)
    Dm = (7.5 / 2) * math.tan((90 - 71.9 / 1280) * math.pi / 180)  # = 3825.03cm = 38.25 meters

    # For OAK-D-CM4 (9cm baseline)
    Dm = (9 / 2) * math.tan((90 - 71.9 / 1280) * math.pi / 180)    # = 4590.04cm = 45.9 meters

If greater precision for long-range measurements is required, consider enabling Subpixel Disparity or using a larger baseline distance between mono cameras. For a custom baseline, you could use an `OAK-FFC <https://docs.luxonis.com/projects/hardware/en/latest/pages/DM1090.html>`__ device or design your own baseboard PCB with the required baseline. For more information see Subpixel Disparity under the Stereo Mode tab in :ref:`this table <Currently configurable blocks>`.
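
As a minimal sketch (the rest of the pipeline is omitted), subpixel mode is enabled on the :code:`StereoDepth` node:

.. code-block:: python

    import depthai as dai

    pipeline = dai.Pipeline()
    stereo = pipeline.create(dai.node.StereoDepth)

    # Subpixel disparity adds fractional disparity steps, improving
    # long-range precision at the cost of extra computation
    stereo.setSubpixel(True)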

Depth perception accuracy
=========================

Disparity depth works by matching features from one image to the other, and its accuracy depends on multiple factors:

* Texture of objects / backgrounds

Backgrounds may interfere with object matching, since backgrounds are objects too; perfectly clean/blank surfaces offer no features to match, which makes depth perception less accurate on them. Disparity depth therefore works very well outdoors, where such surfaces are very rare - but they are relatively commonplace indoors (in clean buildings, at least).

* Lighting

If the illumination is low, the disparity map will be of low confidence, which will result in a noisy depth map.

* Baseline / distance to objects

A lower baseline enables depth detection at a closer distance, as long as the object is visible in both frames. However, it reduces accuracy at large distances, since fewer pixels represent the object and the disparity falls towards 0 much faster.
The common practice is therefore to adjust the baseline according to how far/close we want to be able to detect objects.

Limitation
==========

Since depth is calculated from disparity, which requires the pixels to overlap, there is inherently a vertical
band on the left side of the left mono camera and on the right side of the right mono camera where depth
cannot be calculated, since it is seen by only one camera. That band is marked with :code:`B`
in the following picture.

.. image:: https://user-images.githubusercontent.com/59799831/135310921-67726c28-07e7-4ffa-bc8d-74861049517e.png

Meaning of variables on the picture:

- ``BL [cm]`` - Baseline of stereo cameras.
- ``Dv [cm]`` - Minimum distance where both cameras see an object (thus where depth can be calculated).
- ``B [pixels]`` - Width of the band where depth cannot be calculated.
- ``W [pixels]`` - Width of the mono camera frame in pixels (number of horizontal pixels), also noted as :code:`HPixels` in other formulas.
- ``D [cm]`` - Distance from the **camera plane** to an object (see image :ref:`here <Measuring real-world object dimensions>`).
- ``F [cm]`` - Width of the image at distance ``D``.

.. image:: https://user-images.githubusercontent.com/59799831/135310972-c37ba40b-20ad-4967-92a7-c71078bcef99.png

With the use of the :code:`tan` function, the following formulas can be obtained:

- :code:`F = 2 * D * tan(HFOV/2)`
- :code:`Dv = (BL/2) * tan(90 - HFOV/2)`

In order to obtain :code:`B`, we can use the :code:`tan` function again (same as for :code:`F`), but this time
we must also multiply by the ratio between :code:`W` and :code:`F` to convert the result to pixels.
That gives the following formula:

.. code-block:: python

    B = 2 * Dv * tan(HFOV/2) * W / F
    B = 2 * Dv * tan(HFOV/2) * W / (2 * D * tan(HFOV/2))
    B = W * Dv / D  # pixels

Example: if we are using an OAK-D, which has an :code:`HFOV` of 72° and a baseline (:code:`BL`) of 7.5 cm, at :code:`640x400 (400P)` resolution (therefore :code:`W = 640`), and an object is :code:`D = 100` cm away, we can calculate :code:`B` in the following way:

.. code-block::

    Dv = 7.5 / 2 * tan(90 - 72/2) = 3.75 * tan(54°) = 5.16 cm
    B = 640 * 5.16 / 100 = 33 # pixels

Credit for the calculations and images goes to our community member gregflurry, who shared them in
`this <https://discuss.luxonis.com/d/339-naive-question-regarding-stereodepth-disparity-and-depth-outputs/7>`__
forum post.
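
The same arithmetic as a runnable sketch (plain Python, nothing device-specific):

.. code-block:: python

    import math

    HFOV = 72    # degrees, OAK-D horizontal field of view
    BL = 7.5     # cm, baseline
    W = 640      # pixels, mono frame width at 400P
    D = 100      # cm, distance from the camera plane to the object

    Dv = (BL / 2) * math.tan(math.radians(90 - HFOV / 2))  # ~5.16 cm
    B = W * Dv / D                                         # ~33 pixels
    print(f"Dv = {Dv:.2f} cm, B = {B:.0f} pixels")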

.. note::

   OAK-D-PRO includes both an IR dot projector and an IR LED, which enable operation in no light.
   The IR LED is used to illuminate the whole area (for mono/color frames), while the IR dot projector
   mainly aids accurate disparity matching - to get good-quality depth maps on blank surfaces as well.
   Outdoors, the IR laser dot projector is only relevant at night. For more information see the
   development progress `here <https://github.com/luxonis/depthai-hardware/issues/114>`__.
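
On devices that have them, the IR dot projector and flood LED can be controlled from the host. A hedged sketch, assuming the depthai device API for IR control (``setIrLaserDotProjectorBrightness`` / ``setIrFloodLightBrightness``, both taking a value in mA):

.. code-block:: python

    import depthai as dai

    pipeline = dai.Pipeline()
    # ... nodes would be created and linked here ...

    with dai.Device(pipeline) as device:
        device.setIrLaserDotProjectorBrightness(800)  # dot projector: denser stereo matching
        device.setIrFloodLightBrightness(500)         # flood LED: illuminates mono/color frames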

Measuring real-world object dimensions
======================================

2 changes: 1 addition & 1 deletion docs/source/samples/host_side/device_information.rst
@@ -24,7 +24,7 @@ Demo
Found device '192.168.33.192', MxId: '1844301011F4C51200', State: 'BOOTLOADER'
Booting the first available camera (1.3)...
-Available camera sensors: {<CameraBoardSocket.RIGHT: 2>: 'OV9282', <CameraBoardSocket.RGB: 0>: 'IMX378', <CameraBoardSocket.LEFT: 1>: 'OV9282'}
+Available camera sensors: {<CameraBoardSocket.CAM_C: 2>: 'OV9282', <CameraBoardSocket.CAM_A: 0>: 'IMX378', <CameraBoardSocket.CAM_B: 1>: 'OV9282'}
Product name: OAK-D Pro AF, board name DM9098
2 changes: 1 addition & 1 deletion examples/AprilTag/apriltag.py
@@ -19,7 +19,7 @@

# Properties
monoLeft.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
-monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
+monoLeft.setCamera("left")

aprilTag.initialConfig.setFamily(dai.AprilTagConfig.Family.TAG_36H11)

2 changes: 1 addition & 1 deletion examples/AprilTag/apriltag_rgb.py
@@ -20,7 +20,7 @@

# Properties
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
-camRgb.setBoardSocket(dai.CameraBoardSocket.RGB)
+camRgb.setBoardSocket(dai.CameraBoardSocket.CAM_A)

manip.initialConfig.setResize(480, 270)
manip.initialConfig.setFrameType(dai.ImgFrame.Type.GRAY8)
2 changes: 1 addition & 1 deletion examples/ColorCamera/rgb_isp_scale.py
@@ -13,7 +13,7 @@
xoutVideo.setStreamName("video")

# Properties
-camRgb.setBoardSocket(dai.CameraBoardSocket.RGB)
+camRgb.setBoardSocket(dai.CameraBoardSocket.CAM_A)
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_4_K)
camRgb.setIspScale(1, 2)
camRgb.setVideoSize(1920, 1080)
2 changes: 1 addition & 1 deletion examples/ColorCamera/rgb_undistort.py
@@ -3,7 +3,7 @@
import numpy as np

camRes = dai.ColorCameraProperties.SensorResolution.THE_1080_P
-camSocket = dai.CameraBoardSocket.RGB
+camSocket = dai.CameraBoardSocket.CAM_A
ispScale = (1,2)

def getMesh(calibData, ispSize):