Getting Started#
This page shows how to start using the library: you will install the 3D+AI library on the Starter Kit and run it for the first time.
Download and install 3D+AI#
First things first, you’ll need to download the .deb package containing everything needed to use the library and run the demo app.
On your PC, download the latest installation package from the Releases section of the 3D+AI repository on GitHub.
Then copy the .deb to the device. You can use the scp command to copy the file from your PC to the Starter Kit, for example:
$ scp 3dai_1.0.0-r0_arm64.deb root@192.168.10.123:/home/root
Replace the IP address with the actual IP address of your Starter Kit. You can retrieve it by running the following command on the Starter Kit:
$ ip addr | grep inet
Once the .deb package is copied to the device, you can proceed with the installation:
$ dpkg -i 3dai_1.0.0-r0_arm64.deb
Before you continue, you need to accept the ULA (User License Agreement) and activate the time-limited trial.
Accept license and activate time-limited trial#
To accept the ULA and activate the time-limited trial you need to run the activation executable with the command:
$ 3dai_accept_ula_and_start_trial
You will be prompted with the following text:
Please read the following 3D+AI User License Agreement
Press 'enter' to continue...
Press Enter to scroll through and read the ULA. When you are finished, you will see the following prompt:
To accept 3D+AI User License Agreement, type 'yes' and press Enter:
To accept, type 'yes' and press Enter.
After that, you will be prompted to confirm the activation of the time-limited trial:
To activate the 3D+AI Trial license, type 'Yes I agree' and press Enter:
To start the time-limited trial, type 'Yes I agree' and press Enter. Make sure you are connected to the internet, since a connection is required to complete the license activation. This starts the activation process; if it finishes successfully, a "3D+AI Trial license activated" message is printed.
The command may occasionally fail on the first run and return an error; in that case, try again. If the error persists, get in touch with the Deep Vision team.
Note
An internet connection is required each time you run the library under the time-limited trial. This limitation applies only to the trial and is needed exclusively for license verification: no data is sent over the internet while the library is in use. The fully licensed 3D+AI library does not require an internet connection at all.
Run 3D+AI for the first time#
You are now ready to run the 3D+AI library for the first time.
Run the following command:
$ 3dai_demo --use-video --stereo-video /usr/share/3dai/demo_video.mp4 --calibration-path /usr/share/3dai/demo_video_calib.json --left-right --depth --point-cloud
This command runs a demo execution of the library on a recorded test video. You will see the rectified left and right video streams, the depth map, and two views of the point cloud: top and side. If you want to explore more of the demo functionalities, check Demo Application.
Workflow overview#
The 3D+AI library offers great flexibility of usage to the OEM. To simplify the first steps, however, this section sketches the most linear and common workflow, which covers most use cases.
The library does nothing by itself: it requires a main application that sets everything up (just once) and then handles the loop for the 3D reconstruction task.
The demo application launched in the previous paragraph provides a basic version of this workflow. You can use it as a starting point to understand how the library is used and as inspiration for your own optimized main application; this is exactly why the demo application is provided in source form.
The setup step mainly consists of:
connecting to the video source (e.g. a live feed from camera modules, a video stream from the network, or video files from storage),
loading the camera calibration parameters,
starting the 3D+AI library, which in turn prepares the hardware resources for the 3D reconstruction task; at this point you provide the 3D+AI setup parameters, for example: image resolution, type of 3D output, log level, etc.;
optionally, other setup actions according to the logic of your application.
Then the workflow enters a loop that consists of:
pre-processing: image grabbing, plus optional image resizing and rectification;
data input: send the images to 3D+AI for asynchronous execution of the 3D reconstruction, which performs image rectification (if necessary), stereo matching, and preparation of the 3D outputs (disparity, depth, and/or point cloud);
go to the next loop iteration.
The 3D outputs are returned asynchronously in a callback, where you can implement your preferred logic: launching your own 3D-vision tasks, showing results on a display, streaming data out, driving GPIOs to actuate controls, etc.
This is the process as implemented in the demo application, but nothing prevents an advanced user from setting up multi-threaded processing to optimize the performance of the above loop.
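The setup-once / loop / asynchronous-callback pattern described above can be sketched as follows. This is an illustrative Python sketch only: `StereoLibrary`, `submit`, and every other name here are hypothetical placeholders standing in for your main application's structure, not the actual 3D+AI API.

```python
# Hypothetical sketch of the workflow: setup once, submit frames in a loop,
# receive 3D outputs asynchronously in a callback. Not the real 3D+AI API.
import queue
import threading

class StereoLibrary:
    """Placeholder for the 3D reconstruction engine."""

    def __init__(self, calibration, on_output):
        self.calibration = calibration      # loaded camera calibration parameters
        self.on_output = on_output          # callback receiving the 3D outputs
        self._jobs = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()                # "starting the library" step

    def submit(self, left, right):
        # data input: returns immediately; reconstruction runs asynchronously
        self._jobs.put((left, right))

    def _run(self):
        while True:
            left, right = self._jobs.get()
            if left is None:                # sentinel: shut down the worker
                break
            # placeholder "3D reconstruction": a fake depth value per frame pair
            depth = abs(left - right)
            self.on_output({"depth": depth})

    def stop(self):
        self._jobs.put((None, None))
        self._worker.join()

results = []

# --- setup (done once) ---
calib = {"baseline_mm": 60}                 # would come from a calibration file
lib = StereoLibrary(calib, results.append)  # register the output callback

# --- processing loop ---
for left, right in [(10, 7), (5, 9)]:       # stands in for grabbed image pairs
    lib.submit(left, right)                 # asynchronous 3D reconstruction

lib.stop()
print(results)  # prints [{'depth': 3}, {'depth': 4}]
```

The key design point mirrored here is that `submit` never blocks on the reconstruction itself: results arrive through the callback, which leaves the main loop free to keep grabbing frames.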
What’s next#
At this point, it's time to dig deeper into 3D+AI. The next chapter, Demo Application, presents 3D+AI from the library user's perspective; the chapter Library Usage then gets into the details of the library from the developer's perspective.