Testing and development
You can point your device at a prerecorded video playing on a separate monitor to test some features that rely only on visual detection and classification.
You must use a physical Android device for all development and testing with Vision-related products. You can use the camera built into the device or an external camera connected to a physical device. You cannot use an emulated Android device in Android Studio.
You can use prerecorded videos from a dashboard-mounted camera to test some features like detections, segmentations, and sign classifications. You can use any dashboard camera video (for example, from YouTube), or record your own video to make sure it captures the local traffic signs and road markings important to your application.
For the best results when recording your own videos, you should use the same in-car setup described in the Requirements and Device setup sections of this documentation. Some tips for recording a video:
- The Vision SDK works best under good lighting conditions. Because lighting may affect the reliability of certain features, collect video in every lighting condition you'd like to test.
- Plan your route ahead of time to include a diversity of driving situations that may be important to your application (roads with different lane configurations, various speed limits, encounters with pedestrians and cyclists, etc.).
After you have selected or recorded a video, you will need to set up your physical development environment. You will need:
- A physical device with a built-in camera (or set up an external video source following the code example)
- A monitor to play the video on
- A method for positioning the device to point at the video
Position the device so the camera is pointed at the video. The video should fill the entire screen of your device.
This approach to development can help you test some features of the SDK, but not all of them, because some features rely on GPS and other sensors.
Examples of features that can be tested using this setup:
- Road sign classification to determine the `signType` for road signs that appear in the video.
- Lane detection.
- Vehicle, pedestrian, and bicycle detection, limited to the 2D screen locations of objects.
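To illustrate the last point, an application typically filters each frame's detections down to the object classes it cares about and works with their 2D screen-space bounding boxes. The `Detection` and `DetectedClass` types below are simplified stand-ins for the SDK's own classes, not the real API; this is only a sketch of the filtering pattern.

```java
import java.util.ArrayList;
import java.util.List;

public class DetectionFilter {
    // Simplified stand-ins for SDK types: the real Vision SDK delivers
    // detections with a class label and a 2D bounding box on screen.
    enum DetectedClass { CAR, PERSON, BICYCLE, SIGN }

    static class Detection {
        final DetectedClass detectedClass;
        final double x, y, width, height; // 2D screen-space bounding box

        Detection(DetectedClass detectedClass, double x, double y, double width, double height) {
            this.detectedClass = detectedClass;
            this.x = x; this.y = y; this.width = width; this.height = height;
        }
    }

    // Keep only the classes this monitor-based setup can exercise:
    // vehicles, pedestrians, and bicycles (2D screen locations only).
    static List<Detection> trafficParticipants(List<Detection> all) {
        List<Detection> out = new ArrayList<>();
        for (Detection d : all) {
            if (d.detectedClass == DetectedClass.CAR
                    || d.detectedClass == DetectedClass.PERSON
                    || d.detectedClass == DetectedClass.BICYCLE) {
                out.add(d);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Detection> frame = new ArrayList<>();
        frame.add(new Detection(DetectedClass.CAR, 10, 20, 100, 60));
        frame.add(new Detection(DetectedClass.SIGN, 200, 5, 30, 30));
        frame.add(new Detection(DetectedClass.PERSON, 50, 40, 20, 50));
        System.out.println(trafficParticipants(frame).size()); // prints 2
    }
}
```

Note that only the screen-space box is available in this setup; anything requiring real-world distance belongs to the "cannot be tested" category below.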
Examples of features that cannot be tested using this setup:
- AR Navigation cannot be tested using this setup because it requires GPS data. Using a simple simulated location is not a workable alternative because the simulated location would need to match the location shown in the video.
- Vision Safety cannot be tested using this setup because Vision Safety requires the current vehicle speed and the ability to estimate distance for objects in view.
To record a session including video, GPS, sensors, and other necessary data, you'll need to:
- Prepare a Vision application and load it on your device.
- Use the `VisionManager.startRecording` method to start session recording, then stop it with `VisionManager.stopRecording`:
```java
VisionManager.start();

String path = "/path/to/recorded/session/";
VisionManager.startRecording(path);
// some code during session is running
VisionManager.stopRecording();

// record another session to a different directory
path = "/path/to/recorded/session2/";
VisionManager.startRecording(path);
// some code during session is running
VisionManager.stopRecording();

VisionManager.stop();
```
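Because each session must be recorded into its own directory, it can help to derive a fresh directory path per session instead of hard-coding two paths. The `sessionDir` helper below is a hypothetical convenience, not part of the SDK; it only builds the path string you would pass to `VisionManager.startRecording`.

```java
import java.nio.file.Paths;

public class SessionPaths {
    // Build a distinct directory path for each recorded session,
    // e.g. /sessions/session-1, /sessions/session-2, ...
    static String sessionDir(String baseDir, int sessionIndex) {
        return Paths.get(baseDir, "session-" + sessionIndex).toString();
    }

    public static void main(String[] args) {
        // Each recording would get its own directory before calling
        // VisionManager.startRecording(path) with it.
        for (int i = 1; i <= 2; i++) {
            System.out.println(sessionDir("/sessions", i));
        }
    }
}
```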
After you've recorded a session, you can replay it using `VisionReplayManager`, which replaces `VisionManager` in the replay scenario. Provide the path to the recorded session to `VisionReplayManager.create`:
```java
String path = "/path/to/recorded/session/";
VisionReplayManager.create(path);
VisionReplayManager.start();
```
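Since `VisionReplayManager.create` expects a path to a previously recorded session, a simple guard before creating the replay manager can produce a clearer error than a failure deep inside the SDK. The `isReplayable` check below is an illustrative helper, not an SDK method; it only verifies that the directory exists.

```java
import java.nio.file.Files;
import java.nio.file.Paths;

public class ReplayPrecheck {
    // Hypothetical guard: verify a recorded-session directory exists
    // before handing its path to VisionReplayManager.create(path).
    static boolean isReplayable(String path) {
        return Files.isDirectory(Paths.get(path));
    }

    public static void main(String[] args) {
        String session = "/path/to/recorded/session/";
        if (isReplayable(session)) {
            System.out.println("Session directory found; safe to create the replay manager.");
        } else {
            System.out.println("No recorded session at: " + session);
        }
    }
}
```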