Vision SDK for iOS
Beta

Current version: v0.9.0

  • AI and AR features for drivers that run on today’s mobile and embedded devices
  • Build augmented reality navigation with turn-by-turn directions and custom objects
  • Create custom alerts for speeding, lane departures, tailgating, and more
  • Neural networks run on-device: real-time performance without taxing your data plan

The Mapbox Vision SDK for iOS is a library for interpreting road scenes in real-time directly on iOS devices using the device’s built-in camera. Features include:

  • Classification and display of regulatory and warning signs
  • Object detection for vehicles, pedestrians, road signs, and traffic lights
  • Semantic segmentation of the roadway into 14 different classes
  • Augmented reality navigation with global coverage
  • Support for external cameras: Wi-Fi or wired connection

SDK structure

The Vision SDK for iOS is composed of three libraries you can interact with directly: Vision, Vision AR, and Vision Safety. All three depend on the VisionCore module.

Available libraries

  • Vision (Vision) is the primary library, needed for any application of Mapbox Vision. Its components enable camera configuration, display of classification, detection, and segmentation layers, lane feature extraction, and other interfaces. Vision accesses real-time inference running in Vision Core.
  • Vision AR (VisionAR) is an add-on module for Vision used to create customizable augmented reality experiences. It allows configuration of the user’s route visualization: lane material (shaders, textures), lane geometry, occlusion, custom objects, and more. Read more in the AR navigation guide.
  • Vision Safety (VisionSafety) is an add-on module for Vision used to create customizable alerts for speeding, nearby vehicles, cyclists, pedestrians, lane departures, and more. Read more in the Safety alerts guide.

Core logic

Vision Core is the core logic of the system, including all machine learning models. Importing any of the Vision-related modules listed above into your project automatically brings VisionCore along.

Requirements

The Vision SDK for iOS is written in Swift 4.2 and can be used with iOS 11.2 and higher on iPhone 6s or newer.

Use of the Vision SDK requires that the device be mounted with a clear view of the road ahead. We strongly recommend using a dashboard or windshield mount to keep your phone oriented correctly while you drive. We have tested a few options and have seen positive results with two mounts (option 1 and option 2).

Getting started

To set up the Vision SDK, you will need to download the SDK, install the frameworks relevant to your project, and complete a few configuration steps.

Download and install the SDK

You must download the relevant frameworks from vision.mapbox.com/install before continuing. You can download the frameworks directly or import them into your project with CocoaPods or Carthage. Either route requires that you be logged in to your Mapbox account.
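If you use CocoaPods, the import step typically reduces to a few Podfile entries. The sketch below is a minimal example, not an authoritative configuration: the pod names mirror the module names used in this guide and the target name is hypothetical, so verify the exact pod names and versions on the install page.

# Minimal Podfile sketch; verify pod names and versions at vision.mapbox.com/install
platform :ios, '11.2'

target 'MyVisionApp' do  # hypothetical target name
  use_frameworks!

  # Core Vision library (required)
  pod 'MapboxVision'

  # Optional add-on modules
  pod 'MapboxVisionAR'
  pod 'MapboxVisionSafety'
end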

SDK configuration

After adding the SDK to your project, complete the following configuration in Xcode.

Set your Mapbox access token

Mapbox APIs require a Mapbox account and access token.

  1. Get an access token from the Mapbox account page.
  2. In the project editor, select the application target, then go to the Info tab.
  3. Under the “Custom iOS Target Properties” section, set MGLMapboxAccessToken to your access token.

Configure permissions

  1. In order for the SDK to track the user’s location, set NSLocationWhenInUseUsageDescription and NSLocationAlwaysAndWhenInUseUsageDescription to a description of location usage.
  2. Set NSCameraUsageDescription to a description of camera usage.
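Both the access token and the permission steps come down to entries in your app’s Info.plist. Here is a minimal sketch: the key names are the ones given above, while the token value and the description strings are placeholders to replace with your own.

<!-- Info.plist sketch; replace placeholder values with your own -->
<key>MGLMapboxAccessToken</key>
<string>YOUR_MAPBOX_ACCESS_TOKEN</string>
<key>NSLocationWhenInUseUsageDescription</key>
<string>This app uses your location to power road-scene features.</string>
<key>NSLocationAlwaysAndWhenInUseUsageDescription</key>
<string>This app uses your location to power road-scene features.</string>
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to interpret the road ahead.</string>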

Set up the ViewController

Required: Import the relevant modules. Only MapboxVision itself is required; the AR and Safety modules are optional add-ons.

import MapboxVision
// OPTIONAL: include Vision AR functionality
import MapboxVisionAR
// OPTIONAL: include Vision Safety functionality
import MapboxVisionSafety

Required: Initialize a video source and create an instance of VisionManager with it.

let videoSource = CameraVideoSource()
let visionManager = VisionManager.create(videoSource: videoSource)

Optional: If you want to subscribe to AR events, create VisionARManager.

// Create AR module
// `self` should implement `VisionARManagerDelegate` protocol
let visionARManager = VisionARManager.create(visionManager: visionManager, delegate: self)

Optional: If you want to subscribe to Safety events, create VisionSafetyManager.

// Create Safety module
// `self` should implement `VisionSafetyManagerDelegate` protocol
let visionSafetyManager = VisionSafetyManager.create(visionManager: visionManager, delegate: self)

Required: Control event delivery with the VisionManager instance: start it to begin receiving events and stop it when you no longer need them.

videoSource.start()
// `self` should implement `VisionManagerDelegate` protocol
visionManager.start(delegate: self)

// ...and later, when you no longer need events:
visionManager.stop()

Required: Clean up resources when you no longer need them.

videoSource.stop()
visionManager.stop()

// AR and/or Safety should be destroyed first
visionARManager.destroy()
visionSafetyManager.destroy()

// Finally destroy instance of `VisionManager`
visionManager.destroy()
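Taken together, these calls map naturally onto a view controller’s lifecycle. The sketch below is a minimal arrangement using only the calls shown above; it assumes that `VisionManagerDelegate`’s callbacks have default implementations (otherwise, implement the ones your app needs), and the choice of lifecycle hooks is one reasonable option rather than a requirement.

import UIKit
import MapboxVision

class VisionViewController: UIViewController, VisionManagerDelegate {
    private var videoSource: CameraVideoSource!
    private var visionManager: VisionManager!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Create the video source and the VisionManager that consumes it
        videoSource = CameraVideoSource()
        visionManager = VisionManager.create(videoSource: videoSource)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Begin capturing frames and delivering events to `self`
        videoSource.start()
        visionManager.start(delegate: self)
    }

    override func viewDidDisappear(_ animated: Bool) {
        super.viewDidDisappear(animated)
        // Stop event delivery when the screen goes away
        visionManager.stop()
        videoSource.stop()
    }

    deinit {
        // Destroy any AR or Safety managers first if you created them,
        // then destroy the `VisionManager` itself
        visionManager.destroy()
    }
}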

Device setup

After installing the framework, you will need to set up the device in the vehicle. Some things to consider when choosing and setting up a mount:

  • Generally, shorter mounts vibrate less. Mounting to your windshield or directly to the dashboard are both options.
  • Place the phone near or behind your rearview mirror. Note that your local jurisdiction may have limits on where mounts may be placed.
  • Make sure the phone’s camera view is unobstructed (you will be able to tell with any of the video screens open).

Testing and development

Read more about setting up your development environment for testing the capabilities of the Vision SDK in the Testing and development guide.

Conditions

  • Pricing: For details on pricing, read the Vision FAQ.
  • Attribution: While the Vision SDK is using the camera, you must display the Mapbox watermark on screen. Read more about attribution requirements in our terms of service.