Draw safety alerts using the Mapbox Vision SDK for iOS
Familiarity with Xcode and Swift, and completion of the Mapbox Vision SDK for iOS Install and configure guide.
The Mapbox Vision SDK for iOS is a library for interpreting road scenes in real time directly on iOS devices using the device’s built-in camera. The Vision SDK detects many types of objects including cars, people, road signs, and more.
In this tutorial, you'll learn how to use the Vision Safety module from the Mapbox Vision SDK and apply it to detect overspeeding and potential collisions with cars.
In the example used in this tutorial, all custom alerts use predefined images. In your own application, you can use your own images.
Getting started
Here are the resources that you need before getting started:
- An application that includes the Mapbox Vision SDK for iOS. Before starting this tutorial, go through the Install and configure steps in the Vision SDK for iOS documentation. This will walk you through how to install the Vision SDK and configure your application.
- A recorded session. This tutorial is based on a recorded driving session through a city and the replay capabilities of `MapboxVision`. You will use `VisionReplayManager` for the recorded session.
  - Check our Testing and development guide to familiarize yourself with the record and replay functionality.
  - You can download the recorded session used in this tutorial below.
To configure your application with a prerecorded session:
- Unzip the contents to a folder on your local machine.
- Go to your Xcode project's `Info.plist` and set `UIFileSharingEnabled` to `YES`, enabling file sharing through Finder.
- Install the app on the device (`⌘` + `R`).
- Connect your device, choose it in Finder under the `Locations` section, and select the `Files` tab.
- Drag and drop the folder with the recorded session onto your app. The session is now available in the `Documents` folder inside the app container.
- In code, use the `VisionReplayManager.create(recordPath:)` method to create an instance of `VisionReplayManager` by providing the path to the recorded session, as in the snippet below.
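In the complete example at the end of this tutorial, the recorded session is read from a `safety-alerts-drawing` folder inside the app's `Documents` directory:

```swift
// Documents directory path with files uploaded via Finder
let documentsPath =
    NSSearchPathForDirectoriesInDomains(.documentDirectory,
                                        .userDomainMask,
                                        true).first!
let path = documentsPath.appending("/safety-alerts-drawing")
// create VisionReplayManager with a path to the recorded session
visionManager = try? VisionReplayManager.create(recordPath: path)
```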
Configure Vision SDK lifecycle
In this tutorial, you'll use `VisionReplayManager` to run a prerecorded session (which includes video and telemetry data) and find speed signs and cars in the video. The `VisionReplayManager` class is the main object for registering for events from the Vision SDK and controlling their delivery. For production applications or for testing in a live environment, use `VisionManager` instead of `VisionReplayManager`. See the Next steps section for details.
To set up the Vision SDK:
- Create a `VisionReplayManager` instance with a recorded session path.
- Register its `delegate`.
- Create an instance of `VisionSafetyManager` configured with the `VisionReplayManager` instance.
- Register its `delegate` to receive safety-related events.
- Create a `VisionPresentationViewController` and configure it with the `VisionReplayManager` to display camera frames.
- Start delivering events by calling `start` on `VisionReplayManager`.
- Stop delivering events by calling `stop` on `VisionReplayManager`.
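Putting these steps together, the setup and lifecycle handling looks like the following excerpt from the complete example in the Final result section (here `path` is the recorded session path created earlier):

```swift
override func viewDidLoad() {
    super.viewDidLoad()
    // create VisionReplayManager with a path to the recorded session
    visionManager = try? VisionReplayManager.create(recordPath: path)
    // register its delegate
    visionManager.delegate = self
    // create VisionSafetyManager and register its delegate to receive safety related events
    visionSafetyManager = VisionSafetyManager.create(visionManager: visionManager)
    visionSafetyManager.delegate = self
    // configure the Vision view to display sample buffers from the video source
    visionViewController.set(visionManager: visionManager)
    addVisionView()
}

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    // start delivering events
    visionManager.start()
}

override func viewDidDisappear(_ animated: Bool) {
    super.viewDidDisappear(animated)
    // stop delivering events
    visionManager.stop()
}

deinit {
    // free up VisionSafetyManager's and VisionManager's resources
    visionSafetyManager.destroy()
    visionManager.destroy()
}

private func addVisionView() {
    addChild(visionViewController)
    view.addSubview(visionViewController.view)
    visionViewController.didMove(toParent: self)
}
```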
Set up views to draw alerts
Overspeeding alert
To show an overspeeding alert, you'll use a `UIView` element named `alertOverspeedingView`:
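```swift
private var alertOverspeedingView: UIView!

private func addOverspeedingAlertView() {
    // a hidden image view pinned to the top-trailing corner of the safe area;
    // it is shown only when overspeeding is detected
    alertOverspeedingView = UIImageView(image: UIImage(named: "alert"))
    alertOverspeedingView.isHidden = true
    alertOverspeedingView.translatesAutoresizingMaskIntoConstraints = false
    view.addSubview(alertOverspeedingView)
    NSLayoutConstraint.activate([
        alertOverspeedingView.topAnchor
            .constraint(equalToSystemSpacingBelow: view.safeAreaLayoutGuide.topAnchor, multiplier: 1),
        view.safeAreaLayoutGuide.trailingAnchor
            .constraint(equalToSystemSpacingAfter: alertOverspeedingView.trailingAnchor, multiplier: 1)
    ])
}
```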
Collision detection
You will draw a collision state using a custom `CollisionDetectionView` whose border color highlights a bounding box around potential collision objects:
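```swift
// Custom UIView to draw a red bounding box
class CollisionDetectionView: UIView {
    override init(frame: CGRect) {
        super.init(frame: frame)
        // Transparent view with a red border
        backgroundColor = .clear
        layer.borderWidth = 3
        layer.borderColor = UIColor.red.cgColor
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
```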
Having a dedicated class also makes it easy to find the detection views among the `UIView`'s subviews.
If you need to remove all `CollisionDetectionView` objects from their superview, you can use the following code:
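```swift
// remove `CollisionDetectionView` objects from the view
for subview in view.subviews {
    if subview.isKind(of: CollisionDetectionView.self) {
        subview.removeFromSuperview()
    }
}
```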
Respond to SDK events
Implement `VisionManagerDelegate`
All delegate methods are called on a background thread, so consider dispatching execution to `DispatchQueue.main` when working with UI elements, or to a queue of your choice for data synchronization.
You need to implement several methods from `VisionManagerDelegate` to get the necessary data.
Use `visionManager(_:didUpdateVehicleState:)` to get the latest vehicle state and save it for later use:
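```swift
func visionManager(_ visionManager: VisionManagerProtocol,
                   didUpdateVehicleState vehicleState: VehicleState) {
    // dispatch to the main queue in order to sync access to the `VehicleState` instance
    DispatchQueue.main.async { [weak self] in
        // save the latest state of the vehicle
        self?.vehicleState = vehicleState
    }
}
```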
Use `visionManagerDidCompleteUpdate(_:)` to know when the whole update iteration is completed, meaning all the data delivered through the delegate methods is in sync. When the update is completed, you can use that data to update the UI:
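```swift
func visionManagerDidCompleteUpdate(_ visionManager: VisionManagerProtocol) {
    // dispatch to the main queue in order to work with UIKit elements
    DispatchQueue.main.async { [weak self] in
        // update UI elements
        self?.updateOverspeedingDrawing()
        self?.updateCollisionDrawing()
    }
}
```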
Implement `VisionSafetyManagerDelegate`
All delegate methods are called on a background thread, so consider dispatching execution to `DispatchQueue.main` when working with UI elements, or to a queue of your choice for data synchronization.
You need to implement several methods from `VisionSafetyManagerDelegate`.
Use `visionSafetyManager(_:didUpdateRoadRestrictions:)` to know when road restrictions are updated. When `VisionSafetyManagerDelegate` provides new information about road restrictions (specifically, speed limits), store it in a local variable. You'll then use this information in the `visionManagerDidCompleteUpdate(_ visionManager: VisionManagerProtocol)` method to decide whether the speed limit has been exceeded:
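```swift
func visionSafetyManager(_ visionSafetyManager: VisionSafetyManager,
                         didUpdateRoadRestrictions roadRestrictions: RoadRestrictions) {
    // dispatch to the main queue in order to sync access to the `SpeedLimits` instance
    DispatchQueue.main.async { [weak self] in
        // save the currently applied speed limits
        self?.speedLimits = roadRestrictions.speedLimits
    }
}
```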
Use `visionSafetyManager(_:didUpdateCollisions:)` to get collision objects. In this method, filter the collision objects that are detected as cars and store them in a local array:
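```swift
func visionSafetyManager(_ visionSafetyManager: VisionSafetyManager,
                         didUpdateCollisions collisions: [CollisionObject]) {
    // we will draw collisions with cars only, so we need to filter `CollisionObject`s
    let carCollisions = collisions.filter { $0.object.detectionClass == .car }
    // dispatch to the main queue in order to sync access to the `[CollisionObject]` array
    DispatchQueue.main.async { [weak self] in
        // update the current collisions state
        self?.carCollisions = carCollisions
    }
}
```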
Handle overspeeding
As soon as all the necessary data is in sync within the `visionManagerDidCompleteUpdate(_ visionManager: VisionManagerProtocol)` method, you can draw the overspeeding alert:
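```swift
private func updateOverspeedingDrawing() {
    // when the update is completed, all the data has its most recent state
    guard let vehicle = vehicleState, let limits = speedLimits else { return }
    // decide whether the speed limit is exceeded by comparing it with the current speed
    let isOverSpeeding = vehicle.speed > limits.speedLimitRange.max
    alertOverspeedingView.isHidden = !isOverSpeeding
}
```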
Handle collisions
As soon as all the necessary data is in sync within the `visionManagerDidCompleteUpdate(_ visionManager: VisionManagerProtocol)` method, you can draw custom collision detection views.

Start by removing any existing `CollisionDetectionView` objects from the view:
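```swift
// remove `CollisionDetectionView` objects from the view
for subview in view.subviews {
    if subview.isKind(of: CollisionDetectionView.self) {
        subview.removeFromSuperview()
    }
}
```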
Then, iterating through the `CollisionObject` array, do the following steps to show a collision detection view for each item.

First, calculate the absolute coordinates of the bounding box, given the relative coordinates of the rectangle around the detected object in the camera frame:
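```swift
// `carCollision` is the current element of the `carCollisions` array
let relativeBBox = carCollision.lastDetection.boundingBox
let cameraFrameSize = carCollision.lastFrame.image.size.cgSize
// calculate absolute coordinates of the bounding box in the camera frame
let bboxInCameraFrameSpace = CGRect(x: relativeBBox.origin.x * cameraFrameSize.width,
                                    y: relativeBBox.origin.y * cameraFrameSize.height,
                                    width: relativeBBox.size.width * cameraFrameSize.width,
                                    height: relativeBBox.size.height * cameraFrameSize.height)
```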
Next, convert the coordinates of the bounding box from the camera frame space to the view space, preserving the aspect ratio:
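```swift
// construct the left-top and right-bottom corners of the bounding box
var leftTop = CGPoint(x: bboxInCameraFrameSpace.origin.x,
                      y: bboxInCameraFrameSpace.origin.y)
var rightBottom = CGPoint(x: bboxInCameraFrameSpace.maxX,
                          y: bboxInCameraFrameSpace.maxY)
// convert the points from the camera frame space into the view space
leftTop = leftTop.convertForAspectRatioFill(from: cameraFrameSize,
                                            to: view.bounds.size)
rightBottom = rightBottom.convertForAspectRatioFill(from: cameraFrameSize,
                                                    to: view.bounds.size)
```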
Finally, with the bounding box coordinates in the view space, draw a collision detection alert:
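```swift
// construct the bounding box in the view space
let bboxInViewSpace = CGRect(x: leftTop.x,
                             y: leftTop.y,
                             width: rightBottom.x - leftTop.x,
                             height: rightBottom.y - leftTop.y)
// draw a collision detection alert
let view = CollisionDetectionView(frame: bboxInViewSpace)
self.view.addSubview(view)
```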
Final result
Here is the complete code for the demo:
```swift
import MapboxVision
import MapboxVisionSafety
import UIKit

/**
 * "Safety alerts" example demonstrates how to utilize events from MapboxVisionSafetyManager
 * to alert a user about exceeding the allowed speed limit and potential collisions with other cars.
 */

// Custom UIView to draw a red bounding box
class CollisionDetectionView: UIView {
    override init(frame: CGRect) {
        super.init(frame: frame)
        // Transparent view with a red border
        backgroundColor = .clear
        layer.borderWidth = 3
        layer.borderColor = UIColor.red.cgColor
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}

class SafetyAlertsViewController: UIViewController {
    private var visionManager: VisionReplayManager!
    private var visionSafetyManager: VisionSafetyManager!

    private let visionViewController = VisionPresentationViewController()
    private var alertOverspeedingView: UIView!

    private var vehicleState: VehicleState?
    private var speedLimits: SpeedLimits?
    private var carCollisions = [CollisionObject]()

    override func viewDidLoad() {
        super.viewDidLoad()

        // Documents directory path with files uploaded via Finder
        let documentsPath =
            NSSearchPathForDirectoriesInDomains(.documentDirectory,
                                                .userDomainMask,
                                                true).first!
        let path = documentsPath.appending("/safety-alerts-drawing")

        // create VisionReplayManager with a path to the recorded session
        visionManager = try? VisionReplayManager.create(recordPath: path)
        // register its delegate
        visionManager.delegate = self

        // create VisionSafetyManager and register as its delegate to receive safety related events
        visionSafetyManager = VisionSafetyManager.create(visionManager: visionManager)
        // register its delegate
        visionSafetyManager.delegate = self

        // configure Vision view to display sample buffers from the video source
        visionViewController.set(visionManager: visionManager)
        // add Vision view as a child view
        addVisionView()

        // add view to draw overspeeding alert
        addOverspeedingAlertView()
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        visionManager.start()
    }

    override func viewDidDisappear(_ animated: Bool) {
        super.viewDidDisappear(animated)
        visionManager.stop()
    }

    deinit {
        // free up VisionSafetyManager's resources
        visionSafetyManager.destroy()
        // free up VisionManager's resources
        visionManager.destroy()
    }

    private func addVisionView() {
        addChild(visionViewController)
        view.addSubview(visionViewController.view)
        visionViewController.didMove(toParent: self)
    }

    private func addOverspeedingAlertView() {
        alertOverspeedingView = UIImageView(image: UIImage(named: "alert"))
        alertOverspeedingView.isHidden = true
        alertOverspeedingView.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(alertOverspeedingView)
        NSLayoutConstraint.activate([
            alertOverspeedingView.topAnchor
                .constraint(equalToSystemSpacingBelow: view.safeAreaLayoutGuide.topAnchor, multiplier: 1),
            view.safeAreaLayoutGuide.trailingAnchor
                .constraint(equalToSystemSpacingAfter: alertOverspeedingView.trailingAnchor, multiplier: 1)
        ])
    }

    // MARK: - Handle VisionSafety events

    private func updateCollisionDrawing() {
        // remove `CollisionDetectionView` objects from the view
        for subview in view.subviews {
            if subview.isKind(of: CollisionDetectionView.self) {
                subview.removeFromSuperview()
            }
        }

        // iterate the collection of `CollisionObject`s and draw each of them
        for carCollision in carCollisions {
            let relativeBBox = carCollision.lastDetection.boundingBox
            let cameraFrameSize = carCollision.lastFrame.image.size.cgSize

            // calculate absolute coordinates
            let bboxInCameraFrameSpace = CGRect(x: relativeBBox.origin.x * cameraFrameSize.width,
                                                y: relativeBBox.origin.y * cameraFrameSize.height,
                                                width: relativeBBox.size.width * cameraFrameSize.width,
                                                height: relativeBBox.size.height * cameraFrameSize.height)

            // at this stage, bbox has the coordinates in the camera frame space;
            // convert it to the view space preserving the aspect ratio

            // first, construct left-top and right-bottom coordinates of the bounding box
            var leftTop = CGPoint(x: bboxInCameraFrameSpace.origin.x,
                                  y: bboxInCameraFrameSpace.origin.y)
            var rightBottom = CGPoint(x: bboxInCameraFrameSpace.maxX,
                                      y: bboxInCameraFrameSpace.maxY)

            // then convert the points from the camera frame space into the view frame space
            leftTop = leftTop.convertForAspectRatioFill(from: cameraFrameSize,
                                                        to: view.bounds.size)
            rightBottom = rightBottom.convertForAspectRatioFill(from: cameraFrameSize,
                                                                to: view.bounds.size)

            // finally, construct a bounding box in the view frame space
            let bboxInViewSpace = CGRect(x: leftTop.x,
                                         y: leftTop.y,
                                         width: rightBottom.x - leftTop.x,
                                         height: rightBottom.y - leftTop.y)

            // draw a collision detection alert
            let view = CollisionDetectionView(frame: bboxInViewSpace)
            self.view.addSubview(view)
        }
    }

    private func updateOverspeedingDrawing() {
        // when the update is completed, all the data has its most recent state
        guard let vehicle = vehicleState, let limits = speedLimits else { return }

        // decide whether the speed limit is exceeded by comparing it with the current speed
        let isOverSpeeding = vehicle.speed > limits.speedLimitRange.max
        alertOverspeedingView.isHidden = !isOverSpeeding
    }
}

extension SafetyAlertsViewController: VisionManagerDelegate {
    func visionManager(_ visionManager: VisionManagerProtocol,
                       didUpdateVehicleState vehicleState: VehicleState) {
        // dispatch to the main queue in order to sync access to the `VehicleState` instance
        DispatchQueue.main.async { [weak self] in
            // save the latest state of the vehicle
            self?.vehicleState = vehicleState
        }
    }

    func visionManagerDidCompleteUpdate(_ visionManager: VisionManagerProtocol) {
        // dispatch to the main queue in order to work with UIKit elements
        DispatchQueue.main.async { [weak self] in
            // update UI elements
            self?.updateOverspeedingDrawing()
            self?.updateCollisionDrawing()
        }
    }
}

extension SafetyAlertsViewController: VisionSafetyManagerDelegate {
    func visionSafetyManager(_ visionSafetyManager: VisionSafetyManager,
                             didUpdateRoadRestrictions roadRestrictions: RoadRestrictions) {
        // dispatch to the main queue in order to sync access to the `SpeedLimits` instance
        DispatchQueue.main.async { [weak self] in
            // save the currently applied speed limits
            self?.speedLimits = roadRestrictions.speedLimits
        }
    }

    func visionSafetyManager(_ visionSafetyManager: VisionSafetyManager,
                             didUpdateCollisions collisions: [CollisionObject]) {
        // we will draw collisions with cars only, so we need to filter `CollisionObject`s
        let carCollisions = collisions.filter { $0.object.detectionClass == .car }

        // dispatch to the main queue in order to sync access to the `[CollisionObject]` array
        DispatchQueue.main.async { [weak self] in
            // update the current collisions state
            self?.carCollisions = carCollisions
        }
    }
}
```
Next steps
Use real-time data
When you're done testing, follow these steps to start working with real-time data.
- Change the type of the `visionManager` variable to `VisionManager`.
- Create and save a `CameraVideoSource` instance:

  ```swift
  // create a video source obtaining buffers from the camera module
  cameraVideoSource = CameraVideoSource()
  ```

- Create `VisionManager` with the `cameraVideoSource` you created above:

  ```swift
  // create VisionManager with the video source
  visionManager = VisionManager.create(videoSource: cameraVideoSource!)
  ```

- Start `CameraVideoSource` along with `VisionManager` in `viewWillAppear(_:)`, as sketched below.
- Stop `CameraVideoSource` along with `VisionManager` in `viewDidDisappear(_:)`.
- Update the drawing assets and logic in the `updateCollisionDrawing` and `updateOverspeedingDrawing` methods if needed.
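For reference, here is a minimal sketch of how the lifecycle methods might look for a live session, assuming `cameraVideoSource` is the stored `CameraVideoSource` instance created above:

```swift
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    // start the camera video source together with VisionManager
    cameraVideoSource?.start()
    visionManager.start()
}

override func viewDidDisappear(_ animated: Bool) {
    super.viewDidDisappear(animated)
    // stop event delivery and the camera video source
    visionManager.stop()
    cameraVideoSource?.stop()
}
```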