MapboxSpeechSynthesizer
open class MapboxSpeechSynthesizer : NSObject, SpeechSynthesizing
extension MapboxSpeechSynthesizer: AVAudioPlayerDelegate
A SpeechSynthesizing implementation that uses the MapboxSpeech framework. It pre-caches audio for upcoming instructions.
-
Declaration
Swift
public weak var delegate: SpeechSynthesizingDelegate? -
Declaration
Swift
public var muted: Bool { get set } -
Declaration
Swift
public var volume: Float { get set } -
Declaration
Swift
public var locale: Locale? -
Number of upcoming Instructions to be pre-fetched. A higher number may help avoid cases where required vocalization data is not yet loaded, but it will also increase network consumption at the beginning of the route. Keep in mind that pre-fetched instructions are not guaranteed to be vocalized at all, due to re-routing or user actions. 0 effectively disables pre-fetching.
Declaration
Swift
public var stepsAheadToCache: UInt -
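As a sketch of tuning this property (the setup around `stepsAheadToCache` is assumed; an access token is presumed to be configured in Info.plist):

```swift
import MapboxNavigation

// Hypothetical setup: create a synthesizer and tune pre-fetching depth.
let synthesizer = MapboxSpeechSynthesizer()

// Pre-fetch audio for the next 3 spoken instructions. A larger value
// reduces the chance of missing audio at announcement time, at the
// cost of more network traffic when the route starts; 0 disables
// pre-fetching entirely.
synthesizer.stepsAheadToCache = 3
```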
An AVAudioPlayer through which spoken instructions are played.
Declaration
Swift
public var audioPlayer: AVAudioPlayer? -
Controls whether this speech synthesizer is allowed to manage the shared AVAudioSession. Set this field to false if you want to manage the session yourself, for example if your app has background music. Default value is true.
Declaration
Swift
public var managesAudioSession: Bool -
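A sketch of opting out of automatic session management, for an app that mixes its own audio (the `AVAudioSession` configuration shown is one possible choice, not prescribed by this API):

```swift
import MapboxNavigation
import AVFoundation

// Hypothetical setup: the app plays background music and wants to
// control the shared audio session itself.
let synthesizer = MapboxSpeechSynthesizer()
synthesizer.managesAudioSession = false

// The app is now responsible for configuring the shared session,
// for example ducking other audio while instructions play.
try? AVAudioSession.sharedInstance().setCategory(
    .playback,
    options: [.duckOthers, .mixWithOthers]
)
```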
Mapbox speech engine instance.
The speech synthesizer uses this object to convert instruction text to audio.
Declaration
Swift
public private(set) var remoteSpeechSynthesizer: SpeechSynthesizer { get }
-
Indicates whether the speech synthesizer is currently pronouncing an instruction.
Declaration
Swift
public var isSpeaking: Bool { get } -
Creates a new MapboxSpeechSynthesizer with a standard SpeechSynthesizer for converting text to audio.
Declaration
Swift
public init(accessToken: String? = nil, host: String? = nil)
Parameters
accessToken: A Mapbox access token used to authorize Mapbox Voice API requests. If an access token is not specified when initializing the speech synthesizer object, it should be specified in the MBXAccessToken key in the main application bundle’s Info.plist.
host: An optional hostname to the server API. The Mapbox Voice API endpoint is used by default.
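A minimal usage sketch of this initializer (the token string is a placeholder; passing nil for both parameters falls back to the Info.plist token and the default endpoint, per the parameter descriptions above):

```swift
import MapboxNavigation

// Hypothetical setup: explicit token, default Mapbox Voice API host.
let synthesizer = MapboxSpeechSynthesizer(
    accessToken: "<your Mapbox access token>", // placeholder value
    host: nil // nil selects the default Mapbox Voice API endpoint
)
synthesizer.muted = false
synthesizer.volume = 1.0
```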
-
Creates a new MapboxSpeechSynthesizer with a provided SpeechSynthesizer instance for converting text to audio.
Declaration
Swift
public init(remoteSpeechSynthesizer: SpeechSynthesizer)
Parameters
remoteSpeechSynthesizer: A custom SpeechSynthesizer used to provide voice data. -
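A sketch of injecting a pre-configured MapboxSpeech engine, for example to share one instance across components (the `SpeechSynthesizer(accessToken:)` call is an assumption about the MapboxSpeech initializer; the token is a placeholder):

```swift
import MapboxNavigation
import MapboxSpeech

// Hypothetical setup: reuse a single MapboxSpeech engine.
let remote = SpeechSynthesizer(accessToken: "<your Mapbox access token>")
let synthesizer = MapboxSpeechSynthesizer(remoteSpeechSynthesizer: remote)
```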
Declaration
Swift
open func prepareIncomingSpokenInstructions(_ instructions: [SpokenInstruction], locale: Locale? = nil) -
Declaration
Swift
open func speak(_ instruction: SpokenInstruction, during legProgress: RouteLegProgress, locale: Locale? = nil) -
Declaration
Swift
open func stopSpeaking() -
Declaration
Swift
open func interruptSpeaking() -
Vocalize the provided audio data.
This method is the final part of the vocalization pipeline. It passes audio data to the audio player. instruction is used mainly for logging and reference purposes; its text contents do not affect the vocalization, because the actual audio is passed via data.
Declaration
Swift
open func speak(_ instruction: SpokenInstruction, data: Data)
Parameters
instruction: The corresponding instruction to be vocalized. Used for logging and reference. Modifying its text or ssmlText does not affect vocalization.
data: Audio data, as provided by remoteSpeechSynthesizer, to be played.