MapboxSpeechSynthesizer
open class MapboxSpeechSynthesizer : NSObject, SpeechSynthesizing
extension MapboxSpeechSynthesizer: AVAudioPlayerDelegate
SpeechSynthesizing
implementation, using the MapboxSpeech
framework. Uses a pre-caching mechanism for upcoming instructions.
-
Declaration
Swift
public weak var delegate: SpeechSynthesizingDelegate?
-
Declaration
Swift
public var muted: Bool { get set }
-
Declaration
Swift
public var volume: Float { get set }
-
Declaration
Swift
public var locale: Locale?
-
Number of upcoming
Instructions
to be pre-fetched. A higher number may avoid cases where required vocalization data is not yet loaded, but will also increase network consumption at the beginning of the route. Keep in mind that pre-fetched instructions are not guaranteed to be vocalized at all, due to re-routing or user actions. 0 effectively disables pre-fetching.
Declaration
Swift
public var stepsAheadToCache: UInt
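A brief sketch of the trade-off this property controls (the synthesizer instance and the chosen values are illustrative, not prescribed defaults):

```swift
import MapboxNavigation

let synthesizer = MapboxSpeechSynthesizer()

// Pre-fetch the next three instructions: more up-front network traffic,
// but a lower chance of an instruction arriving with no audio loaded.
synthesizer.stepsAheadToCache = 3

// Setting it to 0 disables pre-fetching entirely;
// each instruction's audio is then fetched on demand.
// synthesizer.stepsAheadToCache = 0
```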
-
An
AVAudioPlayer
through which spoken instructions are played.
Declaration
Swift
public var audioPlayer: AVAudioPlayer?
-
Controls whether this speech synthesizer is allowed to manage the shared
AVAudioSession
. Set this field to false
if you want to manage the session yourself, for example if your app has background music. The default value is true
.
Declaration
Swift
public var managesAudioSession: Bool
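As a sketch of taking over session management yourself, for example in an app that plays background music (the category and options shown are an assumption about your app's needs, not a requirement):

```swift
import AVFoundation
import MapboxNavigation

let synthesizer = MapboxSpeechSynthesizer()
synthesizer.managesAudioSession = false

// The app is now responsible for the shared session, e.g. ducking
// other audio while spoken instructions play.
let session = AVAudioSession.sharedInstance()
try? session.setCategory(.playback, options: [.duckOthers])
try? session.setActive(true)
```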
-
Mapbox speech engine instance.
The speech synthesizer uses this object to convert instruction text to audio.
Declaration
Swift
public private(set) var remoteSpeechSynthesizer: SpeechSynthesizer { get }
-
Indicates whether the speech synthesizer is currently pronouncing an instruction.
Declaration
Swift
public var isSpeaking: Bool { get }
-
Creates a new
MapboxSpeechSynthesizer
with a standard SpeechSynthesizer
for converting text to audio.
Declaration
Swift
public init(accessToken: String? = nil, host: String? = nil)
Parameters
accessToken
A Mapbox access token used to authorize Mapbox Voice API requests. If an access token is not specified when initializing the speech synthesizer object, it should be specified in the
MBXAccessToken
key in the main application bundle’s Info.plist.
host
An optional hostname to the server API. The Mapbox Voice API endpoint is used by default.
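A minimal sketch of both initialization paths (the token string is a placeholder):

```swift
import MapboxNavigation

// Passing the token explicitly; the default Mapbox Voice API host is used.
let synthesizer = MapboxSpeechSynthesizer(accessToken: "<your Mapbox access token>")

// Relying on the MBXAccessToken entry in Info.plist instead.
let defaultSynthesizer = MapboxSpeechSynthesizer()
```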
-
Creates a new
MapboxSpeechSynthesizer
with the provided SpeechSynthesizer
instance for converting text to audio.
Declaration
Swift
public init(remoteSpeechSynthesizer: SpeechSynthesizer)
Parameters
remoteSpeechSynthesizer
Custom
SpeechSynthesizer
used to provide voice data.
-
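A sketch of injecting a custom-configured MapboxSpeech engine, assuming a self-hosted Voice API endpoint (the hostname is hypothetical):

```swift
import MapboxSpeech
import MapboxNavigation

// SpeechSynthesizer here is the MapboxSpeech engine, configured with
// a custom host instead of the default Mapbox Voice API endpoint.
let speech = SpeechSynthesizer(accessToken: "<your Mapbox access token>",
                               host: "voice.example.com")
let synthesizer = MapboxSpeechSynthesizer(remoteSpeechSynthesizer: speech)
```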
Declaration
Swift
open func prepareIncomingSpokenInstructions(_ instructions: [SpokenInstruction], locale: Locale? = nil)
-
Declaration
Swift
open func speak(_ instruction: SpokenInstruction, during legProgress: RouteLegProgress, locale: Locale? = nil)
-
Declaration
Swift
open func stopSpeaking()
-
Declaration
Swift
open func interruptSpeaking()
-
Vocalize the provided audio data.
This method is the final part of the vocalization pipeline. It passes audio data to the audio player.
instruction
is used mainly for logging and reference purposes. Its text contents do not affect the vocalization; the actual audio is passed via data
.
Declaration
Swift
open func speak(_ instruction: SpokenInstruction, data: Data)
Parameters
instruction
The corresponding instruction to be vocalized. Used for logging and reference. Modifying its
text
or ssmlText
does not affect vocalization.
data
audio data, as provided by
remoteSpeechSynthesizer
, to be played.
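Since only data affects playback, a subclass could override this method to observe or log the final vocalization step. A minimal sketch (the subclass name is illustrative):

```swift
import MapboxNavigation
import MapboxDirections

// Sketch: intercept the final vocalization step for logging.
// `instruction` is informational; only `data` determines what is played.
class LoggingSpeechSynthesizer: MapboxSpeechSynthesizer {
    override func speak(_ instruction: SpokenInstruction, data: Data) {
        print("Vocalizing \(data.count) bytes for: \(instruction.text)")
        super.speak(instruction, data: data)
    }
}
```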