
Performance Statistics

Maps SDK v11 introduces several new tools aimed at monitoring and enhancing the quality of the map rendering experience. Notably, the performance statistics API has been added to provide real-time information about the resource usage and processing duration of various rendering units.

These statistics include, among others, the rendering durations of individual layers and the overall memory consumed by textures and vertices. These values help identify rendering bottlenecks and prevent performance regressions across SDK versions and style updates.

Note

The PerformanceStatistics API is an experimental feature of Maps SDK v11.

Getting started

Keep in mind that performance monitoring can introduce a slight overhead to the map rendering process, so it's advisable to limit its use to development purposes. By default, all monitoring probes are switched off, and the extent of the overhead varies with the type of samplers used and the frequency of reporting.

To enable performance monitoring, include any sampler in the performance statistics configuration options and provide it to the map instance. You can find practical examples of this in the Maps SDK repositories.

The collected statistics can be observed using the PerformanceStatisticsCallback function, which is invoked after the configured collectionDurationMillis has elapsed. The statistics summarize all the frames monitored within the configured sample duration. For optimal results and minimal impact on performance, it's advisable to set the collection duration to cover the entire testing process. For instance, if the test being observed lasts 2 minutes, collectionDurationMillis could be set to 120000 milliseconds so that a single PerformanceStatistics object is generated at the end of the session and can then be used to assess the test case.

let options = PerformanceStatisticsOptions([.perFrame, .cumulative], samplingDurationMillis: 5000)
Map()
    .collectPerformanceStatistics(options) { statistics in
        print(statistics)
    }
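
For a scripted test, the same call can be configured so that only a single summary is produced at the end of the run. The sketch below reuses the API shown above and only changes the sampling duration; the 120_000 value mirrors the two-minute example described earlier and is not a required setting.

// Collect one summary covering a full two-minute test session.
let sessionOptions = PerformanceStatisticsOptions(
    [.perFrame, .cumulative],
    samplingDurationMillis: 120_000
)

Map()
    .collectPerformanceStatistics(sessionOptions) { statistics in
        // Invoked once at the end of the session; log or archive the summary
        // so it can be compared against previous runs.
        print(statistics)
    }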

Format

In the initial release of v11, the performance statistics collection API is designated as experimental. This means that both the configuration parameters and the format of the returned values may change in future updates. During this experimental phase, we actively seek feedback from our customers to determine which parameters need more context and to consider adding more probes to our monitoring service.

In the current version, the API offers two types of samplers: one for cumulative metrics and another for per-frame statistics. The cumulative metrics provide insights into the state of the map, aiding in the identification of increases in resource usage. For example, we provide data about the bytes reserved for textures and vertices within the rendering session. An uptick in these values typically indicates the presence of more features to be rendered or an increase in the map's runtime content caching.

let stats = PerformanceStatistics(
    collectionDurationMillis: 10.0,
    mapRenderDurationStatistics: DurationStatistics(maxMillis: 1.0909, medianMillis: 0.932),
    cumulativeStatistics: CumulativeRenderingStatistics(
        drawCalls: nil,
        textureBytes: 149898414,
        vertexBytes: 24362720,
        graphicsPrograms: nil,
        graphicsProgramsCreationTimeMillis: nil
    ),
    perFrameStatistics: nil
)

The per-frame statistics mainly contain details about the rendering durations of different components. If the collection duration is configured to zero, this object holds values for each individual frame. A collection duration spanning more than one frame time presents an aggregation of those frames.

Within the per-frame statistics, you can find information on the duration of the entire rendering process, which is useful for estimating the maximum potential frame rate based on your map's configuration. Furthermore, we provide statistics about the rendering durations of individual layers. This feature proves useful in identifying potential bottlenecks within specific style properties.

let stats = PerformanceStatistics(
    collectionDurationMillis: 10.0,
    mapRenderDurationStatistics: DurationStatistics(maxMillis: 1.0909, medianMillis: 0.932),
    cumulativeStatistics: nil,
    perFrameStatistics: PerFrameRenderingStatistics(
        topRenderGroups: [
            GroupPerformanceStatistics(durationMillis: 0.119, name: "settlement-minor-label"),
            GroupPerformanceStatistics(durationMillis: 0.209, name: "country-label")
        ],
        topRenderLayers: [],
        shadowMapDurationStatistics: DurationStatistics(maxMillis: 1.0909, medianMillis: 0.932),
        uploadDurationStatistics: DurationStatistics(maxMillis: 1.0909, medianMillis: 0.932)
    )
)
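
The duration fields above can also be used for the frame-rate estimate mentioned earlier. The helper below is a minimal illustration rather than an SDK function; it simply inverts the median render duration reported in mapRenderDurationStatistics.

// Rough upper bound on the achievable frame rate, derived from the median
// map render duration. Illustrative calculation only, not an SDK API.
func estimatedMaxFrameRate(from statistics: PerformanceStatistics) -> Double {
    let medianMillis = statistics.mapRenderDurationStatistics.medianMillis
    guard medianMillis > 0 else { return .infinity }
    return 1000.0 / medianMillis
}

// With the example values above (medianMillis: 0.932) this yields roughly
// 1073 frames per second, well above any display refresh rate.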

Analyzing the results

In a development environment, it's more useful to focus on the percentage difference between values collected in different test runs than on the standalone values. For instance, it's valuable to examine the metrics before and after adding a new layer to the map's style. While the new layer is expected to increase resource usage, it's important to quantify this increase by comparing it against the existing style. This kind of analysis can guide decisions on whether the new layer should be filtered at specific zoom levels or whether certain properties should be adjusted to decrease its impact on performance.
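
One way to perform this kind of before/after comparison is sketched below. The helper is hypothetical and not part of the SDK; it only uses the duration fields shown in the earlier examples, and the same pattern applies to cumulative fields such as textureBytes and vertexBytes when the cumulative sampler is enabled.

// Hypothetical comparison of two collected summaries, e.g. a baseline style
// versus the same style with an extra layer added.
func percentChange(baseline: Double, candidate: Double) -> Double {
    guard baseline != 0 else { return 0 }
    return (candidate - baseline) / baseline * 100.0
}

func reportRenderDurationDelta(baseline: PerformanceStatistics, candidate: PerformanceStatistics) {
    let medianDelta = percentChange(
        baseline: baseline.mapRenderDurationStatistics.medianMillis,
        candidate: candidate.mapRenderDurationStatistics.medianMillis
    )
    let maxDelta = percentChange(
        baseline: baseline.mapRenderDurationStatistics.maxMillis,
        candidate: candidate.mapRenderDurationStatistics.maxMillis
    )
    print("median frame duration changed by \(medianDelta)%")
    print("max frame duration changed by \(maxDelta)%")
}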

Furthermore, it's worth remembering that the map's performance is influenced by various runtime conditions, with the camera configuration being a key factor. The volume of content to be loaded depends heavily on factors such as the center coordinate, zoom level, and pitch of the map. For example, let's examine the Standard style in the following three scenarios.

At lower zoom levels, the map is displayed as a globe, showing multiple countries within the viewport. Even though the covered physical area is large, there aren't many labels visible simultaneously. Based on the collected statistics, a single frame can be rendered in less than 1 millisecond, and the settlement-minor-label layer takes the most time, at less than 0.2 milliseconds. These results show that no further improvements are necessary at this zoom level.

Next, we can see a pitched map view near New York City. This view includes several symbol and line layers but does not involve 3D content. Given the increased number of features in this scenario, the statistics reveal that the median duration of a frame is 5.9 milliseconds, with the maximum frame duration reaching 12.8 milliseconds. The most time-consuming layer in an average frame is the natural-point-label layer, which takes 2.1 milliseconds. It's worth exploring adjustments to the layer's filters to reduce the number of features at this zoom level, which can enhance performance in this context.
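
As a rough illustration of such an adjustment, the snippet below raises the layer's minimum zoom instead of editing its filter, which is a coarser but simpler way to drop features from this view. It assumes a style whose layers are accessible for runtime styling and a mapView variable holding your MapView; treat it as a sketch rather than a recommended setting.

// Illustrative only: raise the minimum zoom of the layer identified above so
// it contributes fewer features in this pitched mid-zoom view.
try mapView.mapboxMap.setLayerProperty(
    for: "natural-point-label",
    property: "minzoom",
    value: 15
)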

Finally, we explore a scenario displaying the Standard style at street level. In this setup, there's a high density of 3D features with shadows, including building models, fill extrusions, and trees, in addition to line and symbol layers. It's important to remember that the collected performance metrics are influenced by the target hardware as well. To guarantee an optimal user experience, it's advisable to gauge performance on the oldest model supported by the software. By considering the minimum hardware requirements, it's possible to fine-tune the map through runtime configurations to reduce the volume of rendered content and improve the user experience on slower hardware.

Statistics collection for dynamic test cases

The MapRecorder utility is a valuable companion to the performance statistics API, allowing for the comparison of performance in dynamic use-cases. This is particularly useful in navigation software, where the camera often continuously tracks a location indicator or provides an overview of the route with rapid zoom level changes. MapRecorder is well-suited for capturing these scenarios, and the performance statistics API enables you to analyze rendering statistics during playback.

This setup can help uncover performance regressions in interactive use cases, which are typically challenging to reproduce manually. It's particularly beneficial to pay close attention to the performance of layers that receive frequent runtime updates, such as the location indicator puck or route lines with vanishing gradient effects. The frequency of updates during a navigation session can significantly influence the performance of these layers. During a zoom-out animation, some of these updates can be throttled for an optimal experience. The ideal update frequency can be determined by examining the visual impact during MapRecorder playback and by considering the performance effect in the collected metrics.
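
A minimal shape for such a measurement is sketched below. It reuses the statistics collection shown earlier and assumes the sampling duration is chosen to cover the entire replay; the MapRecorder playback call itself is left as a comment here, since the recorder API is covered in its own guide.

// Collect a single summary while a recorded navigation session is replayed.
let playbackOptions = PerformanceStatisticsOptions(
    [.perFrame, .cumulative],
    samplingDurationMillis: 60_000  // assumed length of the recorded session
)

Map()
    .collectPerformanceStatistics(playbackOptions) { statistics in
        // Compare against the summary from a previous build to spot regressions
        // in frequently updated layers such as the location puck or route lines.
        print(statistics)
    }

// ...start MapRecorder playback of the captured session here...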

RELATED
MapRecorder

Automate performance testing scenarios by recording user interaction.
