This document provides an overview of the changes for ViSense CrowdDynamics v1.1.0, released on April 18th, 2016.
The following sections describe the most notable changes in this release (aside from bug fixes and minor improvements). The complete changelog is listed at the end of this document.
New metric calculation: speed
A new metric calculation has been added that computes the speed of each passer-by. The speed graph shows the distribution of the speeds of all objects (passers-by) registered since the page was loaded. Options are available to filter on minimum/maximum speed, to select the speed step size (bin width) of each column, and to display either the total number of objects or the relative percentage.
The speed graph only displays the speed of objects that pass by after the web page has been loaded; no historical data is shown. This keeps the network load low when displaying live speed statistics. When using the WebSocket API for object speed data, the speed can be retrieved for any object whose data is stored in the system, including historical data.
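Conceptually, the speed graph is a histogram over the observed speeds. The sketch below is purely illustrative (not the actual ViSense implementation) and shows how such a distribution could be computed from a list of observed speeds, with the minimum/maximum filter, step size, and count-versus-percentage options described above:

```python
from collections import Counter

def speed_distribution(speeds, min_speed=0.0, max_speed=10.0,
                       step=0.5, relative=False):
    """Bin observed speeds (e.g. in m/s) into columns of width `step`.

    Speeds outside [min_speed, max_speed] are dropped, mirroring the
    min/max filter options of the live speed graph. With relative=True
    the columns hold percentages instead of object counts.
    """
    filtered = [s for s in speeds if min_speed <= s <= max_speed]
    bins = Counter(int((s - min_speed) // step) for s in filtered)
    total = len(filtered)
    result = {}
    for idx in sorted(bins):
        lower = min_speed + idx * step
        value = bins[idx] / total * 100 if relative else bins[idx]
        result[(lower, lower + step)] = value
    return result

# Four pedestrians plus one fast outlier that the max-speed filter removes.
print(speed_distribution([1.2, 1.4, 1.3, 2.1, 12.0], max_speed=5.0, step=1.0))
# → {(1.0, 2.0): 3, (2.0, 3.0): 1}
```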
Please note that speed calculation requires a calibrated camera installation, as the system needs information about real-world distances. Camera calibration is performed manually from our office; please contact us for more information and pricing. If camera calibration is not activated, speed calculation and the live speed graph are unavailable.
Special message for system integrators
If you want to integrate the speed calculations into your own system, you can retrieve the speed information per registered object using the following WebSocket API URI:
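The exact URI and message format are documented on your ViSense box. As a hedged illustration of handling per-object speed data on the client side, the sketch below assumes a JSON message with hypothetical field names (`object_id`, `speed`, `timestamp`); the real payload may differ:

```python
import json

# Hypothetical example of one per-object speed message as it might arrive
# over the WebSocket connection. Field names are assumptions, not the
# documented ViSense format.
raw_message = '{"object_id": 42, "speed": 1.35, "timestamp": "2016-04-18T10:00:00Z"}'

def handle_speed_message(raw):
    """Decode one speed message and return (object_id, speed)."""
    msg = json.loads(raw)
    return msg["object_id"], msg["speed"]

obj_id, speed = handle_speed_message(raw_message)
print(f"object {obj_id} moved at {speed} m/s")
```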
To calculate the speed of each object, we convert the camera (image) position of each person to a real-world location. The resulting locations are also made available to system integrators through the API. The trajectory locations of each object (both image and world locations) can be retrieved at:
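The relation between trajectory and speed data can be shown with a small worked example: given an object's world-coordinate locations with timestamps, its average speed is the travelled distance divided by the elapsed time. This is an illustrative sketch; the tuple layout and units are assumptions, not the actual API format:

```python
import math

def average_speed(trajectory):
    """Average speed in m/s of a trajectory of (t_seconds, x_m, y_m) points."""
    distance = sum(
        math.hypot(x2 - x1, y2 - y1)
        for (_, x1, y1), (_, x2, y2) in zip(trajectory, trajectory[1:])
    )
    elapsed = trajectory[-1][0] - trajectory[0][0]
    return distance / elapsed

# An object covering 4 m in 3 s averages ~1.33 m/s.
track = [(0.0, 0.0, 0.0), (1.0, 1.5, 0.0), (3.0, 4.0, 0.0)]
print(round(average_speed(track), 2))
# → 1.33
```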
Please note that, depending on the selected query time period, the transmission of trajectory/speed data can generate large amounts of network traffic, as each registered object is transmitted individually. Trajectory data, for instance, contains many locations (possibly hundreds or more) per object.
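One way to keep individual responses manageable is to split a long query period into several shorter windows and request them one at a time. The windowing below is purely an illustrative client-side technique, not a feature of the ViSense API itself:

```python
def time_windows(start, end, window_seconds):
    """Split the period [start, end) (epoch seconds) into consecutive
    query windows of at most `window_seconds` each."""
    windows = []
    current = start
    while current < end:
        windows.append((current, min(current + window_seconds, end)))
        current += window_seconds
    return windows

# One day split into 6-hour windows: four smaller queries instead of one.
print(time_windows(0, 86400, 21600))
# → [(0, 21600), (21600, 43200), (43200, 64800), (64800, 86400)]
```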
For more documentation about the existing WebSocket APIs, please check the documentation on your box at:
If you do not have access to a ViSense system, please see here.
We are continuously improving our system and documentation, and we are always interested in suggestions to improve ViSense. If you have any remarks or tips, don't hesitate to contact us.
Complete changelog
- Added speed graph.
- Improved visualization of counting symbols.
- Fixed crash when the system tried to load too many files after power/camera outages.
- Time dialog box now updates to the new time after selecting a new timezone.
- Fixed entering a URL after entering an invalid URL.
- Sped up rebooting the system.
- Select and filter parameters now work in WebSocket API queries.
- Web UI: camera configuration fixes:
  - If the ROI height is set to zero, the save/next button is disabled.
  - Network settings are now correctly saved and loaded.
  - Fixed the font for arrows in counting directions.
  - Fixed drag mode when shrinking a box to its smallest possible size in the perspective tunnel.
  - ROI undo button now reverts to the default ROI instead of the temporarily saved ROI.
- Web UI: statistics fixes:
  - Scene information now shows correctly in the live stream.
  - Statistics graphs now show [No Data] when no data is available in the selected time range.