Case Studies

Smart city solutions – Autonomous access control, traffic planning and OTA software updates

The Device Chronicle speaks to Matt Madison and Dan Walkes, software development and engineering experts from Alcatraz AI and Boulder AI, who explain the application of machine learning models and powerful edge devices in access control and traffic planning applications in smart city solutions. Both are also highly familiar with the power of robust and secure OTA software updates.

Matt Madison is a highly respected software developer with a wealth of knowledge of enterprise technology from his time at Amazon AWS, Verizon and Cisco Systems. Now he finds himself in the brave new world of edge devices, developing systems at Alcatraz AI that increase the security of commercial and public buildings. He is also a highly esteemed open source community contributor, maintaining Meta-Tegra for Nvidia Jetson platforms on GitHub.

Matt Madison, Software Developer, Alcatraz AI


Dan Walkes is the VP of Product Engineering at Boulder AI. He also guest lectures in advanced embedded engineering at the University of Colorado, Boulder, and is a highly esteemed community maintainer on the Mender Hub. Dan has incredible experience in leveraging AI, edge technologies and OTA software updates for applications in smart city solutions and smart city planning. 

Both Matt and Dan and their organisations use Mender to perform secure and robust software updates to the edge devices in their projects.

Dan Walkes, VP of Engineering, Boulder AI

Autonomous access control in smart city solutions

Matt begins by describing the facial authentication platform he works on. He explains that the approach at Alcatraz AI is to interface with existing access control systems, of which there are many manufacturers. Typically, these access control system manufacturers specialize in making badge readers. The user inserts or waves their badge over the reader, a signal is sent to the access control system, and it then determines whether the badge holder should be given access through the door. Alcatraz AI innovates by interfacing between the access control system and the badge reader, providing a mechanism whereby a user can be enrolled by their face and that face associated with their badge.

Importance of privacy protection in smart city solutions

Privacy and data collection are of paramount concern for Matt and his colleagues at Alcatraz AI. As such, the technology is described as facial authentication rather than facial recognition. He stresses: “We are not collecting billions of photos, amassing a giant database of personal information. This is not how our system works.” He further explains that the machine learning models convert picture information into profile IDs containing only the information required for the model to identify the user for this specific application; it is not a general-purpose facial recognition system. Any profile information that is stored is under the customer’s control, and it is not sent off to some general-purpose database that people might be concerned about. The profile is a collection of data points based on the contours and other aspects of the images that come in through the different cameras; these are correlated together to form the pattern used to identify the user.
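The matching Matt describes, correlating data points from camera images into a profile and comparing new images against it, can be sketched in miniature. This is a hedged illustration only: the profile IDs, feature vectors and the similarity threshold below are invented for the example, not Alcatraz AI's actual representation.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Enrolled profiles: opaque profile IDs mapped to feature vectors derived
# from multiple camera images -- no raw photos are stored.
profiles = {
    "profile-001": [0.9, 0.1, 0.3],
    "profile-002": [0.1, 0.8, 0.5],
}

def authenticate(candidate_vector, threshold=0.95):
    """Return the best-matching profile ID, or None if nothing clears the threshold."""
    best_id, best_score = None, threshold
    for profile_id, vector in profiles.items():
        score = cosine_similarity(candidate_vector, vector)
        if score >= best_score:
            best_id, best_score = profile_id, score
    return best_id
```

A vector close to an enrolled profile resolves to that profile ID; anything below the threshold is rejected, so no "nearest stranger" is ever admitted.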

Facial authentication use cases

There are multiple use cases for the technology and approach. The first is single-factor authentication: once the system has been trained to authenticate a user’s face, and the badge has been associated with that face, the user no longer needs to use their badge to pass through the door. When the system authenticates the user’s face, it automatically unlocks the door or tells the access control system that it is the correct user.

The second is two-factor authentication, where alongside recognising the face, the user must also physically present the badge to the reader.

The third is three-factor authentication, where the user must also enter a passcode on a keypad alongside the reader, as well as presenting the badge and having their face recognised.

The use of three-factor authentication, Matt explains, largely depends on how stringent the customer’s security requirements are; the system is then configured to best meet the needs of their security policy. Three-factor authentication is normally used in highly secure locations such as military establishments or data centers.
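The three tiers above amount to a per-site policy over which factors must all pass before the door unlocks. A minimal sketch, assuming a hypothetical policy structure and factor names (none of this is taken from the actual product):

```python
def grant_access(policy_factors, face_ok=False, badge_ok=False, pin_ok=False):
    """policy_factors: the subset of {"face", "badge", "pin"} the site policy requires.

    Access is granted only if every required factor passed.
    """
    checks = {"face": face_ok, "badge": badge_ok, "pin": pin_ok}
    return all(checks[factor] for factor in policy_factors)

# Single-factor site: the authenticated face alone unlocks the door.
single = grant_access({"face"}, face_ok=True)

# Three-factor site (e.g. a data center): face, badge and passcode must all pass.
triple = grant_access({"face", "badge", "pin"},
                      face_ok=True, badge_ok=True, pin_ok=False)
```

Configuring a site then reduces to choosing `policy_factors`, which mirrors how Matt says the system is tuned to each customer's security policy.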

AI and machine learning applied in smart buildings

AI and machine learning are applied to the pictures that come through standard video, IR or depth cameras, and the images are converted into a profile associated with a user’s identity. The processing of the picture information is handled locally on the device, and a cloud server is used for storage of profile information for enrolment purposes. When certain security events occur, for example when a user tries and fails to gain unauthorized access, the event is sent to the server so security personnel can see and assess the attempted entry. There is also a tailgating feature to prevent unauthorised users from slipping in behind an authorised user when they are granted access, and these tailgating events are also sent to the server.

Tailgating is one of the major issues in secure building locations; signs forbidding unauthorised tailgating are largely ignored. The transfer learning framework already provided by Nvidia makes it convenient and easier to train the machine learning models required for this kind of work, and there are pre-trained models available that are highly performant.
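The tailgating logic can be illustrated with a simple rule: if more than one person crosses the doorway within a short window after a single authentication event, raise an alert. This is a hypothetical sketch; the window length and event representation are assumptions, not details from the source.

```python
TAILGATE_WINDOW_S = 5.0  # assumed: how long after an auth event the door is "open"

def detect_tailgating(auth_time, crossing_times, window=TAILGATE_WINDOW_S):
    """Return True if more than one person crossed during one authentication window.

    auth_time: timestamp of the successful authentication.
    crossing_times: timestamps at which the camera detected a person
    passing through the doorway.
    """
    crossings = [t for t in crossing_times if auth_time <= t <= auth_time + window]
    return len(crossings) > 1

one_person = detect_tailgating(100.0, [101.2])          # single crossing: no alert
two_people = detect_tailgating(100.0, [101.2, 103.8])   # a second person slipped in
```

In the system Matt describes, an alert like this would be one of the security events forwarded to the server for personnel to review.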

There are a number of other interesting side features to the technology, including a mask-detection mode introduced during the Covid-19 pandemic, where the system can identify whether the user is wearing a PPE face mask and deny access if they are not.

Smart city solutions and occupancy use cases

Dan and his colleagues at Boulder AI are building smart city solutions and occupancy applications. They are working with several cities, with Denver being the most highly publicised example. Dan explains the primary use case: “We assist the City of Denver (Denver Municipality) with getting analytics on what is happening with vehicle speed at intersections and the numbers of vehicles and pedestrians involved.” The software also looks at safety incidents at crosswalks, reporting that a vehicle and a person were in the same space of interest, typically a crosswalk, at the same time. These insights are very useful for city planners trying to improve safety at crosswalks, as they can analyse the volume and nature of close calls between vehicles and pedestrians.

Edge in action 

The use of IoT and edge devices and computing helps to allay the privacy concerns of the City of Denver. Dan explains that “Users are not that excited about the idea that video footage of them at the intersections is being sent over the Internet to cloud infrastructure.” Boulder AI has the ability to anonymise the data on the device. Dan and his engineering team turn close-call events into bounding boxes which show where the car was, where the pedestrian was and how close they came to colliding. Critically for privacy protection, the technology doesn’t capture the driver, the pedestrian or the vehicle details. Only the track of the bounding box of the associated object is shared outside the edge device. This bounding box is overlaid on a picture of the intersection to show each object’s location without identity details. Dan explains that the installs are powered by Power over Ethernet and only metadata, just kilobytes in size for each event, is transferred. Mender provides the software image updates in this very elegant smart city solution.
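The anonymisation Dan describes can be sketched as reducing a detected close call to pure geometry before anything leaves the device. The field names and box format below are assumptions for illustration, not Boulder AI's actual schema.

```python
import json

def boxes_overlap(a, b):
    """Axis-aligned overlap test; each box is (x, y, width, height) in pixels."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def close_call_event(vehicle_box, pedestrian_box, timestamp):
    """Return an anonymous JSON event if the two boxes share the space of interest.

    No pixels, plates or faces are included -- only bounding-box geometry,
    which can later be overlaid on a static picture of the intersection.
    """
    if not boxes_overlap(vehicle_box, pedestrian_box):
        return None
    return json.dumps({
        "type": "close_call",
        "timestamp": timestamp,
        "vehicle_box": vehicle_box,
        "pedestrian_box": pedestrian_box,
    })

event = close_call_event((10, 20, 40, 30), (30, 35, 10, 20), 1700000000)
```

The resulting payload is a few hundred bytes, consistent with Dan's point that each event transfers only kilobytes of metadata rather than video.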

Nvidia hardware used

Boulder AI is leveraging Nvidia hardware and development kit designs for their custom hardware products. Dan explains that the current generation products are based on the original Nvidia Jetson TX2 design and System on Module (SOM). The company’s next products, currently in development and about to be released, are based on the design for the Nvidia Xavier NX and Nano form factors. Dan explains that the SOM has all the compute power on-board, the IO and POE are on the custom board, and then Boulder AI engineers use their considerable skills to get all this tech into a video camera case. Dan further explains the value of working with the Nvidia solution: “Nvidia has great scalability for its hardware planned into its roadmap, and you can go from a Nano to a TX2 NX replacement, and then onto a higher performance Xavier NX, and retain the same form factor. This brings many options and the ability to support different use cases with different requirements and price points. This is a huge benefit that comes with moving to Nvidia’s new architecture.”

Sleek form factor

The Nvidia Nano form factor is credit card sized with a 260-pin SO-DIMM edge connection. The same edge connection and pin-out is used for the Xavier NX and the Jetson TX2 NX, which allows one hardware carrier board to support multiple compute options. It sports a low-powered design and works within 15 watts of power. The TX2 packs in a lot of processing power with 4 ARM cores, 2 Nvidia Denver CPU cores and a GPU. The Xavier NX has 6 ARMv8.2 cores, an even larger GPU and a deep learning accelerator. All the models have hardware video converters and video codecs. There is a lot of power in a relatively small form factor.

Dan adds “Out of the box, with Nvidia’s default DeepStream video processing pipeline, you can run multiple 30 frame per second video streams and also perform AI work on multiple video streams at once on an edge device. Software developers and engineers can do a tremendous amount of work with the built-in capabilities of the Nvidia platform without having to perform their own optimizations. Being able to leverage what the Nvidia engineers have built along with the powerful hardware capabilities is the key advantage of the Nvidia platform.” Matt adds “There are server-class features in an embedded device and it is an interesting crossover with big system features in a small package.” Matt and his colleagues at Alcatraz AI conduct the training, development and maintenance of the Alcatraz models on large servers with big GPU complexes; the actual face recognition transfers relatively easily to the much smaller Nvidia Jetson devices.

Docker containers and the edge

Boulder AI also uses Docker containers on edge devices. Dan says that this is a comfortable working environment for users accustomed to working on server and container environments. “You can make small changes to Dockerfiles to build a Docker container for Tegra instead of a desktop GPU. The workflow feels very familiar to Docker users who want to follow their established workflow even when working on an edge device,” Dan explains.

We wish Matt and Dan well in their work to bring titanic computing power to the edge and assure end user data protection in the process.  
