A.I. and the future of CCTV by Boris Ploix


TPSO’s Mike O’Sullivan attended a talk held by the nice people at Calipsa recently and came away much impressed. The following article, written by Calipsa founder and CTO, Boris Ploix, looks at the tech they are developing to advance and enhance the use of CCTV.

Machine learning: for the last couple of years it has been used everywhere. Some of the biggest companies are using it, sometimes without us even knowing. Every time you search on Google, Amazon recommends something to you or Facebook tags a friend for you, machine learning is being used.

Even though it seems like quite a new concept, the premise of this technique dates back to the 1950s and a man named Alan Turing. Already famous for cracking the code of the ‘Enigma’ machines used to encrypt Nazi communications during the Second World War, he was one of the first to realise that one day a machine could ‘think’. He came up with the famous ‘Turing test’, which asks whether a machine’s responses can be told apart from a human’s, and is intended to determine whether an algorithm is truly intelligent.

For a very long time this research stayed in academia and the techniques remained purely theoretical. Only recently have they begun to be used in the real world: the massive increase in computational power has made it possible to train machine learning systems at scale for real-world applications.

Applying machine learning to security

We founded Calipsa to apply AI to crime detection, and developed a cloud-based machine learning solution that connects to CCTV cameras to understand the cause of an alarm. For this, we need to distinguish between two categories. A true alarm is an event that contains human activity in a scene where no one is supposed to be. A false alarm, by contrast, is an alarm without any such human activity, and is considered ‘noise’ by the security company monitoring the camera. False alarms are a serious burden for the security industry, as more than 95 per cent of all alarms are false, caused by anything from a change in lighting to a spider web across the lens.

False alarm


True alarm

A number of different solutions have been developed to tackle this issue. However, most of these analytics, such as motion detection, line triggering and so on, use what is known as a rule-based algorithm. This means they rely on a decision tree to reach their final decision: the algorithm tests a series of hypotheses, one after another, until it can make a prediction.

The above image is one example of a scene analysed using a rule-based algorithm. Multiple questions were asked and tested by the algorithm before it could decide whether the alarms were true or false. In this particular case, the algorithm asks questions such as whether there was any movement and then how big that movement was. At each step, the algorithm moves further down the decision tree until it can make a prediction.
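
As a rough illustration of what such a rule-based filter might look like, here is a minimal sketch in Python. The features and thresholds (motion_area, duration_seconds, crosses_line) are invented for illustration only and are not taken from Calipsa or any real product.

# Minimal sketch of a rule-based alarm filter. The feature names and
# thresholds are illustrative assumptions, not a real vendor's rules.
def classify_alarm(motion_area, duration_seconds, crosses_line):
    """Walk down a hand-written decision tree and label the alarm."""
    # Step 1: was there any movement at all?
    if motion_area <= 0.0:
        return "false alarm"
    # Step 2: is the moving region big enough to be a person?
    if motion_area < 0.02:            # less than 2% of the frame
        return "false alarm"          # likely noise, an insect or a lighting change
    # Step 3: did the movement last long enough?
    if duration_seconds < 1.0:
        return "false alarm"          # a brief flicker, not sustained activity
    # Step 4: did it cross the configured trigger line?
    if not crosses_line:
        return "false alarm"
    return "true alarm"

print(classify_alarm(motion_area=0.05, duration_seconds=3.0, crosses_line=True))
# -> true alarm

Every alarm ends up in one branch of this tree, which is exactly why an unforeseen edge case drops through to the wrong answer.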

This technique has serious limitations. It doesn’t matter how smart the engineers behind those algorithms are, there will always be an edge case where the technique will not work. As a result, to make sure that nothing is missed, the parameters are often tweaked at the expense of overall accuracy. And since most alarms are very complex, these simple algorithms often fail to classify them correctly.

Calipsa does not use this technique; instead, it uses neural networks to solve the problem of false alarms. The following aims to provide some understanding of how this works behind the scenes.

The neural network approach

Even though it is scientifically incorrect to state that a neural network works like the brain, its invention was largely inspired by it. Instead of designing complex algorithms like the decision tree described above to understand that a particular image contains a specific element, we build a neural network in which layers of neurons are stacked on top of each other between the input image and the decision.
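
As a rough sketch of that idea, the following few lines, assuming Python and the PyTorch library, stack a handful of layers of neurons between an input image and a single yes/no output. The layer sizes are arbitrary and this is not Calipsa’s architecture.

# Minimal sketch (PyTorch assumed) of layers of neurons stacked between an
# input image and a single decision. The sizes below are arbitrary.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),                  # turn a 3x64x64 image into a flat vector
    nn.Linear(3 * 64 * 64, 256),   # first layer of neurons
    nn.ReLU(),
    nn.Linear(256, 64),            # second layer: a more abstract representation
    nn.ReLU(),
    nn.Linear(64, 1),              # final neuron: one score for the decision
    nn.Sigmoid(),                  # squash the score to a probability
)

image = torch.rand(1, 3, 64, 64)   # a dummy RGB image standing in for a camera frame
probability = model(image)         # e.g. the probability that the image contains a person
print(probability.item())

Each layer feeds its output to the next one, and the final neuron turns everything it has received into a single decision.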

Let’s dive a little deeper and try to understand how everything works by doing a “brain scan”. The figure below represents the output of the activations of each layer for a specific example where the objective is to know whether or not the image contains a cat. This process and the explanation are obviously transferable to Calipsa’s use case.

A couple of insights can be derived from the illustration.

In the first layers of the neural network, the outputs look quite similar to the original image. However, the deeper you go in the neural network, the more abstract the representation becomes. At each layer the data is transformed little by little in order to build a deeper representation of what is going on. In the first layers, the neural network will typically recognise basic features such as edges and colours. Then, in the later layers, more advanced features such as faces and legs will be recognised. This knowledge is built up through the neural network in order to make a final prediction – in this case, is this image a cat or not?
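
To make the “brain scan” idea concrete, the sketch below (again assuming PyTorch, with an invented three-layer convolutional network rather than any real model) keeps the output of every layer so it can be inspected, much as the activations are visualised in the figure.

# Sketch of a "brain scan": capturing the output of each layer of a small
# convolutional network. The architecture is illustrative only.
import torch
import torch.nn as nn

layers = nn.ModuleList([
    nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU()),    # early layer: edges, colours
    nn.Sequential(nn.Conv2d(8, 16, 3, padding=1), nn.ReLU()),   # middle layer: textures, parts
    nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU()),  # deep layer: abstract features
])

x = torch.rand(1, 3, 64, 64)        # a dummy input image
activations = []
for layer in layers:
    x = layer(x)                    # transform the data little by little
    activations.append(x)           # keep each layer's output for inspection

for i, a in enumerate(activations, start=1):
    print(f"layer {i}: shape {tuple(a.shape)}, mean activation {a.mean().item():.3f}")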

Getting the right representation for different images, so that the correct prediction can be made, is achieved through training. Instead of changing each of the parameters of each neuron manually, we expose a neural network initialised with a random state to millions of images. At each step, we let the neural network make a prediction and update its parameters when it provides a wrong answer. This process is done via the backpropagation algorithm, and lets the neural network learn little by little from its mistakes until it outperforms all other methods.
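
A minimal sketch of such a training loop, assuming PyTorch and placeholder data standing in for real labelled images, might look like this; a production pipeline is of course far larger.

# Minimal sketch of the training process: start from random weights, predict,
# and backpropagate the errors. PyTorch assumed; the data here is placeholder only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU(),
                      nn.Linear(64, 1))          # parameters start in a random state
loss_fn = nn.BCEWithLogitsLoss()                 # penalises wrong cat / not-cat answers
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

images = torch.rand(100, 3, 32, 32)              # placeholder "training images"
labels = torch.randint(0, 2, (100, 1)).float()   # placeholder cat / not-cat labels

for epoch in range(5):
    predictions = model(images)                  # let the network make predictions
    loss = loss_fn(predictions, labels)          # measure how wrong they are
    optimizer.zero_grad()
    loss.backward()                              # backpropagation: trace each mistake back
    optimizer.step()                             # update the parameters accordingly
    print(f"epoch {epoch}: loss {loss.item():.3f}")

The loss.backward() call is the backpropagation step: it works out how much each parameter contributed to the error, so that optimizer.step() can adjust the parameters and the network learns, little by little, from its mistakes.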

It is quite interesting to notice that because we always start from a random state, the internal representation built up during training will be different every time we train a new neural network. It is the same with humans: ask somebody to visualise a cat in their mind and you can be sure that their visualisation will be different from that of the person next to them.

If one pays enough attention to the above neural network, it is possible to notice that some neurons in the last layer are completely black. That simply means that some of the expected features were not present, but the others compensate to make the decision. Continuing with the analogy, for me a cat is a small animal with four legs and a tail. However, even if I don’t see the tail in the image, I am still pretty sure that this is indeed a cat.

This example of a neural network for cats can be extended to pretty much anything and provides far higher accuracy than other methods. Calipsa uses this extensively in order to make the right prediction on whether an alarm is true or false. Through continuous learning and iteration of our algorithms, we have improved the accuracy of our false alarm reduction from 30% to 90%. Those highly accurate algorithms are now used to detect false alarms in all the cameras that Calipsa serves.

It’s hard to deny the benefit that machine learning technology brings to security monitoring when done correctly. By integrating technology such as Calipsa into monitoring operations as an extension to human verification, security businesses can have confidence that they are providing their customers with an accurate and constantly improving service.

Boris Ploix – Co-Founder and CTO at Calipsa.

A French-born entrepreneur, Boris founded Calipsa in April 2016. There are more than 250 million CCTV cameras in the world, and 99% of them are never watched and never analysed; Boris started Calipsa with a strong belief that this should be the other way around. Using cutting-edge machine learning, combined with advancements in CCTV technology, Calipsa is looking to detect incidents more effectively and prevent crime.

https://www.calipsa.io/