After weeks of studying the tool design process, I finally decided that the essential function of my tool would be image processing.
But before we can process an image, we first need to recognize it (whether static or live). So I used a Haar classifier as my image-grabbing sensor: a data file generated by a training process in which an application is “taught” how to recognize something in different contexts. This could mean recognizing whether a certain sound is a word spoken by a user, whether a gesture traces a certain shape, or, as in the image shown below, whether a pattern of pixels constitutes a face.
One interesting thing is that the Haar classifier only recognizes static images, so I have to convert the live video into individual frames, at roughly 30 frames per second, before the classifier can count the number of people.
At this stage I have not really “processed” the image yet, but as you will notice at the bottom left there is a people-count indicator, which effectively makes this tool a digital head counter.
This is built with openFrameworks.