The invisible work that makes the system work.

What did we do?

After finalizing our concept and confirming its feasibility, our task was to prototype our first iteration: sounding a buzzer when the door opened.
This was our MVP for the day. We worked with a piezo buzzer and a magnetic reed sensor to make the prototype work. After this, we worked out how to send and record the collected data on ThingSpeak, which would help us observe patterns of latecomers in the form of infographics.
We used a NodeMCU (ESP8266) to push data to the fields created on ThingSpeak.

After achieving this, we went on to understand how machine learning works and how we could custom-train a model to identify people. We used the ml5.js and MobileNet documentation, both built on TensorFlow.js, to create a model that exclusively identifies students from the School of Design.
Initially we worked with the OV7670 camera module to capture images and identify the students, but due to compatibility and debugging issues we chose to use a webcam to execute the code.
We used the feature extractor function to train the system, uploading multiple pictures of each subject to reach a reasonable probability of correct identification. We are working out how to export the trained model so that it runs smoothly on other systems and setups.
Moreover, we are trying to work out how to push the pictorial data from the machine-learning process to the ThingSpeak channel we created.