A Low-Cost Prototype Classroom Attendance Checker and Logger Using Facial Recognition

Ian Gabriel C. Santillan
Ateneo de Davao University Senior High School, Davao City, PH [email protected]


Facial recognition is a category of biometric software that maps a person's facial features and stores them as a faceprint. It works by comparing selected facial features from a given image with faces in a database. Commercially available systems are costly and do not give the user full control of the system. This study aimed to create a flexible facial recognition and data logging system, using the Internet of Things, to allow users maximum control. Such a system could help alleviate the problems of manually checking attendance in a school or class setting, where time, effort, and human error are major factors. The device underwent functionality and accuracy tests, which measured its setup time, recognition time, and accuracy. Results showed that the researcher successfully created a fully functional facial recognition device with an accuracy rating of 87%, an average setup time of 4.37 seconds, and an average recognition time of 4.33 seconds. The results imply that Eos performs on par with commercially available facial recognition systems while also allowing maximum flexibility due to the open-source nature of the Raspberry Pi. The Raspberry Pi runs Linux, and its primary supported operating system, Raspbian, has source code that is freely available and may be redistributed and modified. Eos is also 33.4% cheaper than commercially available systems; its affordability may make it more accessible for a variety of data logging and facial recognition tasks.
KEYWORDS: Facial Recognition, Database, Data Logging, Internet of Things, Raspberry Pi, Open-Source


Introduction

Manually checking professors' and students' attendance during class hours causes disturbances, such as the repeated opening of doors, and is prone to human error. Factors such as time, effort, and human error greatly reduce productivity in this task, so a more efficient and effective method of checking the punctuality and attendance of professors and students is needed.[1]
A facial recognition system is a technology capable of identifying or verifying an individual from a digital image or a video frame from a video source.[2] Facial recognition systems work through a variety of techniques; in general, however, they compare chosen facial features from a given picture with faceprints stored in a database.[3][4][5]
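The comparison step described above can be illustrated with a minimal nearest-neighbor sketch in Python; the tiny three-dimensional "faceprints", the names, and the distance threshold below are purely hypothetical stand-ins for the high-dimensional features a real system extracts:

```python
import math

def match_faceprint(probe, database, threshold=0.6):
    """Return the name of the closest stored faceprint, or None if no
    stored print is within the distance threshold."""
    best_name, best_dist = None, float("inf")
    for name, stored in database.items():
        dist = math.dist(probe, stored)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

# Toy database of 3-dimensional "faceprints" (illustrative only)
db = {"ana": [0.1, 0.9, 0.3], "ben": [0.8, 0.2, 0.5]}
print(match_faceprint([0.12, 0.88, 0.31], db))  # prints: ana
```

The threshold is what lets such a system reject faces that are not in the database at all, rather than always returning the least-bad match.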
High cost, limited flexibility, and limited accessibility are problems associated with existing commercial biometric systems, including facial recognition systems and other products that verify or recognize the identity of a living person based on a physiological or behavioral characteristic.[6]
The device Eos (named after the Greek goddess of the dawn), discussed in this paper, can recognize faceprints and log the data in a comma-delimited file that teachers and school officials can easily access to check class attendance. Eos is controlled using peripherals connected to the Raspberry Pi, which also display the live video feed from the camera. It also features a motion-tracking function that follows object movement during the facial recognition process. Since commercially available biometric systems are costly and offer limited flexibility for development,[6] Eos could potentially be a low-cost substitute for them. Eos is capable of capturing faceprints, training a machine learning model on those features, and recognizing them while simultaneously logging the data.

Background Information

Eos utilizes deep learning algorithms to process a live capture or digital image. Live capture is the act of gathering biometric data from an individual while the individual is physically present. The device then stores the faceprint with the end goal of verifying a person's identity.[7]
Facial recognition systems vary widely in price and functionality. Commercially available systems found online range from about $140 to $210.[8]
Deep learning, also known as deep structured learning or hierarchical learning, is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, semi-supervised, or unsupervised.[9][10][11] Deep learning utilizes neural networks with multiple layers; a deep neural network examines data through learned representations, similarly to how a person looks at a problem.
One key advantage of a facial recognition system is that it can perform mass identification, as it does not require the cooperation of the subject to work. Appropriately designed systems installed in airports, multiplexes, and other public spaces can identify individuals in a crowd without passers-by being aware of the system.[12] Applications of the technology include advanced human-computer interaction, video surveillance, automatic indexing of images, and video databases, among others.[13]
Previous studies on facial recognition used other programming languages such as MATLAB;[14][15] however, they did not feature a free and open-source ecosystem for programming the desired biometric functions. Moreover, most prototype facial recognition systems for attendance checking did not have a motion-tracking function. In the context of facial recognition, motion tracking can give greater accuracy than earlier biometric methods such as iris and fingerprint recognition. It has also proven to be more secure and harder to trick, making it increasingly important for biometric systems.[16]

Engineering Goals

  • The device must be able to carry out basic facial recognition through datasets provided by the researcher.
  • The device must also be able to store or log the data in a format easily accessible through a spreadsheet program such as Excel.
  • The device should be able to integrate its motion tracking function through its sensors.
  • The device should also be programmed in an open-source ecosystem for maximum flexibility and control of the device.
  • The total cost of the finished device must be kept under $100 so that it remains cost-effective and can be implemented in public schools.


Materials and Methods

The research methods – data collection via facial recognition, data organization, statistical analysis, software development, and electronic design and construction – were carried out using the following software packages: OpenCV,[17] JASP,[18] Arduino IDE,[19] Python IDLE 3,[20] and Microsoft Office Excel.[21]


The body of Eos (Figure 1) used a 6.3" × 3.74" × 2.17" phone box as the compartment for the electronic components of the device. The components were arranged with their ports facing the sides of the box so that additional hardware such as USB, HDMI, and power cables could be connected. Holes were cut in the sides of the box to provide access to these ports, and three additional holes were cut in the top of the box for the jumper cables, the Raspberry Pi camera ribbon, and the output spline of the servo motor.
The sensors and the camera were held by a modified ultrasonic sensor holder using hot glue and two 1" × 0.5" scrap prototype boards. The two ultrasonic sensors were attached at both ends with hot glue while maintaining an angle of 130° between them. The servo horn was then attached to the bottom of the ultrasonic sensor holder with hot glue.


All the electronics were attached inside the compartment of the device (Figures 2 and 3) with hot glue, and the wires were secured and organized with zip ties. The signal wires of the ultrasonic sensors and the servo were connected to header pins soldered onto the prototype board. Jumper wires were also soldered onto the prototype board and routed to the corresponding signal pins on the Arduino microcontroller. The Raspberry Pi NoIR V2 camera was connected directly to the camera module port. The Arduino microcontroller was powered via a USB cable connected to the Raspberry Pi, and the Raspberry Pi was powered using a 5 V, 2.5 A Raspberry Pi power supply.

Eos was controlled and monitored using peripherals connected to the Raspberry Pi, which included a keyboard, a mouse, and a monitor. Data recognized by Eos was then stored in a comma-delimited file that can be opened in Excel.
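The data logging step can be sketched as a short Python routine that appends one row per recognized face to a comma-delimited file; the function name, file name, and timestamp format here are illustrative assumptions rather than the exact script used in the study:

```python
import csv
from datetime import datetime

def log_attendance(name, path="attendance.csv"):
    """Append a recognized name and a timestamp to a comma-delimited
    file, which opens directly in Excel."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [name, datetime.now().strftime("%Y-%m-%d %H:%M:%S")]
        )

log_attendance("Juan Dela Cruz")  # hypothetical student name
```

Opening the file in append mode means each recognition event adds a new row, so one file accumulates the attendance log for a whole session.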

Camera and Sensors

The Raspberry Pi camera was mounted on the modified holder along with the ultrasonic sensors. The camera provides a live feed to the monitor and supports both live dataset construction and live facial recognition. Dataset construction is initiated by running the dataset script, which captures pictures for the dataset; pictures can also be uploaded manually to the dataset folder. The training script is then run before executing the facial recognition script, which compares values stored as faceprints in the datasets to labels, in this case names.
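Recognizers in the OpenCV family, such as LBPH, train on integer labels rather than names, so a training script must first map each person's portion of the dataset to a label and map predictions back to names afterward. A minimal sketch, assuming one subfolder per person (the folder layout and the names here are assumptions, not the study's actual dataset):

```python
import os
import tempfile

def build_label_map(dataset_dir):
    """Map each person's subfolder in the dataset to an integer label,
    and build the reverse map used after prediction."""
    names = sorted(
        d for d in os.listdir(dataset_dir)
        if os.path.isdir(os.path.join(dataset_dir, d))
    )
    label_of = {name: i for i, name in enumerate(names)}
    name_of = {i: name for name, i in label_of.items()}
    return label_of, name_of

# Demonstrate with a temporary dataset layout: dataset/ana/, dataset/ben/
demo = tempfile.mkdtemp()
for person in ("ana", "ben"):
    os.makedirs(os.path.join(demo, person))
label_of, name_of = build_label_map(demo)
print(label_of)  # {'ana': 0, 'ben': 1}
```

Sorting the folder names makes the labels deterministic, so the same mapping can be rebuilt at recognition time without storing it.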
Eos also has two ultrasonic sensors that compare the distance of an object from each sensor. The readings are sent to the Arduino, which then instructs the servo motor to rotate toward whichever sensor the object is closer to. This provides the motion-tracking function of the device.
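The turn decision itself runs on the Arduino; the sketch below restates the rule in Python for clarity, with an assumed deadband (not specified in the study) so the servo holds position when the two readings are nearly equal:

```python
def track_direction(left_cm, right_cm, deadband_cm=3.0):
    """Pan toward whichever ultrasonic sensor reads the shorter
    distance; hold still when the readings are within the deadband."""
    diff = left_cm - right_cm
    if abs(diff) < deadband_cm:
        return "hold"  # object roughly centered, avoid jitter
    return "right" if diff > 0 else "left"

print(track_direction(30.0, 12.0))  # object nearer the right sensor
```

Without some deadband, sensor noise would make the servo oscillate constantly even when the subject is centered.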


The code controlling the servo motor and the ultrasonic sensors for the motion-tracking function of Eos was written in the Arduino IDE. Much of the motion-tracking code was adapted from open sources such as Instructables.[22] The facial recognition function for the camera was written in Python 3 IDLE within the Raspberry Pi OS, using the OpenCV 3 library. The code for the machine learning program was adapted from Jacky Le's Raspberry-Face-Recognition scripts,[23] modified to show user labels (the names associated with the faces) and to log data directly to a comma-delimited file. There was one script for each machine learning function, namely live dataset construction, dataset training, and facial recognition. Eos functioned by executing these Python scripts on the Raspberry Pi.


Eos was tested in three trials per repetition, for three repetitions, to verify the functionality of the servo motor and the camera. The functionality of the ultrasonic sensors was also tested over three repetitions, with each repetition covering object distances of 5 cm, 10 cm, 15 cm, 20 cm, and 25 cm. These distances were arbitrary and were used simply to test the sensors' functionality. The servo motor and the camera were functional throughout all the tests, while the ultrasonic sensors were functional up to the 25 cm mark.
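For context on the distance figures above: HC-SR04-style ultrasonic sensors (the exact model used is an assumption) measure distance by timing an echo, and the standard round-trip conversion is:

```python
def echo_to_cm(echo_us):
    """Convert an ultrasonic echo round-trip time in microseconds to a
    one-way distance in cm, using ~343 m/s (0.0343 cm/us) for the
    speed of sound at room temperature."""
    return echo_us * 0.0343 / 2  # halved: the pulse travels out and back

print(round(echo_to_cm(583), 1))   # ~10 cm
print(round(echo_to_cm(1458), 1))  # ~25 cm
```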
The facial recognition system was evaluated with three tests: the setup time test, the recognition time test, and the accuracy test. The setup time refers to the time it takes for Eos to boot up, while the recognition time refers to the time it takes for Eos to recognize a unique faceprint. A stopwatch was used to time the tests; the researcher was aware of the possibility of reaction-time error.
The datasets were divided into two groups: the first was used for training and the second for testing. The datasets consisted of five unique faceprints used for both training and testing. The setup time and recognition time results were measured in seconds, while the accuracy test results were enumerated. The data were recorded, and the mean, maximum, and minimum values were computed using the JASP software. The faceprints were acquired by sending informed consent requests to participants, asking permission to use their photos for the tests; the photos are not to be published or made public. The limited number of consenting participants resulted in limited test data.
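The summary statistics JASP produced (mean, maximum, minimum) can be cross-checked with Python's standard library; the timing values below are hypothetical placeholders, as the study's raw measurements are not reproduced here:

```python
import statistics

setup_times = [4.1, 4.5, 4.3, 4.6, 4.2]  # hypothetical times in seconds

print(round(statistics.mean(setup_times), 2))  # mean: 4.34
print(max(setup_times), min(setup_times))      # max and min: 4.6 4.1
```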


Results

Fifteen setup times and fifteen recognition times were collected in total from the setup and recognition tests. The accuracy of the device was also tested for five unique faceprints in three different configurations, for a total of fifteen recorded results.
Figure 4 shows a bar graph of the setup times of the facial recognition system for all five unique faceprints and their three different configurations. The mean value was 4.37 s.

Figure 5 shows a bar graph of the recognition times of the facial recognition system for all five unique faceprints and their three different configurations. The mean value was 4.33 s.

Lastly, Figure 6 shows a pie chart of the accuracy of the facial recognition system for all five unique faceprints and their three different configurations. The system recognized 13 of the 15 unique faceprint configurations, achieving an accuracy of 87%. The results were verified by checking the comma-delimited files while specific faceprints were shown, to confirm whether each was registered.
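The reported accuracy follows directly from the recognition counts:

```python
recognized, total = 13, 15
accuracy = recognized / total * 100  # 86.67%, reported rounded to 87%
print(f"Accuracy: {accuracy:.0f}%")  # Accuracy: 87%
```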


Discussion

Overall, Eos met the engineering goals and performed effectively in both the functionality tests and the facial recognition tests. Eos was controlled via peripheral devices connected to the Raspberry Pi, and the live video stream allowed the user to check the accuracy of the device's facial recognition function. The overall functionality test was successful, as Eos was able to recognize unique faceprints while simultaneously executing the motion-tracking and data logging functions. The total cost of the device was $93.24, which is 6.76% below the $100 budget and 33.4% less expensive than commercially available facial recognition systems priced at around $140-$210. Eos was successfully tested for its setup time and recognition time, with mean values of 4.37 s and 4.33 s, respectively. The accuracy of the facial recognition system was rated at 87%.
Going forward, some modifications could improve the capabilities of the device. A higher-quality camera could be used to improve feature detection. Furthermore, the camera does not function well in low-visibility environments; visibility could be improved by adding a light source. A larger dataset could also be used to train the model and improve its accuracy; this would be possible if the system were implemented in public schools, where a larger pool of consenting participants would be available. Lastly, the range of the motion-tracking function could be extended by using a more powerful sensor.
This project can be further developed with the addition of a Graphical User Interface (GUI) for easy setup of and access to Eos. Thus far, the device has been tested only in a controlled environment, but implementation in a classroom or school setting is possible. There is also potential for the addition of a Global System for Mobile Communications (GSM) module to allow real-time messaging from the device to students' guardians.


Conclusion

While it certainly had limitations, Eos met the budget requirement and was 33.4% less expensive than existing commercial counterparts. No programming or hardware errors were found during the functionality tests, and Eos functioned properly. Eos was also able to successfully detect and recognize facial features and log the data with 87% accuracy, with an average setup time of 4.378 s and an average recognition time of 4.339 s. The goal of the research was accomplished, as the device was programmed in an open-source ecosystem, allowing maximum flexibility and control. There is potential for improvement regarding lighting, GUIs, and the facial recognition system itself, which would make Eos more suitable for implementation in classroom or school settings. With its current limitations, the device performs most effectively in environments with high visibility and a large number of training samples. Eos has promising applications in integrating facial recognition at a lower cost.


Acknowledgments

The author thanks Henry Haranay, Romel Villarubia, and Florentino Manuel Jr. of Tagum City National High School, and Engineer Chris Malecdan of Saint Louis University for serving as research mentors. The author also thanks his upperclassmen, Francine Racho, Joseph Nino Boyles, Kurt Cabinian, and Jex Cansancio, for their advice and guidance; Sharon Mae Santillan and Rhynell Santillan for their support and for funding the research; Nicole Rylen Astillo for her constant support; and the mighty God who guided and protected the author throughout this research.


References

  1. Cruz, Jennifer Dela, Arnold C. Paglinawan, Miguel Isiah Bonifacio, Allan Jake Flores, and Earl Vic Hurna. 2015. "Biometrics based attendance checking using Principal Component Analysis." 2015 IEEE Region 10 Humanitarian Technology Conference (R10-HTC). Manila: IEEE.
  2. Petrescu, Relly Victoria Virgil. 2019. "Face Recognition as a Biometric Application." Journal of Mechatronics and Robotics: 237–257.
  3. Heinzman, Andrew. 2019. How Does Facial Recognition Work? July 11. Accessed June 3, 2020. https://www.howtogeek.com/427897/how-does-facial-recognition-work/.
  4. Symanovich, Steve. n.d. How does facial recognition work? Accessed June 3, 2020. https://us.norton.com/internetsecurity-iot-how-facial-recognition-software-works.html.
  5. Techopedia. 2020. Facial Recognition. May 21. Accessed June 3, 2020. https://www.techopedia.com/definition/32071/facial-recognition.
  6. Thakkar, Danny. 2018. Biometric Devices: Cost, Types, and Comparative Analysis. September 12. https://www.bayometric.com/biometric-devices-cost/.
  7. Rouse, Margaret. 2018. Facial Recognition. January. Accessed August 5, 2018. https://searchenterpriseai.techtarget.com/definition/facial-recognition.
  8. Indiamart. n.d. Facial Recognition System. Accessed June 17, 2020. https://dir.indiamart.com/search.mp?ss=facial+recognition+system.
  9. Schmidhuber, J. 2015. "Deep Learning in Neural Networks: An Overview." Neural Networks 61: 85–117.
  10. Bengio, Y., A. Courville, and P. Vincent. 2013. "Representation Learning: A Review and New Perspectives." IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (8): 1798–1828.
  11. LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. 2015. "Deep Learning." Nature 521 (7553): 436–444.
  12. Bayometric. 2017. "Top Five Biometrics: Face, Fingerprint, Iris, Palm and Voice." January 23. Accessed October 30, 2018.
  13. Bramer, Max. 2006. Artificial Intelligence in Theory and Practice: IFIP 19th World Computer Congress, Santiago, Chile. Berlin: Springer Science+Business Media.
  14. Jha, Abhishek. 2007. "Class Room Attendance System Using Facial Recognition System." The International Journal of Mathematics, Science, Technology and Management (ISSN 2319-8125).
  15. Poornima, S., N. Sripriya, B. Vijayalakshmi, and P. Vishnupriya. 2017. "Attendance monitoring system using facial recognition with audio output and gender classification." 2017 International Conference on Computer, Communication and Signal Processing (ICCCSP): 1–5.
  16. Sightcorp. n.d. Face Tracking: Everything about Face Tracking. Accessed June 17, 2020. https://sightcorp.com/knowledge-base/face-tracking/.
  17. OpenCV. n.d. Accessed August 12, 2018. https://opencv.org/.
  18. JASP. n.d. Accessed August 12, 2018. https://jasp-stats.org/download/.
  19. Arduino. n.d. Accessed August 12, 2018. https://www.arduino.cc/en/main/software.
  20. Python. n.d. Accessed August 12, 2018. https://www.python.org.
  21. Microsoft. n.d. Microsoft Excel. Accessed August 12, 2018. https://www.microsoft.com/en-us/microsoft-365/excel.
  22. Kielas-Jensen, Calvin. 2016. Motion Following Robot. https://www.instructables.com/id/Motion-Following-Robot/.
  23. Le, Jacky. 2017. Raspberry-Face-Recognition. October 16. https://github.com/trieutuanvnu/Raspberry-Face-Recognition.

About the author

Ian Gabriel Santillan is a rising senior at Ateneo de Davao University Senior High School, Davao City, PH. His research interests include Machine Learning, Artificial Intelligence, Computational Biology, Ecological Modeling, and Marine Biology.
