By C Koushik
Building a simple automated attendance system in Python that detects faces through a webcam and records live attendance, along with the login time, in an Excel-compatible sheet.
This attendance system uses a webcam to recognize faces and record live attendance with the login time. The main package used here is face_recognition, which locates and encodes the facial features of a person. All the required packages are listed in "requirements.txt", which is located inside the project directory.
The requirements text file contains the packages that are necessary to run this application. To install them, go to the project directory using the command prompt and execute the following command:

pip install -r requirements.txt

Since my project file was larger than 10 MB, I could not attach the images to the directory. Before running the code, you have to add your own images to the 'Images' folder and rename each one to 'name.JPG', where the file name is the person's name (an example layout is shown below).
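For illustration only, the 'Images' folder could look like this (the file names here are made up; each one becomes the name drawn on screen and written to the attendance sheet):

Images/
    Koushik.JPG
    Alice.JPG
    Bob.JPG

The exact contents of requirements.txt are not reproduced here, but based on the imports used below it would need at least the following packages (a plausible sketch, not the author's exact file; face_recognition pulls in dlib, which may need CMake and a C++ compiler to build):

opencv-python
numpy
face_recognition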
Once a face is recognized in real time, the name along with the current entry time is recorded and stored in the 'attendancelist.csv' sheet, which can be opened in Excel.
Implementation:
1) Import necessary packages
import cv2
import numpy as np
import face_recognition
import os
from datetime import datetime
2) Loading images and storing names using os path
path = 'Images'
images = []
names = []
List = os.listdir(path)

for name in List:
    img = cv2.imread(f'{path}/{name}')
    images.append(img)
    names.append(os.path.splitext(name)[0])
3) Function to encode all images in directory
def encode(images):
    encode_list = []
    for img in images:
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        encodeImg = face_recognition.face_encodings(img)[0]
        encode_list.append(encodeImg)
    return encode_list
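The main loop in step 5 compares each live frame against a variable called encodeList, so the function above has to be called once before the webcam starts. The original snippets do not show this call, but a minimal version would be:

encodeList = encode(images)
print('Encoding complete')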
4) Initializing webcam
cap = cv2.VideoCapture(0)
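Index 0 refers to the first camera the operating system exposes. If it cannot be opened, cap.read() will return failed frames, so an optional safety check (not part of the original code) is:

if not cap.isOpened():
    raise RuntimeError('Could not open webcam')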
5) Encoding live image for facial recognition
while True:
    success, img = cap.read()
    # Reducing size of real-time image to 1/4th
    imgResize = cv2.resize(img, (0, 0), None, 0.25, 0.25)
    imgResize = cv2.cvtColor(imgResize, cv2.COLOR_BGR2RGB)
    # Finding faces in current frame
    face = face_recognition.face_locations(imgResize)
    # Encode detected faces
    encodeImg = face_recognition.face_encodings(imgResize, face)
    # Finding matches with existing images
    for encodecurr, loc in zip(encodeImg, face):
        match = face_recognition.compare_faces(encodeList, encodecurr)
        faceDist = face_recognition.face_distance(encodeList, encodecurr)
        print(faceDist)
        # Lowest distance will be the best match
        index_BestMatch = np.argmin(faceDist)
        if match[index_BestMatch]:
            name = names[index_BestMatch]
            y1, x2, y2, x1 = loc
            # Retaining original image size for rectangle location
            y1, x2, y2, x1 = y1*4, x2*4, y2*4, x1*4
            cv2.rectangle(img, (x1, y1), (x2, y2), (255, 0, 255), 1)
            cv2.rectangle(img, (x1, y2-30), (x2, y2), (255, 0, 255), cv2.FILLED)
            cv2.putText(img, name, (x1+8, y2-10), cv2.FONT_HERSHEY_COMPLEX, 0.5, (0, 255, 255), 2)
            record_attendance(name)
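The snippet above ends at record_attendance(name) and never displays the annotated frame or leaves the loop. Assuming the usual OpenCV pattern (not shown in the original), the end of the loop body could look like this, with the last two lines placed after the loop:

    # Show the annotated frame; press 'q' to stop the program
    cv2.imshow('Attendance', img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()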
face_distance() calculates the distance between the facial features of the live image and each trained image; the lowest distance is the best match. face_locations() detects the x and y coordinates of the rectangle that surrounds the face. The name corresponding to the matched encoding is then recorded, along with the current time, in 'attendancelist.csv'.
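The record_attendance() helper called inside the loop is not shown in the snippets above. The following is only a plausible sketch of it, assuming 'attendancelist.csv' already exists with a header row and that each person should be logged once per run:

def record_attendance(name):
    # Append the name and current time only if this name has not been recorded yet
    with open('attendancelist.csv', 'r+') as f:
        lines = f.readlines()
        recorded = [line.split(',')[0] for line in lines]
        if name not in recorded:
            now = datetime.now().strftime('%H:%M:%S')
            f.write(f'\n{name},{now}')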
Submitted by C Koushik (CKoushik)