Hello everyone, this is part three of the tutorial series Face Recognition Using OpenCV. We are using OpenCV 3.4.0 to build our face recognition app. In the earlier parts of the tutorial, we covered how to write the code for recording faces and training the face recognizer. To follow along with the series and build your own face recognition application, I strongly advise you to check those out first. Part three of the series is titled “Face Recognition using OpenCV – fetching data from SQLite”. In this part, we focus on recognizing faces and fetching the corresponding user information from the SQLite database.
If you are following along with the tutorial series Face Recognition Using OpenCV, you should have created three files by the end of Part 2:
- create_database.py: Creates the database and the users table.
- record_face.py: Captures face images and records the corresponding name in the database.
- trainer.py: Uses OpenCV’s LBPH Face Recognizer to train on the dataset and outputs the trainingData.yml file that we’ll use for face recognition.
You should already have the trainingData.yml file inside the ‘recognizer’ directory in your working directory. If you don’t, you might want to recheck Part 2 of the tutorial series. You might also remember from Part 2 that we used the LBPH Face Recognizer to train our data. If you are curious about how LBPH works, you can refer to this article here.
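Before moving on, it can be handy to double-check what Parts 1 and 2 actually produced. The sketch below lists the users enrolled so far; it assumes the `users(id, name)` table created earlier in the series and returns an empty list if the database file is missing.

```python
import os
import sqlite3

def enrolled_users(db_path="database.db"):
    """Return [(id, name), ...] for every user recorded in Part 1,
    or an empty list if the database file does not exist.

    Assumes the `users(id, name)` schema from earlier in this series.
    """
    if not os.path.isfile(db_path):
        return []
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute("SELECT id, name FROM users ORDER BY id;").fetchall()
    finally:
        conn.close()
    return rows

if __name__ == "__main__":
    # Print everyone the recognizer should be able to identify.
    for uid, name in enrolled_users():
        print(uid, name)
```

If this prints nothing (or raises an error about a missing table), go back to Parts 1 and 2 before continuing.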
Face Recognition and fetch data from SQLite
Now, we will use the file we prepared during training to recognize whose face is in front of the camera. We already have our virtual environment activated and the necessary dependencies installed, so let’s get right to it. Make a file named detector.py in the working directory and paste in the code below:
```python
import cv2
import numpy as np
import sqlite3
import os

# Connect to the database created in Part 1 of the series.
conn = sqlite3.connect('database.db')
c = conn.cursor()

# The trained model from Part 2 must exist before we can recognize anyone.
fname = "recognizer/trainingData.yml"
if not os.path.isfile(fname):
    print("Please train the data first")
    exit(0)

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read(fname)

while True:
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 3)
        # predict() returns the label and a confidence (a distance:
        # lower means a closer match).
        ids, conf = recognizer.predict(gray[y:y + h, x:x + w])
        c.execute("SELECT name FROM users WHERE id = ?;", (ids,))
        result = c.fetchall()
        # fetchall() returns a list of tuples, so unwrap the name string.
        name = result[0][0] if result else 'Unknown'
        if conf < 50:
            cv2.putText(img, name, (x + 2, y + h - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (150, 255, 0), 2)
        else:
            cv2.putText(img, 'No Match', (x + 2, y + h - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
    cv2.imshow('Face Recognizer', img)
    k = cv2.waitKey(30) & 0xff
    if k == 27:  # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```
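The mapping from a prediction to an on-screen label is the part most worth tuning. LBPH confidence is a distance, so lower means a closer match, and the cutoff of 50 is an empirical starting point rather than a universal constant. Here is that logic factored into a small, hypothetical helper (the function name and `threshold` parameter are mine, not from the original code) so you can test thresholds without a webcam:

```python
import sqlite3

def label_for(cursor, pred_id, conf, threshold=50.0):
    """Map an LBPH prediction to a display label.

    LBPH confidence is a distance (lower = better), so the prediction
    is accepted only when conf is below the threshold. 50 is just an
    empirical starting point; tune it for your own dataset.
    """
    if conf >= threshold:
        return "No Match"
    cursor.execute("SELECT name FROM users WHERE id = ?;", (pred_id,))
    row = cursor.fetchone()
    # A recognized label with no matching database row means the
    # database and the trained model are out of sync.
    return row[0] if row else "Unknown"
```

If faces you trained on keep showing “No Match”, try raising the threshold; if strangers get labeled with enrolled names, lower it or record more training images.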
Run it with:

python3 detector.py

And voila!
Your face recognition app is now ready. If you had any problem following along with the tutorial or have any confusion, do let us know in the comments. The full code used in this tutorial can be found in this GitHub repo.
Also, if you want to learn how to detect faces using jQuery, there is a nice tutorial at thedebuggers.com.