Hello everyone, this is part two of the tutorial series on face recognition using OpenCV. In part one, we discussed how to set up a virtualenv and install the necessary dependencies. To get a general idea of what face detection and face recognition are, and to follow along with this tutorial, I advise you to check out part one of the series first if you haven’t already. Part two is titled “Face Detection and Face Recognition using OpenCV – Training”. In this part of the tutorial, we focus on writing the code for recording faces and training the face recognizer. We can further divide this part into:
- Create database for face recognition
- Record faces
- Train Recognizer
Create Database for face recognition
We are first going to create a database that stores the names corresponding to the recorded faces. We will be using SQLite 3 for this purpose. Create a file named create_database.py in the working directory and paste in the code below:
```python
import sqlite3

conn = sqlite3.connect('database.db')
c = conn.cursor()

sql = """
DROP TABLE IF EXISTS users;
CREATE TABLE users (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT
);
"""
c.executescript(sql)

conn.commit()
conn.close()
```
Run the Python script using the command:

```shell
python3 create_database.py
```

This will create a database file named database.db in the current directory. The database contains a table named users with two columns, id and name. Once the database is created, you can use DB Browser for SQLite to view the table structure as well as the data.
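As a quick sanity check of the schema, the sketch below runs the same table definition against an in-memory SQLite database (so it never touches database.db) and shows how AUTOINCREMENT assigns ids on insert:

```python
import sqlite3

# In-memory database using the same schema as create_database.py
conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.executescript("""
CREATE TABLE users (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT
);
""")

# The id column is assigned automatically on insert
c.execute('INSERT INTO users (name) VALUES (?)', ('Alice',))
c.execute('INSERT INTO users (name) VALUES (?)', ('Bob',))
print(c.execute('SELECT id, name FROM users').fetchall())
# [(1, 'Alice'), (2, 'Bob')]

conn.close()
```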
Now, we are going to prepare the dataset for face recognition. We will be using the haarcascade_frontalface_default.xml file provided in the opencv/data/haarcascades directory of the OpenCV repository on GitHub. Download the file and place it in the working directory. After that, create a file named record_face.py in the working directory and paste in the code below:
```python
import os
import sqlite3

import cv2

conn = sqlite3.connect('database.db')
c = conn.cursor()

# Create the dataset directory if it does not exist yet
if not os.path.exists('./dataset'):
    os.makedirs('./dataset')

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)

# Record the name in the database; the generated row id
# labels the captured face samples
uname = input("Enter your name: ")
c.execute('INSERT INTO users (name) VALUES (?)', (uname,))
uid = c.lastrowid

sampleNum = 0
while True:
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        sampleNum = sampleNum + 1
        # Save the cropped grayscale face as User.<id>.<sample>.jpg
        cv2.imwrite("dataset/User." + str(uid) + "." + str(sampleNum) + ".jpg",
                    gray[y:y + h, x:x + w])
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
        cv2.waitKey(100)  # pause 100 ms between samples
    cv2.imshow('img', img)
    cv2.waitKey(1)
    if sampleNum > 20:
        break

cap.release()
conn.commit()
conn.close()
cv2.destroyAllWindows()
```
When run, the code above first asks you to enter a name for the face. It then uses the Haar cascade to find the face in the camera stream, capturing a sample roughly every 100 ms and saving each cropped face into the dataset directory inside the working directory. Once just over 20 samples have been collected, it stops. In the next step, we are going to train the recognizer for face recognition.
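The trainer in the next step relies on the User.&lt;id&gt;.&lt;sample&gt;.jpg naming convention used above to map each image back to its user id. A small sketch (using hypothetical file names) of how the id can be recovered from a dataset path:

```python
import os

def user_id_from_path(image_path):
    """Extract the numeric user id from a path like 'dataset/User.3.14.jpg'."""
    filename = os.path.split(image_path)[-1]  # 'User.3.14.jpg'
    return int(filename.split('.')[1])        # the id is the second dot-separated field

# Hypothetical sample paths, named the way record_face.py writes them
paths = ['dataset/User.1.1.jpg', 'dataset/User.1.2.jpg', 'dataset/User.2.1.jpg']
print([user_id_from_path(p) for p in paths])  # [1, 1, 2]
```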
OpenCV provides three methods of face recognition:
- Eigenfaces
- Fisherfaces
- Local Binary Patterns Histograms (LBPH)
All three methods perform recognition by comparing the face to be recognized with a training set of known faces. In the training set, we supply the algorithm with faces and tell it which person each one belongs to.
Eigenfaces and Fisherfaces find a mathematical description of the most dominant features of the training set as a whole. LBPH analyzes each face in the training set separately and independently. The LBPH method is somewhat simpler, in the sense that we characterize each image in the dataset locally; when a new unknown image is provided, we perform the same analysis on it and compare the result with each of the images in the dataset. We will be using the LBPH face recognizer for our purpose. To do so, create a file named trainer.py in the working directory and paste in the code below:
```python
import os

import cv2
import numpy as np
from PIL import Image

recognizer = cv2.face.LBPHFaceRecognizer_create()
path = 'dataset'

# Create the recognizer directory if it does not exist yet
if not os.path.exists('./recognizer'):
    os.makedirs('./recognizer')

def getImagesWithID(path):
    imagePaths = [os.path.join(path, f) for f in os.listdir(path)]
    faces = []
    IDs = []
    for imagePath in imagePaths:
        # Load the face sample as a grayscale numpy array
        faceImg = Image.open(imagePath).convert('L')
        faceNp = np.array(faceImg, 'uint8')
        # Filenames look like User.<id>.<sample>.jpg; the user id
        # is the second dot-separated field
        ID = int(os.path.split(imagePath)[-1].split('.')[1])
        faces.append(faceNp)
        IDs.append(ID)
        cv2.imshow("training", faceNp)
        cv2.waitKey(10)
    return np.array(IDs), faces

Ids, faces = getImagesWithID(path)
recognizer.train(faces, Ids)
recognizer.save('recognizer/trainingData.yml')
cv2.destroyAllWindows()
```
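To build some intuition for what LBPH computes under the hood, here is a toy sketch of the basic local binary pattern for a single pixel: each of the 8 neighbours is thresholded against the centre value and the resulting bits are packed into one byte. This is illustrative only, not OpenCV’s exact implementation (which also builds histograms of these codes over image regions):

```python
import numpy as np

def lbp_code(patch):
    """Basic 3x3 local binary pattern: threshold the 8 neighbours
    against the centre pixel and pack the bits into one byte."""
    center = patch[1, 1]
    # Neighbours taken clockwise starting from the top-left corner
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if n >= center else 0 for n in neighbours]
    return sum(bit << i for i, bit in enumerate(bits))

patch = np.array([[10, 20, 30],
                  [40, 50, 60],
                  [70, 80, 90]], dtype=np.uint8)
print(lbp_code(patch))  # 120: only the four neighbours >= 50 set their bits
```

Because the code depends only on whether each neighbour is brighter or darker than the centre, it is robust to uniform lighting changes, which is one reason LBPH works well on faces captured under varying illumination.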
Run it using the command:

```shell
python3 trainer.py
```

This will create a file named trainingData.yml inside the recognizer directory in the working directory.
This brings us to the end of part two of the tutorial series on face recognition using OpenCV. In this part of the series, we created three files:
- create_database.py: Creates the database and the users table.
- record_face.py: Captures face images and records the corresponding name in the database.
- trainer.py: Uses OpenCV’s LBPH face recognizer to train on the dataset, producing the trainingData.yml file that we’ll use later in the tutorial for face recognition.
Our face recognition app is almost complete. All that remains is to recognize faces and fetch the matching names from SQLite, which we will cover in part three of the tutorial series.