Hello everyone, this is part three of the tutorial series Face Recognition using OpenCV. We are using OpenCV 3.4.0 to build our face recognition app. In the earlier parts of the tutorial, we covered how to write the code for recording face data and training the face recognition program. If you want to follow along with the series and build your own face recognition application, I strongly advise you to check those out first. Part three of the series is titled “Face Recognition using OpenCV – fetching data from SQLite”. In this part, we will focus on writing the code that recognizes faces and fetches the corresponding user information from the SQLite database.
If you are following along with the tutorial series face recognition using OpenCV, by the end of part 2 of the series, you should have created three files:
- create_database.py: To create database and table
- record_face.py: To capture face images and record the corresponding name in the database.
- trainer.py: Uses OpenCV’s LBPH Face Recognizer to train on the dataset and output the trainingData.yml file that we’ll be using for face recognition.
You should already have the trainingData.yml file inside the ‘recognizer’ directory in your working directory. If you don’t, you might want to recheck Part 2 of the tutorial series. You might also remember from Part 2 that we used the LBPH Face Recognizer to train our data. If you are curious about how LBPH works, you can refer to this article here.
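Before moving on, you can run a small, optional sanity check to confirm both artifacts from the earlier parts are in place. The snippet below is only a sketch; it assumes the layout used in Parts 1 and 2, i.e. a database.db containing a users table with id and name columns, and the trained model at recognizer/trainingData.yml.

import os
import sqlite3

# Optional sanity check: make sure the artifacts from Parts 1 and 2 exist.
assert os.path.isfile("recognizer/trainingData.yml"), "Run trainer.py first (see Part 2)"

conn = sqlite3.connect("database.db")
c = conn.cursor()
c.execute("SELECT id, name FROM users;")
for user_id, name in c.fetchall():
    print(user_id, name)   # every recorded user should show up here
conn.close()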
Face Recognition and Fetching Data from SQLite
Now, we will be using the file we prepared during training to recognize whose face is in front of the camera. We already have our virtual environment activated and the necessary dependencies installed. So, let’s get right to it. Make a file named detector.py in the working directory and copy-paste the code below:
import cv2
import numpy as np
import sqlite3
import os

# Connect to the database created in Part 1.
conn = sqlite3.connect('database.db')
c = conn.cursor()

# The trained model produced by trainer.py in Part 2.
fname = "recognizer/trainingData.yml"
if not os.path.isfile(fname):
    print("Please train the data first")
    exit(0)

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read(fname)

while True:
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 3)
        # Predict the user id and confidence (lower conf means a closer match).
        ids, conf = recognizer.predict(gray[y:y + h, x:x + w])
        # Look up the corresponding name in the SQLite database.
        c.execute("select name from users where id = (?);", (ids,))
        result = c.fetchall()
        name = result[0][0] if result else "Unknown"  # guard against an id missing from the table
        if conf < 50:
            cv2.putText(img, name, (x + 2, y + h - 5), cv2.FONT_HERSHEY_SIMPLEX, 1, (150, 255, 0), 2)
        else:
            cv2.putText(img, 'No Match', (x + 2, y + h - 5), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
    cv2.imshow('Face Recognizer', img)
    k = cv2.waitKey(30) & 0xff
    if k == 27:  # Esc key exits the loop
        break

cap.release()
cv2.destroyAllWindows()
Run it using the command python3 detector.py
and voila!
Your face recognition app is now ready. If you had any problems following along with the tutorial or have any confusion, do let us know in the comments. The full code used in this tutorial can be found in this github repo.
Also, if you want to learn how to detect faces using JQuery, here is a nice tutorial at thedebuggers.com
Awesome program, everything ran absolutely fine!
But I have a query:
How can we replace the live camera feed with a video saved on my system?
And can we insert images into the dataset directly (like directly placing pictures in the dataset folder)?
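For anyone with the same question: cv2.VideoCapture also accepts a path to a video file instead of a camera index, and yes, you can drop images straight into the dataset folder as long as they follow the User.userid.samplenumber naming that trainer.py expects (see the last reply below). A rough sketch of the video-file swap, with a placeholder path:

import cv2

# Assumption: replace the camera index with a path to a video file on disk.
cap = cv2.VideoCapture("path/to/your_video.mp4")  # instead of cv2.VideoCapture(0)

while cap.isOpened():
    ret, img = cap.read()
    if not ret:  # end of the video file
        break
    # ... run the same detection/recognition code as in detector.py ...
    cv2.imshow("Face Recognizer", img)
    if cv2.waitKey(30) & 0xFF == 27:
        break

cap.release()
cv2.destroyAllWindows()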
Hello Nice Article!
What can I do if I only have one or two training images for each person, but a lot of test images of that person that I need to recognize?
So my question is: what options do I have, and what algorithms/libraries should I use to get the best results?
Thanks In Advance
You might want to check out MTCNN/FaceNet for your purpose, as OpenCV’s LBPHFaceRecognizer doesn’t perform well with only a few images per person.
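A minimal sketch of that embedding-based approach, assuming the third-party facenet-pytorch package (not used anywhere else in this tutorial): detect and crop the face with MTCNN, compute an embedding with a pretrained FaceNet model, and compare embeddings by distance. One enrollment image per person can be enough here; the threshold below is an assumption you would tune on your own data.

# Sketch only: assumes `pip install facenet-pytorch pillow torch`.
from facenet_pytorch import MTCNN, InceptionResnetV1
from PIL import Image

mtcnn = MTCNN(image_size=160)                              # face detector + aligner
resnet = InceptionResnetV1(pretrained='vggface2').eval()   # pretrained embedding model

def embedding(path):
    face = mtcnn(Image.open(path))                         # cropped face tensor, or None
    return resnet(face.unsqueeze(0)) if face is not None else None

ref = embedding("person1_reference.jpg")                   # single enrollment image
probe = embedding("unknown.jpg")
if ref is not None and probe is not None:
    dist = (ref - probe).norm().item()                     # smaller distance = more similar
    print("match" if dist < 1.0 else "no match")           # threshold is an assumption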
There is an error with:
ids, conf = recognizer.predict(gray[y:y + h, x:x + w])
cv2.error: OpenCV(3.4.5) C:\projects\opencv-python\opencv_contrib\modules\face\src\lbph_faces.cpp:406: error: (-5:Bad argument) This LBPH model is not computed yet. Did you call the train method? in function ‘cv::face::LBPH::predict’
Hi,
I got an empty faces list!
But it worked fine in Part 2.
Why does this part not read any image from the .yml file?
Please clarify your query. I don’t understand what you’re trying to say.
Please help, I keep receiving this error in Part 2 and I don’t know what to do:
C:\Users\SWINO\Desktop\opencvenv>python trainer.py
Traceback (most recent call last):
File "trainer.py", line 6, in <module>
recognizer = cv2.face.LBPHFaceRecognizer_create()
AttributeError: module ‘cv2.cv2’ has no attribute ‘face’
The error could be because you don’t have the OpenCV contrib package installed. Please make sure that you have it installed using the command
pip install opencv-contrib-python
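After installing, a quick way to confirm that the contrib build is the one actually being picked up is to check for the face module from a Python shell:

import cv2

print(cv2.__version__)        # should print the installed OpenCV version
print(hasattr(cv2, "face"))   # True means the contrib 'face' module is available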
Hi,
I have the same problem, please help me out; the solution you proposed isn’t working.
PS C:\Users\Rohit\project> pip install opencv-contrib --user
Collecting opencv-contrib
Could not find a version that satisfies the requirement opencv-contrib (from versions: )
No matching distribution found for opencv-contrib
Instead of doing
pip3 install opencv-contrib,
type,
pip3 install opencv-contrib-python
as stated in the first part of the tutorial 🙂
If you use a lower version of Python (<3.6), it should work fine.
c:\Python36\Scripts>pip install opencv-contrib-python
Collecting opencv-contrib-python
Using cached https://files.pythonhosted.org/packages/c8/94/1e4d01518a87c7de4591892d48ac403e721e13504a819cc358f93409d94a/opencv_contrib_python-3.4.3.18-cp36-cp36m-win_amd64.whl
Requirement already satisfied: numpy>=1.11.3 in c:\python36\lib\site-packages (from opencv-contrib-python) (1.15.2)
Installing collected packages: opencv-contrib-python
Could not install packages due to an EnvironmentError: [WinError 5] Access is denied: ‘c:\\python36\\Lib\\site-packages\\cv2\\cv2.cp36-win_amd64.pyd’
Consider using the `--user` option or check the permissions.
Please help me with this error.
Hello,
Did you find a fix for this error? I am also getting the same error, even after running the Anaconda prompt as administrator.
Please help
Hello! I want to detect faces in an image, what can I do? Thanks
Please have a look at the comments section. Your query has already been answered.
Hello !
Nice article, worked perfectly on my computer.
Just one thing: there is a small recurring indentation error in each snippet, where you check if the directory exists.
Thanks a lot !
PS: I need to go deeper in this domain; where should I look to understand it better rather than just copy/pasting?
OpenCV Error: Bad argument (This LBPH model is not computed yet. Did you call the train method?) in cv::face::LBPH::predict, file C:\bld\opencv_1510966172919\work\opencv-3.3.0\opencv_contrib-3.3.0\modules\face\src\lbph_faces.cpp, line 396
Traceback (most recent call last):
File "detector.py", line 21, in <module>
ids,conf = recognizer.predict(gray[y:y+h,x:x+w])
cv2.error: C:\bld\opencv_1510966172919\work\opencv-3.3.0\opencv_contrib-3.3.0\modules\face\src\lbph_faces.cpp:396: error: (-5) This LBPH model is not computed yet. Did you call the train method? in function cv::face::LBPH::predict
The error you received is telling you that you need to call the train method in the parent class of LBPHFaceRecognizer before you ask it to do any recognition. Please refer to part-2 of the tutorial series, link: http://www.python36.com/face-recognition-using-opencv-part-2/ for how to capture images and train the LBPHFaceRecognizer. You will then be able to use the generated file for recognizing faces. Do you have the trainingData.yml file inside the recognizer directory in the working directory? Also, are you sure you’re following the code provided in the tutorial? Because if the trainingData.yml file was not found, the program would have printed an error and exited before execution reached line 21. Thanks. Please do let me know if you’re still getting the error.
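For reference, the core of that training step from Part 2 boils down to something like the sketch below. Treat it as a reminder rather than a drop-in replacement for trainer.py; the paths and the User.<id>.<sample>.jpg filename convention follow the ones used throughout this series.

import os
import cv2
import numpy as np

recognizer = cv2.face.LBPHFaceRecognizer_create()
faces, ids = [], []

# Dataset images are expected to be named User.<id>.<sample>.jpg (see Part 2).
for filename in os.listdir("dataset"):
    img = cv2.imread(os.path.join("dataset", filename), cv2.IMREAD_GRAYSCALE)
    if img is None:
        continue
    faces.append(img)
    ids.append(int(filename.split(".")[1]))

recognizer.train(faces, np.array(ids))            # this is the call the error is about
recognizer.write("recognizer/trainingData.yml")   # the file detector.py loads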
First I want to say many thanks for this awesome work.
Question: I have trained 4 famous people with 20 pics each, 2 brunettes and 2 blondes. I would say they look different, but I’m not sure how good LBPH is at recognition.
But when I try to recognize a new picture, it doesn’t pick the right person. I noticed the confidence level was around 95. I will try with other pictures, but so far I’ve tried 2 pictures with no correct match.
Person 1 – Salma Hayek
Person 2 – Angelina Jolie
Person 3 – Jennifer Lawrence
Person 4 – Sharon Stone
Any ideas for improving accuracy are welcome.
Hello James,
I’m glad that you enjoyed the post. Is it possible for you to upload your code (along with the images) on github so that we can have a better look at it and see if something can be done regarding the improvement of accuracy?
Did you try with other pictures as well? What was the result? Also, since the accuracy is not 100%, there might be some false positives.
Thanks
Hello Aryal,
Sorry for the late response, I was ill. I am setting up the GitHub project, but I am afraid I should not include the pictures due to copyright; I will add them for now but remove them eventually to cover myself.
I have 5 test images of each actress but I have not tried them yet, due to illness.
https://github.com/JamesMcCoder/FaceDetectRecogUsingOpenCV
Finished. I have added a Results.txt showing the print output of the test file, containing the actress initials, the resulting name, and the confidence level, for curiosity.
8 out of 20 correct
1 of the 8 correct was below confidence level 100 (was 96)
Feels like the other 7 that were correct were just by chance.
Hello again, sorry for the many replies. I think I may have found the issue: my dataset images were never converted to grayscale and cropped to just the face. Using the two lines from your generator:
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
I will probably resave my raw (original) files into a cleaned and cropped set before training on those.
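A rough sketch of that kind of preprocessing pass might look like the following; the ./raw and ./dataset folder names are just placeholders for the example.

import os
import cv2

face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

# Assumption: original photos live in ./raw, cleaned grayscale crops go to ./dataset.
os.makedirs("dataset", exist_ok=True)
for filename in os.listdir("raw"):
    img = cv2.imread(os.path.join("raw", filename))
    if img is None:
        continue
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        crop = cv2.resize(gray[y:y + h, x:x + w], (200, 200))  # a uniform size also helps Eigen
        cv2.imwrite(os.path.join("dataset", filename), crop)
        break  # keep only the first detected face per photo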
Hello James. I am glad to see that you enjoyed the post. It is clear that you did your fair share of research on this as well. I’m glad that this post was of help in your quest of learning. Cheers.
New update: after doing the above, the outcome was still 8 out of 20, but the confidence levels dropped, which I assume is good. Then I decided to make the code usable by both Eigen and LBPH, which forced me to make all the pics the same size. After that work, Eigen was no better, but LBPH improved by almost 15 percent and the confidence levels improved drastically as well. This time I had 13 out of 20 correct. The next step is to add more training pics per actress, perhaps 50 each.
Same problem with me.
Thanks for making this simple, I really enjoyed it.
Sir, what should I do if I want to use cv2.imread instead of video, because I don’t have a webcam on my PC?
If you already have the trainingData.yml file ready, you don’t need cv2.VideoCapture(0).
You can then omit the while loop and use img = cv2.imread("PATH TO YOUR FILE", 1) instead of the cap.read() call to get face recognition working on your device with no webcam.
If you don’t have the trainingData.yml file ready (see Part 2 of the tutorial for how to create it), you will need to manually place the images into the ./dataset directory of the working directory with the correct filename format (User.userid.samplenumber) and run the trainer.py file.
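Putting that together, a minimal single-image version of detector.py might look like this sketch; the image path is a placeholder, and everything else follows the same code as above.

import cv2
import sqlite3

conn = sqlite3.connect("database.db")
c = conn.cursor()

face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("recognizer/trainingData.yml")

img = cv2.imread("path/to/your_image.jpg", 1)      # instead of cap.read()
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
    ids, conf = recognizer.predict(gray[y:y + h, x:x + w])
    c.execute("SELECT name FROM users WHERE id = (?);", (ids,))
    row = c.fetchone()
    label = row[0] if row and conf < 50 else "No Match"
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 3)
    cv2.putText(img, label, (x + 2, y + h - 5), cv2.FONT_HERSHEY_SIMPLEX, 1, (150, 255, 0), 2)

cv2.imshow("Face Recognizer", img)
cv2.waitKey(0)   # wait for a key press instead of looping over camera frames
cv2.destroyAllWindows()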