In this tutorial I will share a few simple steps that explain how to count objects in Python using OpenCV.
Some software needs to be installed first (Python with the OpenCV package). The following script captures frames from the camera and can be used to verify that the installation works:
import cv2, time

# Create a capture object; 0 selects the default camera
video = cv2.VideoCapture(0)

# frame counter
a = 0

while True:
    a = a + 1

    # Read a frame from the camera
    check, frame = video.read()
    print(check)
    print(frame)  # array representing the image

    # Convert the frame to grayscale
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Show the frame
    cv2.imshow("Capturing", gray)

    # cv2.waitKey(0) would instead pause until any key is pressed
    # Wait 1 millisecond between frames; press 'q' to quit
    key = cv2.waitKey(1)
    if key == ord('q'):
        break

print(a)

# Shut down the camera and close the window
video.release()
cv2.destroyAllWindows()
Now we will work through this vehicle-counting tutorial step by step. The first step is to open the video recording that we will use throughout the tutorial. The Python example code is as follows:
import numpy as np
import cv2

cap = cv2.VideoCapture('traf.mp4')  # Open video file

while cap.isOpened():
    ret, frame = cap.read()  # read a frame
    try:
        cv2.imshow('Frame', frame)
    except:  # if there are no more frames to show...
        print('EOF')
        break

    # Abort and exit with ESC
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release()            # release video file
cv2.destroyAllWindows()  # close all openCV windows
This part is very simple, because we only display text or draw lines on top of the video.
The Python code for displaying text on a video file is as follows:
import numpy as np
import cv2

cap = cv2.VideoCapture('traf.mp4')  # Open video file
w = cap.get(3)  # get width
h = cap.get(4)  # get height
mx = int(w / 2)
my = int(h / 2)
count = 0

while cap.isOpened():
    ret, frame = cap.read()  # read a frame
    try:
        count = count + 1
        text = "Statistika UII " + str(count)
        cv2.putText(frame, text, (mx, my), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 1, cv2.LINE_AA)
        cv2.imshow('Frame', frame)
    except:  # if there are no more frames to show...
        print('EOF')
        break

    # Abort and exit with ESC
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release()            # release video file
cv2.destroyAllWindows()  # close all openCV windows
Besides displaying text, we can also draw lines, circles and other shapes; OpenCV provides many functions for drawing geometric primitives.
import numpy as np
import cv2

cap = cv2.VideoCapture('traf.mp4')  # Open video file

while cap.isOpened():
    ret, frame = cap.read()  # read a frame
    try:
        cv2.imshow('Frame', frame)
        frame2 = frame
    except:  # if there are no more frames to show...
        print('EOF')
        break

    line1 = np.array([[100, 100], [300, 100], [350, 200]], np.int32).reshape((-1, 1, 2))
    line2 = np.array([[400, 50], [450, 300]], np.int32).reshape((-1, 1, 2))
    frame2 = cv2.polylines(frame2, [line1], False, (255, 0, 0), thickness=2)
    frame2 = cv2.polylines(frame2, [line2], False, (0, 0, 255), thickness=1)
    cv2.imshow('Frame 2', frame2)

    # Abort and exit with ESC
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release()            # release video file
cv2.destroyAllWindows()  # close all openCV windows
This method separates objects by distinguishing the movement of the foreground (the objects) from the static background. It is widely used for counting people entering or leaving a room, counting vehicles in traffic information systems, counting visitors, and similar tasks.
import numpy as np
import cv2

cap = cv2.VideoCapture('traf.mp4')  # Open video file
fgbg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)  # Create the background subtractor

while cap.isOpened():
    ret, frame = cap.read()        # read a frame
    fgmask = fgbg.apply(frame)     # Use the subtractor
    try:
        cv2.imshow('Frame', frame)
        cv2.imshow('Background Subtraction', fgmask)
    except:  # if there are no more frames to show...
        print('EOF')
        break

    # Abort and exit with ESC
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release()            # release video file
cv2.destroyAllWindows()  # close all openCV windows
In the resulting mask, black pixels represent the background, while white pixels represent the detected objects.
Morphology in image processing, i.e. mathematical morphology, is one of the most widely used techniques in the field. It is mainly used to extract image components that are meaningful for representing and describing the shape of regions, so that subsequent recognition can capture the most essential (most discriminative) shape features of the target object, such as boundaries and connected regions. Related techniques such as thinning, pixelation and burr pruning are also commonly applied in image pre- and post-processing, serving as a powerful complement to image enhancement.
The most commonly used morphological operations are erosion, dilation, and the opening and closing operations.
Dilation: the value of the output pixel is the maximum of all pixel values in the input neighborhood. In a binary image, if any pixel in the neighborhood has the value 1, the output pixel is 1.
Erosion: the value of the output pixel is the minimum of all pixel values in the input neighborhood. In a binary image, if any pixel in the neighborhood has the value 0, the output pixel is 0.
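As a quick, purely illustrative sanity check of this max/min rule (using a made-up 5×5 binary matrix rather than the traffic video), you can run both operations on a tiny array and inspect the printed output:

import cv2
import numpy as np

# A tiny binary image: a 3x3 white square on a black background
img = np.zeros((5, 5), np.uint8)
img[1:4, 1:4] = 255

kernel = np.ones((3, 3), np.uint8)

# Dilation: a pixel becomes white if ANY pixel under the kernel is white
print(cv2.dilate(img, kernel, iterations=1))  # the white square grows to fill the 5x5 array

# Erosion: a pixel stays white only if ALL pixels under the kernel are white
print(cv2.erode(img, kernel, iterations=1))   # only the centre pixel survives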
Applying dilation and erosion to an image file in Python looks like this:
import cv2import numpy as npimg = cv2.imread("carcount.png")ret,thresh1 = cv2.threshold(img,127,255,cv2.THRESH_BINARY)kernel = np.ones((3,3),np.uint8)erosion = cv2.erode(img,kernel,iterations = 1)dilation = cv2.dilate(img,kernel,iterations = 1)cv2.imwrite("erode.png",erosion)cv2.imwrite("dilate.png",dilation)
The results of erosion and dilation are shown in the images below:
Opening: erosion followed by dilation. It removes isolated points outside the target. These isolated points have the same pixel value as the target (1) rather than the background value (0, which represents holes or background pixels). Erosion expands the background and removes the isolated points, but it also shrinks the target region by one ring, so a dilation step is applied afterwards to restore the target region to its original size. This is why the opening operation is used to remove isolated points outside the target.
Closing: dilation followed by erosion. It removes holes inside the target, i.e. regions of value 0 that are completely surrounded by pixels of value 1. Closing first dilates the image, expanding the target region by one ring and filling the 0-valued holes, but this also grows the target outward, so an erosion step is then applied to restore the target region to its previous size.
The code implementation is as follows:
import cv2import numpy as npimg = cv2.imread("carcount.png")ret,thresh1 = cv2.threshold(img,200,255,cv2.THRESH_BINARY)kernel = np.ones((5,5),np.uint8)opening = cv2.morphologyEx(thresh1, cv2.MORPH_OPEN, kernel)closing = cv2.morphologyEx(thresh1, cv2.MORPH_CLOSE, kernel)cv2.imwrite("carcount_closing.png",closing)cv2.imwrite("carcount_opening.png",opening)
So far we have filtered the video stream; next we will detect the contours of the moving objects.
import numpy as np
import cv2

cap = cv2.VideoCapture('traf.mp4')  # Open video file
fgbg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)  # Create the background subtractor
kernelOp = np.ones((3, 3), np.uint8)
kernelCl = np.ones((11, 11), np.uint8)

while cap.isOpened():
    ret, frame = cap.read()     # read a frame
    fgmask = fgbg.apply(frame)  # Use the subtractor
    try:
        ret, imBin = cv2.threshold(fgmask, 200, 255, cv2.THRESH_BINARY)
        # Opening (erode -> dilate)
        mask = cv2.morphologyEx(imBin, cv2.MORPH_OPEN, kernelOp)
        # Closing (dilate -> erode)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernelCl)
    except:  # if there are no more frames to show...
        print('EOF')
        break

    # OpenCV 3.x: findContours returns three values here
    _, contours0, hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    for cnt in contours0:
        cv2.drawContours(frame, cnt, -1, (0, 255, 0), 3, 8)

    cv2.imshow('Frame', frame)

    # Abort and exit with ESC
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release()            # release video file
cv2.destroyAllWindows()  # close all openCV windows
This is a very interesting part: we classify the contours as vehicle objects. Each detected vehicle is marked with a small red dot at its centroid. The Python implementation is as follows:
import numpy as np
import cv2

cap = cv2.VideoCapture('traf.mp4')  # Open video file
fgbg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)  # Create the background subtractor
kernelOp = np.ones((3, 3), np.uint8)
kernelCl = np.ones((11, 11), np.uint8)
areaTH = 500

while cap.isOpened():
    ret, frame = cap.read()     # read a frame
    fgmask = fgbg.apply(frame)  # Use the subtractor
    try:
        ret, imBin = cv2.threshold(fgmask, 200, 255, cv2.THRESH_BINARY)
        # Opening (erode -> dilate)
        mask = cv2.morphologyEx(imBin, cv2.MORPH_OPEN, kernelOp)
        # Closing (dilate -> erode)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernelCl)
    except:  # if there are no more frames to show...
        print('EOF')
        break

    # OpenCV 3.x: findContours returns three values here
    _, contours0, hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    for cnt in contours0:
        cv2.drawContours(frame, cnt, -1, (0, 255, 0), 3, 8)
        area = cv2.contourArea(cnt)
        print(area)
        if area > areaTH:
            #################
            #   TRACKING    #
            #################
            M = cv2.moments(cnt)
            cx = int(M['m10'] / M['m00'])
            cy = int(M['m01'] / M['m00'])
            x, y, w, h = cv2.boundingRect(cnt)
            cv2.circle(frame, (cx, cy), 5, (0, 0, 255), -1)
            img = cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow('Frame', frame)

    # Abort and exit with ESC
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release()            # release video file
cv2.destroyAllWindows()  # close all openCV windows
You already know what objects are in the video; now you want to know where they are moving (e.g. up or down). In the first frame you need to save the initial position of each detected object together with its ID. Then, in the following frames, to keep tracking an object you must match its contour with the ID it was given when it first appeared and store its new coordinates. Finally, once the object crosses a boundary of the video (or some chosen limit), you can use the stored positions to decide whether it moved up or down.
import numpy as np
import cv2
import Car
import time

cap = cv2.VideoCapture('peopleCounter.avi')  # Open video file
fgbg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)  # Create the background subtractor
kernelOp = np.ones((3, 3), np.uint8)
kernelCl = np.ones((11, 11), np.uint8)

# Variables
font = cv2.FONT_HERSHEY_SIMPLEX
cars = []
max_p_age = 5
pid = 1
areaTH = 500

while cap.isOpened():
    ret, frame = cap.read()     # read a frame
    fgmask = fgbg.apply(frame)  # Use the subtractor
    try:
        ret, imBin = cv2.threshold(fgmask, 200, 255, cv2.THRESH_BINARY)
        # Opening (erode -> dilate)
        mask = cv2.morphologyEx(imBin, cv2.MORPH_OPEN, kernelOp)
        # Closing (dilate -> erode)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernelCl)
    except:  # if there are no more frames to show...
        print('EOF')
        break

    # OpenCV 3.x: findContours returns three values here
    _, contours0, hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    for cnt in contours0:
        cv2.drawContours(frame, cnt, -1, (0, 255, 0), 3, 8)
        area = cv2.contourArea(cnt)
        if area > areaTH:
            #################
            #   TRACKING    #
            #################
            M = cv2.moments(cnt)
            cx = int(M['m10'] / M['m00'])
            cy = int(M['m01'] / M['m00'])
            x, y, w, h = cv2.boundingRect(cnt)

            new = True
            for i in cars:
                if abs(x - i.getX()) <= w and abs(y - i.getY()) <= h:
                    # the object is close to one that was already detected before
                    new = False
                    i.updateCoords(cx, cy)  # Update coordinates on the object and reset its age
                    break
            if new == True:
                p = Car.MyCar(pid, cx, cy, max_p_age)
                cars.append(p)
                pid += 1

            cv2.circle(frame, (cx, cy), 5, (0, 0, 255), -1)
            img = cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.drawContours(frame, cnt, -1, (0, 255, 0), 3)

    for i in cars:
        if len(i.getTracks()) >= 2:
            pts = np.array(i.getTracks(), np.int32)
            pts = pts.reshape((-1, 1, 2))
            frame = cv2.polylines(frame, [pts], False, i.getRGB())
        if i.getId() == 9:
            print(str(i.getX()), ',', str(i.getY()))
        cv2.putText(frame, str(i.getId()), (i.getX(), i.getY()), font, 0.3, i.getRGB(), 1, cv2.LINE_AA)

    cv2.imshow('Frame', frame)

    # Abort and exit with ESC
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release()            # release video file
cv2.destroyAllWindows()  # close all openCV windows
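The script above imports a Car module that is not listed in this tutorial. Purely as a sketch of what that module might contain (not the original file), a MyCar class consistent with the methods used above could look like the following; the random drawing colour and the age counter are assumptions:

# Car.py -- hypothetical sketch of the tracked-object class used above
from random import randint

class MyCar:
    def __init__(self, i, xi, yi, max_age):
        self.i = i                   # object id
        self.x = xi                  # current centroid x
        self.y = yi                  # current centroid y
        self.tracks = [[xi, yi]]     # history of centroid positions
        self.R = randint(0, 255)     # random colour used for drawing
        self.G = randint(0, 255)
        self.B = randint(0, 255)
        self.max_age = max_age       # frames to keep the object without updates
        self.age = 0

    def getRGB(self):
        return (self.R, self.G, self.B)

    def getId(self):
        return self.i

    def getX(self):
        return self.x

    def getY(self):
        return self.y

    def getTracks(self):
        return self.tracks

    def updateCoords(self, xn, yn):
        self.age = 0                 # reset age on every update
        self.tracks.append([xn, yn])
        self.x = xn
        self.y = yn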
From the previous part you already know how to detect the movement of objects. Now we have to look at the stored track of each object and decide whether it is moving up or down in our video. To do this, we first create two lines that tell us when to evaluate the direction of an object (line_up, line_down). There are also two boundary lines that tell us when to stop tracking an object (up_limit, down_limit).
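A minimal sketch of this idea is shown below. The y-coordinates of the four lines are placeholder values, and the going_up/going_down helpers are hypothetical; they simply compare the last two stored positions of a track against a counting line:

# Hypothetical direction check -- the y-coordinates below are placeholder values
h = 480                        # frame height (example)
line_up    = int(2 * h / 5)    # upper counting line
line_down  = int(3 * h / 5)    # lower counting line
up_limit   = int(1 * h / 5)    # stop tracking above this line
down_limit = int(4 * h / 5)    # stop tracking below this line

cnt_up = 0
cnt_down = 0

def going_up(track, line):
    # the object crossed the line moving upwards between the last two frames
    return track[-2][1] > line >= track[-1][1]

def going_down(track, line):
    # the object crossed the line moving downwards between the last two frames
    return track[-2][1] < line <= track[-1][1]

# inside the per-frame loop, after updating each car's coordinates:
# for car in cars:
#     t = car.getTracks()
#     if len(t) >= 2 and up_limit <= t[-1][1] <= down_limit:
#         if going_up(t, line_up):
#             cnt_up += 1
#         elif going_down(t, line_down):
#             cnt_down += 1

Only the segment between the last two stored positions is checked, so each vehicle is counted once when it actually crosses a counting line rather than on every frame.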