Object tracking in OpenCV is a fundamental technique used in computer vision to monitor and follow the movement of objects within a video stream or sequence of images. It plays a crucial role in various applications such as surveillance, automotive safety systems, human-computer interaction, and augmented reality. At its core, object tracking involves locating a particular object of interest in successive frames of a video and determining its trajectory over time. This process typically involves identifying the object using features like color, shape, or texture and then employing algorithms to estimate its position and motion.
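To make the idea concrete, the minimal sketch below (an illustration, not a complete tracker) estimates an object's position in a single frame from a binary mask, assuming the mask has already been produced by a detection step such as color thresholding; the mask contents here are purely hypothetical:

import cv2
import numpy as np

# Hypothetical binary mask: white (255) pixels mark the detected object.
mask = np.zeros((360, 640), dtype=np.uint8)
mask[100:200, 250:350] = 255

# Image moments of the mask give the centroid of the white region,
# i.e. an estimate of the object's position in this frame.
m = cv2.moments(mask)
if m['m00'] > 0:
    cx = int(m['m10'] / m['m00'])
    cy = int(m['m01'] / m['m00'])
    print('Estimated object position:', (cx, cy))

Repeating such a position estimate frame after frame is what yields the object's trajectory over time.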
The importance of object tracking lies in its ability to extract valuable information from video data in real-time or offline scenarios. In surveillance applications, for instance, it enables automated monitoring of activities and detection of suspicious behavior by continuously tracking moving objects within a scene. Similarly, in automotive safety systems like autonomous driving, object tracking facilitates the detection and tracking of vehicles, pedestrians, and other obstacles to ensure safe navigation on roads. Moreover, in interactive systems, such as gesture recognition or virtual reality, object tracking enables the seamless integration of virtual objects into the real-world environment, enhancing user experiences.
These applications span many domains. In surveillance systems, tracking across security-camera feeds supports real-time detection of intruders, suspicious activities, and abandoned objects. In autonomous vehicles, continuously monitoring the movement of vehicles, pedestrians, and obstacles lets the vehicle make informed decisions and avoid collisions. In augmented reality and human-computer interaction, tracking enables immersive experiences by overlaying virtual objects onto the real-world environment based on the user's movements and interactions.
Example program to track an object based on color in OpenCV, using HSV thresholding on a webcam feed with trackbars to tune the color range:
import cv2
import numpy as np

print(cv2.__version__)

# Trackbar callbacks: each one stores the latest slider value
# in the corresponding global HSV threshold variable.
def onTrack1(val):
    global hueLow
    hueLow = val
    print('Hue Low', hueLow)

def onTrack2(val):
    global hueHigh
    hueHigh = val
    print('Hue High', hueHigh)

def onTrack3(val):
    global satLow
    satLow = val
    print('Sat Low', satLow)

def onTrack4(val):
    global satHigh
    satHigh = val
    print('Sat High', satHigh)

def onTrack5(val):
    global valLow
    valLow = val
    print('Val Low', valLow)

def onTrack6(val):
    global valHigh
    valHigh = val
    print('Val High', valHigh)

width = 640
height = 360

# Initial HSV thresholds; these match the trackbar starting positions
# so the mask is defined before the sliders are first moved.
hueLow, hueHigh = 10, 20
satLow, satHigh = 20, 250
valLow, valHigh = 10, 250

# Open the webcam (CAP_DSHOW selects the DirectShow backend on Windows).
cam = cv2.VideoCapture(0, cv2.CAP_DSHOW)
cam.set(cv2.CAP_PROP_FRAME_WIDTH, width)
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
cam.set(cv2.CAP_PROP_FPS, 30)

# Window with six trackbars for the lower and upper HSV bounds.
cv2.namedWindow('myTracker')
cv2.moveWindow('myTracker', width, 0)
cv2.createTrackbar('Hue Low', 'myTracker', 10, 179, onTrack1)
cv2.createTrackbar('Hue High', 'myTracker', 20, 179, onTrack2)
cv2.createTrackbar('Sat Low', 'myTracker', 20, 255, onTrack3)
cv2.createTrackbar('Sat High', 'myTracker', 250, 255, onTrack4)
cv2.createTrackbar('Val Low', 'myTracker', 10, 255, onTrack5)
cv2.createTrackbar('Val High', 'myTracker', 250, 255, onTrack6)

while True:
    ignore, frame = cam.read()
    # Convert to HSV and keep only the pixels inside the chosen color range.
    frameHSV = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lowerBound = np.array([hueLow, satLow, valLow])
    upperBound = np.array([hueHigh, satHigh, valHigh])
    myMask = cv2.inRange(frameHSV, lowerBound, upperBound)
    # Apply the mask to the frame so only the tracked object remains visible.
    myObject = cv2.bitwise_and(frame, frame, mask=myMask)
    myObjectSmall = cv2.resize(myObject, (int(width / 2), int(height / 2)))
    cv2.imshow('My Object', myObjectSmall)
    cv2.moveWindow('My Object', int(width / 2), height)
    myMaskSmall = cv2.resize(myMask, (int(width / 2), int(height / 2)))
    cv2.imshow('My Mask', myMaskSmall)
    cv2.moveWindow('My Mask', 0, height)
    cv2.imshow('my WEBcam', frame)
    cv2.moveWindow('my WEBcam', 0, 0)
    if cv2.waitKey(1) & 0xff == ord('q'):
        break

cam.release()
cv2.destroyAllWindows()
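The program above only visualizes the mask and the masked frame; it does not yet report where the object is. As one possible extension (a sketch assuming OpenCV 4.x, not part of the original program), the snippet below can be placed inside the while loop, after myMask is computed, to locate the largest region of the selected color and draw its bounding box on the webcam frame:

    # Sketch: goes inside the while loop, after myMask is computed.
    contours, _ = cv2.findContours(myMask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # Track the largest connected region of the selected color.
        largest = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(largest)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

Recording the box's center from frame to frame gives a simple trajectory of the tracked object.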