
OpenCV and Python

While CUDA is an excellent language for maximizing performance, it may be infeasible due to inexperience with C, time constraints, or the lack of an NVIDIA GPU. To address these constraints, we also implemented our algorithms in Python using the Python OpenCV library. We chose Python because it is easy for inexperienced programmers to use, and we chose OpenCV because it offers implementations of many common image processing algorithms. Unfortunately, using OpenCV limits us to the library's implementation, which does not currently support GPU parallelization from Python without C wrappers. For each of the algorithms in this paper, we present code snippets that provide a sample CPU-based Python implementation alongside a GPU-based parallel CUDA implementation.

Motivation

GPU parallelization is a powerful tool for accelerating image processing tasks. The motivation for our CUDA motion detection and facial recognition algorithms stems from their diverse applications and the ease of coding a GPU implementation, which is due in large part to the simplicity of the CUDA APIs and OpenCV libraries. For example, motion detection is applied in real-time traffic analysis, vision-based security, and consumer technology (e.g., motion-triggered home automation solutions). Further, since our computer vision algorithm calculates the location of motion in a frame based on detected edges, our approach is both more accurate and supplemented with more specific data. Facial recognition is applied in real-time security, smart photographic analysis, and military applications, including target-based missile guidance systems. Therefore, we see an opportunity to use CUDA to explore the speedup possible by using a GPU to accelerate these common tasks in computer vision.

Edge detection algorithm

Edge detection is the process of determining the locations of boundaries that separate objects, or sections of objects, in an image. This edge data simplifies the features of an image to only its basic outlines, which makes it a convenient input for many other problems in computer vision, including motion detection and tracking, object classification, and three-dimensional reconstruction. The edge detection algorithm we present is composed of three stages:

  1. Reduce the amount of high-frequency noise in the image using a two-dimensional low-pass filter.
  2. Determine the two-dimensional gradient of the filtered image by applying partial derivatives in both the horizontal and vertical directions.
  3. Classify edges by suppressing surrounding pixels in the gradient direction whose magnitude is lesser, and applying selective thresholding to a limited apron around each selected pixel.

Edges correspond to the points in the image that exhibit a high gradient magnitude. The edge data from our algorithm is then used as input to our motion detection algorithm.
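The three stages above can be sketched in CPU-side Python. The sketch below uses plain NumPy so that it is self-contained; the function and kernel choices (a 3x3 Gaussian and Sobel operators) are our illustrative assumptions, and a production version would instead call OpenCV routines such as cv2.GaussianBlur, cv2.Sobel, and cv2.Canny.

```python
import numpy as np

def convolve2d(image, kernel):
    """Direct 2-D convolution with zero padding, for illustration only."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    out = np.zeros(image.shape, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * flipped)
    return out

def edge_magnitude(image):
    # Stage 1: suppress high-frequency noise with a 2-D low-pass
    # (3x3 Gaussian approximation) filter.
    gauss = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0
    smoothed = convolve2d(image, gauss)
    # Stage 2: partial derivatives in the horizontal and vertical
    # directions via Sobel kernels.
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    sobel_y = sobel_x.T
    gx = convolve2d(smoothed, sobel_x)
    gy = convolve2d(smoothed, sobel_y)
    # Edges are the points of high gradient magnitude; stage 3 would then
    # apply non-maximum suppression and thresholding to this map.
    return np.hypot(gx, gy)
```

On a synthetic image containing a vertical step, the returned magnitude peaks along the step and vanishes in the flat regions, which is exactly the property stage 3 exploits when classifying edges.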

Fig. 1. High-level block diagram describing our implementation of a real-time edge-based motion detector.


Separable convolution

The filtering necessary in steps (1) and (2) of our algorithm above involves the convolution of a two-dimensional image with a two-dimensional kernel filter of constant width and height. A naïve implementation of two-dimensional convolution would require a double sum across the kernel and image for each individual pixel.
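To make the cost of that double sum concrete, the sketch below (function names are ours, for illustration) compares a direct two-dimensional convolution, which performs O(K^2) multiply-adds per pixel for a K x K kernel, against the separable form, which splits a rank-one kernel into a row pass and a column pass costing only O(2K) per pixel. Both the Gaussian and Sobel kernels used in our filter stages are separable in this way.

```python
import numpy as np

def conv2d_naive(image, kernel):
    """Naive 2-D convolution: a double sum over the whole K x K kernel
    for each output pixel (zero-padded borders)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    flipped = kernel[::-1, ::-1]
    out = np.zeros(image.shape, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * flipped)
    return out

def conv2d_separable(image, col, row):
    """Separable form: one 1-D convolution along each row with `row`,
    then one along each column with `col` (2K multiply-adds per pixel)."""
    tmp = np.apply_along_axis(lambda r: np.convolve(r, row, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, col, mode="same"), 0, tmp)
```

Because a separable kernel is the outer product of its column and row vectors, the two routines produce identical output; only the operation count differs.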

Source:  OpenStax, Elec 301 projects fall 2015. OpenStax CNX. Jan 04, 2016 Download for free at https://legacy.cnx.org/content/col11950/1.1