Computer Vision – OpenCV

I started writing this program for my Master's Degree thesis using the OpenCV library. The program is capable of performing motion analyses using Object Detection, Image Correlation and Edge Detection.

I used the Python language to implement the mathematical model, together with a number of libraries that I used especially to speed up GUI creation and plotting features. On this page I'm going to describe each Iris feature briefly. Below, I have included some videos showing how each technique works, and the last test is about a real-world vibration analysis.

A great deal of effort was put into making the GUI friendly so that users don't have to deal with the mathematical background. As a result, this page focuses mainly on applications, but you can read more about the mathematical background in my thesis, which is freely available here.

I implemented traditional Image Correlation methods, such as methods to compare two or more images in order to work out displacement fields. In addition, since I really like doing research, I developed some new algorithms for a better use of this technique during vibration analyses. Besides this, I made some attempts to use the novel real-time Object Detection technique in place of Image Correlation, and you can see some results below.

Recently I have developed a new module whose aim is to perform reliable vibration analyses in a non-intrusive way, using Image Correlation as the core method. The algorithm is a combination of the old and reliable Image Correlation technique and a novel algorithm which adjusts some parameters automatically. Image Correlation is not used to compare two images; instead, it is used to locate a marker in each video frame. Moreover, users only have to identify the marker in the first frame of the video. The rest is up to IRIS!
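To give an idea of what "locating a marker by Image Correlation" means, here is a minimal sketch using OpenCV's template matching. It is not the actual IRIS code; the video file name and the marker region are made-up example values.

```python
import cv2

def locate_marker(frame, marker):
    """Locate a marker template in a video frame via normalised cross-correlation.
    Returns the top-left corner of the best match and its correlation score
    (higher is better)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    result = cv2.matchTemplate(gray, marker, cv2.TM_CCOEFF_NORMED)
    _, confidence, _, top_left = cv2.minMaxLoc(result)
    return top_left, confidence

# Hypothetical usage: track the marker picked in the first frame through a video.
cap = cv2.VideoCapture("blade_test.avi")      # example file name
ok, first = cap.read()
x, y, w, h = 100, 50, 40, 40                  # marker region chosen by the user (example values)
marker = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)[y:y + h, x:x + w]

positions = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    pos, conf = locate_marker(frame, marker)
    positions.append(pos)
cap.release()
```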

This new module is described below, and it was used to analyse the main frequency of a real wind turbine blade. It's fair to mention some libraries I used to write the code, because they simplified my work a lot. Many thanks to:

  • OpenCV: I used lots of Computer Vision routines from this very fast library as bricks to build the software;
  • NumPy: numerical analyses were written using NumPy and SciPy data types and routines;
  • Matplotlib: for data plotting;
  • PyQt4: the GUI (Graphical User Interface) was implemented in PyQt4;
  • Python’s Standard Library (of course)!

Real-world test: Vibration analysis of a wind turbine blade

You can see a vibration test performed on a wind turbine blade. This test requires an impulsive excitation in order to detect the main frequencies of the structure. Classical analyses are performed by means of accelerometers, which measure acceleration at some points. Afterwards, the acquired data are processed (for example, displacements are calculated by integration) and the main frequencies emerge from a harmonic analysis.
Besides the classical sensors, a painted target was placed at the blade tip during this test. The whole test was filmed with a compact camera at 25 FPS in order to meet the requirements of the Nyquist theorem. The point of view of the camera was chosen to focus on the target.

By means of that target, IRIS is capable of analysing the motion. The technique is implemented in a new Python module, since it is very easy to add a module to the modular structure of IRIS. Firstly, the user is asked to identify the target in a few video frames, and some images of the target are saved; let me call these images samples. Secondly, IRIS analyses each frame of the recording in order to detect the target position.

This process seems very similar to the Object Detector described below, but it is quite different. IRIS tries to locate each sample in the first frame by means of Image Correlation. As a result, we have a confidence value associated with each sample, and we are able to choose the sample with the best confidence value, which is stored for later use. The following frames are analysed using the same sample. If users want, IRIS can repeat the optimising process, trying to reach a greater confidence value with another sample before reusing the same one. In the video we can see how fast and accurate the analysis is, and in particular how reliable the target identification is. The whole analysis takes 10 minutes, including data processing.
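A rough sketch of the sample-selection idea, again based on OpenCV's normalised correlation. Here the confidence is simply the best matchTemplate score, which may differ from the metric IRIS actually uses.

```python
import cv2

def best_sample(frame, samples):
    """Match every saved sample against a frame and keep the one with the
    highest normalised correlation score (the 'confidence' mentioned above)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    best_idx, best_conf = None, -1.0
    for i, sample in enumerate(samples):
        result = cv2.matchTemplate(gray, sample, cv2.TM_CCOEFF_NORMED)
        _, conf, _, _ = cv2.minMaxLoc(result)
        if conf > best_conf:
            best_idx, best_conf = i, conf
    return best_idx, best_conf
```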

Data are exported to CSV format and a new video highlighting the target is recorded. FFT analysis is performed by a separate module called “Data Analyser”, which is in charge of the graphical representation of the results.
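For the frequency extraction itself, a plain NumPy FFT of the tracked displacement is enough to illustrate the idea; this is only a sketch, not the Data Analyser module.

```python
import numpy as np

def main_frequency(displacement, fps=25.0):
    """Estimate the dominant vibration frequency from a displacement time series
    sampled at `fps` frames per second (25 FPS in the blade test)."""
    signal = np.asarray(displacement, dtype=float)
    signal -= signal.mean()                      # remove the static offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin
```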

Object Detection

You can use Object Detection to estimate the speed and acceleration of macroscopic objects (large displacement fields). First of all, Iris learns how to recognize an object (using Computer Vision techniques). Then, it can estimate speed and acceleration in real time or perform a Fourier analysis for periodic motions (such as low-speed vibrations). It uses images taken on the fly from webcams or recorded in a video, giving you a quick analysis.

Capturing images – Positives and negatives

The first step in using the Object Detector is capturing images. You have to provide Iris with positive images (i.e. images containing the “target”) and negative images (i.e. images not containing the target). For example, the next video shows how to capture some images and mark each one as positive or negative with little effort from the user. The aim is to let Iris detect and recognize the target in order to subsequently analyse its speed and acceleration. Capturing is done using OpenCV; the GUI is written in PyQt4.
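As an illustration of the capturing step, here is a minimal OpenCV loop that saves webcam frames into positive/negative folders on a key press. The folder names and keys are my own example choices, not the IRIS GUI.

```python
import os
import cv2

os.makedirs("positives", exist_ok=True)
os.makedirs("negatives", exist_ok=True)

# Grab frames from the default webcam and sort them with key presses:
# 'p' = save as positive, 'n' = save as negative, 'q' = quit.
cap = cv2.VideoCapture(0)
count = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("capture", frame)
    key = cv2.waitKey(30) & 0xFF
    if key == ord('p'):
        cv2.imwrite("positives/img_%04d.png" % count, frame)
        count += 1
    elif key == ord('n'):
        cv2.imwrite("negatives/img_%04d.png" % count, frame)
        count += 1
    elif key == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```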

Training

After providing Iris with enough images, the program is ready to start the training. This step can take a long time: the required time varies according to the number of positive and negative images. The output of the training step is shown in a separate part of the main window. The process is optimized for multi-core CPUs using parallel computing. Iris produces a single XML file describing the object.
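The single XML output is consistent with OpenCV's cascade classifier training. The post does not say which trainer IRIS wraps internally, so the following call to the stock opencv_traincascade tool is only an assumption-based sketch; all paths and sample counts are illustrative.

```python
import subprocess

# Assumption: training is done with OpenCV's standard cascade trainer, which
# produces a single cascade.xml file in the output directory.
subprocess.run([
    "opencv_traincascade",
    "-data", "cascade_out",      # output directory (example), ends up containing cascade.xml
    "-vec", "positives.vec",     # positive samples packed with opencv_createsamples (example path)
    "-bg", "negatives.txt",      # text file listing negative images (example path)
    "-numPos", "400",
    "-numNeg", "800",
    "-numStages", "15",
    "-w", "24", "-h", "24",      # detection window size (example values)
], check=True)
```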

Analysing in real-time!

Motion Analysis uses the XML file produced by the training process. Iris can plot translation, speed and acceleration in real time. In this video, you can watch Iris analysing the target motion. The analysis is stopped when the target is out of the field of view. The data produced are automatically exported to CSV files to be processed later. Plots are made using the Matplotlib library.
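A minimal sketch of how such an XML file could drive a real-time analysis with OpenCV's CascadeClassifier, estimating speed from the detected position in consecutive frames; the file name, camera index and detector parameters are assumptions, not IRIS's actual settings.

```python
import cv2

cascade = cv2.CascadeClassifier("object.xml")   # hypothetical path to the trained cascade
cap = cv2.VideoCapture(0)
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
prev_center, speeds = None, []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(detections) > 0:
        x, y, w, h = detections[0]
        center = (x + w / 2.0, y + h / 2.0)
        if prev_center is not None:
            # Speed in pixels per second; a calibration factor would give physical units.
            dx, dy = center[0] - prev_center[0], center[1] - prev_center[1]
            speeds.append((dx ** 2 + dy ** 2) ** 0.5 * fps)
        prev_center = center
    else:
        prev_center = None   # the analysis stops when the target leaves the field of view
cap.release()
```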

Image Correlation

Image Correlation is an older technique than Object Detection. Iris uses it for strain field estimation. For example, it can calculate the strain field of a metallic plate in a laboratory test. It needs at least two photos: one taken before the deformation and another taken after the deformation.

Strain Field Estimation

In this video, you can watch Iris estimating a strain field using the Image Correlation technique. First, I loaded two images of a metallic plate into Iris. On the left, there is the “undeformed” image (i.e. the photo of the undeformed plate). On the right, there is the “deformed” image. Then, I chose the Image Correlation parameters and started the analysis. I used parallel computing to speed up the process. Data can be exported to a CSV file or analysed with the “Data Analyser” module (as in this video).
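To make the idea concrete, here is a crude Image Correlation sketch: square subsets of the undeformed image are searched for in the deformed image, giving a pixel displacement field from which strains can be approximated. Subset size, step and search radius are arbitrary example values, not the parameters used in the video.

```python
import cv2
import numpy as np

def displacement_field(undeformed, deformed, subset=32, step=16, search=10):
    """Track square subsets of the undeformed image inside a search window of
    the deformed image and return the pixel displacements (u, v) on a grid."""
    h, w = undeformed.shape
    us, vs = [], []
    for y in range(search, h - subset - search, step):
        row_u, row_v = [], []
        for x in range(search, w - subset - search, step):
            patch = undeformed[y:y + subset, x:x + subset]
            window = deformed[y - search:y + subset + search,
                              x - search:x + subset + search]
            res = cv2.matchTemplate(window, patch, cv2.TM_CCOEFF_NORMED)
            _, _, _, loc = cv2.minMaxLoc(res)
            row_u.append(loc[0] - search)   # horizontal displacement in pixels
            row_v.append(loc[1] - search)   # vertical displacement in pixels
        us.append(row_u)
        vs.append(row_v)
    return np.array(us, float), np.array(vs, float)

# Normal strains can then be approximated from the displacement gradients, e.g.:
# u, v = displacement_field(img0, img1)
# eps_xx = np.gradient(u, axis=1) / 16   # divide by the grid step in pixels
# eps_yy = np.gradient(v, axis=0) / 16
```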

Batch Analysis

Batch analysis can be used to match the same undeformed image against multiple deformed images with the same analysis parameters. This way, you can speed up strain field estimation because, for each deformed image, Iris saves the data automatically. The Data Analyser module can plot multiple results to compare different strain fields. In this video, you can see Batch Analysis in action!
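The batch idea reduces to a simple loop over the deformed images, reusing the displacement_field sketch above; paths and the CSV naming are examples only.

```python
import glob
import cv2
import numpy as np

# Reuse the same undeformed image and parameters for every deformed image
# and save one result file per image (example paths).
undeformed = cv2.imread("plate_undeformed.png", cv2.IMREAD_GRAYSCALE)
for path in sorted(glob.glob("deformed/*.png")):
    deformed = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    u, v = displacement_field(undeformed, deformed)   # function from the sketch above
    np.savetxt(path.replace(".png", "_u.csv"), u, delimiter=",")
    np.savetxt(path.replace(".png", "_v.csv"), v, delimiter=",")
```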

Edge Detection

Edge Detection is the newest of these three techniques. Iris can recognize edges in a single photo and then export the detected geometry in AutoCAD DXF format, to help human users speed up the building of virtual models. Edge Detection is very useful for trusses because it helps you build a CAD model of the structure starting from the nodal points.

How about a truss?

Edge Detection is the newest technique I implemented in Iris. In this video, Iris detects the geometry of a homemade truss. I took a photo of this truss and used the Edge Detector to export the geometry to DXF format. The Canny and Hough algorithms extract the “lines” in the picture using probabilistic methods. I developed an algorithm to identify nodes and simplify the detected geometry. However, the Edge Detector parameters have to be tuned first by a trial-and-error procedure.
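A rough sketch of that pipeline: Canny edge detection, probabilistic Hough line detection, and one DXF line per detected segment. I use the ezdxf package here purely as an example DXF writer (the original tool may use a different one), and the thresholds are the kind of values that need the trial-and-error tuning mentioned above.

```python
import cv2
import numpy as np
import ezdxf   # example DXF writer, not necessarily the one used by Iris

img = cv2.imread("truss.jpg", cv2.IMREAD_GRAYSCALE)   # example photo path
edges = cv2.Canny(img, 50, 150)                        # example thresholds
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=30, maxLineGap=5)

doc = ezdxf.new()
msp = doc.modelspace()
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        # Flip the y axis so the drawing is not mirrored (image origin is top-left).
        msp.add_line((int(x1), -int(y1)), (int(x2), -int(y2)))
doc.saveas("truss.dxf")
```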
