Vision & Control VCWin Pro

The professional version of VCWin is a GUI (graphical user interface) used in the field of machine vision, with real-time measurements. All image-processing functions are clearly represented and are inserted by drag & drop, without the need for any programming knowledge.

In the next post, there will be a brief introduction to the VCWin simulator, which lets you load images for processing without a physical vision system present.

What is VCWin Pro like?

The image-processing functions are fast, so they can be tested and modified immediately. The test sequences are transparent and manageable for all users.

Graphic interface:


Default toolbar:

The default toolbar offers the following functions as buttons:


Window toolbar:

The window toolbar offers all the tools to visualize the results in real time. You can customize the graphic environment by docking these viewing windows anywhere in the program. The buttons that make up this bar are:


Debug toolbar:

With the debug toolbar, you can insert breakpoints into the program to control the flow during online debugging.

The debug toolbar provides the following functions as buttons:


Communication toolbar:

The communication toolbar provides quick access to functions to establish communication with the vision system.

The communication toolbar offers the following functions:


Video control panel:


The current image from the vision system is sent to the memory of the selected page. The number of cameras and image pages available depends on the vision system in question.

Image acquisition:

Acquisition of VCWin image

Graphic overlay:

Graphic overlay VCWin

Video mode:

VCWin video mode

Configuring the simulator:

1. Click on the “Interface” button, which opens a window with the different communication protocols.


2. Go to the “Simulator” tab:

In the “Name” field, enter a name for the simulator to be created (the name is only used to identify simulators and is not relevant to the configuration).


3. Click on the “Edit…” button to configure which camera(s) will be added to the simulator.


4. The “Model” list contains predefined cameras distributed by the manufacturer. For a quick configuration, set both the “Width” and the “Height” of the camera.

Click on “Insert” to add the camera to the list on the right. Multiple cameras can be added to the simulator.

Click on “Close” to close the window and return to the main window.


5. With the simulator configured, click on “Insert”. Note that the newly configured simulator appears under “Current Simulator”.

Click on “Accept”.


6. Click on the “Initialize” button to connect to the newly created simulator.


7. If the following window appears, the simulator is configured correctly and you can work with it.


Loading an image to the camera:

1. Go to [Utilities > Send Image to Camera] and select an 8-bit monochrome image in *.bmp or *.jpg format.


2. Click on “Monitor Window”; a window with the selected image already loaded will appear.


In this way we can start working on the image with the desired selection of commands.


Commands:

In this section you will find the most frequently used commands for image processing in machine vision.

Configure Shutter:

To add a command that sets the shooting mode and exposure time in the program, use [Image > Configure Shutter]. The values set remain valid until they are actively changed.


Blob Analysis:

A blob is a group of adjacent, contiguous pixels with similar gray-scale values.


This command, [Locate > Blob Analysis], is used especially for counting objects, finding objects, and tracking their position.
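VCWin performs blob analysis internally through its GUI; no programming is needed. Purely as a conceptual illustration of what the command does, here is a minimal connected-components sketch in plain Python (the function name and 4-connectivity choice are illustrative, not VCWin's implementation):

```python
from collections import deque

def find_blobs(image, threshold, min_area=1):
    """Label groups of adjacent bright pixels (4-connectivity) as blobs.

    image: 2D list of gray values; pixels >= threshold belong to objects.
    Returns the list of blob areas (pixel counts), largest first.
    """
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    areas = []
    for y in range(h):
        for x in range(w):
            if image[y][x] >= threshold and not seen[y][x]:
                # Flood-fill one blob starting from this seed pixel.
                area, queue = 0, deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    area += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                                and image[ny][nx] >= threshold:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if area >= min_area:
                    areas.append(area)
    return sorted(areas, reverse=True)

# Two bright objects on a dark background:
img = [
    [0, 200, 200, 0,   0],
    [0, 200,   0, 0,   0],
    [0,   0,   0, 0, 220],
    [0,   0,   0, 0, 220],
]
print(find_blobs(img, threshold=128))  # -> [3, 2]
```

The length of the returned list corresponds to the object count, which is what the command's nominal value and tolerance are checked against.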


1. Select the [Teach-in] tab.

2. The rectangle is the default geometry.

3. Adjust the parameters (X, Y) to track the position.

4. Set the position of the [Teach-in] window here:

On the video image (recommended):

– Activate [Teach-in] mode by double-clicking with the right mouse button.

– Change the size and position of the window using the controls.

– Deactivate [Teach-in] mode by double-clicking with the right mouse button.

In the dialog window:

– Enter the X and Y values, as well as the dimensions, for the starting point.

– Modify the values as necessary with the arrows to the right of each field.

– Check all the entries against the video image.

5. Use the [Test] button to check whether the blobs are inside the window with the default parameters.


6. Select the [Parameters] tab.

Set the following parameters here:

a) Threshold and the minimum and maximum area of the objects to be found.

b) Color structure of the objects to be found.

c) If necessary, the roundness parameters.

d) Result number/name, as well as the nominal value and tolerance for the number of objects that must be found.

e) Save the configuration and the coordinate system for the center of gravity and the area of the found objects.

7. Test the command with the [Test] button:

– The data of the detected objects are displayed in the [Objects Found] area.

– Modify the parameters in points (a) to (e) until the result is free of errors.

8. Use the [OK] button to insert the command into the program.

Characteristics of the object

Here the binary detection threshold and the allowed area of the objects to be searched for are defined.
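In VCWin these characteristics are set in the dialog; conceptually, the threshold binarizes the image and the area limits discard objects that are too small or too large. A rough sketch in plain Python (function names are illustrative only):

```python
def binarize(image, threshold):
    """Binary detection: mark each pixel as object (1) or background (0)."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def filter_by_area(blob_areas, min_area, max_area):
    """Keep only objects whose pixel area lies in the allowed range."""
    return [a for a in blob_areas if min_area <= a <= max_area]

print(binarize([[10, 200], [130, 90]], threshold=128))      # -> [[0, 1], [1, 0]]
print(filter_by_area([3, 50, 400, 1200], 10, 1000))         # -> [50, 400]
```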


Color


Count Pixels:

You can insert a command to count pixels into the test program using [Locate > Count Pixels]. The command is an extension of [Locate > Test Brightness Percentage].

The command is used to monitor and regulate the brightness of a lighting installation, and for surface tests.
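The idea behind the command can be sketched in a few lines of plain Python: count the pixels whose gray value falls in a given range, then compare the count against a nominal value with a tolerance (names and signatures here are illustrative, not VCWin's):

```python
def count_pixels_in_range(image, lo, hi):
    """Count pixels whose gray value lies in [lo, hi] (inclusive)."""
    return sum(lo <= px <= hi for row in image for px in row)

def within_tolerance(count, nominal, tol):
    """Pass/fail check, as with the command's nominal value and tolerance."""
    return abs(count - nominal) <= tol

img = [
    [10, 120, 250],
    [130, 140, 30],
]
n = count_pixels_in_range(img, lo=100, hi=200)
print(n)                                       # -> 3
print(within_tolerance(n, nominal=4, tol=1))   # -> True
```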


1. Define the following parameters in the [Teach-in] tab:

– Select the desired geometry for the inspection.

– Position tracking: point X, point Y, or the angle phi.

2. Activate [Teach-in] mode by double-clicking with the right mouse button.

3. Modify the size and position of the search window in the video image using the controls.

4. Deactivate [Teach-in] mode by double-clicking with the right mouse button.


5. In the next tab, [Parameters], specify:

– Grayscale range

– Result

– Nominal value and tolerances (of pixels in the grayscale range)

6. Test the command with the [Test] button and modify the parameters set in points 1 to 5 until the result is free of errors.

7. Insert the command into the test program with the [OK] button.

Locate Point:

Transition points can be found in the video image using [Locate > Locate Point]. The command is applied to a gray-scale edge crossed by the detection line, which is defined by its start and end positions; the scanning direction is indicated by an arrowhead. The number of points that can be saved depends on the vision system.
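Conceptually, the command scans gray values along the detection line and reports where they cross a threshold. A minimal sketch in plain Python (the function name, the 1D profile input, and the simple threshold-crossing rule are illustrative assumptions, not VCWin's actual algorithm):

```python
def locate_point(profile, threshold, transition="dark-to-light"):
    """Return the index of the first edge transition along a scan line.

    profile: gray values sampled from the start to the end of the
    detection line (in the arrowhead direction).
    Returns the index where the profile crosses the threshold, or None.
    """
    for i in range(1, len(profile)):
        prev, cur = profile[i - 1], profile[i]
        if transition == "dark-to-light" and prev < threshold <= cur:
            return i
        if transition == "light-to-dark" and prev >= threshold > cur:
            return i
    return None  # no edge of the requested polarity found

scan = [20, 25, 30, 180, 200, 210]
print(locate_point(scan, threshold=128))  # -> 3
```

The "Edge transition" parameter in the dialog corresponds to the dark-to-light / light-to-dark choice above.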


1. Select the [Teach-in] tab.

2. Activate [Teach-in] mode by double-clicking with the right mouse button.

3. Modify the size and position of the detection point in the video image using the arrow controls.

4. Deactivate [Teach-in] mode by double-clicking with the right mouse button.


5. Select the next tab, [Parameters]:

– Detection algorithm

– Coordinate system

– Point selection

– Edge transition

6. Test the command with the [Test] button:

– Modify the parameters established in points 1 to 5 until the result is free of errors.

7. Insert the command into the test program with the [OK] button.


Copyright 2019 © All rights reserved