Introducing HALCON 19.11 – This release will further improve machine vision processes with a number of new and revised features.
New HALCON 19.11 Features
Deep Learning Anomaly Detection
Automated surface inspection is an important task in many manufacturing industries, and deep-learning-based solutions are becoming a standard tool for distinguishing parts and for detecting and segmenting defects. However, it is often difficult to obtain enough images of the defects, and labeling the available data can be very laborious.
HALCON’s new Anomaly Detection feature makes it possible to perform an inspection using only a relatively small number of “good” images for training. During inference, the detected “anomaly” is returned as the deviation of the inspected image from the trained images. On the right, you can see an example of a defective bottleneck.
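The principle behind training on “good” images only can be illustrated with a deliberately simple sketch (this is not HALCON’s actual algorithm or API, just the underlying idea): model the statistics of defect-free images, then flag pixels in a new image that deviate strongly from them.

```python
import numpy as np

def train_good_model(good_images):
    """Build a simple per-pixel statistical model from 'good' images only."""
    stack = np.stack(good_images).astype(float)
    return stack.mean(axis=0), stack.std(axis=0) + 1e-6

def anomaly_map(image, mean, std):
    """Score each pixel by its deviation from the trained 'good' statistics."""
    return np.abs(image.astype(float) - mean) / std

# Toy data: five clean 8x8 images and one copy with an injected bright defect.
rng = np.random.default_rng(0)
good = [100 + rng.normal(0, 2, (8, 8)) for _ in range(5)]
mean, std = train_good_model(good)

defective = good[0].copy()
defective[2:4, 2:4] += 80          # inject an anomaly
scores = anomaly_map(defective, mean, std)
defect_region = scores > 10        # threshold the anomaly map
print(defect_region.sum())         # prints 4: the injected 2x2 defect
```

No defect images and no labels are needed for training; the threshold on the anomaly map determines the sensitivity of the inspection.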
Improved Data Code Reader
The code reader for ECC200 codes has been significantly accelerated on multi-core systems. The biggest improvement was achieved for codes that are particularly hard to detect and read; for such codes, a speedup of about 200% can be achieved. This speedup also greatly increases the viability of embedded code readers by making optimum use of the existing hardware capacity.
Generic Box Finder
A new functionality for pick-and-place applications is available: The generic box finder allows the user to find boxes of varying size in 3D space, eliminating the need to train a model for each required box size. This makes many applications much more efficient – especially within the logistics and pharmaceutical industries, where boxes in a wide variety of sizes are typically used.
HALCON 19.05 improvements included:
Improved Surface-based Matching
Edge-supported surface-based matching is now more robust against noisy point clouds: Users can control the impact of surface and edge information via multiple min-scores. Additionally, if no XYZ images are available, a new parameter allows switching off 3D edge alignment entirely. This enables users to eliminate the influence of insufficient 3D data on matching results while keeping the valuable 2D information for surface and 2D edge alignment.
Enhanced Shape-based Matching
Users can now specifically define so-called “clutter” regions when using shape-based matching. These are areas within a search model that should not contain any contours. Adding such clutter information to the search model leads to more robust matching results, for example in the context of repetitive structures.
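The effect of clutter regions can be sketched abstractly (this is a hypothetical scoring illustration, not HALCON’s actual formula): a candidate match is penalised when contours are found in regions of the model that should be empty, which separates the true match from look-alike candidates in repetitive structures.

```python
def match_score(found_model_points, n_model_points,
                clutter_hits, n_clutter_points, clutter_weight=1.0):
    """Shape-match score, reduced by contours found where none are expected."""
    base = found_model_points / n_model_points
    penalty = clutter_weight * clutter_hits / max(n_clutter_points, 1)
    return base - penalty

# A candidate whose contours all match, but which also has edges inside the
# clutter region, scores lower than a clean match:
print(match_score(100, 100, 0, 50))    # prints 1.0  (clean match)
print(match_score(100, 100, 25, 50))   # prints 0.5  (clutter present)
```

In repetitive scenes, many candidates may reach the same contour score; the clutter penalty is what breaks the tie in favour of the correct position.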
Semantic Segmentation & Object Detection with Deep Learning
Object or error classes trained with deep learning can now be segmented with pixel precision.
Combined with the multitude of possibilities that HALCON offers for further processing extracted regions, this semantic segmentation paves the way for an entirely new range of applications that previously could not be realised at all, or only with significant programming effort. For example: recognising objects with a very heterogeneous texture (e.g., plants) or differentiating between different textures in an image, e.g., (un)treated wood, metal or stone.
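The “further processing” of a pixel-precise result can be illustrated with a small sketch (toy data and class names are assumptions, and plain NumPy stands in for HALCON’s region operators): the segmentation output is a per-pixel class map, which can then be measured per class.

```python
import numpy as np

# A toy per-pixel class map as produced by semantic segmentation:
# 0 = background, 1 = "wood", 2 = "metal".
seg = np.array([
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [0, 2, 2, 0],
])

# Post-process the pixel-precise result, e.g. measure the area of each class.
classes, areas = np.unique(seg, return_counts=True)
for c, a in zip(classes, areas):
    print(f"class {c}: {a} px")
# class 0: 5 px
# class 1: 3 px
# class 2: 4 px
```

Because every pixel carries a class label, the extracted regions can be fed directly into area, shape, or position measurements.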
Deep-learning-based object detection allows customers to localise trained object or error classes in an image.
In contrast to semantic segmentation, objects are marked by a surrounding rectangle (bounding box). Object detection also separates instances of the same class, even if the objects touch each other or partially overlap. This is especially useful when the exact number of objects is needed, e.g., when checking pill bags for correct filling.
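Because each instance gets its own bounding box, counting becomes a simple post-processing step. The following sketch uses invented detection tuples and class names to illustrate the pill-bag check; the data format is an assumption, not HALCON’s output format.

```python
# Each detection is (class_label, bounding_box, confidence); counting
# instances per class answers "is the pill bag filled correctly?".
detections = [
    ("pill_a", (10, 10, 30, 30), 0.98),
    ("pill_a", (25, 12, 45, 32), 0.95),  # overlaps the first, still separate
    ("pill_b", (60, 40, 80, 60), 0.97),
]

counts = {}
for label, box, conf in detections:
    if conf >= 0.5:                      # keep confident detections only
        counts[label] = counts.get(label, 0) + 1

expected = {"pill_a": 2, "pill_b": 1}
print("bag OK" if counts == expected else "bag incomplete")  # prints "bag OK"
```

Note that the two overlapping "pill_a" boxes still count as two instances; semantic segmentation alone would merge them into one connected region.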
To maximise their potential in industrial environments, HALCON’s semantic segmentation and object detection inference can both be performed on GPUs as well as CPUs. For both approaches, MVTec provides pretrained networks, based on millions of images and highly optimised for industrial machine vision applications. These networks make it easier to train new objects by reducing the number of training images customers have to provide themselves.
New Data Structure “Dictionaries”
HALCON 18.11 introduced a new data structure, the “dictionary”: an associative array that opens up various new ways to work with complex data.
For example, this allows bundling various complex data types (e.g., an image, corresponding ROIs and parameters) into a single dictionary, making it easier to structure programs when, e.g., passing many parameters to a procedure.
Dictionaries can also be read from and written to a file. This allows an engineer to bundle all information necessary to reproduce a certain application’s state (e.g., camera calibration settings, defective images, and machine parameters) into a single file. This file can then easily be shared with a machine vision expert for offline debugging.
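In HDevelop, dictionaries are handled through HALCON’s own operators; the following Python sketch only illustrates the bundling-and-serialisation pattern described above, with invented parameter names, using a plain dict and JSON as a stand-in.

```python
import json
import os
import tempfile

# Bundle heterogeneous data for one inspection into a single structure,
# analogous to passing a single dictionary to a procedure.
state = {
    "camera": {"focal_length_mm": 8.0, "pixel_size_um": 4.8},
    "roi": {"row1": 120, "col1": 200, "row2": 480, "col2": 640},
    "machine": {"conveyor_speed_mm_s": 350},
}

# Write the whole application state to one file and read it back,
# e.g. to hand it to an expert for offline debugging.
path = os.path.join(tempfile.mkdtemp(), "state.json")
with open(path, "w") as f:
    json.dump(state, f)
with open(path) as f:
    restored = json.load(f)

print(restored == state)   # prints True: the state round-trips intact
```

The benefit is the same in both worlds: one handle (or file) travels through the program instead of a long, error-prone parameter list.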
Handle Variable Inspect in HDevelop
HDevelop can display detailed information on the most important handle variables.
This allows developers to easily inspect the current properties of complex data structures at a glance, which is extremely useful for debugging. Double-clicking a handle variable now shows all parameters associated with the handle and their current settings. For example, the user can now easily examine parameters of a data code handle, such as “polarity”, “symbol type” or “finder pattern tolerance”, as well as complex parameters that carry multiple key-value pairs, such as the camera parameter of a 3D shape model handle.