Introducing HALCON 18.11 – Bringing new AI technologies, specifically from the fields of deep learning and convolutional neural networks (CNNs).
The latest release offers new and expanded options for embedded vision as well as updated USB3 Vision interfaces. In addition, core technologies have been improved.
For developers, this new version provides helpful innovations and valuable new features in HALCON’s integrated development environment HDevelop.
The Steady edition includes:
New Data Structure “Dictionaries”
HALCON 18.11 introduces a new data structure, the “dictionary”: an associative array that opens up various new ways to work with complex data.
For example, various complex data types (e.g., an image, its corresponding ROIs, and parameters) can be bundled into a single dictionary, which makes it easier to structure programs, for instance when passing many parameters to a procedure.
Dictionaries can also be written to and read from a file. This allows an engineer to bundle all information necessary to reproduce a certain application state (e.g., camera calibration settings, defective images, and machine parameters) into a single file. This file can then easily be shared with a machine vision expert for offline debugging.
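Conceptually, this bundle-and-share workflow resembles an associative array that can be serialised in a single step. The following is a minimal, HALCON-independent Python sketch of the idea (all keys and values are hypothetical; HALCON’s own dictionaries use dedicated operators and their own file format):

```python
import json

# Hypothetical application state, bundled into one associative structure
# (in HALCON this would be a dictionary handle; here, a plain Python dict).
state = {
    "camera_calibration": {"focal_length_mm": 8.0, "pixel_size_um": 5.3},
    "machine_parameters": {"conveyor_speed_mm_s": 120, "trigger_delay_ms": 4},
    "defective_images": ["defect_0001.png", "defect_0002.png"],
}

# Write the whole bundle to a single file ...
with open("debug_snapshot.json", "w") as f:
    json.dump(state, f, indent=2)

# ... which a machine vision expert can later reload for offline debugging.
with open("debug_snapshot.json") as f:
    restored = json.load(f)
```

The point is that one file carries the complete state, so nothing has to be reassembled piece by piece on the expert’s side.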
Handle Variable Inspect in HDevelop
With HALCON 18.11, HDevelop can display detailed information on the most important types of handle variables.
This allows developers to easily inspect the current properties of complex data structures at a glance, which is extremely useful for debugging. Double-clicking a handle variable now shows all parameters associated with the handle, together with their current settings. For example, the user can easily examine parameters of a data code handle, such as “polarity”, “symbol type”, or “finder pattern tolerance”, as well as complex parameters that carry multiple key-value pairs, such as the camera parameter of a 3D shape model handle.
ECC 200 Code Reader Improvements
With HALCON 18.11, the data code reader for ECC 200 codes has been improved. The overall recognition rate has been increased by 5% (based on our internal ECC 200 benchmark of more than 3,700 images from various applications). In addition, the ECC 200 reader can now read codes with a disturbed quiet zone. Moreover, codes against complex backgrounds are found and read faster and more robustly.
In addition, HALCON Steady offers the option to purchase a Deep Learning add-on, which includes:
Semantic Segmentation & Object Detection with Deep Learning
With HALCON 18.11, object or error classes trained with deep learning can now be segmented pixel-precisely.
Combined with the multitude of possibilities that HALCON offers for further processing extracted regions, semantic segmentation paves the way for an entirely new range of applications that previously could not be realised at all, or only with significant programming effort. For example: recognising objects with a very heterogeneous texture (e.g., plants) or differentiating between textures in an image, such as (un)treated wood, metal, or stone.
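The value of pixel-precise output is that each pixel carries a class label, so downstream measurements (areas, shapes, regions) follow directly. A small, HALCON-independent Python sketch of this idea, with entirely hypothetical class labels:

```python
from collections import Counter

# Hypothetical output of a semantic segmentation step: one class label per
# pixel (0 = background, 1 = treated wood, 2 = untreated wood).
class_mask = [
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [2, 2, 2, 0],
]

# Pixel-precise labels allow direct per-class measurements, e.g. the area
# (in pixels) covered by each texture class.
areas = Counter(label for row in class_mask for label in row)
print(areas[1], areas[2])  # prints: 3 5
```

This kind of per-class region extraction is the starting point for the further processing steps mentioned above.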
HALCON 18.11 also introduces deep-learning-based object detection, which allows customers to localise trained object or error classes in an image.
In contrast to semantic segmentation, objects are marked by a surrounding rectangle (bounding box). Object detection also separates instances of the same class, even if the objects touch or partially overlap. This is especially useful when the exact number of objects is needed, e.g., when checking pill bags for correct filling.
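Because a detection result is a list of per-instance bounding boxes, counting touching objects reduces to counting boxes. A short illustrative Python sketch of the pill-bag check (all coordinates, scores, and thresholds are hypothetical, not HALCON output):

```python
# Hypothetical detection results for a pill-bag check: one bounding box
# (x1, y1, x2, y2, confidence) per detected instance of the class "pill".
detections = [
    (12, 10, 30, 28, 0.98),
    (28, 11, 46, 29, 0.95),  # touches the first pill, yet a separate instance
    (60, 40, 78, 58, 0.91),
]

EXPECTED_PILLS = 3  # target fill count for this bag (assumed)

# Instances are separated even when objects touch or overlap, so the fill
# check reduces to counting boxes above a confidence threshold.
count = sum(1 for *_, conf in detections if conf >= 0.5)
print("fill ok" if count == EXPECTED_PILLS else "fill error")  # prints: fill ok
```

With semantic segmentation alone, the two touching pills would merge into one region, which is why instance separation matters for counting tasks.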
To maximise its potential in industrial environments, HALCON’s semantic segmentation and object detection inference can both be performed on GPUs as well as on CPUs. For both approaches, MVTec provides pretrained networks that are based on millions of images and highly optimised for industrial machine vision applications. These networks make it easier to train new objects by reducing the number of training images customers have to provide themselves.