Deep Learning with HALCON 17.12 – Training a CNN

HALCON 17.12 delivers deep learning out of the box: users can train their own classifier using CNNs (Convolutional Neural Networks). After training, the CNN can be used directly in HALCON to classify new data.

Training a CNN

Training a CNN in HALCON simply requires a sufficient number of labelled training images. For example, to differentiate between samples showing scratches, samples showing contamination, and good samples, training images for all three classes must be provided: images showing scratches must be labelled “scratch”, images showing some sort of contamination must carry the label “contamination”, and images showing a good sample must be labelled “OK”.
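
The snippet below is a minimal HDevelop-style sketch of such a training run using HALCON 17.12's deep learning operators. The pretrained network file, image lists, class labels, batch size, learning rate, and number of epochs are illustrative assumptions, and the preprocessing is reduced to resizing and type conversion for brevity:

    * Load a pretrained network and adapt it to the three classes.
    * File names, labels, and hyperparameters below are assumptions.
    read_dl_classifier ('pretrained_dl_classifier_compact.hdl', DLClassifierHandle)
    set_dl_classifier_param (DLClassifierHandle, 'classes', ['scratch','contamination','OK'])
    set_dl_classifier_param (DLClassifierHandle, 'batch_size', 64)
    set_dl_classifier_param (DLClassifierHandle, 'learning_rate', 0.001)
    * Query the input size the network expects.
    get_dl_classifier_param (DLClassifierHandle, 'image_width', Width)
    get_dl_classifier_param (DLClassifierHandle, 'image_height', Height)
    * TrainFiles and TrainLabels are assumed to hold one file path and one
    * label ('scratch', 'contamination', or 'OK') per training image.
    for Epoch := 1 to 20 by 1
        for Start := 0 to |TrainFiles| - 64 by 64
            read_image (BatchImages, TrainFiles[Start:Start + 63])
            * Adapt the images to the network input (a full preprocessing
            * step would also rescale the gray values to the expected range).
            zoom_image_size (BatchImages, BatchImages, Width, Height, 'constant')
            convert_image_type (BatchImages, BatchImages, 'real')
            train_dl_classifier_batch (BatchImages, DLClassifierHandle, TrainLabels[Start:Start + 63], TrainResult)
        endfor
    endfor
    * Save the trained classifier for later use.
    write_dl_classifier (DLClassifierHandle, 'defect_classifier.hdl')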

HALCON then analyses these images and automatically learns which features can be used to distinguish defective from good samples. This is a big advantage over previous classification methods, where these features had to be “handcrafted” by the user, a complex and cumbersome undertaking that requires skilled engineers with programming and vision knowledge.

Using the Trained Network 

Once the network has learned to differentiate between the given classes, e.g., to tell whether an image shows a scratched, a contaminated, or a good sample, it can be put to work: users apply the newly created CNN classifier to new image data, and the classifier assigns each image to one of the classes it learned during training.
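
In HDevelop, applying the trained classifier could look roughly like the following sketch; the file names are again illustrative assumptions, and the preprocessing must match what was done during training:

    * Load the previously trained classifier and a new image to classify.
    read_dl_classifier ('defect_classifier.hdl', DLClassifierHandle)
    read_image (Image, 'new_sample_01.png')
    * Preprocess the image the same way as the training images.
    get_dl_classifier_param (DLClassifierHandle, 'image_width', Width)
    get_dl_classifier_param (DLClassifierHandle, 'image_height', Height)
    zoom_image_size (Image, Image, Width, Height, 'constant')
    convert_image_type (Image, Image, 'real')
    * Classify the image and read out the predicted class and its confidence.
    apply_dl_classifier (Image, DLClassifierHandle, DLClassifierResult)
    get_dl_classifier_result (DLClassifierResult, 'all', 'predicted_classes', PredictedClass)
    get_dl_classifier_result (DLClassifierResult, 'all', 'confidences', Confidence)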

Typical application areas for deep learning include defect classification (e.g., for circuit boards, bottle mouths, or pills) and object classification (e.g., identifying the species of a plant from a single image).