FabImage® Studio Professional

Key advantages

  • No low-level programming knowledge required.
  • Data-flow based software.
  • Fast and optimized algorithms.
  • 1000+ high performance functions.
  • Custom machine vision filters.

Intuitive

Drag & Drop

All programming is done by choosing filters and connecting them with each other. You can focus all your attention on computer vision.

You Can See Everything

Inspection results are visualized on multiple configurable data previews, and when a parameter in the program is changed, you can see the previews update in real time.

HMI Designer

You can easily create custom graphical user interfaces and thus build the entire machine vision application using a single software package.

Powerful

Over 1000 Ready-for-Use Filters

There are over 1000 ready-to-use machine vision filters, tested and optimized in hundreds of applications. They offer many advanced capabilities such as outlier suppression, subpixel precision and any-shape regions of interest.

Hardware Acceleration

The filters are aggressively optimized for SSE instructions and multicore processors. Our implementations are among the fastest in the world.

Loops and Conditions

Without writing a single line of code, you can create custom and scalable program flows. Loops, conditions and subprograms (macrofilters) are realized graphically with appropriate data-flow constructs.

Adaptable

GigE Vision and GenTL Support

FabImage® Studio is a GigE Vision compliant product, supporting the GenTL interface as well as a number of vendor-specific APIs. Thus, you can use it with Opto Engineering® cameras and most cameras available on the market, including models from Matrix Vision, Allied Vision, Basler, Baumer, Dalsa, PointGrey, Photon Focus, XIMEA and more.

User Filters

User filters let you integrate your own C/C++ code while keeping the benefits of visual programming.
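
As an illustration only, the sketch below shows the kind of C++ routine one might wrap as a user filter; the actual FabImage® user-filter SDK defines its own base classes and image types, and all names here are hypothetical.

```cpp
// Hypothetical sketch of user-filter logic: the real FabImage(R) user-filter
// SDK provides its own image types and filter base classes, which may differ.
#include <cstdint>
#include <vector>

// Assumed simple grayscale image representation (illustrative only).
struct GrayImage {
    int width = 0;
    int height = 0;
    std::vector<std::uint8_t> pixels;   // row-major, 8-bit
};

// Example user logic: count pixels brighter than a threshold.
// In a real user filter this body would be wrapped by the SDK-provided
// filter class so that it can be dropped into the graphical program.
int CountBrightPixels(const GrayImage& image, std::uint8_t threshold)
{
    int count = 0;
    for (std::uint8_t value : image.pixels)
        if (value >= threshold)
            ++count;
    return count;
}
```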

C++ Code Generator

Programs created in FabImage® Studio can be exported to C++ code or to .NET assemblies. This makes it very easy to integrate your vision algorithms with applications created in C++, C# or VB programming languages.
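
The exact code emitted by the generator is project-specific; the sketch below only illustrates the integration pattern, assuming a hypothetical generated function named InspectPart that is stubbed here so the example compiles on its own.

```cpp
// Hypothetical integration pattern for generated C++ code; the actual
// namespaces, types and function names produced by the generator will differ.
#include <iostream>
#include <string>

// Assume the generator emitted one function per macrofilter, e.g.:
//   bool InspectPart(const std::string& imagePath, float& outScore);
// Stubbed here so the sketch is self-contained.
bool InspectPart(const std::string& imagePath, float& outScore)
{
    (void)imagePath;
    outScore = 0.97f;   // placeholder result
    return true;        // placeholder pass/fail decision
}

int main()
{
    float score = 0.0f;
    const bool ok = InspectPart("part_001.png", score);
    std::cout << (ok ? "PASS" : "FAIL") << " (score " << score << ")\n";
    return 0;
}
```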

There are over 1000 filters encompassing both basic transforms and specialized machine vision tools.

  • Image processing
  • Shape fitting
  • Barcode reading
  • Template Matching
  • Support vector machines
  • Blob analysis
  • Camera calibration
  • Data code reading
  • Measurements
  • GigE Vision and GenTL
  • Contour analysis
  • Fourier analysis
  • Corner detection
  • Histogram analysis
  • Planar geometry
  • Hough transform
  • 1D profile analysis
  • OCR

In this application, we need to sort nails amongst nuts and bolts. The image is thresholded and the resulting regions are split into blobs; finally, the blobs are classified by their elongation and the nails are easily found.
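
For readers who prefer code, the same pipeline can be sketched with OpenCV (not the FabImage® API); the Otsu threshold and the elongation limit below are illustrative assumptions.

```cpp
// OpenCV sketch (not the FabImage(R) API) of the nail-sorting pipeline:
// threshold the image, split the foreground into blobs, then classify
// each blob by its elongation (long, thin blobs are nails).
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat gray = cv::imread("nails_and_nuts.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return 1;

    // Assumes bright objects on a dark background.
    cv::Mat binary;
    cv::threshold(gray, binary, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    std::vector<std::vector<cv::Point>> blobs;
    cv::findContours(binary, blobs, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (const auto& blob : blobs)
    {
        if (blob.size() < 5) continue;               // too small to fit an ellipse
        cv::RotatedRect box = cv::fitEllipse(blob);
        float longSide   = std::max(box.size.width, box.size.height);
        float shortSide  = std::min(box.size.width, box.size.height);
        float elongation = longSide / std::max(shortSide, 1.0f);

        // Assumed limit: nails are much longer than they are wide.
        bool isNail = elongation > 4.0f;
        std::cout << (isNail ? "nail" : "nut/bolt")
                  << "  elongation=" << elongation << "\n";
    }
    return 0;
}
```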

This example shows a basic ReadBarcodes filter. The tool automatically finds the barcode and outputs the decoded text.


License types

There are two types of commercial licenses:

Development

Assigned to a single engineer. It includes one year of technical support, which can be extended with an annual fee. Valid technical support also gives you the right to upgrade the software to newer versions and provides a discount on runtime licenses.

FabImage® Studio Professional (Development), P/N: FIS-PRO
  • license assigned to a single user
  • includes 1 year of technical support
  • delivered on a USB dongle

FabImage® Studio + Library bundle (Development), P/N: FIS-ADD
  • license for users who need both FabImage® Studio Professional and FabImage® Library Suite
  • includes generating C++ code from programs created in FabImage® Studio Professional
  • includes 1 year of technical support
  • delivered on a USB dongle

Runtime

Assigned to a single vision system. You can use one license for one multi-camera system, but multiple licenses are required to control multiple independent systems, even if run on a single physical computer.

FabImage® Studio Runtime (Runtime), P/N: FIS-RUN
  • can be used to control at most one vision system
  • price for integrators / OEMs: requires a Professional (development) license with valid technical support

FabImage® Studio Runtime (Runtime), P/N: FIS-RTB
  • can be used to control at most one vision system
  • for development licenses with expired technical support
Additional products

Multithreading Add-on (Development), P/N: FI-PAR
  • applies to developer licenses
  • allows the user to run several macrofilters (worker tasks) in parallel
  • multithreaded projects require special runtime licenses

1 Year Support Extension (Development), P/N: FIS-EXT, ADD-EXT
  • applies to all types of development licenses
  • extends the rights of one development license for another year
  • requires a development license with valid technical support

USB License Dongle, P/N: USB-DONGLE-FI (Developer licenses) / USB-DONGLE-RUN (Runtime licenses)
  • an alternative to the computer-ID based licensing mechanism
  • makes it possible to use the software on multiple computers
  • can be used for both Development and Runtime licenses

Quickstart guide to FabImage® part numbers

  1. FabImage® Studio Professional (FIS-PRO) makes it possible to create complete machine vision applications, including HMI. FabImage® Studio Runtime (FIS-RUN/FIS-RTB) is required to run the applications on each inspection system.
  2. FabImage® Studio Professional (FIS-PRO) includes the feature of User Filters, which allows for embedding user’s C++ code within the graphical programming model. FabImage® Library Suite, FIL-SUI (or the Studio + Library bundle, FIS-ADD) is only required if one needs to invoke the built-in image analysis tools as C++ functions.
  3. If you want to prototype applications in FabImage® Studio Professional (FIS-PRO) and then transform them into C++ code, you need the FabImage® Studio + Library bundle (FIS-ADD).
  4. If the graphical programming environment for fast prototyping is not needed, then FabImage® Library Suite (FIL-SUI) is enough for development.
  5. In general, there are four possible ways to work with the products:
    1. Programming in the graphical way – this requires a FabImage® Studio Professional license (FIS-PRO) for each developer and a FabImage® Studio Runtime license (FIS-RUN/FIS-RTB) for each system. One particular advantage of this method is the ease of introducing changes, even directly on the production line.
    2. Programming in the graphical way and then generating C++ code – this requires a FabImage® Studio + Library bundle (FIS-ADD) for each developer and a FabImage® Library Runtime (FIL-RUN/FIL-RTB) for each system. This method allows you to integrate the created solutions with larger software projects.
    3. Programming in the graphical way and then generating .NET Macrofilter Interfaces – this requires a FabImage® Studio Professional license (FIS-PRO) for each developer and a FabImage® Studio Runtime license (FIS-RUN/FIS-RTB) for each system. No library license is needed, as .NET Macrofilter Interfaces use the same program execution mechanisms as Studio.
    4. Programming directly in C++ or .NET – this is for people who think in C++ or C# and do not want to do graphical programming. In this case a FabImage® Library Suite license (FIL-SUI) is required for each developer and a FabImage® Library Runtime license (FIL-RUN/FIL-RTB) for each system.

Introduction

Deep Learning Add-on is a breakthrough technology for machine vision. It is a set of five ready-made tools which are trained with 20-50 sample images, and which then detect objects, defects or features automatically. Internally it uses large neural networks designed and optimized for use in industrial vision systems.

Together with FabImage® Studio Professional, you get a complete solution for training and deploying modern machine vision applications.

Key Facts

Training Data

Learns from few samples

Typical applications require between 20 and 50 images for training. The more the better, but our software internally learns key characteristics from a limited training set and then generates thousands of new artificial samples for effective training.
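
The Add-on's internal augmentation is proprietary; the OpenCV sketch below only illustrates the general idea of generating artificial samples through random rotations, flips and brightness shifts.

```cpp
// Illustrative OpenCV sketch of data augmentation: generate new training
// samples from one image by random rotation, flipping and brightness shifts.
// (The Deep Learning Add-on's own augmentation is internal and may differ.)
#include <opencv2/opencv.hpp>
#include <random>
#include <vector>

std::vector<cv::Mat> Augment(const cv::Mat& sample, int count)
{
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> angle(-15.0, 15.0);   // degrees
    std::uniform_int_distribution<int> brightness(-30, 30);
    std::bernoulli_distribution flip(0.5);

    std::vector<cv::Mat> out;
    cv::Point2f center(sample.cols / 2.0f, sample.rows / 2.0f);

    for (int i = 0; i < count; ++i)
    {
        cv::Mat rotation = cv::getRotationMatrix2D(center, angle(rng), 1.0);
        cv::Mat augmented;
        cv::warpAffine(sample, augmented, rotation, sample.size(),
                       cv::INTER_LINEAR, cv::BORDER_REPLICATE);
        if (flip(rng))
            cv::flip(augmented, augmented, 1);                   // horizontal flip
        augmented += cv::Scalar::all(brightness(rng));           // brightness shift
        out.push_back(augmented);
    }
    return out;
}
```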

Hardware Requirements

Works on GPU and CPU

A modern GPU is required for effective training. At production, you can use either a GPU or a CPU. A GPU will typically be 3-10 times faster (with the exception of Object Classification, which is equally fast on a CPU).

Speed

The highest performance

Typical training time on a GPU is 5-15 minutes. Inference time varies between 5 and 100 ms per image, depending on the tool and hardware. The highest performance is guaranteed by an internally developed industrial inference engine.

Training Procedure

1. Collect and normalize images

  • Acquire between 20 and 50 images (the more the better), both Good and Bad, representing all possible object variations; save them to disk.
  • Make sure that the object scale, orientation and lighting are as consistent as possible.

2. Train

  • Open FabImage Studio Professional and add one of the Deep Learning Add-on tools.
  • Open an editor associated with the tool and load your training images there.
  • Label your images or add markings using drawing tools.
  • Click “Train”.

Training and Validation Sets

In deep learning, as in all fields of machine learning, it is very important to follow correct methodology. The most important rule is to separate the Training set from the Validation set. The Training set is a set of samples used for creating a model. We cannot use it to measure the model’s performance, as this often generates results that are overoptimistic. Thus, we use separate data – the Validation set – to evaluate the model. Our Deep Learning Add-on automatically creates both sets from the samples provided by the user.
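
The sketch below illustrates the underlying idea of holding out part of the labeled samples for validation; in the Deep Learning Add-on this split is handled automatically.

```cpp
// Illustrative sketch of a random training/validation split: shuffle the
// labeled samples and hold out a fraction for validation only.
// (The Deep Learning Add-on performs this split automatically.)
#include <algorithm>
#include <cstddef>
#include <random>
#include <string>
#include <vector>

struct Split {
    std::vector<std::string> training;
    std::vector<std::string> validation;
};

Split SplitSamples(std::vector<std::string> samples, double validationFraction)
{
    std::mt19937 rng(std::random_device{}());
    std::shuffle(samples.begin(), samples.end(), rng);

    const std::size_t validationCount =
        static_cast<std::size_t>(samples.size() * validationFraction);

    Split split;
    split.validation.assign(samples.begin(), samples.begin() + validationCount);
    split.training.assign(samples.begin() + validationCount, samples.end());
    return split;   // evaluate the model only on split.validation
}
```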

3. Execute

  • Run the program and see the results.
  • Go back to step 1 or 2 until the results are fully satisfactory.

Feature detection

In the supervised mode the user needs to carefully label pixels corresponding to defects on the training images. The tool then learns to distinguish good and bad features by looking for their key characteristics.

Photovoltaics Inspection

In this application cracks and scratches must be detected on a surface that includes complicated features. With traditional methods, this requires complicated algorithms with dozens of parameters which must be adjusted for each type of solar panel. With Deep Learning, it is enough to train the system in the supervised mode, using just one tool.

Satellite Image Segmentation

Satellite images are difficult to analyse as they include a huge variety of features. Nevertheless, our Deep Learning Add-on can be trained to detect roads and buildings with very high reliability. Training may be performed using only one properly labeled image, and the results can be verified immediately. Add more samples to increase the robustness of the model.

Anomaly Detection

In the unsupervised mode training is simpler. There is no direct definition of a defect – the tool is trained with Good samples and then looks for deviations of any kind.

Package Verification

When a sushi box is delivered to a market, each of the elements must be correctly placed at a specific position. Defects are difficult to define when correct objects may also vary. The solution is to use unsupervised deep learning mode that detects any significant variations from what the tool has seen and learned in the training phase.

Plastics, injection moulding

Injection moulding is a complex process with many possible production problems. Plastic objects may also include some bending or other shape deviations that are acceptable for the customer. Our Deep Learning Add-on can learn all acceptable deviations from the provided samples and then detect anomalies of any type when running on the production line.

Object Classification

The Object Classification tool divides input images into groups created by the user according to their particular features. As a result, the class name and the classification confidence are returned.

Caps: Front or Back

Plastic caps may sometimes accidentally flip in the production machine, and the customer wants to detect this situation. The task can be completed with traditional methods, but it requires an expert to design a specific algorithm for this application. Instead, we can use deep learning-based classification, which automatically learns to recognize Front and Back from a set of training pictures.

3D Alloy Wheel Identification

There may be hundreds of different alloy wheel types being manufactured at a single plant. Identifying a particular model among so many is virtually impossible with traditional methods: Template Matching would need a huge amount of time to try hundreds of models, while handcrafting bespoke models would simply require too much development and maintenance. Deep learning is an ideal solution that learns directly from sample pictures without any bespoke development.

Instance Segmentation

The instance segmentation technique is used to locate, segment and classify single or multiple objects within an image. Unlike the feature detection technique, this technique detects individual objects and may be able to separate them even if they touch or overlap.

Nuts Segmentation

Mixed nuts are a very popular snack food consisting of various types of nuts. Because the percentage composition of nuts in a package must be in accordance with the list of ingredients printed on it, customers want to be sure that the proper amount of each type of nut is packaged. The instance segmentation tool is an ideal solution in such an application, since it returns masks corresponding to the segmented objects.

Package Verification

A typical set of soup greens used in Europe is packaged on a white plastic plate in a random position. Production line workers may sometimes accidentally forget to put one of the vegetables on the plate. Although there is a system that weighs the plates, the customer wants to verify the completeness of the product just before the sealing process. As no two vegetables look the same, the solution is to use deep learning-based segmentation. In the training phase, the customer just has to mark regions corresponding to vegetables.

Point Location

The Point Location tool looks for specific shapes, features or marks that can be identified as points in an input image. It may be compared to traditional template matching, but here the tool is trained with multiple samples and becomes robust against huge variability of the objects of interest.

Bees Tracing

A task that seems impossible to achieve with traditional image processing methods can be done with our latest tool. In this case we use it to detect bees. Once the bees are detected, we can check whether they are infected by varroosis, a disease caused by parasitic mites that attack honey bees. The parasite attaches to the bees' bodies, and on the basis of a characteristic red inflammation we can classify them according to their health condition. Not only does this example show an easy solution for a complex task, it also shows that we are open to many different branches of industry, e.g. agriculture.

Pick and Place

In these applications we need to guide a robotic arm to pick up items, most typically from a conveyor belt or from a container. A good example of such an application is picking small stem cuttings and then placing them vertically in pots. Any inaccuracy in detection may result in planting them too deep or upside down, so the cuttings will not form roots. Our deep learning tools make it possible to quickly locate the desired parts of the plants and provide the accurate results required for this operation.

Want to know more?

Fill in the form below and we will get back to you promptly.

Press & marketing content

Software FabImage®
Brochure FabImage®
Application examples

External links

FabImage® user area