- No low-level programming knowledge required.
- Data-flow based software.
- Fast and optimized algorithms.
- 1000+ high performance functions.
- Custom machine vision filters.
FabImage® Studio Professional is data-flow based software designed for machine vision engineers. It does not require any programming skills, yet it is powerful enough to compete with solutions based on low-level programming libraries.
The architecture is also highly flexible, so users can easily adapt the product to the way they work and to the specific requirements of any project.
Drag & Drop
All programming is done by choosing filters and connecting them with each other. You can focus all your attention on computer vision.
You Can See Everything
Inspection results are visualized on multiple configurable data previews, and when a parameter in the program is changed, the previews update in real time.
You can easily create custom graphical user interfaces and thus build the entire machine vision application using a single software package.
Over 1000 Ready-for-Use Filters
There are over 1000 ready-for-use machine vision filters, tested and optimized in hundreds of applications. They offer advanced capabilities such as outlier suppression, subpixel precision and any-shape regions of interest.
The filters are aggressively optimized for SSE instructions and multicore processors. Our implementations are among the fastest in the world!
Loops and Conditions
Without writing a single line of code, you can create custom and scalable program flows. Loops, conditions and subprograms (macrofilters) are realized with data-flow constructs in a fully graphical way.
GigE Vision and GenTL Support
FabImage® Studio is a GigE Vision compliant product, supporting the GenTL interface as well as a number of vendor-specific APIs. Thus, you can use it with Opto Engineering® cameras and most cameras available on the market, including models from Matrix Vision, Allied Vision, Basler, Baumer, Dalsa, PointGrey, Photon Focus, XIMEA and more.
User Filters
You can use User Filters to integrate your own C/C++ code with the benefits of visual programming.
C++ Code Generator
Programs created in FabImage® Studio can be exported to C++ code or to .NET assemblies. This makes it very easy to integrate your vision algorithms with applications written in C++, C# or VB.
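As a minimal, self-contained sketch of how such exported code could be called from a host application (the result type and function names below are hypothetical stand-ins, not the actual generated API):

```cpp
// Illustration only: a result struct and a function standing in for code
// that the generator could export from a macrofilter.
#include <iostream>
#include <string>

struct InspectionResult {
    bool pass;        // overall pass/fail decision
    double widthMm;   // example measurement
};

// In a real project this function would live in the generated sources.
InspectionResult RunInspection(const std::string& imagePath)
{
    (void)imagePath;  // the generated code would load and analyze the image here
    return {true, 12.3};
}

int main()
{
    InspectionResult r = RunInspection("sample_image.png");
    std::cout << "Pass: " << std::boolalpha << r.pass
              << ", width: " << r.widthMm << " mm\n";
    return r.pass ? 0 : 1;
}
```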
There are over 1000 filters encompassing both basic transforms and specialized machine vision tools.
- Image processing
- Shape fitting
- Barcode reading
- Template matching
- Support vector machines
- Blob analysis
- Camera calibration
- Data code reading
- GigE Vision and GenTL
- Contour analysis
- Fourier analysis
- Corner detection
- Histogram analysis
- Planar geometry
- Hough transform
- 1D profile analysis
In this application, we need to sort nails from nuts and bolts. The image is thresholded and the resulting regions are split into blobs; finally, the blobs are classified by their elongation and the nails are easily found.
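The same threshold / blob / elongation pipeline can be sketched in a few lines of C++. The snippet below uses OpenCV as a generic stand-in for the corresponding FabImage filters; the threshold and elongation values are illustrative only:

```cpp
// Generic OpenCV sketch of the described pipeline (threshold -> blobs ->
// classification by elongation); the FabImage filters do this graphically.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat gray = cv::imread("nails_nuts_bolts.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return 1;

    // 1. Threshold the image (value chosen for illustration only).
    cv::Mat binary;
    cv::threshold(gray, binary, 128, 255, cv::THRESH_BINARY);

    // 2. Split the foreground region into blobs.
    std::vector<std::vector<cv::Point>> blobs;
    cv::findContours(binary, blobs, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // 3. Classify blobs by elongation: nails are much longer than they are wide.
    const double elongationThreshold = 4.0;  // illustrative value
    for (const auto& blob : blobs)
    {
        cv::RotatedRect box = cv::minAreaRect(blob);
        double longSide  = std::max(box.size.width, box.size.height);
        double shortSide = std::min(box.size.width, box.size.height);
        if (shortSide > 0.0 && longSide / shortSide > elongationThreshold)
            std::cout << "Nail found at (" << box.center.x << ", " << box.center.y << ")\n";
    }
    return 0;
}
```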
This example shows a basic ReadBarcodes filter. The tool automatically finds the barcode and outputs the decoded text.
There are two types of commercial licenses:
A development license is assigned to a single engineer. It includes one year of technical support, which can be extended with an annual fee. Valid technical support also gives you the right to upgrade the software to newer versions and provides a discount on runtime licenses.
| Product | License type | Part number |
|---|---|---|
| FabImage® Studio Professional | Development | FIS-PRO |
| FabImage® Studio + Library bundle | Development | FIS-ADD |
A runtime license is assigned to a single vision system. You can use one license for one multi-camera system, but multiple licenses are required to control multiple independent systems, even if they run on a single physical computer.
| Product | License type | Part number | Notes |
|---|---|---|---|
| FabImage® Studio Runtime | Runtime | FIS-RUN | With dev. license and valid tech support |
| FabImage® Studio Runtime | Runtime | FIS-RTB | With dev. license and expired tech support |
| 1 Year Support Extension | Development | FIS-EXT, ADD-EXT | With dev. license and valid tech support |
| USB License Dongle | - | USB-DONGLE-FI / USB-DONGLE-RUN | USB dongle for FabImage® Developer licenses / USB dongle for FabImage® Runtime licenses |
Quickstart guide to FabImage® part numbers
- FabImage® Studio Professional (FIS-PRO) makes it possible to create complete machine vision applications, including HMI. FabImage® Studio Runtime (FIS-RUN/FIS-RTB) is required to run the applications on each inspection system.
- FabImage® Studio Professional (FIS-PRO) includes the User Filters feature, which allows embedding the user's own C++ code within the graphical programming model. FabImage® Library Suite, FIL-SUI (or the Studio + Library bundle, FIS-ADD) is only required if you need to invoke the built-in image analysis tools as C++ functions.
- If you want to prototype applications in FabImage® Studio Professional (FIS-PRO) and then transform them into C++ code, you need the FabImage® Studio + Library bundle (FIS-ADD).
- If the graphical programming environment for fast prototyping is not needed, then FabImage® Library Suite (FIL-SUI) is enough for development.
- In general, there are four possible ways to work with the products:
- Programming in the graphical way – this requires a FabImage® Studio Professional (FIS-PRO) for each developer and a FabImage® Studio Runtime (FIS-RUN/FIS-RTB) for each system. One particular advantage of this method is the ease of introducing changes, even directly on the production line.
- Programming in the graphical way and then generating C++ code – this requires a FabImage® Studio + Library bundle (FIS-ADD) for each developer and a FabImage® Library Runtime (FIL-RUN/FIL-RTB) for each system. This method allows you to integrate the created solutions with bigger software projects.
- Programming in the graphical way and then generating .NET Macrofilter Interfaces – this requires FabImage® Studio Professional (FIS-PRO) for each developer and a FabImage® Studio Runtime (FIS-RUN/FIS-RTB) for each system. No library license is needed as .NET Macrofilter Interfaces use the same program execution mechanisms as Studio.
- Programming directly in C++ or .NET – this is for people who think in C++ or C# and do not want to do graphical programming. In this case, a FabImage® Library Suite (FIL-SUI) is required for each developer and a FabImage® Library Runtime (FIL-RUN/FIL-RTB) for each system.
Deep Learning Add-on is a breakthrough technology for machine vision. It is a set of five ready-made tools which are trained with 20-50 sample images, and which then detect objects, defects or features automatically. Internally it uses large neural networks designed and optimized for use in industrial vision systems.
Together with FabImage® Studio Professional, you get a complete solution for training and deploying modern machine vision applications.
Learns from few samples
Typical applications require between 20 and 50 images for training. The more the better, but our software internally learns key characteristics from a limited training set and then generates thousands of new artificial samples for effective training.
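The Add-on's augmentation scheme is internal, but the general idea can be illustrated with a few common transforms. The OpenCV sketch below is only a generic example, not the actual implementation:

```cpp
// Generic offline augmentation example: derive artificial variants
// (mirror, small rotation, brightness shift) from each training image.
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

std::vector<cv::Mat> augment(const cv::Mat& src)
{
    std::vector<cv::Mat> out;

    cv::Mat mirrored;
    cv::flip(src, mirrored, 1);                                // horizontal mirror
    out.push_back(mirrored);

    cv::Mat rotated;
    cv::Point2f center(src.cols / 2.0f, src.rows / 2.0f);
    cv::Mat rot = cv::getRotationMatrix2D(center, 10.0, 1.0);  // small rotation
    cv::warpAffine(src, rotated, rot, src.size());
    out.push_back(rotated);

    cv::Mat brighter;
    src.convertTo(brighter, -1, 1.0, 25.0);                    // brightness shift
    out.push_back(brighter);

    return out;
}

int main()
{
    cv::Mat sample = cv::imread("sample.png");
    if (sample.empty()) return 1;
    std::vector<cv::Mat> variants = augment(sample);
    for (size_t i = 0; i < variants.size(); ++i)
        cv::imwrite("augmented_" + std::to_string(i) + ".png", variants[i]);
    return 0;
}
```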
Works on GPU and CPU
A modern GPU is required for effective training. In production, you can use either a GPU or a CPU. A GPU will typically be 3-10 times faster (with the exception of Object Classification, which is equally fast on a CPU).
The highest performance
Typical training time on a GPU is 5-15 minutes. Inference time varies between 5 and 100 ms per image, depending on the tool and hardware. The highest performance is guaranteed by an internally developed industrial inference engine.
1. Collect and normalize images
- Acquire between 20 and 50 images (the more the better), both Good and Bad, representing all possible object variations; save them to disk.
- Make sure that the object scale, orientation and lighting are as consistent as possible.
- Open FabImage Studio Professional and add one of the Deep Learning Add-on tools.
- Open an editor associated with the tool and load your training images there.
- Label your images or add markings using drawing tools.
- Click “Train”.
Training and Validation Sets
In deep learning, as in all fields of machine learning, it is very important to follow correct methodology. The most important rule is to separate the Training set from the Validation set. The Training set is a set of samples used for creating a model. We cannot use it to measure the model’s performance, as this often generates results that are overoptimistic. Thus, we use separate data – the Validation set – to evaluate the model. Our Deep Learning Add-on automatically creates both sets from the samples provided by the user.
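As a concrete illustration of the split (the Add-on does this automatically), a minimal sketch in C++ might look like this:

```cpp
// Minimal sketch of a random train/validation split, shown only to make
// the methodology concrete.
#include <algorithm>
#include <iostream>
#include <random>
#include <string>
#include <vector>

int main()
{
    std::vector<std::string> samples = {
        "img01.png", "img02.png", "img03.png", "img04.png", "img05.png",
        "img06.png", "img07.png", "img08.png", "img09.png", "img10.png"};

    // Shuffle first so the split is not biased by acquisition order.
    std::mt19937 rng(42);
    std::shuffle(samples.begin(), samples.end(), rng);

    // Hold out ~20% of the samples; the model never sees them during training.
    const size_t validationCount = samples.size() / 5;
    std::vector<std::string> validation(samples.begin(), samples.begin() + validationCount);
    std::vector<std::string> training(samples.begin() + validationCount, samples.end());

    std::cout << "Training: " << training.size()
              << ", Validation: " << validation.size() << "\n";
    return 0;
}
```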
In the supervised mode the user needs to carefully label pixels corresponding to defects on the training images. The tool then learns to distinguish good and bad features by looking for their key characteristics.
In this application cracks and scratches must be detected on a surface that includes complicated features. With traditional methods, this requires complicated algorithms with dozens of parameters which must be adjusted for each type of solar panel. With Deep Learning, it is enough to train the system in the supervised mode, using just one tool.
Satellite Image Segmentation
Satellite images are difficult to analyse as they include a huge variety of features. Nevertheless, our Deep Learning Add-on can be trained to detect roads and buildings with very high reliability. Training may be performed using only one properly labeled image, and the results can be verified immediately. Add more samples to increase the robustness of the model.
In the unsupervised mode, training is simpler. There is no direct definition of a defect – the tool is trained with Good samples and then looks for deviations of any kind.
When a sushi box is delivered to a market, each of the elements must be correctly placed at a specific position. Defects are difficult to define when correct objects may also vary. The solution is to use the unsupervised deep learning mode, which detects any significant variation from what the tool has seen and learned in the training phase.
Plastics, injection moulding
Injection moulding is a complex process with many possible production problems. Plastic objects may also include some bending or other shape deviations that are acceptable for the customer. Our Deep Learning Add-on can learn all acceptable deviations from the provided samples and then detect anomalies of any type when running on the production line.
The Object Classification tool divides input images into groups created by the user according to their particular features. As a result, the class name and the classification confidence are returned.
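To make the output format concrete, the sketch below shows one common way such a result can be derived from raw class scores; it is an illustration only, not the tool's actual API:

```cpp
// Illustration of the "class name + confidence" output: convert raw class
// scores into probabilities and report the best class.
#include <cmath>
#include <iostream>
#include <string>
#include <vector>

int main()
{
    std::vector<std::string> classes = {"Front", "Back", "Unknown"};
    std::vector<double> scores = {2.1, 0.3, -1.0};   // example network outputs

    // Softmax turns scores into confidences that sum to 1.
    std::vector<double> conf(scores.size());
    double sum = 0.0;
    for (size_t i = 0; i < scores.size(); ++i) { conf[i] = std::exp(scores[i]); sum += conf[i]; }
    for (double& c : conf) c /= sum;

    // Pick the class with the highest confidence.
    size_t best = 0;
    for (size_t i = 1; i < conf.size(); ++i)
        if (conf[i] > conf[best]) best = i;

    std::cout << "Class: " << classes[best]
              << ", confidence: " << conf[best] << "\n";
    return 0;
}
```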
Caps: Front or Back
Plastic caps may sometimes accidentally flip in the production machine. The customer wants to detect this situation. The task can be completed with traditional methods, but it requires an expert to design a specific algorithm for this application. Instead, we can use deep learning-based classification, which automatically learns to recognize Front and Back from a set of training pictures.
3D Alloy Wheel Identification
There may be hundreds of different alloy wheel types being manufactured at a single plant. Identifying a particular model among so many is virtually impossible with traditional methods: Template Matching would need a huge amount of time to match against hundreds of models, while handcrafting bespoke models would simply require too much development and maintenance. Deep learning is an ideal solution that learns directly from sample pictures without any bespoke development.
The instance segmentation technique is used to locate, segment and classify single or multiple objects within an image. Unlike feature detection, it detects individual objects and may be able to separate them even if they touch or overlap.
Mixed nuts are a very popular snack consisting of various types of nuts. Because the percentage composition of nuts in a package must match the list of ingredients printed on it, customers want to be sure that the proper amount of each nut type is packaged. The Instance Segmentation tool is an ideal solution for such an application, since it returns masks corresponding to the segmented objects.
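A downstream composition check could look roughly like the sketch below; the Instance struct and labels are hypothetical, standing in for whatever data the tool actually returns:

```cpp
// Sketch of a composition check built on instance segmentation output
// (hypothetical result structure, not the tool's actual data types).
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Instance {              // hypothetical: one segmented object
    std::string label;         // e.g. "almond", "cashew"
    int pixelArea;             // area of the returned mask
};

int main()
{
    std::vector<Instance> detections = {
        {"almond", 1200}, {"cashew", 900}, {"almond", 1150}, {"peanut", 700}};

    // Count detected instances per nut type.
    std::map<std::string, int> counts;
    for (const auto& d : detections) ++counts[d.label];

    // Compare against the expected composition from the ingredient list.
    std::map<std::string, int> expected = {{"almond", 2}, {"cashew", 1}, {"peanut", 1}};
    bool ok = true;
    for (const auto& [label, n] : expected)
        if (counts[label] < n) { ok = false; std::cout << "Missing: " << label << "\n"; }

    std::cout << (ok ? "Composition OK\n" : "Composition check failed\n");
    return 0;
}
```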
A typical set of soup greens used in Europe is packaged on a white plastic plate in a random position. Production line workers may sometimes accidentally forget to put one of the vegetables on the plate. Although there is a system that weighs the plates, the customer wants to verify the completeness of the product just before the sealing process. As no two vegetables look the same, the solution is to use deep learning-based segmentation. In the training phase, the customer just has to mark regions corresponding to the vegetables.
The Point Location tool looks for specific shapes, features or marks that can be identified as points in an input image. It may be compared to traditional template matching, but here the tool is trained with multiple samples and becomes robust to high variability of the objects of interest.
A task that seems impossible with traditional image processing methods can be done with our latest tool. In this case we use it to detect bees, and then check whether they are infected with varroosis – a disease caused by parasitic mites that attack honey bees. The parasite attaches to the bees' bodies, and on the basis of a characteristic red inflammation we can classify the bees according to their health condition. This example shows not only that the tool is an easy solution for a complex task, but also that it applies to many different branches of industry, e.g. agriculture.
Pick and Place
In these applications we need to guide a robotic arm to pick up items, most typically from a conveyor belt or from a container. A good example of such an application is picking small stem cuttings and then placing them vertically in pots. Any inaccuracies in detection may result in planting them too deep or upside down, which will prevent the cuttings from forming roots. Our deep learning tools make it possible to quickly locate the desired parts of the plants and provide the accurate results required for this operation.