- Highest performance
- Modern design
- Simple structure
FabImage® Library Suite is a machine vision library for C++ and .NET programmers. It provides a comprehensive set of functions for creating industrial image analysis applications - from standard-based image acquisition interfaces, through low-level image processing routines, to ready-made tools such as template matching, measurements or barcode readers.
The main strengths of the product are the highest performance, modern design and simple structure, making it easy to integrate with the rest of your code. The functions available in FabImage® Library closely correspond to the filters of FabImage® Studio. Therefore, it is possible to prototype your algorithms quickly in a graphical environment and then translate them to C++ or .NET, or even generate the C++ code automatically.
FabImage® Library Suite gives you instant access to the highest-quality, well-optimized and field-tested code that you need for your machine vision projects!
Three types of Licenses are available:
Developer Licenses: Licenses required to develop a vision program.
Runtime Licenses: Licenses required to run a vision program. To purchase a Runtime License, you must have purchased a Developer License.
Add-on Licenses: Additional Licenses that allow you to expand the functionality of the other two types of Licenses.
The Developer License is assigned to a single developer user and can be activated only via USB dongle. Free Technical Support services are included for the first 12 months after activation, such as:
- Most up-to-date version of the software with associated new features and documentation*
- Answers, via email, to technical questions related to the use of the software
After 12 months from the activation of the Developer License, it is necessary to purchase a Service License (FIL-EXT) in order to:
- purchase additional Single Thread Runtime Licenses (FIL-RUN);
- take advantage of the Technical Support services.
|* To obtain the most up-to-date version of the software, you must send Opto Engineering the WIBU file associated with the USB dongle of the License you wish to have the upgrade on. Learn more about how and where to download the WIBU file at https://docs.fab-image.com/stu...|
| Code | Product | License type | Description |
|---|---|---|---|
| FIL-SUI | FabImage® Library Suite (C++ and .NET) | Developer Basic License | Development environment (IDE) for direct programming in C++ or .NET. This type of License is suitable for those who prefer to program in C++ or C# and do not want to employ graphical programming. Allows multi-camera acquisition and sequential (single-thread) development of processes (macrofilters). |
| USB-DONGLE-FI | USB Dongle | Hardware | Required to activate the License via hardware USB dongle. |
ADD-ON Licenses** are additional Licenses that allow you to expand the functionality of the Basic License. To purchase ADD-ON Licenses, you must have purchased a Developer FabImage® Library Suite License (FIL-SUI).
|** To order an ADD-ON License, you must also send the WIBU file associated with the USB dongle of the developer for which you wish to activate the add-on. Read more about how and where to download the WIBU file at this link.|
|*** It is not possible to build multiple macrofilters with Deep Learning that work in parallel.|
Parallel Processing ADD-ON for those who have purchased FabImage® Library Suite (FIL-SUI)
Developer ADD-ON License
Allows the user to develop processes (macrofilters) that run in parallel (multithreading).
FabImage® Deep Learning ADD-ON for those who have already purchased FabImage® Library Suite (FIL-SUI)
Developer ADD-ON License
Allows the user to use Deep Learning Tools ***. Visit the Deep Learning section for more information.
Runtime License SINGLE THREAD
The Runtime License is assigned to a single vision system and allows multi-camera acquisition and sequential (single-thread) execution of processes (macrofilters). It can be activated via two options:
- USB dongle (USB-DONGLE-RUN)
- Computer ID*
To purchase a Single Thread Runtime License, you must have purchased the FabImage® Library Suite Developer License (FIL-SUI). After 12 months from the activation of the Developer License, you are required to purchase the Service License (FIL-EXT) to continue purchasing Single Thread Runtime Licenses.
|* If the License is lost through damage to the Computer to which it is assigned by Computer ID, it cannot be recovered and a new one must be purchased. Opto Engineering therefore recommends purchasing the License via USB dongle.|
| Code | Product | License type | Description |
|---|---|---|---|
| FIL-RUN | FabImage® Library Runtime | Runtime License | Allows you to run an unlimited number of processes (macrofilters) sequentially. |
| USB-DONGLE-RUN | USB Dongle (optional) | Hardware | The License is activated via hardware USB dongle. |
Runtime License MULTITHREADING
In order to run the Parallel Processing (FIL-PAR-ADD) features, you must purchase one of the following Runtime Licenses (these Runtime Licenses replace the FabImage® Library Single Thread Runtime (FIL-RUN)). To purchase a Multithreading Runtime License, you must have purchased a FabImage® Library Suite Developer License (FIL-SUI) and a Developer Parallel Processing ADD-ON License (FIL-PAR-ADD).
After 12 months from the activation of the Developer Parallel Processing ADD-ON License (FIL-PAR-ADD), you are required to purchase a Service License (FIL-EXT) if you wish to continue purchasing Multithreading Runtime Licenses.
|* To run an unlimited number of processes in parallel, it is recommended to purchase the Runtime License, which corresponds to the number of cores on the machine vision computer.|
| Code | Product | License type | Description |
|---|---|---|---|
| FIL-RUN-CL-4 | FabImage® Library Runtime for a 4-core machine vision computer | ADD-ON Runtime License for Parallel Processing | Allows running an unlimited number of processes in parallel. Requires a PC with 4 cores. |
| FIL-RUN-CL-6 | FabImage® Library Runtime for a 6-core machine vision computer | ADD-ON Runtime License for Parallel Processing | Allows running an unlimited number of processes in parallel. Requires a PC with 6 cores. |
| FIL-RUN-CL-8 | FabImage® Library Runtime for an 8-core machine vision computer | ADD-ON Runtime License for Parallel Processing | Allows running an unlimited number of processes in parallel. Requires a PC with 8 cores. |
| FIL-RUN-CL-16 | FabImage® Library Runtime for a 16-core machine vision computer | ADD-ON Runtime License for Parallel Processing | Allows running an unlimited number of processes in parallel. Requires a PC with 16 cores. |
| FIL-RUN-TL-2 | FabImage® Library Runtime limited to 2 threads | ADD-ON Runtime License for Parallel Processing | Enables the use of PCs with any number of cores. The number of parallel processes is limited to 2 threads. |
| FIL-RUN-TL-4 | FabImage® Library Runtime limited to 4 threads | ADD-ON Runtime License for Parallel Processing | Enables the use of PCs with any number of cores. The number of parallel processes is limited to 4 threads. |
| FIL-RUN-TL-6 | FabImage® Library Runtime limited to 6 threads | ADD-ON Runtime License for Parallel Processing | Enables the use of PCs with any number of cores. The number of parallel processes is limited to 6 threads. |
| FIL-RUN-TL-8 | FabImage® Library Runtime limited to 8 threads | ADD-ON Runtime License for Parallel Processing | Enables the use of PCs with any number of cores. The number of parallel processes is limited to 8 threads. |
| FIL-RUN-TL-16 | FabImage® Library Runtime limited to 16 threads | ADD-ON Runtime License for Parallel Processing | Enables the use of PCs with any number of cores. The number of parallel processes is limited to 16 threads. |
ADD-ON Runtime License DEEP LEARNING
In order to run the Deep Learning ADD-ON features, the following Runtime License must be purchased in addition to the FabImage® Library Runtime (FIL-RUN).
To purchase the Deep Learning ADD-ON Runtime, you must have purchased a FabImage® Library Suite Developer License (FIL-SUI) and a Developer Deep Learning ADD-ON License (FI-DL-ADD). After 12 months from the activation of the Developer ADD-ON License (FI-DL-ADD), you are required to purchase the Service License (DL-EXT) if you wish to purchase a Deep Learning ADD-ON Runtime License.
|* Multiple GPU cards cannot be used for inference|
| Code | Product | License type | Description |
|---|---|---|---|
| FIL-RUN-DL | FabImage® Library Deep Learning ADD-ON Runtime | Deep Learning ADD-ON Runtime License | Enables the user to execute single-threaded Deep Learning* |
After 12 months from the activation of the FabImage® Library Suite Developer License (FIL-SUI) or Developer ADD-ON Licenses (FIL-PAR-ADD and FI-DL-ADD), it is required to purchase one of the following Service Licenses for:
- Purchasing additional Single Thread Runtime Licenses (FIL-RUN) or Multithreading Runtime Licenses (FIL-RUN-CL-xx or FIL-RUN-TL-xx)
- Purchasing additional Deep Learning ADD-ON Runtime Licenses (FIL-RUN-DL)
- Maintaining active Technical Support
FabImage® Library Suite EXTENSION
License required for:
- Purchasing additional Single Thread Runtime Licenses (FIL-RUN) or Multithreading Runtime Licenses (FIL-RUN-CL-xx or FIL-RUN-TL-xx)
- Maintaining active Technical Support
FabImage® Deep Learning ADD-ON Extension
License required for:
- Purchasing additional Deep Learning ADD-ON Runtime Licenses (FIL-RUN-DL)
In FabImage® Library Suite careful design of algorithms goes hand in hand with extensive hardware optimizations, resulting in performance that puts the library among the fastest in the world. Our implementations make use of SSE instructions and parallel computations on multicore processors.
All data types feature automatic memory management, errors are handled explicitly with exceptions, and optional types are used for type-safe special values. All functions are thread-safe and use data parallelism internally when possible.
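To illustrate the idea of type-safe special values, here is a minimal self-contained sketch (not the actual FabImage® API): instead of encoding "no result" as a magic number, a function returns `std::optional`, and invalid input is reported explicitly with an exception.

```cpp
#include <optional>
#include <stdexcept>
#include <vector>

// Hypothetical helper, for illustration only: average brightness of a pixel
// buffer. An empty buffer has no meaningful average, so the "no value" case
// is expressed with std::optional rather than a sentinel such as -1.
std::optional<double> AverageBrightness(const std::vector<int>& pixels)
{
    if (pixels.empty())
        return std::nullopt;  // type-safe special value; caller must check
    long long sum = 0;
    for (int p : pixels) {
        if (p < 0 || p > 255)
            throw std::invalid_argument("pixel out of range");  // explicit error handling
        sum += p;
    }
    return static_cast<double>(sum) / static_cast<double>(pixels.size());
}
```

The caller cannot accidentally use a missing result: `AverageBrightness({}).has_value()` is `false`, and the compiler forces an explicit check before the value is read.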
Simplicity & Consistency
The library is a simple collection of types and functions, provided as a single DLL file with appropriate headers. For maximum readability, functions follow a consistent naming convention (e.g. the VERB + NOUN form, as in SmoothImage, RotateVector). All results are returned via reference output parameters, so that multiple outputs are always possible.
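A short sketch of what these conventions look like in practice. The types and the function below are hypothetical stand-ins, not the real FabImage® signatures; they only demonstrate the VERB + NOUN naming and how reference output parameters allow several results from one call.

```cpp
#include <cmath>

// Illustrative type, not part of the actual library.
struct Vector2 { double x, y; };

// VERB + NOUN naming; two results returned via reference output parameters:
// the normalized vector and its original length. Assumes a non-zero input.
void NormalizeVector(const Vector2& inVector, Vector2& outVector, double& outLength)
{
    outLength = std::sqrt(inVector.x * inVector.x + inVector.y * inVector.y);
    outVector.x = inVector.x / outLength;
    outVector.y = inVector.y / outLength;
}
```

Because outputs are parameters rather than a single return value, adding a further output later (for example, the vector's angle) does not change the call style.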
Deep Learning Add-on is a breakthrough technology for machine vision. It is a set of five ready-made tools which are trained with 20-50 sample images, and which then detect objects, defects or features automatically. Internally it uses large neural networks designed and optimized for use in industrial vision systems.
Together with FabImage Studio Professional, you get a complete solution for training and deploying modern machine vision applications.
Learns from few samples
Typical applications require between 20 and 50 images for training. The more the better, but our software internally learns key characteristics from a limited training set and then generates thousands of new artificial samples for effective training.
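The idea of generating artificial samples can be sketched in a few lines. This is a deliberately simplified illustration, not the Add-on's actual augmentation pipeline (which uses far richer transformations): each training image yields extra variants, here just a horizontal and a vertical flip of a tiny integer "image".

```cpp
#include <algorithm>
#include <vector>

// Toy image type for illustration: a 2D grid of pixel values.
using Image = std::vector<std::vector<int>>;

Image FlipHorizontal(const Image& img)
{
    Image out = img;
    for (auto& row : out)
        std::reverse(row.begin(), row.end());  // mirror each row left-right
    return out;
}

Image FlipVertical(const Image& img)
{
    Image out = img;
    std::reverse(out.begin(), out.end());      // mirror the row order top-bottom
    return out;
}

// Each original sample produces two extra artificial samples,
// tripling the effective training set.
std::vector<Image> AugmentSamples(const std::vector<Image>& samples)
{
    std::vector<Image> augmented;
    for (const Image& s : samples) {
        augmented.push_back(s);
        augmented.push_back(FlipHorizontal(s));
        augmented.push_back(FlipVertical(s));
    }
    return augmented;
}
```

Real augmentation also varies rotation, scale, brightness and noise, which is how 20-50 images can stand in for thousands.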
Works on GPU and CPU
A modern GPU is required for effective training. At production, you can use either GPU or CPU. GPU will typically be 3-10 times faster (with the exception of Object Classification which is equally fast on CPU).
The highest performance
Typical training time on a GPU is 5-15 minutes. Inference time varies between 5 and 100 ms per image, depending on the tool and hardware. The highest performance is guaranteed by an internally developed industrial inference engine.
1. Collect and normalize images
- Acquire between 20 and 50 images (the more the better), both Good and Bad, representing all possible object variations; save them to disk.
- Make sure that the object scale, orientation and lighting are as consistent as possible.
- Open FabImage Studio Professional and add one of the Deep Learning Add-on tools.
- Open an editor associated with the tool and load your training images there.
- Label your images or add markings using drawing tools.
- Click “Train”.
Training and Validation Sets
In deep learning, as in all fields of machine learning, it is very important to follow correct methodology. The most important rule is to separate the Training set from the Validation set. The Training set is a set of samples used for creating a model. We cannot use it to measure the model’s performance, as this often generates results that are overoptimistic. Thus, we use separate data – the Validation set – to evaluate the model. Our Deep Learning Add-on automatically creates both sets from the samples provided by the user.
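The split described above can be sketched as follows. This is a minimal generic illustration of the methodology (the Add-on performs the split automatically and its internal procedure may differ): sample indices are shuffled once, then divided so that validation samples are never seen during training.

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <random>
#include <vector>

// Illustrative helper: partition sample indices into a Training set and a
// Validation set. trainFraction is e.g. 0.8 for an 80/20 split.
void SplitDataset(std::size_t sampleCount, double trainFraction,
                  std::vector<std::size_t>& outTrain,
                  std::vector<std::size_t>& outValidation)
{
    std::vector<std::size_t> indices(sampleCount);
    std::iota(indices.begin(), indices.end(), 0);      // 0, 1, ..., sampleCount-1

    std::mt19937 rng(42);                              // fixed seed for reproducibility
    std::shuffle(indices.begin(), indices.end(), rng); // avoid ordering bias

    const std::size_t trainCount =
        static_cast<std::size_t>(static_cast<double>(sampleCount) * trainFraction);
    outTrain.assign(indices.begin(), indices.begin() + trainCount);
    outValidation.assign(indices.begin() + trainCount, indices.end());
}
```

Because the two sets are disjoint, accuracy measured on the Validation set is an honest estimate, unlike accuracy measured on the Training set.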
In the supervised mode the user needs to carefully label pixels corresponding to defects on the training images. The tool then learns to distinguish good and bad features by looking for their key characteristics.
In this application, cracks and scratches must be detected on a surface that includes complicated features. With traditional methods, this requires complex algorithms with dozens of parameters, which must be adjusted for each type of solar panel. With Deep Learning, it is enough to train the system in the supervised mode, using just one tool.
Satellite Image Segmentation
Satellite images are difficult to analyse as they include a huge variety of features. Nevertheless, our Deep Learning Add-on can be trained to detect roads and buildings with very high reliability. Training may be performed using only one properly labeled image, and the results can be verified immediately. Add more samples to increase the robustness of the model.
In the unsupervised mode training is simpler. There is no direct definition of a defect – the tool is trained with Good samples and then looks for deviations of any kind.
When a sushi box is delivered to a market, each of the elements must be correctly placed at a specific position. Defects are difficult to define when correct objects may also vary. The solution is to use unsupervised deep learning mode that detects any significant variations from what the tool has seen and learned in the training phase.
Plastics, injection moulding
Injection moulding is a complex process with many possible production problems. Plastic objects may also include some bending or other shape deviations that are acceptable for the customer. Our Deep Learning Add-on can learn all acceptable deviations from the provided samples and then detect anomalies of any type when running on the production line.
The Object Classification tool divides input images into groups created by the user according to their particular features. As a result, the class name and the classification confidence are returned.
Caps: Front or Back
Plastic caps may sometimes accidentally flip in the production machine, and the customer wants to detect this situation. The task can be completed with traditional methods, but requires an expert to design a specific algorithm for this application. Instead, we can use deep learning-based classification, which automatically learns to recognize Front and Back from a set of training pictures.
3D Alloy Wheel Identification
There may be hundreds of different alloy wheel types being manufactured at a single plant. Identifying a particular model among such quantities is virtually impossible with traditional methods: Template Matching would need a huge amount of time trying to match hundreds of models, while handcrafting bespoke models would simply require too much development and maintenance. Deep learning comes as an ideal solution that learns directly from sample pictures, without any bespoke development.
The instance segmentation technique is used to locate, segment and classify single or multiple objects within an image. Unlike the feature detection technique, this technique detects individual objects and may be able to separate them even if they touch or overlap.
Mixed nuts are a very popular snack food consisting of various types of nuts. Since the percentage composition of nuts in a package must match the list of ingredients printed on it, customers want to be sure that the proper amount of each nut type is packaged. The instance segmentation tool is an ideal solution for such an application, since it returns masks corresponding to the segmented objects.
A typical set of soup greens used in Europe is packaged on a white plastic plate in a random position. Production line workers may sometimes accidentally forget to put one of the vegetables on the plate. Although there is a system that weighs the plates, the customer wants to verify the completeness of the product just before the sealing process. As no two vegetables look the same, the solution is to use deep learning-based segmentation. In the training phase, the customer just has to mark regions corresponding to vegetables.
The Point Location tool looks for specific shapes, features or marks that can be identified as points in an input image. It may be compared to traditional template matching, but here the tool is trained with multiple samples and becomes robust against huge variability of the objects of interest.
A task that seems impossible with traditional methods of image processing can be done with our latest tool. In this case, we use it to detect bees. Once detected, they can be checked for varroosis, a disease caused by parasitic mites attacking honey bees. The parasite attaches to their bodies, and on the basis of a characteristic red inflammation we can classify the bees according to their health condition. Not only does this example show an easy solution to a complex task, but it also shows that we are open to many different branches of industry, e.g. agriculture.
Pick and Place
In these applications, we need to guide a robotic arm to pick up items, most typically from a conveyor belt or from a container. A good example of such an application is picking small stem cuttings and then placing them vertically in pots. Any inaccuracy in detection may result in planting them too deep or upside down, and the cuttings will not form roots. Our deep learning tools make it possible to quickly locate the desired parts of the plants and provide the accurate results required for this operation.