Getting Started

What happens if there is a power failure? Does it retain the settings?

Yes. The settings are saved on an internal hard drive, and they can also be easily backed up via LAN access.

What are some successful applications realized with the Eyebot system?

Some examples are on www.Sightech.com. In short, if you can see (image) the defect through a video camera, then Eyebot should be able to see the defect. Please consult Sightech with your specific needs.

What is the principle of operation of the learning algorithm (self-teaching through neural network classification)?

Eyebot learns up to 13 million features per second. The basic principle is that Eyebot learns small features in relationship to other features. During the learning process, Eyebot learns all the features it sees. In inspection (RUN) mode, Eyebot complains when it sees features or characteristics that it has never seen before.
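
As a rough illustration of that principle (a minimal sketch under our own assumptions, not Sightech's actual implementation): everything seen during LEARN goes into a set of known features, and at RUN time any feature outside that set is flagged. The patch-hashing extractor below is a toy stand-in for whatever features Eyebot really learns.

# Minimal novelty-inspection sketch (toy example, not Eyebot's actual algorithm).

def extract_features(frame, patch=4):
    """Toy feature extractor: treat each small pixel patch as one 'feature'.
    (An assumption for illustration; Eyebot's real features are not documented here.)"""
    h, w = len(frame), len(frame[0])
    feats = set()
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            feats.add(tuple(tuple(frame[y + dy][x + dx] for dx in range(patch))
                            for dy in range(patch)))
    return feats

class NoveltyInspector:
    def __init__(self):
        self.known = set()                      # features seen during LEARN

    def learn(self, frame):
        self.known |= extract_features(frame)   # remember everything seen

    def inspect(self, frame):
        # Features never seen during learning are reported as defects.
        return extract_features(frame) - self.known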

What is the maximum pixel memory size that can be stored into memory?

Eyebot does not store pixels – it learns to inspect by understanding features instead.

What is the tolerance for a moving object? What is the speed variation?

Eyebot can learn moving objects at a pace of over 1,400 per minute. It helps to train Eyebot at a speed similar to the one at which it will inspect the objects. The speeds do not have to match exactly, but the more similar they are, the better. The faster the camera's shutter speed, the faster objects can move.
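
As a back-of-the-envelope illustration of the shutter-speed point (the numbers are assumptions, not Eyebot specifications): blur during one exposure is roughly object speed times exposure time, so a faster shutter tolerates faster-moving parts.

# Rough motion-blur estimate; example numbers are illustrative only.

def motion_blur_mm(speed_mm_per_s, exposure_s):
    """Approximate distance an object moves during one exposure."""
    return speed_mm_per_s * exposure_s

print(motion_blur_mm(500, 1 / 1000))   # 0.5 mm of blur at a 1/1000 s shutter
print(motion_blur_mm(500, 1 / 4000))   # 0.125 mm at a 1/4000 s shutter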

What is the best way to demonstrate the features of the Eyebot? Any recommendation?

Look on our web page under Applications, Eyebot Application Notes, and Eyebot Documentation to get an idea of some of the problems we solve and demo; the Eyebot section has some more information as well. Eyebot ships with reasonable default settings to begin with. Do a demo that shows off Eyebot's self-learning ability: a bottle demo works well, or have it learn and inspect raisin boxes. Learn some good bottles or boxes, switch to inspect mode, and Eyebot will see defects on the bottles or boxes. The learning takes just a few minutes.

When will Eyebot make the decision?

Eyebot can make a decision on every video frame. Alternatively, it can carry forward a moving average of n video frames and base its frame-to-frame decision on that moving average. The decision speed is selected by the user as a choice among Strobe, Instant, Fast, Medium, and Slow under Decision → Speed.
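
For the moving-average option, here is a minimal sketch of the idea (the window length and threshold below are illustrative assumptions, not Eyebot settings): a per-frame defect score is averaged over the last n frames, and the reject decision is taken on that average rather than on a single frame.

from collections import deque

class MovingAverageDecision:
    """Average a per-frame defect score over the last n frames before deciding
    (n and threshold are illustrative values, not Eyebot defaults)."""
    def __init__(self, n=5, threshold=0.5):
        self.scores = deque(maxlen=n)
        self.threshold = threshold

    def decide(self, frame_score):
        self.scores.append(frame_score)
        average = sum(self.scores) / len(self.scores)
        return average > self.threshold         # True means "reject"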

If there is no strobe (trigger input), how does the Eyebot learn as the shampoo bottle passes by the camera?

If there is no strobe, then Eyebot will learn EVERYTHING on the screen, including your hand (if you stick it into the camera's view), the background, and everything else in the viewing area.

It takes about 1-3 minutes to learn the shampoo bottles, assuming low UVT (see manual) and decent lighting (we use $19 halogen lamps).

Eyebot can spot very small defects.

What is the minimum distance between two consecutive objects?

Depending on the mode, none. Eyebot can learn a continuous stream (such as fruits and vegetables).

Is Eyebot suitable for mark inspection, say, inspecting marks on an IC?

As you probably know, Eyebot does not do OCR. It does not do Bar Code reading either.

The general rule of thumb with Eyebot is: if you can see the defect on the screen (through the eyes of the camera), then Eyebot should be able to see it too. If the difference between a good mark and a bad mark is really obvious, then Eyebot should definitely be able to do it. If Eyebot is not able to see the gross defect, then you should probably troubleshoot another part of your setup.

