Our last blog is an odd tangent of sorts, as it isn’t really about Non-Destructive Testing, although it’s something I really wanted to include in our blog series: Computer Vision.
In a way it is related: everything is being automated, and NDT is no exception to this rule.
If you’re a reader from our academic institute (and studying electromechanics), you’re currently receiving lessons and lab sessions about vision systems. In my opinion, these lessons only touch on the subject superficially at a fundamental software/hardware level and don’t provide a true reflection of what’s currently possible.
Within industry you might immediately think of quality control: checking for visual defects on cookies, or whether all bottles have a bottle cap. On the other hand, Hollywood might have you believe anything is possible with AI and vision systems, like Iron Man’s “Jarvis” or what the Terminator is capable of. So, what is the state of the art in vision systems today?
From our own experience: it’s not all as easy and straightforward as you might think. We’ve been messing around with basic object recognition, and boy, that stuff is all very, very young and in full development. If you want to build a complex vision system today, prepare to deal with a lot of scattered open-source projects and bare-bones binary kits, because there’s no truly easy and straightforward package out there, yet.
Last year Google (who else) made some huge strides forward in object recognition, which they showcased at ImageNet’s visual recognition challenge. By using something they call a “neural network”, they can now swiftly and accurately recognize random (unknown) objects within various scenes, which is pretty amazing, since most vision systems out there today require you to “teach” them the specific object to look for first. Even then, scaling, rotation and lighting can mess everything up. So in a sense they’ve given AI systems the basic object recognition skills of a three-year-old, which might sound a little lowbrow but certainly isn’t! You can read about the competition and what it really implies here: http://googleresearch.blogspot.be/2014/09/building-deeper-understanding-of-images.html . With the prospect of self-driving cars, you kind of expect them to be experts in vision systems anyway, right?
The video below is a TED talk that discusses this neural network in a little more detail. Basically, they fed a supercomputer a HUGE amount of labeled image data, from which it can now determine what it sees in an image. Of course, object recognition is only a small part of what vision systems encompass; discussing everything would require a blog of its own. But seeing another leap toward a ‘seeing’ AI that can one day truly understand a scene and the relations within it is a pretty amazing thing.
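To give a feel for what “learning from labeled image data” means, here is a deliberately toy sketch: a nearest-centroid classifier on tiny synthetic “images” (just lists of pixel brightness values). All the data, labels and function names here are made up for illustration; real systems like Google’s learn millions of parameters in a deep neural network, but the basic idea of training on labeled examples and then classifying new input is the same.

```python
# Toy illustration of supervised learning: a nearest-centroid classifier.
# Each "image" is a list of pixel brightness values; labels and data are
# entirely synthetic -- a stand-in for the deep networks discussed above.

def train(labeled_images):
    """Compute one average image (centroid) per label."""
    centroids = {}
    for label, images in labeled_images.items():
        n = len(images)
        # Average each pixel position across all examples of this label.
        centroids[label] = [sum(px) / n for px in zip(*images)]
    return centroids

def classify(centroids, image):
    """Return the label whose centroid is closest to the image."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: sq_dist(centroids[lbl], image))

# Tiny 4-pixel "images": bright ones labeled "cat", dark ones "bed".
data = {
    "cat": [[0.9, 0.8, 0.9, 0.7], [0.8, 0.9, 0.8, 0.9]],
    "bed": [[0.1, 0.2, 0.1, 0.2], [0.2, 0.1, 0.2, 0.1]],
}
model = train(data)
print(classify(model, [0.85, 0.8, 0.9, 0.8]))  # -> cat
```

Note how the hard part is hidden in the data: with only four pixels and two labels this is trivial, but scale it up to millions of real photos and thousands of categories and you need the neural networks and supercomputers the talk describes.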
Paraphrased from the talk:
Like a little child, the computer doesn’t just say “cat” and “bed”; it says it’s a cat lying on a bed.
So back to NDT: if QC is fully automated and you can’t even verify whether the AI was wrong after the check, would you still trust it?