Computers doing it for you.

Our last blog post is a bit of a tangent, as it isn’t really about Non-Destructive Testing, although it’s a topic I really wanted to include in our blog series: computer vision.
In a way it is related: everything is being automated, and NDT is no exception to that rule.

If you’re a reader from our academic institute (and studying electromechanics), you’re currently getting lessons and lab sessions about vision systems. In my opinion, these lessons only touch the subject superficially at a fundamental software/hardware level and don’t give a true reflection of what’s currently possible.

Within industry you might immediately think of quality control: checking cookies for visual defects, or whether every bottle has a cap. Hollywood, on the other hand, might have you believe anything is possible with AI and vision systems, like Iron Man’s “Jarvis” or what the Terminator is capable of. So, what is the actual state of the art in vision systems today?

From our own experience: it’s not as easy and straightforward as you might think. We’ve been experimenting with basic object recognition, and that field is still very young and in full development. If you want to build a complex vision system today, prepare to deal with a lot of scattered open-source projects and bare-bones binary kits, because there’s no truly easy, all-in-one package out there yet.


Last year Google (who else) made some huge strides in object recognition, which they showcased at ImageNet’s visual recognition challenge. By using something called a “neural network”, they can now swiftly and accurately recognize random (unknown) objects within various scenes. That’s pretty amazing, since most vision systems out there today require you to “teach” them the specific object to look for first, and even then scaling, rotation and lighting can throw everything off. In a sense, they’ve achieved the basic object-recognition skills of a three-year-old for AI systems, which might sound lowbrow but certainly isn’t! It’s worth reading up on the competition and what it really implies. With the prospect of self-driving cars, you’d kind of expect them to be experts in vision systems anyway, right?
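To make the contrast concrete, here is a minimal sketch in plain Python/numpy of the classic “teach it the specific object first” approach: template matching by normalized cross-correlation. The function name and the toy 8×8 scene are our own illustration, not code from any particular vision toolkit.

```python
import numpy as np

def match_template(image, template):
    """Slide `template` over `image` and return the top-left position
    with the highest normalized cross-correlation score (max 1.0)."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

# A toy 8x8 "scene" with a bright L-shape pasted at row 2, column 4.
shape = np.array([[1., 0., 0.],
                  [1., 0., 0.],
                  [1., 1., 1.]])
scene = np.zeros((8, 8))
scene[2:5, 4:7] = shape

pos, score = match_template(scene, shape)
print(pos, round(score, 2))  # the pasted shape is found at (2, 4) with score 1.0
```

This works perfectly as long as the object in the scene is pixel-for-pixel the thing you taught it. Change the object’s size in the scene and the fixed-size template no longer lines up, which is exactly the scaling/rotation/lighting brittleness described above.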

The video below is a TED talk that discusses this neural network in a little more detail. Basically, they fed a supercomputer a huge amount of labeled image data, from which it learned to determine what it sees in an image. Of course, object recognition is only a small part of what vision systems encompass; discussing everything would require a blog of its own. But seeing another leap towards a ‘seeing’ AI that could in the future truly understand scenes and relations is a pretty amazing thing.
Paraphrased from the talk:

Like a little child, the computer doesn’t just say “cat” and “bed”, it says it’s a cat lying on a bed.
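The “huge load of labeled image data” idea can itself be sketched in a few lines: a neural network is just layers of weighted sums pushed through nonlinearities, and training means repeatedly nudging the weights to reduce the error on labeled examples. Below is a deliberately tiny numpy sketch using made-up 3×3 “images” (a vertical vs. a horizontal bar) in place of real photos; it shares only the principle, not the scale, of Google’s actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake "labeled image data": 3x3 patches. Label 0 = vertical bar, 1 = horizontal bar.
def make_sample(label):
    img = rng.normal(0.0, 0.1, (3, 3))      # a little pixel noise
    if label == 0:
        img[:, 1] += 1.0                    # bright middle column
    else:
        img[1, :] += 1.0                    # bright middle row
    return img.ravel()

X = np.array([make_sample(i % 2) for i in range(200)])
y = np.array([i % 2 for i in range(200)], dtype=float)

# One hidden layer of 8 units, sigmoid activations.
W1 = rng.normal(0, 0.5, (9, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(1000):                       # plain full-batch gradient descent
    h = sigmoid(X @ W1 + b1)                # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()        # predicted P(label == 1)
    g_out = (p - y)[:, None] / len(X)       # cross-entropy output gradient
    g_h = g_out @ W2.T * h * (1 - h)        # backpropagate to hidden layer
    W2 -= h.T @ g_out; b2 -= g_out.sum(0)
    W1 -= X.T @ g_h;   b1 -= g_h.sum(0)

acc = ((p > 0.5) == (y == 1)).mean()
print(f"training accuracy: {acc:.2f}")
```

With only two clean patterns this converges in seconds on a laptop; the leap Google made is doing the same kind of thing at the scale of millions of photos and a thousand categories, which is where the supercomputer comes in.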

So back to NDT: if QC is fully automated and you can’t even see if the AI was wrong after the check, would you still trust it?


One thought on “Computers doing it for you.”

  1. I believe this is closely related to my own thesis, which is about augmented reality. The possibilities of technology being able to detect things on its own are pretty much endless.

    A topic we made a lot of posts about is that this recognition technology can also be used to identify people. What if people do not want to be recognized? Then you are invading their privacy rights.

    The same can be said for the things these people own. While the technology is really great, one should be aware of the laws surrounding it.

