
Roll-2-Roll® Sensors: How it works and what are the applications?
June 19, 2020

The webinar will cover the sensing and measurement technologies used in Roll-2-Roll® Sensors.

The presentation will cover:

  • the fundamental working principle of the patented fiber optic technology
  • how it differs from conventional sensors
  • the benefits of the fiber optic technology
  • applications of the fiber optic sensor technology for sensing and measurement, such as edge detection, width measurement, registration mark detection, flag detection, void/hole detection, tear detection, etc.
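One of the core ideas covered in the webinar is edge detection on a line image: scan a 1-D pixel intensity array from one side and report the first dark-to-light transition, with a threshold derived from the image itself so the same code copes with different brightness levels. The sketch below is a hypothetical Python illustration of that concept only — the function name and sample data are invented for the example, and this is not Roll-2-Roll's actual algorithm.

```python
def detect_edge(scan, from_right=False):
    """Find the first dark-to-light transition in a 1-D intensity scan.

    The threshold is derived from the scan itself (midpoint between the
    darkest and brightest pixel), a toy version of dynamic thresholding,
    so the same code works at different brightness levels.
    Returns the pixel index of the edge, or None if there is no contrast.
    """
    lo, hi = min(scan), max(scan)
    if hi - lo == 0:
        return None  # flat image: no contrast, so no edge to report
    threshold = (lo + hi) / 2.0
    # Scan direction depends on which edge of the web we are looking for.
    indices = reversed(range(len(scan))) if from_right else range(len(scan))
    for i in indices:
        if scan[i] >= threshold:
            return i  # first pixel at or above threshold from the chosen side
    return None


# A bright web in the middle of a dark field: left edge at pixel 2,
# right edge at pixel 5.
scan = [2, 3, 88, 90, 91, 89, 4, 3]
print(detect_edge(scan))                   # left edge
print(detect_edge(scan, from_right=True))  # right edge
```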

Transcript


…talk a little bit about the different sensors that are available in the market, how our sensor is different, and what the applications for our sensor are in this industry.

To start off, here are some of the topics we cover most often when we look at edge sensing. A lot of the sensors we use are optical sensors, but the concepts covered here don't apply only to optical sensors — they apply to sound or any kind of wave. The basic concepts we'll look at are reflection, absorption, and transmission, and how conventional sensors use these concepts to determine web position. While we go through this presentation there will also be some polls; if you can answer those, that would be really helpful for us so that it can guide us through this process. I also have Carlo and Pedro manning the polls and the Q&A section, so any time you have a question, feel free to put it there and they will address it.

In terms of concepts, let's take a look at reflection. We have a light source incident on an object — what can that light do? Basically, three main things can happen. The light can be transmitted through the object: most of the time, if the material is optically transparent or translucent, some amount of the light (or radiation, for that matter) goes through the object. Some amount of light is absorbed: the photons interact with the object and get absorbed into it, and that energy may be dissipated as heat or even fluorescence. And some amount of light is reflected: whatever light is incident on the object and returned back to the same medium is called reflection. Here the object is one medium — it could be a solid, a liquid, whatever it may be — and the incident light travels in a different medium, which could be air or vacuum. So reflection is returning the light back to the original medium, transmission is the passage of light through the medium, and absorption is the absorption of the light into the medium.

Reflection and transmission typically do not change the light's frequency — by frequency, think color. If you have red light and it goes through an object or gets reflected, the color typically does not change; there are special cases where it will, but usually it doesn't. With absorption, the light is absorbed into the material, and sometimes the absorbed light changes the state of the material and causes further emission — that's called fluorescence. You see this with an ultraviolet light source: certain objects absorb light in the ultraviolet range, get to an excited state, and then emit light at a different frequency.

Most optical sensors use these three phenomena to figure out how to detect the web, based on the amount of light that is transmitted, reflected, or absorbed. There are terms for those: reflectance is the ratio of reflected light to incident light, transmittance is the ratio of transmitted to incident radiation, and absorptance is the ratio of absorbed light to incident radiation. Reflectance and transmittance are usually used to define the optical properties
of a material. If a material is optically transparent, that means it transmits a lot of light through the object and reflects only a little bit; that's what it means to be optically transparent. On the other hand, if a material is opaque, the object reflects most of the light and transmits very little, if any, of the light falling on it.

In almost all cases, the color we perceive on an object is due to selective absorption. If you see an object as green, that object absorbs all wavelengths of light except green, which is reflected out — and that's why we see the object as green. Now, all of these depend upon the wavelength of light; certain objects behave differently at different wavelengths. A great example is X-ray, which is also radiation: X-rays may pass through objects that are opaque in the visible spectrum but not opaque at X-ray wavelengths. Likewise, a transparent object may not really be transparent at a different wavelength, such as infrared — that's one of the reasons why our infrared sensor can work even with transparent webs and materials. So these are the basic concepts of how light interacts with an object, and most sensors use these principles.

One of the most common sensors used in edge detection is called an opposing-beam sensor. Essentially, the sensor has two sides, a transmitter and a receiver. The transmitter sends some kind of signal — it could be light, sound, or even air; for our purposes we'll limit it to light and sound. That signal is transmitted or absorbed by the material, and the remaining signal falls on the receiver. This sensing principle works well when the material absorbs the light or the signal, like what we have shown here: none of the light falling on this web material is transmitted, so you can infer the position by looking at how much light you get. If the receiver gets 50% of the light, the web is at 50% of the viewing area. It's a really simple sensor technology, and it goes by different names: a fork-style or U-shaped sensor, or a blocking/unblocking sensor.

Now, like I said, most often this sensor technology works, but there are cases when it doesn't work well. As you can imagine, whenever the web material starts to have high transmittance, the signal leaks through it and the sensor provides a different output. If you compare these two web materials, they are at exactly the same location, but because of the properties of the material you get different transmittance, and that really affects the sensor output — that's the main shortcoming of these types of sensors. How does it relate to our cases? If the material is porous, there is no solid there to prevent the light or sound from going through, and that's a problem. If the opacity is lower, that's a problem too. This means the sensor requires calibration, and since this is an inferred measurement — you get a 50 percent signal, so you assume you're at the 50 percent location — it is not an absolute measurement. You really don't know the actual position unless you calibrate, and it's not accurate. There's also spatial ambiguity with this kind of sensor technology; I'll talk about what I mean by that in the next slide.

Now, there are other types of sensors that try to avoid the
issue with transmittance. They're called retro-reflective sensors: you have the transmitter and receiver on the same side, with no receiver on the opposite side. The light is incident on a reflector — a special material — which reflects the light back to the receiver. The portion of the light that goes through the material is attenuated on the way out, falls on the reflector, is reflected back toward the receiver, and is attenuated again on the way back — so the light that goes through the material is attenuated twice as much as if you had a transmitter and receiver on the two sides. The main advantage of this kind of system is that you are essentially halving the transmittance by making the light go back and forth through the same material twice. So if you have a clear material, you might be able to increase the sensitivity of your system and use the retro-reflective technique to detect the web.

The other advantage is that the transmitter and receiver are on the same side and the reflector can be much farther away, which essentially makes this a one-sided system. That has other benefits: you don't have a constrained fork — you don't have to worry about the throat of the fork if the web plane changes — and it's usually compact. Most often these retro-reflective sensors are used for presence/absence detection, and some sensors used for nonwoven applications add a special mirror arrangement so the light passes through a low-basis-weight nonwoven a couple of times before the signal comes back. So that's a retro-reflective sensor.

The main thing with conventional edge sensors is that they are a simple technology when you have low transmittance, but as soon as you start looking at materials with special properties, they have issues — especially with engineered materials like nonwovens and really clear films. The material properties affect transmission and absorption, and hence the accuracy of the sensor is lost.

I talked about spatial ambiguity. Say the receiver is a single element — maybe an ultrasonic receiver with one element, or a photocell. If you have a small object that is much smaller than the viewing area of the sensor, then no matter where that object sits within the range, there's no way the sensor can tell you its position. It can tell how much signal it attenuates, but it cannot tell the position — that's what's called spatial ambiguity with these kinds of sensors. This is especially important when we look at threads and similar objects; with our sensor, that's not an issue.

The other thing is that these sensors cannot detect surface characteristics. The whole web either blocks or transmits, and since there's no way to tell whether there's a feature or structure on the web, there's no way this sensor technology can detect what it is. Another issue is image saturation: most often the receiver is saturated by light, so if you have a small object, the photocell or pixel may be saturated, and detecting a small object or feature becomes a problem. Then finally, the sensor resolution depends upon the range of the sensor. If it's a single receiver element and you want to make the sensor 2 inches wide, the resolution it can detect is going to be limited by the resolution of the
A-to-D converter, or whatever it may be — so as the width or range of the sensor changes, the resolution changes with it, and that's an issue.

Now, instead of using transmittance, can we use reflection off the web to measure the web position? The answer is yes, we can, and that's what we do with our sensor. To look at that, let's go over a few concepts about reflection — scattering, or diffuse reflection — and a couple of things we use in our sensor technology. Our sensor is based on optical fibers and their properties, so we'll look at a couple of fiber-optic concepts as well.

Again, we have incident light falling on an object, and we talked about reflection, transmittance, and absorption. That picture of reflection was a simplified ray representation; the light is really a wave, not a ray. What actually happens is that the light falling on the object scatters everywhere, and then there is a portion of the light that is reflected. The phenomenon where it scatters everywhere is called scattering — the technical term is diffuse reflection — and the ray that follows typical ray optics is called specular reflection. Specular reflection is usually seen on smooth, polished surfaces and mirrors, where the light bounces off at the same angle as the incident angle. Scattering, or diffuse reflection, is seen on most materials that are not polished or smooth — almost every material we deal with exhibits diffuse scattering, and the reason we're able to see color on an object at all is because of that scattering.

The reason the light scatters in all these directions is that the surface has irregularities, and those irregularities sit at different angles. When the incident light falls on a microscopic irregularity, it reflects back at the same angle with respect to that small microscopic surface — but because there are a whole bunch of them, the light is scattered in all different directions. Just as with reflection, there is also diffuse transmittance and regular transmittance when the light goes through, but that's not important for us; the main thing for us is the reflection.

There is another thing that matters when you're installing the sensor: the angle. If you have a light source coming in at a certain angle, the maximum amount of scattering happens close to the normal. The installation recommendations we provide are mainly based on this: if you view the object from a steep angle, you get a lot less light than if you view it close to the normal, and likewise for the light source. This is called Lambert's cosine law, and basically you want to keep that angle as close to normal as possible to get the maximum signal.

So if we use scattering, how do we make a sensor that can do this? Just like the retro-reflective one, we have a light source, a transmitter, and a receiver. The light goes out and falls on the web. If the web material were perfectly reflective, all the light would reflect back exactly at the same angle, normal to the surface, we would have a really good image, and we'd be happy. But that's not what happens: when the light falls on the material it's chaotic — it reflects or scatters everywhere — and we'd have the same issue as the opposing-beam sensor technology, where we have scattering
that depends on the material, on how far the material is from the sensor, on spatial ambiguity, accuracy, and so on. So how do we fix that? Basically, we add a filter — and that's our proprietary technology. When the light falls on the web it scatters everywhere, but the filter selectively couples only the light falling on it at a certain angle, and that light goes back. Now we know where the web is: if the receiver is a pixel array or some other imaging element, we know these elements receive light and those elements stay dark, and if we run an edge detection algorithm we can figure out where the web is. That's the underlying principle.

And how do we create that filter? It's based on fiber optics. Optical fibers have a special property: they accept light only within a certain angle — the cone shown here. If light falls on the fiber at an angle steeper than that cone, it will not go in. So the fiber filters out all the light except what arrives within a certain angle, and that's how we do it. That's the basic fiber-optic technology we have.

How does it work? To give you an illustration: a light source illuminates the web and the light scatters everywhere, but only the light directly underneath the fiber gets coupled into the fiber — all the other light coming in at an angle does not. That's how we determine the web position. Again, this is a true and absolute web position measurement, because the fiber has to align with the web edge to pick up the light.

Now, the light source can be at any angle, as we saw before, but we want to keep the scattering at its maximum, so we keep the light source as close to the receiver as possible to take advantage of Lambert's cosine law. We have a light source — it could be a laser, an LED, or anything like that — it falls on the surface and scatters everywhere. This is a side view of the optics and this is the front view of the same thing: the light falls onto the fiber optics, and a photodiode or pixel positioned behind the fiber optics picks up that light. That's how we determine the position of the web.

I mentioned there is a pixel or photodiode behind the fiber. What we do is capture the image and then do real-time image processing to determine the edge of the material. This is an important part of our sensor technology — not just how the sensor works, but how we process the image. The algorithm adapts to a few things we'll talk about later: how to adjust for intensity changes, for focal length, and so on.

So essentially our sensor has four main components. One is the light source — we typically use an LED source; it could be visible, broad-spectrum, infrared, or even UV. Then there is the optics that filters out light coming in at all the different angles — that's our patented fiber-optic technology. Then there is an imaging element behind the optics, usually a linear pixel array, anywhere from 768 pixels to about 14,000 pixels, arranged really close together at 63.5 microns — about five thousandths of an inch. And finally there is the image processing algorithm, which actually does the edge detection.

So let's see how that works. We have different samples of web that we can present to our sensor. As you can see, some of them are opaque, some are completely optically transparent,
some are porous with voids in them — and irrespective of any of that, our sensor provides a really reliable and highly accurate web position measurement. There's a published article you can find in Converting where the accuracy was over 99.3 percent with any of those materials, and we didn't do any calibration — that's the main thing with our sensor technology.

To give you an idea, we saw the different images. The light falling onto the imaging element has varying intensity depending upon the material. If we use a color scale where blue is completely dark and red is completely light, with a spectrum in between, you can see that these edges are not as sharp as what we illustrated in the working principle. In spite of that, this is the output of the edge detection algorithm: it goes in and says, this is the edge for the black nonwoven web, this is the edge for the transparent web, this is the edge for the low-GSM nonwoven, and this is the edge for the burlap.

We don't just have to look at the edge of the material; we can also look at features or patterns on the material, as long as the pattern is parallel to the machine direction. Here, for example, we had a die-cut edge of a label, and we wanted to detect this edge here, here, here, and here — and again we can do that with our sensor technology, and it provides a pretty good measurement.

In terms of the image processing features, the key thing is what we call dynamic thresholding: for every single image we take — every sample — we run our edge detection algorithm, and it adjusts automatically to the lighting, to the focus, and to the material, and figures out where the edge is.

To give you an example, this is an image from our 440 sensor with an infrared light source: black just means there is no light coming back, and white means light is coming back. This is the raw image on top; we presented a burlap web. If we say, "I don't care what's in between, I just need the edges of the web — tell me the left edge and the right edge," then the edge detection algorithm picks up the left edge and the right edge and doesn't care about anything in the middle. Now what happens with lower brightness? Same web, same location, lower brightness — as you can see, these two images are pretty different. With low intensity, again: no problem, no setup, no calibration; the edge detection algorithm automatically adapts and picks it up. And what if you have a really bad image — not only low brightness, but the web also farther away from the sensor? No problem, we can still do it. I'm not sure if you can see this on your screen — it's a really faint gradient — but we can still do it.

Another thing we can do is background suppression. We have a special way of adjusting automatically so that if there is a background in the image at a certain distance from the web, we can still compensate for it. That's what is shown here: this is an example of a clear web with an infrared light source on it and some background behind it, and we were able to automatically adjust and provide the edge positions.

When we do this, we also provide additional information from our sensor. We can provide edge quality feedback — we call it a quality factor — for each edge that we measure, and we can also provide flutter feedback for each edge. The quality factor is
based on an instantaneous measurement, while flutter is temporal — a function of time. Either way, we can provide that flutter information as well.

Another thing: one of the issues with conventional sensors is spatial ambiguity, which is common when you have a really small feature. Our sensor does not have that issue. To give you an example, here is an image of a single thread, two or three millimeters wide, and we can pick up a single thread like that with our sensor. And what if you have multiple threads? Say you're running an operation where you have to guide or bring multiple threads into your process and you want to count them. If you present something like that, we can still do it. As you can see in the raw image, some threads are out of focus or farther away and some are really in focus — it doesn't matter; our algorithm goes in, processes the image, and tells you the edges.

When we do edge detection like this, we can provide you with every single edge position within the sensor, or — because that's a lot of data — we can provide statistics of the whole measurement: the average width of the threads we picked up, the average gap between them, the minimum, the maximum, and so on. We can compress the data into statistics you can use. For example, here there are actually two threads really close together, so the thread effectively became double; if you're monitoring that, we can detect it and tell you that something is coming too close or overlapping. And again, low brightness is not an issue — we can still pick up the edge right there.

I talked about edges, but it doesn't have to be an edge. Here is a printed pattern we're looking at: different colors reflect light at different intensities. An infrared light source is illuminating a colored web, and you can see the sensor picks up these contrasts. In a contrast application we can report not only the location of the edge where the contrast changes, but also the intensity there — that's how we can teach our sensor to follow a particular color or line.

We have also done sub-pixel edge detection, which gets us to roughly sixteen microns of resolution. Our standard imaging element has a 63.5 micron pixel pitch, and we can resolve to 1/4 or 1/8 of that for sub-pixel resolution.

So those are the features of our sensor: the sensing principle, how we do the image processing, and what the raw and processed images look like. Now let me quickly go over some of the applications. First and foremost, edge detection — if we want to do edge guiding, we need edge detection. Then width measurement, contrast position measurement and detection, and contrast pattern following — pretty much the same kind of thing. Then thread counting, which I showed a little of. We can also detect a mark on the web, a flag on the web, what we call coverage — for lack of a better term, the optical density of the web — and finally stack measurement. Let's go over some of these really quick.

Edge detection: since our
sensor is one-sided, you can have a web coming from left to right, like in this case, or from right to left — it doesn't matter. All we need to know is which direction to scan for the edge. If we're looking at the right edge of the web, we scan from right to left; if we're looking at the left edge, we scan from left to right. What we look for in edge detection is the first transition from dark to any kind of light — that's how we do it.

I've got a short video here — hopefully it plays properly; these videos are also on our website. The idea is to show that we can present any web material — this is showing the position of the web — and without any setup or calibration we get the true, absolute measurement of the web. This is a highly porous nonwoven; the other extreme example is a really porous mesh-type web. Even with a web like that, we can accurately get the leftmost edge, or whatever edge we want. That's what this video shows.

The application examples where we provide the most value to our customers involve edge detection with frequent material changeover. If you're going to detect a paper web all day every day — yeah, we can do that too, but that's not where our value is. It's with nonwovens or any kind of engineered material that you change quite frequently, where you don't know the opacity and porosity; our sensors work great in those applications. Or with width changes: like I mentioned, we can go from about 48 millimeters up to about 900 millimeters in sensing range without losing any resolution, which means you can run different-width webs without needing to move the sensor — another place where we have a pretty good advantage. Challenging materials, of course — that's our specialty; if you have something really complex, we can handle it. And challenging environments, specifically vacuum — that's where we've had a lot of success, especially detecting clear film inside a vacuum environment.

The advantages are that we don't need any setup or calibration, and it's pretty cost effective. In terms of connectivity, we have industrial Ethernet — this could be EtherNet/IP or PROFINET — and we'll also have EtherCAT and other options.

Some unique applications: like I said, clear web inside a vacuum is pretty unique, because the most common current technology for detecting clear film is ultrasonic, and ultrasonic doesn't work inside a vacuum. Metals inside vacuum is another one. Since our sensor is single-sided, we have successfully used it in abrasive environments, where web flutter would typically knock out a fork-style sensor — we can position our sensor farther away from the web because our algorithm adapts to that. You can also do width measurement on low-basis-weight nonwovens, which is usually a problem for current sensors — you need to do a lot of filtering — or on an edge that is ragged. And we can detect ridges on a web: some kind of extruded plastic, or anything with a three-dimensional structure you want to follow.

To give you an idea: the sensor itself has just the optics, the imaging element, and the light source; it
doesn't have the processing, so you need a sensor and a controller, and then there are different output options. For edge sensing we can usually do analog output if you want to connect to legacy systems that use analog sensors; otherwise we prefer to provide EtherNet/IP or PROFINET as the option for edge output.

Now the other common application is width measurement. With a narrow sensor, obviously, if the web width changes, you're going to have an issue. That's where we have our value: we can provide a wider sensor, so you don't have to mount the sensors on linear or mechanical positioners, which are cost prohibitive and introduce inaccuracies; you avoid all of that by using a wider sensor. When the web width changes, the sensor doesn't move, so it can still detect the web width. And if the sensor is wide enough, we can measure both edges of the web with a single sensor; with one sensor doing all of that, you don't have to worry about alignment, the distance between two sensors, and so on. This is why we are moving to wider and wider sensors every year, and we are about to release our 900 millimeter wide sensor this summer.

Most of the time these are shrink-sleeve applications. This is one of the videos from our distributor in Mexico, but as you can see, apart from providing accuracy and resolution in width measurement, it's a safety thing: you don't have to have somebody go in and measure the web while it's moving to make sure it's at the correct width; an automated system can do that. So here we're showing our 440 mm sensor looking at both edges of the web in a shrink-sleeve, folded-web application.
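With a sensor wide enough to see both edges, the width is simply the distance between the two edge positions. A minimal sketch under an assumed pixel pitch (the real sensor's resolution and API are not shown here):

```python
PIXEL_PITCH_MM = 0.2  # assumed sensor resolution, mm per pixel

def web_width_mm(left_edge_px, right_edge_px, pitch_mm=PIXEL_PITCH_MM):
    """Web width from the two edge positions seen by one wide sensor.

    Counts the illuminated pixels between the edges (inclusive) and
    converts to millimeters. Raises if either edge was not detected.
    """
    if left_edge_px is None or right_edge_px is None:
        raise ValueError("edge not detected")
    return (right_edge_px - left_edge_px + 1) * pitch_mm

# Edges at pixels 40 and 59 -> 20 illuminated pixels -> 4.0 mm
width = web_width_mm(40, 59)
```

Since one sensor sees both edges, there is no sensor-to-sensor alignment term in this computation, which is the advantage described above.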
Now we can do the same thing for other applications. A common one: if you have blown film and you want to measure the lay-flat width of the tube, there are lots of different applications there; again, this is one of our videos from our distributor in Mexico. So, like I said, where width measurement is used: any time you are slitting something that requires you to measure the width to verify it, we can be used. Folded webs: again, it could be shrink sleeve or other folded-web applications. Sometimes you also look at the trim width, just to make sure you don't run too far out to the trim. I mentioned blown film; then extrusion, where you can monitor extruded webs like resealable zippers and the like; that's where we have our sensors installed. You can also look at the width of tire rubber, and in that case we actually have it against a backing roller. As long as there is a contrast difference between the edge of the web and the background, it doesn't have to be a clear background or free space; it can be anything. Again, different output options; we typically prefer EtherNet/IP or some other kind of industrial Ethernet.

Apart from looking at the edge of the web, we can look at contrasting features. Basically we take advantage of the fact that different colors scatter light at different intensities. In this case we are showing a white light; white light is typically used when we want to bring out visible contrast, but it doesn't have to be white. It can be UV, so that you get fluorescence, or IR, as long as you are able to see the contrast difference. The light source is mainly used to accentuate the contrasting feature, depending upon the frequency
of the light source, or sometimes the angle as well. Some applications: if you want to look at a coated edge and see how far the coating is from the edge, in lithium-ion battery coating or any kind of coating, even regular paper coating, you can do that. In the paper coating case you would probably use a UV light, since the coating on paper most often has some UV fluorescence to it. Or it can be looking at tabs or similar features on a conveyor; we can do that too. And not only do we detect the contrast, we also provide the position of the contrast, so you can use it for control purposes as well. I've got a couple of videos here, but I'm going to skip them; you should be able to find them on our website.

I think I talked about this quickly: any time you are guiding in a slitter where you want to slit to a printed line, that's an application; it could be in a doctor machine as well, if you want to follow a UV line. Then coating width measurement, and some other simple inspection applications. Here's a quick video of an application where our sensor is installed to look at this bag, which has a clear extrusion while the bag itself is opaque. The idea is to measure the width of the green part and disregard the clear part, and we are able to do that. In this case it was installed with free space at the back, but it doesn't have to be; you can install it with a backing roller.

Here we have an example of our sensor detecting the width of a web supported on a roller, where there is a contrast difference between the roller and the web. It's not clear here, but we use a white light source in this one, and we're actually looking for the edge of the clear film, not the edge of the printing; with that light source we can do that. And this one is width measurement for detecting wrinkles: whenever you
have a wrinkle or a foldover on the web, that reduces the width, and we can use our sensors to measure that. Or, like I said, in a tire width measurement application, you have two sensors mounted looking at the tire. In this case they were a little creative about it: they put in a white background, and the tire itself is black because it absorbs all the light. So in order to detect it we need a white background and a black foreground, which is our web; then we were able to detect it. Those are our common width measurement applications.

I briefly went over this: another thing we do is string, thread, and fiber monitoring. It could be things you use in textiles and waistbands, or carbon fiber applications, where you want to accurately measure multiple webs. In this case we can measure and count and provide statistics: what is the minimum, what is the maximum, what is the gap, and so on, and we can also provide additional information like that. Again, I'm going to skip this video for the sake of time. Like I mentioned, it's elastic thread or textile or carbon fiber, any of those. The main advantage here is that we're replacing camera-based systems with our sensor, and for thread detection we're replacing mechanical pulleys and pressure-sensitive sensors. We can actually stitch together multiple sensors over Ethernet: with our 440 sensors we have installed up to 2 meters wide, connecting four of them together, which is possible.
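The thread and fiber statistics mentioned above (count, minimum and maximum width, gaps) fall out naturally from the same one-dimensional scan: each run of bright pixels is a thread. The following is a sketch under assumed values (threshold, pixel pitch), not the product's actual processing:

```python
def thread_stats(pixels, threshold=128, pitch_mm=0.2):
    """Count bright runs (threads) in a 1D scan and report
    per-thread widths and the gaps between them, in millimeters.

    threshold and pitch_mm are assumed, illustrative values.
    """
    runs = []      # (start, end) pixel index of each bright run
    start = None
    for i, v in enumerate(pixels):
        if v >= threshold and start is None:
            start = i                      # run begins
        elif v < threshold and start is not None:
            runs.append((start, i - 1))    # run ends
            start = None
    if start is not None:
        runs.append((start, len(pixels) - 1))

    widths = [(e - s + 1) * pitch_mm for s, e in runs]
    gaps = [(runs[k + 1][0] - runs[k][1] - 1) * pitch_mm
            for k in range(len(runs) - 1)]
    return {
        "count": len(runs),
        "min_width": min(widths) if widths else None,
        "max_width": max(widths) if widths else None,
        "gaps": gaps,
    }

# Three threads of 5, 5, and 10 pixels, separated by 10-pixel gaps
scan = [0]*10 + [255]*5 + [0]*10 + [255]*5 + [0]*10 + [255]*10 + [0]*10
stats = thread_stats(scan)
```

The same pass that finds edges yields every run boundary, which is why one scan can monitor many threads at once.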
The next application is mark detection. Again, we look for a contrasting mark, which will scatter light differently, and we can detect that. The main difference here is that the mark is along the machine direction; it's not a consistent mark that comes in as a printed pattern or the like, but it doesn't matter, we can still detect it. It can be a visible mark or a UV fluorescent mark. And not only can we detect the mark, we can track the mark in terms of position: we're not just giving you an on/off signal, we can also provide an analog output that tells you where the mark is with respect to the sensor. The main advantage is that with a wider sensor, if the web moves, we can still track the mark.

We can also detect flags on the web. Say you have a web with multiple flags and you are going into a slitter: you remove all the flags, go through another process, and then you want to put the flags back in. How do you do that? You detect a flag and then have some mechanism that puts the flag back in after the process is complete. Even though the flags may be different colors, different weights, different sizes, and the web may be different, we can do all of that without any need for teaching; the sensor adapts to different flags and different webs, and it is not affected by web wander or the web moving back and forth. We can also provide a quality signal in that case.

We can also do tear detection, or coverage. This is just an illustration: say there is a tear in the web and the web actually splits; we can detect that. This is an exaggeration, but even if the tear is small we can still do it. If the web separates and the left and right edges move, that's the easier case to detect; but if the web doesn't move and you just need to see a void or a hole, we can detect that too. All of this is based on how much light we are getting: where there's a void or a hole or a tear, that region is going to be dark and the rest is going to be bright. We basically count how many pixels are bright and how many are dark, and that gives us the coverage; by monitoring the coverage you can detect a hole or a tear or anything like that.
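The coverage idea just described, counting bright pixels versus dark pixels and watching the ratio, can be sketched as follows. The threshold and the defect tolerance are assumed, application-specific settings, not product defaults:

```python
def coverage(pixels, threshold=128):
    """Fraction of pixels receiving scattered light from the web.

    An intact web scatters light onto every pixel it spans; a hole,
    void, or tear leaves dark pixels and lowers the coverage.
    """
    if not pixels:
        return 0.0
    lit = sum(1 for v in pixels if v >= threshold)
    return lit / len(pixels)

def has_defect(pixels, expected=1.0, tolerance=0.02):
    """Flag a hole or tear when coverage drops below the expected
    level. expected and tolerance are assumed example settings."""
    return coverage(pixels) < expected - tolerance

# A small hole: 5 dark pixels out of 100 -> coverage 0.95 -> defect
scanline = [255] * 95 + [0] * 5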
basically integrate how many pixels that are white and how many pixels that are black and that gives us the coverage and by monitoring the coverage you can detect if there's a hole or tear or anything like that one other application that we do is we could do is also product linked measurement basically what we do here is we just rotate the sensor 90 degrees and when the product is moving on a conveyor it can track the edge and it can basically say how wide the product is the main thing with this is that the sensor doesn't have to be longer than the product and then the product can run at different speeds and you don't need an external synchronization signal to be able to pick up that so that's a quick summary of some of the applications that we have just to summarize the advantage of our sensor technology is that it's accurate it's got it provides an absolute measurement it's very simple to use when you compare to like a traditional fork style sensor or maybe even a camera based sensor the resolution is not affected by range in a camera based sensor if you want to have a bigger range you have to install the camera at a certain working distance and when you do that you lose the resolution we don't have to worry about that in our case and it's a one-sided sensor so like in a camera based system you would have to install the camera or the light source the gantry all of those here it's all enclosed fully contained and because the sensor is going to be close to the web we can have a pretty good lighting control and then it's also compact in terms of installation and we have done a lot of customizations for our sensor and these would include processing speeds of up to about thousand Hertz and then we can track multiple edges there's really no limit to the number of edges we can track it's just that the information how do we send it and then sub-pixel approximations I mention that before two to eight X we can do that and we can also control the edge detection through 
Ethernet industrial Ethernet and then light source customization based on the application and then essentially the customisation is bringing us more and more close to a line scan camera based system but at a much lower cost and the difference is that we actually do the customization and we don't just sell the the the sensor and the imaging we actually do the processing as well unlike other line scan based systems so that's essentially my presentation and here is the contact info if you have any questions or any application or any need for sensors please let us know and I do have some questions in the Q&A section so let me just quickly go over some of those and see can your sensor technology by the way please use the Q&A section to ask any questions if you have can your sensor technology also measure a webs thickness across a sheet of material while in motion if if that's true what when a number of threads are being read in addition to placement of threads in their respective spacing can you also read the thickness yeah the the thickness we can't do that right now not in the current state we can look at what we are doing is we are taking a projection of the image a actually it's a 1d projection of the image and since thickness is going to be in the other dimension there's not a way for us to detect both at the same time so we cannot do that what types of industrial Ethernet do you utilize yeah we can use Ethernet IP Pro finet and we can also do either cat and then other other things other protocols okay what is the minimum distance between slit webs that would allow the sensor to measure the slit width of multiple webs parallel to each other okay that's a great question so if the if the if the web can scatter a lot of light then I would say we could do probably one or two millimeters and we can easily have one or two millimeter gap between the web's and be able to detect the slip web width that's something that we could do and then what is the working distance so 
the working distance for the sensor is really depend upon the the amount of light that is being scattered our normal working distance that we specify is about 10 millimeters it's pretty close to the web and by putting the sensor so close to the web we have a pretty good control of the lighting and that way we can we can have a pretty good image now there are applications where we have installed it more than two inches or maybe even three inches from the web surface so it really depends upon the amount of scattering but for anything that we want to do like width measurement and things like that we would we would want that distance to be within our specification so that we can provide a pretty accurate measurement and then the other thing that I forgot to mention about the width measurement is that you can measure the width in a free span or a free space but as we showed if you measure the web width on a roller if you can get the contrast difference and you can measure on a roller then that provides the most accurate measurement because you don't have to worry about focusing and things like that even though our algorithm can compensate for it but it's it's better if we can have the web stabilized let's see um gee so that's probably all the questions that I have I hope you have a had a chance to fill out our polls that be organized and if you haven't done that you can use the tab on the left to be able to see the polls and that would be really helpful for us so that we have a better understanding about what your needs are and how we can help again my name is are Arvind Seshadri and we really appreciate your time today to join us during our webinar if you go onto our website you should you should be able to see most of the stuff that I talked about today in terms of videos they should be on our website and then if you have an application and you would like to talk to us please give us a call or contact one of our distributors or sales reps in your respective region and 
they would be able to help you with any of your needs if there are no other questions then I will conclude this webinar once again thank you so much for your time

A new sensor for web flutter measurement is proposed in this paper. The sensor is based on the principle of scattering of light and directional properties of optical fibers. A collimated beam of light is incident on the web edge and scattered light from the web edge is collected using a linear array of optical fibers. As the web flutters the point of scattering moves. Due to the directional property of the optical fibers, each fiber collects scattered light that is incident on it at certain angles. The motion of the scattering point as the web flutters is directly related to which fibers are being illuminated within the fiber array. The other end of the fiber array is terminated onto a linear array of photodiodes (pixels). Based on which fibers in the array are receiving scattered light and the amount of light received, the transverse displacement (web flutter) of the web can be determined. This paper describes the construction and working of the new sensor for web flutter measurement. Experiments conducted on a web platform show that the sensor is capable of accurately measuring web flutter. The frequency response of the sensor is limited only by the scanning rate of the pixel array and not by the flutter measurement method. A dedicated signal processing circuit can be used to obtain a desired scanning rate, thus, a desired frequency response.
Tags:
Back to Articles
A novel sensor to measure web flutter is proposed in this letter. The sensing principle is based on scattering of light and directionally sensitive coupling properties of optical fibers. A linear array of optical fibers, oriented appropriately, is used to collect light scattered from a web. The flutter amplitude is determined by observing the amount of light transmitted by the fibers in the fiber array. Experiments were conducted to demonstrate the ability of the proposed sensing strategy in measuring web flutter with different kinds of web material.
Tags:
Back to Articles