Teach With Bigshot
Geometric Warping

Warping [1] is any process that changes the shapes portrayed in an image. A warp can be as simple as a stretch or a rotation, or it can be a complex distortion. In every case, the basic idea of warping is to relocate pixels in the input image to different positions in the output image. This pixel relocation, or re-mapping, is usually done according to a mathematical equation.
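To make the idea of re-mapping concrete, here is a minimal sketch of one of the simplest warps, a horizontal stretch. The function name `warp_stretch` and the nearest-neighbor sampling are illustrative choices, not part of the original text. Note that it works backwards: for each output pixel it asks which input pixel it came from.

```python
def warp_stretch(img, sx):
    """Stretch a grayscale image horizontally by a factor sx.

    For every pixel (x, y) in the output, apply the *inverse* mapping
    x_in = x / sx to find which input pixel to copy from
    (nearest-neighbor sampling). img is a list of rows of intensities.
    """
    h, w = len(img), len(img[0])
    out_w = int(w * sx)
    out = [[0] * out_w for _ in range(h)]
    for y in range(h):
        for x in range(out_w):
            # Inverse mapping: locate the source pixel for this output pixel.
            x_in = min(w - 1, int(round(x / sx)))
            out[y][x] = img[y][x_in]
    return out
```

For example, stretching the 2x2 image `[[1, 2], [3, 4]]` by a factor of 2 yields a 2x4 image in which each input column is (roughly) duplicated.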
Warping can be applied either optically or digitally. An example of an optical warp is the bulgy-looking image captured by a wide-angle lens (see the third image from the left in Figure 9). An example of a digital warp is the panoramic image produced from the wide-angle image. Let us now look at how one can digitally undistort a wide-angle image.
Consider a scene photographed using a regular lens, as shown in Figure 10a. The image of scene point P is formed at image pixel P1. Let the distance between P1 and the image center O be r1. Now, let us replace the regular lens with a wide-angle lens, as shown in Figure 10b. Since the wide-angle lens has to squeeze a larger field of view onto the same sensor, it "squishes" the light rays toward the center of the image. This means that the point P, which would normally have been imaged at pixel P1, will now be imaged at a different pixel P2, at a distance of r2 from the center, but along the same direction from O as pixel P1. This type of distortion is known as radial distortion [2].

To correct for this distortion (i.e., to move P2 back to P1), we first need to know the mathematical relation between P1 and P2. A common way to model it is with a polynomial equation:

r2 = r1 + c × r1³

where c is a parameter controlling the amount of distortion. When c = 0, we get a regular perspective image (without distortion), and for positive values of c the image appears bulgy (distorted).

If we know the value of c, correcting the distortion is straightforward. For every pixel in the desired (distortion-free) output image, we know its distance r1 from the image center. Plugging this value into the above equation gives the distance r2 of the corresponding pixel in the distorted (input) image. The color value is then simply copied from the pixel at distance r2 in the distorted image to the pixel at distance r1 in the output image. This process is repeated for all the pixels in the output image.
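The per-pixel correction loop described above can be sketched as follows. This is an illustrative implementation under our own assumptions (the function name `undistort`, nearest-neighbor sampling, and square-pixel geometry), not the code behind the Bigshot demo:

```python
import math

def undistort(img, c):
    """Correct radial distortion in a grayscale image.

    img: list of rows of intensities; c: the distortion parameter.
    For each pixel in the output, compute its distance r1 from the
    image center, find the corresponding distance r2 = r1 + c * r1**3
    in the distorted input, and copy the value from the input pixel
    at distance r2 along the same direction from the center.
    """
    h, w = len(img), len(img[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = x - cx, y - cy
            r1 = math.hypot(dx, dy)
            if r1 == 0:
                out[y][x] = img[y][x]  # center pixel maps to itself
                continue
            r2 = r1 + c * r1 ** 3
            scale = r2 / r1
            # Source pixel in the distorted input (nearest neighbor).
            xs = int(round(cx + dx * scale))
            ys = int(round(cy + dy * scale))
            if 0 <= xs < w and 0 <= ys < h:
                out[y][x] = img[ys][xs]
    return out
```

As a sanity check, c = 0 leaves the image unchanged, since then r2 = r1 for every pixel.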
The demo in Figure 11 shows distortion correction in action. The slider controls the distortion parameter c. When c = 0, the output image is identical to the input image; as we increase the value of c, the amount of distortion is reduced.
References