Fisheye Optic Center Calibration







Wei-Siang Wang  |  EnGenius Technologies







ABSTRACT — With the rapid evolution of the IP camera industry, the 360° camera has become one of the more popular IP camera types. It provides a complete, surround view of an area, with a fisheye lens supplying the very wide angle of view. However, a fisheye lens captures warped images: the hemispherical scene is projected onto a flat image plane, so the resulting images suffer from severe distortion. Suitable optic parameters must be found to correct this distortion. This paper discusses a method for finding the optic center and the radius of the fisheye image circle. Based on experimental results, the optic center and the radius can be found effectively by the proposed scheme.



I – Introduction

The fisheye lens is a wide-angle lens that captures a warped image with a distorted appearance. Users are also able to flatten, or dewarp, the image into a rectilinear or panoramic view. The viewing modes available with the chip include:

“O” for “Original” view: This is the original, warped image captured by the camera.

“P” for “Panoramic” view: This is the basic, panoramic view which has been dewarped.

“R” for “Regional” or “Rectilinear” view: This view shows a single region, roughly equal to one quadrant of the overall image, and supports pan, tilt, and zoom operations through the camera’s PTZ feature.

For example, a common use of 1O dewarping is shown in Fig. 1: the 1O image is dewarped into the 1P image. It is critically important for the chip to obtain suitable optic parameters for dewarping.

There are three optic parameters in the chip settings: RADIUS, HSHIFT, and VSHIFT. We can derive these parameters from the circle position, which is obtained by the proposed scheme.
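The exact register encoding is chip-specific and is not given in this paper. Purely as an illustration, if HSHIFT and VSHIFT are taken to be the pixel offsets of the detected optic center from the geometric image center, the mapping might look like the following sketch (the function name and semantics are assumptions):

    def chip_parameters(cx, cy, radius, width=640, height=480):
        """Hypothetical mapping from the detected circle (cx, cy, radius)
        to the chip's RADIUS/HSHIFT/VSHIFT settings: the shifts are
        assumed to be pixel offsets from the image center."""
        hshift = int(round(cx - width / 2))
        vshift = int(round(cy - height / 2))
        return int(round(radius)), hshift, vshift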



Figure 1: 1O dewarping. (a) 1O mode, (b) 1P mode.









Figure 2: Fisheye image circle. (a) perfect case, (b) practical case.






II – The Proposed Scheme

Based on the situations mentioned above, the circle position is fetched in several stages, described as follows:

A. Generating an image with a clear boundary in 1O mode

To generate an image with a clear boundary in 1O mode, we cover the camera lens with a semi-opaque mask. Note that a sufficient light source must be provided above the semi-opaque mask. Fig. 3 shows a simulated installation for this stage.



Figure 3: Covering the camera lens with a semi-opaque mask in a simulated installation.






As can be seen in Fig. 4, this produces an image with a clear boundary in 1O mode. Using this property, we can select a suitable threshold to detect the circle boundary.
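The paper does not specify how the threshold is chosen; one simple heuristic (an assumption, not necessarily the author's method) is to place it midway between the dark surround and the bright masked circle:

    import numpy as np

    def pick_threshold(img):
        """Heuristic threshold: halfway between the dark surround and the
        bright circle, estimated from the grayscale histogram extremes."""
        lo = np.percentile(img, 5)    # dark pixels outside the circle
        hi = np.percentile(img, 95)   # bright pixels inside the masked circle
        return (lo + hi) / 2.0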



Figure 4: An image that has a clear boundary in 1O mode.






As can be seen in Fig. 5, we need to obtain the coordinates of points a and b so that we can calculate the center of the circle.



Figure 5: The positions of points a and b.






B. Smoothing the target region

The Gaussian smoothing operator [1] is a 2-D convolution operator that is used to blur images and remove detail and noise. Fig. 6 shows a suitable integer-valued convolution kernel that approximates a Gaussian with standard deviation σ = 1. By suppressing noise and revealing the underlying pixel values, smoothing improves the subsequent boundary detection and reduces false positives.



Figure 6: Discrete approximation to a Gaussian function with standard deviation σ = 1.
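Since Fig. 6 is not reproduced here, the sketch below uses the classic 5×5 integer approximation of a σ = 1 Gaussian, normalized by its sum of 273 (the kernel values are an assumption based on standard references); restricting the convolution to a sub-window anticipates the regional smoothing of Fig. 7:

    import numpy as np
    from scipy.ndimage import convolve

    # Classic 5x5 integer approximation of a Gaussian with sigma = 1,
    # normalized by its sum (273); the values are assumed, since Fig. 6
    # is not reproduced here.
    GAUSS_5x5 = np.array([[1,  4,  7,  4, 1],
                          [4, 16, 26, 16, 4],
                          [7, 26, 41, 26, 7],
                          [4, 16, 26, 16, 4],
                          [1,  4,  7,  4, 1]], dtype=np.int32)

    def smooth_region(img, y0, y1, x0, x1):
        """Blur only the search window img[y0:y1, x0:x1], leaving the
        rest of the frame untouched."""
        out = img.copy()
        window = img[y0:y1, x0:x1].astype(np.int32)
        out[y0:y1, x0:x1] = convolve(window, GAUSS_5x5) // GAUSS_5x5.sum()
        return out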






As can be seen in Fig. 7, we set up the regions in which we want to search for the boundary points; the smoothing process then needs to be applied only to these regions.



Figure 7: The regions in which we search for the boundary points.






C. Searching for the boundary points

Because the resolution of the image is 640×480, the distance between the circle boundary and the image boundary differs between the horizontal and vertical directions; we can set a proper offset to account for this. As can be seen in Fig. 8, we obtain the pixel values by raster scan. If the current pixel value is greater than the selected threshold (i.e., the pixel is brighter than the threshold), the current position is taken as a boundary point.



Figure 8: The search direction in each region.






The coordinates of points a and b are retrieved from the following regions, as shown in the sketch after this list:

Region A → ay

Region B → ax

Region C → bx

Region D → by
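A minimal sketch of this search, assuming a grayscale frame, scans along the central row and column, and the threshold selected earlier (the horizontal/vertical offsets are omitted for brevity):

    import numpy as np

    def find_boundary_points(img, threshold):
        """Scan inward from each image edge along the central row/column
        and return the first pixel brighter than `threshold` in each
        direction: a_y (region A), a_x (region B), b_x (region C),
        b_y (region D)."""
        h, w = img.shape
        row = img[h // 2, :]   # central row: left/right scans
        col = img[:, w // 2]   # central column: top/bottom scans

        ay = int(np.argmax(col > threshold))                 # region A: top
        ax = int(np.argmax(row > threshold))                 # region B: left
        bx = w - 1 - int(np.argmax(row[::-1] > threshold))   # region C: right
        by = h - 1 - int(np.argmax(col[::-1] > threshold))   # region D: bottom
        return ax, ay, bx, by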

Once we have the positions of points a and b, the center and the radius of the circle can be calculated.
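A sketch of that calculation, under the assumption that the circle is symmetric, so its center is the midpoint of a and b and its radius averages the horizontal and vertical half-extents:

    def circle_from_points(ax, ay, bx, by):
        """Center = midpoint of points a and b; radius = average of the
        horizontal and vertical half-widths of the detected circle."""
        cx = (ax + bx) / 2.0
        cy = (ay + by) / 2.0
        radius = ((bx - ax) + (by - ay)) / 4.0
        return cx, cy, radius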

D. Writing the parameters to flash memory

Because the chip requires these parameters during the boot process, the program must be executed as part of the manufacturing process. Once the parameters have been fetched by this program, we write them to flash memory for later use (e.g., chip initialization and the web UI).
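The storage layout is device-specific and not described in the paper; purely as an illustration, the parameters could be packed into a fixed-size record and written to a parameter partition (the device path, offset, and record format below are all hypothetical):

    import struct

    PARAM_DEV = "/dev/mtdblock3"   # hypothetical parameter partition
    PARAM_OFFSET = 0x100           # hypothetical record offset

    def write_parameters(radius, hshift, vshift, dev=PARAM_DEV):
        """Pack RADIUS/HSHIFT/VSHIFT as three little-endian 16-bit
        values and write them at a fixed offset of the partition."""
        blob = struct.pack("<3h", radius, hshift, vshift)
        with open(dev, "r+b") as f:
            f.seek(PARAM_OFFSET)
            f.write(blob)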



III – Experimental Results

The proposed scheme has been implemented on a Linux platform. Fig. 9 illustrates the results on four different devices; the detected circle boundary and center are drawn in black.



Figure 9: A visual presentation of the results on four different devices. (a) Device A, O(309, 234), radius = 221; (b) device B, O(321, 244), radius = 224; (c) device C, O(316, 250), radius = 222; (d) device D, O(320, 244), radius = 221.






IV – Conclusion

In this paper, we have proposed a method for finding the optic center and the radius of the fisheye image circle. Based on experimental results, the optic center and the radius can be found effectively by the proposed scheme.



References

1. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed., Prentice Hall, 2007.


