Fisheye Optic Center Calibration
I – Introduction
The fisheye lens is a wide-angle lens that captures a warped image with a distorted appearance. Users can also flatten, or dewarp, the image into a rectilinear or panoramic view. The viewing modes available with the chip include:
“O” for “Original” view: This is the original, warped image captured by the camera.
“P” for “Panoramic” view: This is the basic, panoramic view which has been dewarped.
“R” for “Regional” or “Rectilinear” view: This provides a single view, roughly equal to one quadrant of the overall image, which supports pan, tilt, and zoom operations via the camera’s PTZ feature.
For example, a common usage of dewarping is shown in Fig. 1, where the 1O image is dewarped into the 1P image. It is therefore critical for the chip to obtain suitable optic parameters for dewarping.
There are three optic parameters in the chip setting: RADIUS, HSHIFT, and VSHIFT. We can derive these parameters from the circle position, which is obtained by the proposed scheme.
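As a minimal sketch (assuming HSHIFT and VSHIFT denote the horizontal and vertical offsets of the optic center from the image center, a convention the text does not spell out), the mapping from the detected circle to the chip parameters might look like:

```python
# Hypothetical derivation of the chip's optic parameters from the
# detected circle. The HSHIFT/VSHIFT semantics here are an assumption,
# not the chip specification.
def optic_params(cx, cy, radius, width=640, height=480):
    """Map the circle center (cx, cy) and radius to
    (RADIUS, HSHIFT, VSHIFT)."""
    hshift = cx - width // 2   # offset of optic center from image center, x
    vshift = cy - height // 2  # offset of optic center from image center, y
    return radius, hshift, vshift

print(optic_params(324, 236, 230))  # -> (230, 4, -4)
```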
II – The Proposed Scheme
Based on the above-mentioned situation, fetching the circle position involves several stages, which are described as follows:
A. Generate an image that has a clear boundary in 1O mode
To generate an image that has a clear boundary in 1O mode, we can cover the camera lens with a semi-opaque mask. Note that a sufficient light source must be provided above the semi-opaque mask. Fig. 3 exhibits a simulated installation for this stage.
As can be seen in Fig. 4, we obtained an image that has a clear boundary in 1O mode. Using this feature, we can select a suitable threshold to detect the circle boundary.
As can be seen in Fig. 5, we need to obtain the coordinates of point a and point b so that we can calculate the center of the circle.
B. Smoothing the target region
The Gaussian smoothing operator [1] is a 2-D convolution operator that is used to blur images and remove detail and noise. Fig. 6 shows a suitable integer-valued convolution kernel that approximates a Gaussian with a standard deviation of 1. By suppressing noise before thresholding, we can further improve the image-processing result and reduce false positives.
As can be seen in Fig. 7, we set up the regions where we want to search for the boundary point. The smoothing process is applied only to these regions.
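This step can be sketched in Python using the widely used 5×5 integer kernel for σ = 1, whose weights sum to 273 (the kernel shown in Fig. 6 may differ), with the convolution restricted to one search region:

```python
# Common 5x5 integer approximation of a Gaussian with sigma = 1.
# This particular kernel is an assumption; Fig. 6 may use another one.
KERNEL = [
    [1,  4,  7,  4, 1],
    [4, 16, 26, 16, 4],
    [7, 26, 41, 26, 7],
    [4, 16, 26, 16, 4],
    [1,  4,  7,  4, 1],
]
KSUM = 273  # sum of all kernel weights, used for normalization

def smooth_region(img, x0, y0, x1, y1):
    """Smooth only the pixels inside [x0, x1) x [y0, y1), matching the
    restriction of smoothing to the boundary-search regions."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    # Stay 2 pixels away from the image border so the kernel fits.
    for y in range(max(y0, 2), min(y1, h - 2)):
        for x in range(max(x0, 2), min(x1, w - 2)):
            acc = 0
            for ky in range(-2, 3):
                for kx in range(-2, 3):
                    acc += KERNEL[ky + 2][kx + 2] * img[y + ky][x + kx]
            out[y][x] = acc // KSUM
    return out
```

On a uniform region the output equals the input, since the weights are normalized by their sum.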
C. Search the boundary point
Because the image resolution is 640×480, the distance between a circle boundary point and the image boundary differs between the horizontal and vertical directions. We can set a proper offset to address this issue. As can be seen in Fig. 8, we obtain pixel values by raster scan. If the current pixel value is greater than the selected threshold (i.e., brighter than the threshold), the current position is taken as a boundary point.
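The raster-scan search for one region can be sketched as follows (the per-region scan direction and the offset handling are simplified here):

```python
# Simplified boundary-point search: raster-scan one search region and
# report the first pixel brighter than the threshold. The real scheme
# scans each of regions A-D with its own direction and offset.
def find_boundary_point(img, region, threshold):
    """region = (x0, y0, x1, y1); returns the (x, y) of the first pixel
    whose value exceeds the threshold, or None if no pixel qualifies."""
    x0, y0, x1, y1 = region
    for y in range(y0, y1):
        for x in range(x0, x1):
            if img[y][x] > threshold:
                return (x, y)
    return None
```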
The coordinates of point a and point b can be retrieved from the following regions:
Region A → ay
Region B → ax
Region C → bx
Region D → by
Once we have the positions of point a and point b, the center of the circle and the radius can be calculated.
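Assuming point a carries the left/top extremes (ax, ay) and point b the right/bottom extremes (bx, by), as Fig. 5 suggests, the final computation reduces to a midpoint and an averaged half-diameter:

```python
# Sketch of the final computation. The geometry of points a and b is
# inferred from Fig. 5 and is an assumption.
def circle_from_extremes(ax, ay, bx, by):
    """Return (cx, cy, radius) from the left/top extremes (ax, ay) and
    the right/bottom extremes (bx, by) of the circle boundary."""
    cx = (ax + bx) // 2                    # midpoint of horizontal extent
    cy = (ay + by) // 2                    # midpoint of vertical extent
    radius = ((bx - ax) + (by - ay)) // 4  # average of the two half-diameters
    return cx, cy, radius

print(circle_from_extremes(90, 10, 550, 470))  # -> (320, 240, 230)
```

Averaging the horizontal and vertical diameters makes the radius estimate robust to a one-pixel disagreement between the two scans.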
D. Writing the parameters to flash memory
Because the chip requires these parameters during the boot process, this program must be executed as part of the manufacturing process. Once the parameters have been fetched by the program, we write them to flash memory for further usage (e.g., the chip initialization process and the web UI).
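A hypothetical sketch of the serialization step; the device path and the little-endian int16 layout are illustrative assumptions, not the chip's actual flash format:

```python
# Hypothetical persistence of the three optic parameters. Both the
# flash partition path and the byte layout are assumptions.
import struct

FLASH_PATH = "/dev/mtd_params"  # hypothetical flash partition node

def pack_params(radius, hshift, vshift):
    """Serialize RADIUS, HSHIFT, VSHIFT as three little-endian int16s."""
    return struct.pack("<3h", radius, hshift, vshift)

def write_params(radius, hshift, vshift, path=FLASH_PATH):
    """Write the packed parameters to the flash partition."""
    with open(path, "wb") as f:
        f.write(pack_params(radius, hshift, vshift))
```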
III – Experimental Results
The proposed scheme has been implemented on a Linux platform. Fig. 9 illustrates the results on four different devices; the detected circle boundary and circle center are drawn in black.
IV – Conclusion
In this paper, we propose a method for finding the optic center and the radius of the image circle. Based on the experimental results, the optic center and radius can be found effectively by the proposed scheme.
References
1. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed., Prentice Hall, 2007.