Determining the parameters of images geometric transform
V.R. Krasheninnikov, A.D. Kadeev
Many practical tasks involve the problem of image registration and the closely related problem of image segment matching. If a model of the interframe geometric transformation (GT) between two images is given, the problem reduces to estimating the parameters of this model. A large number of papers are devoted to algorithms for estimating GT parameters, but most of these algorithms require a fairly accurate approximation of the parameters as an initial value. It is therefore necessary to obtain a rough estimate of the GT parameters that can serve as an initial approximation for such algorithms. This paper proposes a method with a large working range for determining the parameters of a geometric transformation. The method is based on finding the fixed point of the mapping.
The algorithm is considered for two-dimensional images. Suppose the geometric transformation between the two images is known to be a combination of rotation and shift. In this case the only task is to estimate the GT parameters. To determine them, one finds the coordinates of the fixed point of the mapping. It is necessary to additionally rotate one image by 180° around its center; otherwise (for example, for a pure shift) a fixed point might not exist. The main difficulty of the method is finding the fixed point.
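Writing the GT as x' = R(phi) x + t, a fixed point satisfies x* = R(phi) x* + t, so x* = (I - R(phi))^(-1) t whenever phi is nonzero (mod 2*pi). This closed form is why the extra rotation guarantees existence. A minimal numpy sketch under these assumptions (the function name is ours, not from the paper):

```python
import numpy as np

def fixed_point(phi, shift):
    """Fixed point of the mapping x -> R(phi) @ x + shift.

    Exists whenever phi != 0 (mod 2*pi), since I - R(phi) is then invertible;
    for a pure shift (phi == 0) the system is singular and no fixed point exists.
    """
    c, s = np.cos(phi), np.sin(phi)
    R = np.array([[c, -s],
                  [s,  c]])
    return np.linalg.solve(np.eye(2) - R, shift)
```

For example, a 90° rotation followed by the shift (1, 0) leaves the point (0.5, 0.5) in place.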
Consider first the case where the GT is a pure shift, i.e., the rotation angle is zero. After the extra rotation has been applied, compute the difference between the two images. This difference image clearly has central symmetry about the fixed point, so the point can be located by a statistic that compares opposite points within some window around a candidate point. Every point of the image is checked as a candidate; the point with the smallest value of the statistic is the most likely fixed point.
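Assuming the symmetry takes the anti-symmetric form d(c + u) = -d(c - u), which holds when the extra rotation is 180°, the statistic and the exhaustive search over candidate points can be sketched as follows; the function names and the square window are our illustrative choices, not the paper's:

```python
import numpy as np

def symmetry_statistic(diff, cy, cx, r):
    """Mismatch of central anti-symmetry of `diff` about candidate (cy, cx).

    At the true fixed point diff(c + u) == -diff(c - u) within the window,
    so the sum of squared pair sums attains its minimum there.
    """
    win = diff[cy - r:cy + r + 1, cx - r:cx + r + 1]
    return np.sum((win + win[::-1, ::-1]) ** 2)

def find_fixed_point(diff, r):
    """Exhaustive search over every interior pixel (a minimal sketch)."""
    h, w = diff.shape
    best, best_pt = np.inf, None
    for cy in range(r, h - r):
        for cx in range(r, w - r):
            s = symmetry_statistic(diff, cy, cx, r)
            if s < best:
                best, best_pt = s, (cy, cx)
    return best_pt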
If the rotation component of the transformation is small, this method gives almost exact coordinates of the fixed point, and the result is not distorted. This property makes it possible to find the fixed point even when the rotation is significant: one image is additionally rotated a number of times with a certain angular step. For one of these trial rotations the total residual rotation will be small, and the fixed point will be found successfully. After that, the rotation angle is refined iteratively, using smaller step values over a narrower range of additional rotations.
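The coarse-to-fine refinement can be sketched as a one-dimensional search over the angle. Here `residual` is a hypothetical callable standing in for the full pipeline (compensate the candidate angle, locate the best fixed-point candidate, return its symmetry statistic); the step and level values are illustrative, not taken from the paper:

```python
import numpy as np

def refine_angle(residual, phi_lo, phi_hi, coarse_step, levels=3, shrink=5.0):
    """Coarse-to-fine 1-D search for the rotation angle.

    `residual(phi)` is assumed to return the symmetry statistic of the best
    fixed-point candidate after compensating rotation by phi; it is kept
    abstract so the sketch stays self-contained.
    """
    lo, hi, step = phi_lo, phi_hi, coarse_step
    best = lo
    for _ in range(levels):
        grid = np.arange(lo, hi + step / 2, step)  # include the endpoint
        best = min(grid, key=residual)             # best angle at this level
        lo, hi = best - step, best + step          # narrow the search range
        step /= shrink                             # and shrink the step
    return best
```

With a well-behaved residual the error after each level is bounded by the current step, so a few levels suffice for sub-degree accuracy.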
Thus, this method allows us to reduce the complex problem of determining the GT parameters to the case of zero rotation. Experiments have shown that the algorithm works efficiently even in the case of significant shifts and rotations, and that it is almost insensitive to strong additive Gaussian noise. Under moderate noise, the average error in the estimated rotation angle does not exceed 0.5 degrees, and the corresponding shift error is about 1 pixel. The proposed method is suitable for an initial rough estimation of the parameters and remains acceptable under large distortions. These features make the algorithm useful in many practical problems where the transformation involves large rotations and shifts.