Python, OpenCV -- Aligning and overlaying multiple images, one after another

Problem description

My project is to align aerial photos to make a mosaic-map out of them. My plan is to start with two photos, align the second with the first, and create an "initial mosaic" out of the two aligned images. Once that is done, I then align the third photo with the initial mosaic, and then align the fourth photo with the result of that, etc, thereby progressively constructing the map.

I have two techniques for doing this, but the more accurate one, which makes use of calcOpticalFlowPyrLK(), only works for the two-image phase because the two input images must be the same size. Because of that I tried a new solution, but it is less accurate and the error introduced at every step piles up, eventually producing a nonsensical result.

My question is two-fold, but if you know the answer to one, you don't have to answer both, unless you want to. First, is there a way to use something similar to calcOpticalFlowPyrLK() but with two images of different sizes (this includes any potential workarounds)? And second, is there a way to modify the detector/descriptor solution to make it more accurate?

Here's the accurate version that works only for two images:

import cv2
import numpy as np

# load images
base = cv2.imread("images/1.jpg")
curr = cv2.imread("images/2.jpg")

# convert to grayscale
base_gray = cv2.cvtColor(base, cv2.COLOR_BGR2GRAY)

# find the coordinates of good features to track  in base
base_features = cv2.goodFeaturesToTrack(base_gray, 3000, .01, 10)

# find corresponding features in current photo
# (passing None as nextPts lets OpenCV seed the search from base_features)
curr_features, pyr_stati, _ = cv2.calcOpticalFlowPyrLK(base, curr, base_features, None)

# only add features for which a match was found to the pruned arrays
base_features_pruned = []
curr_features_pruned = []
for index, status in enumerate(pyr_stati):
    if status == 1:
        base_features_pruned.append(base_features[index])
        curr_features_pruned.append(curr_features[index])

# convert lists to numpy arrays so they can be passed to opencv function
bf_final = np.asarray(base_features_pruned)
cf_final = np.asarray(curr_features_pruned)

# find perspective transformation using the arrays of corresponding points
transformation, hom_stati = cv2.findHomography(cf_final, bf_final, method=cv2.RANSAC, ransacReprojThreshold=1)

# transform the images and overlay them to see if they align properly
# not what I do in the actual program, just for use in the example code
# so that you can see how they align, if you decide to run it
height, width = curr.shape[:2]
mod_photo = cv2.warpPerspective(curr, transformation, (width, height))
new_image = cv2.addWeighted(mod_photo, .5, base, .5, 1)

Here's the inaccurate one that works for multiple images (until the error becomes too great):

import cv2
import numpy as np

# load images
base = cv2.imread("images/1.jpg")
curr = cv2.imread("images/2.jpg")

# convert to grayscale
base_gray = cv2.cvtColor(base, cv2.COLOR_BGR2GRAY)

# DIFFERENCES START
curr_gray = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)

# create detector, get keypoints and descriptors
detector = cv2.ORB_create()
base_keys, base_desc = detector.detectAndCompute(base_gray, None)
curr_keys, curr_desc = detector.detectAndCompute(curr_gray, None)

matcher = cv2.DescriptorMatcher_create("BruteForce-Hamming")

# match descriptors between the two images
matches = matcher.match(base_desc, curr_desc)

# find the smallest and largest match distances
max_dist = 0.0
min_dist = 100.0

for match in matches:
    dist = match.distance
    min_dist = dist if dist < min_dist else min_dist
    max_dist = dist if dist > max_dist else max_dist

# keep only matches whose distance is close to the best one
good_matches = [match for match in matches if match.distance <= 3 * min_dist]

# collect the pixel coordinates of the good matches in both images
base_matches = []
curr_matches = []
for match in good_matches:
    base_matches.append(base_keys[match.queryIdx].pt)
    curr_matches.append(curr_keys[match.trainIdx].pt)

bf_final = np.asarray(base_matches)
cf_final = np.asarray(curr_matches)

# SAME AS BEFORE

# find perspective transformation using the arrays of corresponding points
transformation, hom_stati = cv2.findHomography(cf_final, bf_final, method=cv2.RANSAC, ransacReprojThreshold=1)

# transform the images and overlay them to see if they align properly
# not what I do in the actual program, just for use in the example code
# so that you can see how they align, if you decide to run it
height, width = curr.shape[:2]
mod_photo = cv2.warpPerspective(curr, transformation, (width, height))
new_image = cv2.addWeighted(mod_photo, .5, base, .5, 1)

Finally, here are some images that I'm using:

Recommended answer

Homographies compose, so if you have the homographies between img1 and img2 and between img2 and img3 then the composition of those two homographies gives the homography between img1 and img3.

Your sizes are off, of course, because you're trying to match img3 to the stitched image containing img1 and img2. But you don't need to do that. Don't stitch them until you have all the homographies between each successive pair of images. Then you can proceed in one of two ways: work from the back or work from the front. I'll use, for example, h31 to refer to the homography which warps img3 into the coordinates of img1.

From the front (pseudocode):

warp img2 into coordinates of img1 with h21
warp img3 into coordinates of img1 with h31 = h21 @ h32
warp img4 into coordinates of img1 with h41 = h31 @ h43
...
stitch/blend images together

Here @ is the matrix multiplication operator, which will achieve our homography composition (note that it is safest to divide by the final entry in the homography to ensure that they're all scaled the same).
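
To make the front-to-back recipe concrete, here is a minimal sketch under some assumptions: the file names are placeholders, homography_between() is a hypothetical helper along the lines of the ORB/findHomography code above (using a cross-checked brute-force match instead of the min-distance filter), and the canvas size is a crude guess (a real implementation would also apply a translation offset so nothing lands at negative coordinates).

import cv2
import numpy as np

def homography_between(curr, base):
    # estimate the homography that warps curr into base's coordinates,
    # in the same spirit as the ORB / Hamming / findHomography code above
    detector = cv2.ORB_create()
    base_keys, base_desc = detector.detectAndCompute(cv2.cvtColor(base, cv2.COLOR_BGR2GRAY), None)
    curr_keys, curr_desc = detector.detectAndCompute(cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY), None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(base_desc, curr_desc)
    base_pts = np.float32([base_keys[m.queryIdx].pt for m in matches])
    curr_pts = np.float32([curr_keys[m.trainIdx].pt for m in matches])
    h, _ = cv2.findHomography(curr_pts, base_pts, cv2.RANSAC, 1)
    return h

# hypothetical file names; images[0] is the reference frame
images = [cv2.imread("images/%d.jpg" % i) for i in (1, 2, 3, 4)]

# pairwise homographies h21, h32, h43: each warps images[i+1] into images[i]'s coordinates
pairwise = [homography_between(images[i + 1], images[i]) for i in range(len(images) - 1)]

# compose everything into image 1's frame: h31 = h21 @ h32, h41 = h31 @ h43, ...
to_first = [np.eye(3)]
for h in pairwise:
    composed = to_first[-1] @ h
    composed /= composed[2, 2]   # normalise so every homography is scaled the same
    to_first.append(composed)

# warp everything onto one canvas and overlay naively (later photos overwrite earlier ones)
height, width = images[0].shape[:2]
canvas = (width * len(images), height * 2)   # crude guess at the mosaic size
mosaic = np.zeros((canvas[1], canvas[0], 3), dtype=np.uint8)
for img, h in zip(images, to_first):
    warped = cv2.warpPerspective(img, h, canvas)
    mask = warped.any(axis=2)
    mosaic[mask] = warped[mask]
cv2.imwrite("mosaic.jpg", mosaic)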

From the back (pseudocode):

...
warp prev stitched img into coordinates of img3 with h43
stitch warped stitched img with img3
warp prev stitched img into coordinates of img2 with h32
stitch warped stitched img with img2
warp prev stitched img into coordinates of img1 with h21
stitch warped stitched img with img1
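
For completeness, here is a compact sketch of the back-to-front variant, reusing the images and pairwise lists from the sketch above. The fixed pad value stands in for proper canvas-size bookkeeping, and treating "any non-black pixel" as the mask is only a rough way to decide where the earlier photo should cover the warped mosaic.

import cv2
import numpy as np

def stitch_backward(images, pairwise):
    # images[i+1] is aligned to images[i] by pairwise[i] (h21, h32, h43, ...)
    mosaic = images[-1]
    pad = 2000   # hypothetical extra canvas space so the warped mosaic is not cropped
    for img, h in zip(reversed(images[:-1]), reversed(pairwise)):
        hgt, wid = img.shape[:2]
        # warp the running mosaic into the coordinates of the previous photo
        warped = cv2.warpPerspective(mosaic, h, (wid + pad, hgt + pad))
        # paste the previous photo on top of the warped mosaic, in its own frame
        keep = img.any(axis=2, keepdims=True)
        warped[:hgt, :wid] = np.where(keep, img, warped[:hgt, :wid])
        mosaic = warped
    return mosaic

mosaic_back = stitch_backward(images, pairwise)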

The idea is that you either start from the front and warp everything into the first image's coordinate frame, or start from the back, warp to the previous image and stitch, then warp that stitched image into the image before it, and repeat. I think the first method is probably easier. In either case you have to worry about the propagation of errors in your homography estimation, since they build up over multiple composed homographies.

This is the naive approach to blending multiple images together with just the homographies. The more sophisticated method is to use bundle adjustment, which takes into account the feature points across all images. For good blending, the next steps are gain compensation to remove camera gain adjustments and vignetting, and then multi-band blending to prevent blurring. See the seminal paper by Brown and Lowe, "Automatic Panoramic Image Stitching using Invariant Features", and the example and free demo software that accompany it.
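
As a point of comparison, OpenCV ships a high-level Stitcher class whose pipeline follows roughly these ideas (feature matching, bundle adjustment, exposure/gain compensation and multi-band blending), so it may be worth trying before hand-rolling all of the above. A minimal sketch, assuming hypothetical file names and OpenCV 3.4+/4.x; the SCANS mode is intended for roughly planar scenes such as aerial photos:

import cv2

photos = [cv2.imread("images/%d.jpg" % i) for i in (1, 2, 3, 4)]  # hypothetical file names
stitcher = cv2.Stitcher.create(cv2.Stitcher_SCANS)
status, mosaic = stitcher.stitch(photos)
if status == 0:   # 0 corresponds to Stitcher::OK
    cv2.imwrite("stitcher_mosaic.jpg", mosaic)
else:
    print("stitching failed with status", status)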
