Homework 7
Computer Vision, Spring 2024
Due Date: April 26, 2024
Total Points: 20
This homework contains two programming challenges. All submissions are due at
midnight on April 26, 2024, and should be submitted according to the instructions
in the document “Guidelines for Programming Assignments.pdf”.
runHw7.py will be your main interface for executing and testing your code.
Parameters for the different programs or unit tests can also be set in that file.
Before submission, make sure you can run all your programs with the command
python runHw7.py with no errors.
The numpy package is optimized for operations involving matrices and
vectors. Avoid using loops (e.g., for, while) whenever possible—looping can
result in long running code. Instead, you should “vectorize” loops to optimize
your code for performance. In many cases, vectorization also results in more
compact code (fewer lines to write!).
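For example, a per-pixel difference between two images can be written as a single array expression instead of a double loop. The short sketch below (with made-up array names) illustrates the idea:

import numpy as np

# Loop version (slow for large images):
# diff = np.empty_like(img_a)
# for r in range(img_a.shape[0]):
#     for c in range(img_a.shape[1]):
#         diff[r, c] = img_a[r, c] - img_b[r, c]

# Vectorized version: one array expression does the same work
img_a = np.random.rand(480, 640)
img_b = np.random.rand(480, 640)
diff = img_a - img_b   # element-wise difference over the whole image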
Challenge 1: In this challenge you are asked to develop an optical flow system. You
are given a sequence of 6 images (flow1.png – flow6.png) of a dynamic scene. Your
task is to develop an algorithm that computes optical flow estimates at each image
point using the 5 pairs (1&2, 2&3, 3&4, 4&5, 5&6) of consecutive images.
Optical flow estimates can be computed using the optical flow constraint equation
and Lucas-Kanade solution presented in class. For smooth motions, this algorithm
should produce robust flow estimates. However, given that the six images were
taken with fairly large time intervals in between consecutive images, the brightness
and temporal derivatives used by the algorithm are expected to be unreliable.
Therefore, you are advised to implement a different (and simpler) optical flow
algorithm. Given two consecutive images (say 1 and 2), establish correspondences
between points in the two images using template matching. For each image point in
the first image, take a small window (say 7x7) around the point and use it as the
template to find the same point in the second image. While searching for the
corresponding point in the second image, you can confine the search to a small
window around the pixel in the second image that has the same coordinates as the
one in the first image. The center of the 7x7 image window in the second image that
is maximally correlated with the 7x7 window in the first image is assumed to be the
corresponding point. The vector between two corresponding points is the optical
flow (u,v).
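As a rough sketch of this matching step for a single point, the function below compares the patch around a point in the first image against patches inside a small search window of the second image, using normalized cross-correlation. The function name, the choice of correlation measure, and the omitted border handling are illustrative assumptions, not requirements:

import numpy as np

def match_point(img1, img2, r, c, template_radius=3, win_radius=10):
    """Find the point in img2 that best matches the patch around (r, c) in img1,
    searching within +/- win_radius pixels. Returns the flow vector (u, v).
    Border checks are omitted for brevity."""
    t = img1[r - template_radius:r + template_radius + 1,
             c - template_radius:c + template_radius + 1]
    t_norm = (t - t.mean()) / (t.std() + 1e-8)

    best_score, best_uv = -np.inf, (0, 0)
    for dr in range(-win_radius, win_radius + 1):
        for dc in range(-win_radius, win_radius + 1):
            rr, cc = r + dr, c + dc
            p = img2[rr - template_radius:rr + template_radius + 1,
                     cc - template_radius:cc + template_radius + 1]
            p_norm = (p - p.mean()) / (p.std() + 1e-8)
            score = np.sum(t_norm * p_norm)   # normalized cross-correlation
            if score > best_score:
                best_score, best_uv = score, (dc, dr)  # u = column shift, v = row shift
    return best_uv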
Write a program computeFlow that computes optical flow between two gray-level
images, and produces the optical flow vector field as a “needle map” of a given
resolution, overlaid on the first of the two images.
result = computeFlow(img1, img2, win_radius, template_radius, grid_MN)
You need to choose a value for the grid spacing that gives good results without
taking excessively long to compute. (6 points)
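For the needle-map output, one possibility (not the only one) is to overlay matplotlib's quiver plot on the first image at the chosen grid points. The sketch below assumes u and v are the flow components sampled at grid coordinates xs and ys; all of these names are invented for this example:

import matplotlib.pyplot as plt

def draw_needle_map(img1, xs, ys, u, v, out_path="flow_needle_map.png"):
    """Overlay flow vectors (u, v) sampled at grid points (xs, ys) on img1."""
    plt.figure()
    plt.imshow(img1, cmap="gray")
    plt.quiver(xs, ys, u, v, color="red", angles="xy",
               scale_units="xy", scale=1)   # one needle per grid point
    plt.axis("off")
    plt.savefig(out_path, bbox_inches="tight")
    plt.close()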
For debugging purposes use the test case in debug1a. In this synthetic case, the flow
field consists of horizontal vectors of the same magnitude (translational motion
parallel to the image plane). Note that in the real case, foreshortening effects,
occlusions, and reflectance variations (as well as noise) complicate the result.
(2 points)
Challenge 2: Your task is to develop a vision system that tracks the location of an
object across video frames. Object tracking is a challenging problem since an
object’s appearance, pose and scale tend to change as time progresses. In class we
have discussed three popular tracking methods: template-based tracking,
histogram-based tracking and detection-based tracking. In this challenge, we will
assume the color distribution of an object stays relatively constant over time.
Therefore, we will track an object using its color histogram.
A color histogram describes the color distribution of a color image. The color
histogram that you will need to compute is defined as follows. Each bin of the color
histogram represents a range of colors, and the number of votes in each bin
indicates the number of pixels that have the colors within the corresponding color
range.
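As an illustration, one way to build such a histogram for an RGB image region is to quantize each channel into a fixed number of bins and count pixels per 3-D bin. The function below is a minimal sketch with an arbitrary bin count, not the required implementation:

import numpy as np

def color_histogram(patch, bins_per_channel=8):
    """Return a normalized 3-D color histogram of an RGB patch (H x W x 3, uint8)."""
    pixels = patch.reshape(-1, 3).astype(np.float64)
    hist, _ = np.histogramdd(
        pixels,
        bins=(bins_per_channel,) * 3,
        range=((0, 256), (0, 256), (0, 256)))
    return hist / hist.sum()   # normalize so histograms are comparable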
Be careful: in the initialization of your program, you should generate a color map
from the region of interest (ROI), and compute all subsequent color histograms
based on the same color map. It is only meaningful to compare two histograms
computed based on the same color map. Use the provided function chooseTarget
to drag a rectangle around a tracking target.
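When two histograms are built on the same color map, one simple way to compare them is histogram intersection (a reasonable choice, though not the only one):

import numpy as np

def histogram_intersection(h1, h2):
    """Similarity in [0, 1] for two normalized histograms of identical shape;
    1 means identical distributions, values near 0 mean little overlap."""
    return np.sum(np.minimum(h1, h2))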
Write a program named trackingTester that estimates the location of an object in
video frames.
trackingTester(data_params, tracking_params)
trackingTester should draw a box around the target in each video frame, and
save all the annotated video frames as PNGs into a subfolder given in
data_params.out_dir.
After generating the annotated video frames, use the provided function
generateVideo to create a video file containing all the frames.
(12 points)
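To make the expected flow of trackingTester concrete, here is a very rough skeleton that reuses the color_histogram and histogram_intersection sketches above. Apart from data_params.out_dir, which the assignment names, every field (frame_paths, initial_box, search_radius) and helper here is a placeholder invented for this illustration, and border checks are omitted:

import os
import numpy as np
from PIL import Image

def crop(img, box):
    r, c, h, w = box
    return img[r:r + h, c:c + w]

def draw_box(img, box, color=(255, 0, 0)):
    """Draw a hollow rectangle on a copy of the image."""
    out = img.copy()
    r, c, h, w = box
    out[r:r + h, [c, c + w - 1]] = color
    out[[r, r + h - 1], c:c + w] = color
    return out

def trackingTester(data_params, tracking_params):
    """Sketch: estimate the target box in each frame, draw it, save PNGs."""
    os.makedirs(data_params.out_dir, exist_ok=True)
    box = tracking_params.initial_box            # (row, col, height, width) -- placeholder name
    target_hist = None

    for i, path in enumerate(data_params.frame_paths):   # placeholder field name
        frame = np.array(Image.open(path).convert("RGB"))
        if target_hist is None:
            target_hist = color_histogram(crop(frame, box))   # histogram of the initial ROI
        else:
            # Search candidate boxes around the previous location and keep the one
            # whose histogram is most similar to the target histogram.
            best, best_box = -1.0, box
            r0, c0, h, w = box
            rad = tracking_params.search_radius  # placeholder parameter
            for dr in range(-rad, rad + 1, 2):
                for dc in range(-rad, rad + 1, 2):
                    cand = (r0 + dr, c0 + dc, h, w)
                    score = histogram_intersection(
                        color_histogram(crop(frame, cand)), target_hist)
                    if score > best:
                        best, best_box = score, cand
            box = best_box
        Image.fromarray(draw_box(frame, box)).save(
            os.path.join(data_params.out_dir, f"frame_{i:04d}.png"))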
Include all the code you have written, as well as the resulting video files, but
DO NOT include the three tracking datasets and the individual output frames
in your submission.