Final Project Guidelines and Resources
Default Project Outline
The goal of the final project is to produce a high quality 3D reconstruction of an object from a
collection of structured light scans. It is your project so you are welcome to carry this out in whatever
manner you choose. The following is a suggested set of steps to get started.
NOTE: So far this quarter we have been doing everything in notebooks in order to make it easy to
interactively debug your code. For the final project, I recommend that you spend some time to
organize your code into functions and even encapsulate some of it in .py files that you import. Since
you will need to run the same code on many different scans, it will definitely be worth your time to
make this modular and streamlined, rather than just having one giant script/notebook.
Basic (bare minimum) steps you will likely need to carry out:
1. Utilize the calibrate.py script from assignment 2 to get accurate intrinsic parameters for the
scanner cameras using the images in the calib_jpg_u directory. Also use your calibration method
(or possibly the OpenCV calibration functions) to determine the extrinsic parameters (see the
calibration sketch after this list).
2. Modify your reconstruct function from assignment 4 to make use of the two color images
collected from each camera (i.e., color_C0_00/01). You should include two modifications (see the
masking and color sketch after this list):
(a) Compute an object mask by taking the difference of the object color image and the
background image and thresholding the difference to determine the set of pixels that belong to
the foreground object. You should combine this mask with your decoding mask in order to
avoid triangulating pixels that are part of the background.
(b) For each point that you triangulate, record the color of the corresponding pixel in
the color image. You can store the RGB values for the points in a 3xN array where N is the
number of points. You will want to keep this array synchronized with pts3/pts2L/R during any
point or triangle pruning that you do.
3. Since we are going to generate meshes for many different scans, encapsulate your mesh
generation code in a function. That function should take as input the directory of scan images
along with any threshold / cleanup parameters which you might want to adjust differently for
different scans. You will probably want to store the results of this computation (e.g.,
pts2L/R, pts3, colors) in a pickle file, so the simplest approach may be to also pass in the name of
the file in which to store the results (see the wrapper sketch after this list).
4. To get nice mesh results, I recommend implementing some form of mesh smoothing (see the
smoothing sketch after this list). The simplest approach is to compute, for each 3D point, the
average location of that point's neighbors in the mesh (i.e., those which are connected to it by
some triangle). You can then replace the point coordinates with this average. You may opt to
repeat this process multiple times; each time, the mesh will get smoother. However, you don't
want to "oversmooth" and remove too many details.
5. To align the different scans you need to identify some corresponding 3D points. A reasonable way
to do this is to click on corresponding points in the color images of two scans. For each point that
you click in the color image, you can find the nearest 2D point (e.g., in pts2L) and then look up
the corresponding entry in pts3. Once you have the corresponding 3D points between the two
scans, you can use the SVD-based algorithm we covered in lecture to find the translation and
rotation (pseudocode in the lecture slides; see the alignment sketch after this list). An alternative
approach is to use the alignment tools in Meshlab, in which case you will need to save out each
individual scan as a .ply file and load them all into Meshlab.
6. Save out your aligned scan data and use a third-party tool to perform Poisson surface
reconstruction. There are pointers below to two possible tools you can use for this purpose. For
Meshlab you will want to save your data as a .ply file (see meshutils.py in project_code.zip for
guidance on the file format). For the other reconstruction tool you will need to compute surface
normals for each vertex and store the points and normals in a simple ASCII file format (see the
description at https://github.com/mkazhdan/PoissonRecon#USAGE and the writer sketch after
this list).
7. Once you have merged all the scans into a final mesh, generate some nice renderings for your
final writeup. The minimal option is to visualize them in Jupyter or Meshlab and export some
screenshots. Alternatively, you may want to use Maya or Blender, which provide more
advanced options for rendering, lighting, and animating a flyby of the final result.
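
The sketches below are possible starting points for several of the steps above. None of them are
drop-in solutions: thresholds, file paths, and any function or variable name that is not part of the
provided code are assumptions you will need to adapt to your own pipeline. For step 1, a sketch of
intrinsic calibration using the OpenCV functions mentioned above; the checkerboard dimensions,
square size, and image glob are placeholders for the actual calib_jpg_u data.

```python
# Hypothetical sketch of intrinsic calibration with OpenCV; board size,
# square size, and image glob are placeholders for the actual calib_jpg_u data.
import glob
import cv2
import numpy as np

nx, ny = 8, 6            # interior checkerboard corners (assumed)
square_size = 2.8        # checker square size in cm (assumed)

# 3D coordinates of the checkerboard corners in the board's own frame
objp = np.zeros((nx * ny, 3), np.float32)
objp[:, :2] = np.mgrid[0:nx, 0:ny].T.reshape(-1, 2) * square_size

objpoints, imgpoints = [], []
for fname in glob.glob('calib_jpg_u/*.jpg'):
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, (nx, ny))
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

# K is the 3x3 intrinsic matrix, dist the distortion coefficients;
# rvecs/tvecs give a board-relative pose for each calibration image.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
```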
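For step 2, a minimal sketch of the foreground mask and per-point color bookkeeping. The 0.1
threshold and the assumption that the color images are floats in [0,1] are placeholders, and the
decoding mask comes from your existing reconstruct code.

```python
import numpy as np

def foreground_mask(color_obj, color_bg, thresh=0.1):
    """Boolean HxW mask of pixels whose color differs enough from the background.
    color_obj, color_bg: HxWx3 float images in [0,1]; thresh is an assumption."""
    diff = np.abs(color_obj.astype(float) - color_bg.astype(float)).sum(axis=2)
    return diff > thresh

def point_colors(color_obj, pts2L):
    """3xN array of RGB colors looked up at the (x, y) pixel locations in pts2L,
    kept in the same order as pts3 so it can be pruned alongside it."""
    xs, ys = pts2L[0].astype(int), pts2L[1].astype(int)
    return color_obj[ys, xs, :].T

# Inside your reconstruct function you would combine the masks before
# triangulating, e.g.:  goodpixels = decoding_mask & foreground_mask(obj, bg)
```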
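For step 3, one possible shape for the per-scan wrapper; the function name, parameters, and the
helper reconstruct_with_colors are all placeholders for your own code.

```python
import pickle

def process_scan(scandir, decode_thresh, color_thresh, tri_thresh, outfile):
    """Run decoding, masking, triangulation, and pruning for one scan directory
    and cache the results so they can be reloaded without recomputation."""
    # reconstruct_with_colors is a placeholder for your assignment-4 code
    # extended as described in step 2.
    pts2L, pts2R, pts3, colors = reconstruct_with_colors(
        scandir, decode_thresh, color_thresh, tri_thresh)
    with open(outfile, 'wb') as f:
        pickle.dump({'pts2L': pts2L, 'pts2R': pts2R,
                     'pts3': pts3, 'colors': colors}, f)
```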
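For step 4, a sketch of one pass of neighbor-averaging smoothing, assuming pts3 is a 3xN
coordinate array and tri is an Mx3 array of triangle vertex indices.

```python
import numpy as np

def smooth_mesh(pts3, tri, iterations=1):
    """Replace each vertex with the average of its neighbors in the mesh.
    pts3: 3xN vertex coordinates, tri: Mx3 triangle vertex indices."""
    npts = pts3.shape[1]

    # Two vertices are neighbors if they appear together in some triangle.
    neighbors = [set() for _ in range(npts)]
    for (a, b, c) in tri:
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))

    for _ in range(iterations):
        smoothed = pts3.copy()
        for i, nbrs in enumerate(neighbors):
            if nbrs:
                smoothed[:, i] = pts3[:, list(nbrs)].mean(axis=1)
        pts3 = smoothed
    return pts3
```

Running more iterations smooths more aggressively, so start with one or two passes and compare
the result against the original mesh before committing to a setting.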
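For step 5, a sketch of SVD-based rigid alignment (the pseudocode in the lecture slides solves the
same least-squares problem); P and Q are assumed to be 3xK arrays of corresponding points picked
from the two scans.

```python
import numpy as np

def align_rigid(P, Q):
    """Return R (3x3) and t (3x1) such that R @ P + t approximates Q in the
    least-squares sense. P, Q: 3xK arrays of corresponding 3D points."""
    Pc, Qc = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - Pc) @ (Q - Qc).T          # 3x3 cross-covariance of centered points
    U, S, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection solution
        Vt[2, :] *= -1
        R = Vt.T @ U.T
    t = Qc - R @ Pc
    return R, t

# Apply the transform to every point of the first scan:
# pts3_aligned = R @ pts3 + t
```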
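For step 6, the standalone PoissonRecon tool reads a plain ASCII file with one point per line: the
three position coordinates followed by the three normal components (see the USAGE link above). A
minimal writer sketch, assuming pts3 and normals are matching 3xN arrays you have already
computed:

```python
import numpy as np

def write_points_normals(filename, pts3, normals):
    """Write one 'x y z nx ny nz' line per point, the whitespace-separated
    ASCII format expected by the PoissonRecon command-line tool."""
    data = np.hstack((pts3.T, normals.T))   # N x 6
    np.savetxt(filename, data, fmt='%.6f')
```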
Project Resources
Reference code snippets implementing the core functions from the first four assignments are
available here:
project_code.zip (https://canvas.eee.uci.edu/courses/55164/files/22247586?wrap=1)
Provided scans are available here:
scanning data 2020 (Google Drive):
https://drive.google.com/drive/folders/1yTPLZVa3B1Jtz18f_qL1cem8-UD6G5tq?usp=sharing
[additional scans to appear]
Meshlab
A tool for editing and cleaning meshes:
http://meshlab.sourceforge.net/
It is acceptable to use Meshlab to do the alignment of your different scans, but you should implement
mesh cleanup and smoothing in your own reconstruct script.
https://www.instructables.com/id/Using-Meshlab-to-Clean-and-Assemble-Laser-Scan-Dat/
https://www.youtube.com/channel/UC70CKZQPj_ZAJ0Osrm6TyTg
Poisson Surface Reconstruction
You can find the Poisson surface reconstruction tool here:
http://www.cs.jhu.edu/~misha/Code/PoissonRecon/Version9.01/
There is a Python wrapper for this code which allows you to run Poisson reconstruction directly
from Python, but you will need to build the package in your local environment:
https://github.com/mmolero/pypoisson
There is also a Poisson reconstruction plugin inside Meshlab which is another option.
trimesh: https://trimsh.org/index.html
You can install this library in Anaconda via "conda install -c conda-forge trimesh".
One useful feature is that it has a much better 3D visualizer than the default matplotlib visualization we
used in assignment 4: it will interactively visualize the mesh quickly inside a Python notebook.
https://trimsh.org/examples/quick_start.html
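
A minimal usage sketch, assuming pts3 (3xN), tri (Mx3), and colors (3xN) come from your
reconstruction; the vertex colors are optional.

```python
import trimesh

# pts3 (3xN), tri (Mx3), and colors (3xN) are assumed to come from your
# reconstruction pipeline; process=False keeps the vertex order unchanged.
mesh = trimesh.Trimesh(vertices=pts3.T, faces=tri,
                       vertex_colors=colors.T, process=False)
mesh.show()   # opens an interactive viewer (renders inline in a notebook)
```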