Programming assignment example - COM3503
Date: 2022-01-12
1. Figure 1 shows a quadcopter model made
entirely from spheres, where each sphere
is transformed into the required shape for
the particular part. The central body of the
quadcopter has four arms attached to it.
Within each arm, at the end of the
horizontal piece, the darker-coloured
piece contains an engine to rotate the
rotor blade assembly attached above it. A
rotor blade assembly consists of a sphere
and two rotor blades, each of which is
tilted slightly in its direction of travel.
a) The transformations required to create the relationships between the pieces are the
focus of this part of the question, rather than the actual sizes of the pieces.
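Note: as a reminder of the usual convention, the transform applied to a piece of geometry in a scene graph is the product of the transformation nodes on the path from the root down to that geometry, for example
    M = M(body) · M(arm) · M(engine) · M(rotor assembly),
so changing a transformation node affects that node's entire subtree but nothing above it. The node names here are only illustrative.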
(i) Draw a scene graph for the model. Include transformation nodes. Assume that
a method exists to draw a sphere with its origin at its centre – this will affect
the transformations used. Where branches or sub-branches are similar, use a
brief statement describing this rather than drawing the whole branch.
[35%]
(ii) State where you would place transformations in your scene graph to control a
vertical ascent (take-off). As the quadcopter ascends, the rotor blades should
rotate around their respective vertical axes. In your answer, briefly discuss any
need to link different transformation nodes in the scene graph.
[15%]
(iii) Discuss the transformation(s) necessary to control the length of one of the
arms of the quadcopter. The engine and rotor blade assembly should not
change size and should remain positioned at the end of the arm. In your
answer, state where the transformation(s) would be positioned in the scene
graph and briefly discuss how this would affect the different parts of the
hierarchy.
[15%]
b) Assume the quadcopter is modelled using a polygon mesh model. One of the
triangles in the mesh is composed of the vertices p0 = (2, 0, 0), p1 = (0, 2, 0), and
p2 = (0, 0, 2), listed in anticlockwise order. The camera is at the world coordinate
position (4, 5, 6) looking at the world origin. Using relevant vectors and
calculations, determine whether or not the triangle should be back face culled. Full
working out must be shown. Note: The cross product for vectors a = (ax, ay, az) and
b = (bx, by, bz) is given by a × b = (ay bz − az by, az bx − ax bz, ax by − ay bx).
[20%]
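Note: as a reminder, one common convention (others are acceptable if stated clearly) takes an outward-facing normal for a triangle whose vertices are listed anticlockwise as
    n = (p1 − p0) × (p2 − p0),
and, with v a vector from the triangle towards the camera position, treats the triangle as front-facing when n · v > 0 and as a candidate for back-face culling when n · v ≤ 0.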
c) Consider that one of the quadcopter’s engines is on fire and producing a lot of
smoke. A solution based on particle systems and billboards is proposed. A free-
roaming camera is being used to view the scene. Outline three possible visual
problems when rendering the smoke effect in conjunction with the quadcopter
model.
[15%]
Figure 1. A quadcopter toy. The grid
of lines is the xz plane.
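For reference, the kind of camera-facing billboard used for such smoke particles can be produced with a vertex shader along the following lines. This is only a minimal sketch, and the names used (corner, particlePos, camRight, camUp, size, viewProj) are illustrative assumptions rather than part of the question.

#version 330 core
// Minimal camera-facing (spherical) billboard vertex shader sketch.
// Each particle is drawn as a unit quad whose corners are offset along the
// camera's right and up vectors so the quad always faces the viewer.
layout (location = 0) in vec2 corner;   // quad corner in [-0.5, 0.5]
uniform vec3 particlePos;               // world-space centre of the particle
uniform vec3 camRight;                  // camera right vector (from the view matrix)
uniform vec3 camUp;                     // camera up vector (from the view matrix)
uniform float size;                     // billboard size in world units
uniform mat4 viewProj;                  // combined view and projection matrix

void main() {
  vec3 worldPos = particlePos + (camRight * corner.x + camUp * corner.y) * size;
  gl_Position = viewProj * vec4(worldPos, 1.0);
}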
2. This question makes use of Figures 2 and 3 (on the next page), which show a vertex
and a fragment shader, respectively. Consider that these shaders are used in the
rendering of a polygon mesh object. Line numbers are included in Figures 2 and 3 to
make it easy to refer to particular lines of code in this question and in your answers.
a) Describe two visual effects that result from setting the calculation in line 29 of the
fragment shader to a value of 0.
[10%]
b) State both changes required to the fragment shader code if the light source were a
directional light source, i.e. effectively positioned at infinity.
[10%]
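For reference, one common directional-light formulation is sketched below as a minimal fragment shader (ambient and diffuse terms only). The struct member name direction is an illustrative assumption, and this sketch is not the only acceptable way of expressing the changes.

#version 330 core
// Minimal directional-light sketch: the light is described by a direction
// rather than a position, so the per-fragment light direction is constant.
in vec3 aPos;        // kept for comparison with Figure 3, but unused here
in vec3 aNormal;
out vec4 fragColor;

struct Light { vec3 direction; vec3 ambient; vec3 diffuse; };
uniform Light light;

struct Material { vec3 ambient; vec3 diffuse; };
uniform Material material;

void main() {
  vec3 ambient = light.ambient * material.ambient;
  vec3 norm = normalize(aNormal);
  // No light position is involved: negate the light's travel direction
  // to get the direction from the surface towards the light.
  vec3 lightDir = normalize(-light.direction);
  float diff = max(dot(norm, lightDir), 0.0);
  vec3 diffuse = light.diffuse * diff * material.diffuse;
  fragColor = vec4(ambient + diffuse, 1.0);
}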
c) Describe the purpose of line 15 in the vertex shader.
[10%]
d) The vertex and fragment shaders implement a particular local reflection model.
Describe two omissions in the model, i.e. factors that occur in reality that the model
does not deal with.
[10%]
e) Consider a fragment with aPos=(0,0,7) and aNormal=(0,0,1). The light (fragment
shader lines 10-17) has position=(12,0,23), ambient=(0.3,0.3,0.3), diffuse=(1,1,1)
and specular=(1,1,1). The material (fragment shader lines 19-26) has
ambient=(0,0.5,0), diffuse=(0,0.8,0), specular=(0,0,0), shininess=1. Thus the
fragment has no specular component. Calculate the diffuse intensity for the
fragment. You must show your full working out in your answer.
[20%]
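Note: following fragment shader lines 31-34, the diffuse term has the general form
    diffuse = light.diffuse * max(n · l, 0) * material.diffuse,
where n is the normalised surface normal, l is the normalised direction from the fragment position to the light position, and * denotes component-wise multiplication as in the shader code.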
f) Consider the fragment in 2(e). The material’s specular value is changed to (1,1,1),
with a shininess value of 128. Where would a viewer have to be positioned to see
a specular highlight on the fragment and what direction should they be looking in?
[10%]
g) A simple hidden surface removal method that deals with complete polygons is
called the Painter’s algorithm. A list of polygons is built up in memory, storing a
single depth value for each polygon (for example, the depth of its centroid).
Polygons are then rendered into screen memory in order of depth, the furthest
polygon being dealt with first. Compare and contrast the z-buffer approach to
hidden surface removal with the Painter’s algorithm, using the following criteria:
(i) Memory required and scene complexity;
[15%]
(ii) Depth correctness of final rendering. Consider both static and animated
scenes in your answer.
[15%]
Figure 2. A vertex shader
01: #version 330 core
02:
03: layout (location = 0) in vec3 position;
04: layout (location = 1) in vec3 normal;
05:
06: out vec3 aPos;
07: out vec3 aNormal;
08:
09: uniform mat4 model;
10: uniform mat4 mvpMatrix;
11:
12: void main() {
13:    gl_Position = mvpMatrix * vec4(position, 1.0);
14:    aPos = vec3(model*vec4(position, 1.0f));
15:    aNormal = mat3(transpose(inverse(model))) * normal;
16: }
01: #version 330 core
02:
03: in vec3 aPos;
04: in vec3 aNormal;
05:
06: out vec4 fragColor;
07:
08: uniform vec3 viewPos;
09:
10: struct Light {
11:    vec3 position;
12:    vec3 ambient;
13:    vec3 diffuse;
14:    vec3 specular;
15: };
16:
17: uniform Light light;
18:
19: struct Material {
20:    vec3 ambient;
21:    vec3 diffuse;
22:    vec3 specular;
23:    float shininess;
24: };
25:
26: uniform Material material;
27:
28: void main() {
29:    vec3 ambient = light.ambient * material.ambient;
30:
31:    vec3 norm = normalize(aNormal);
32:    vec3 lightDir = normalize(light.position - aPos);
33:    float diff = max(dot(norm, lightDir), 0.0);
34:    vec3 diffuse = light.diffuse * diff * material.diffuse;
35:
36:    vec3 viewDir = normalize(viewPos - aPos);
37:    vec3 reflectDir = reflect(-lightDir, norm);
38:    float spec = pow(max(dot(viewDir, reflectDir), 0.0),
39:                     material.shininess);
40:    vec3 specular = light.specular * spec * material.specular;
41:
42:    vec3 result = ambient + diffuse + specular;
43:    fragColor = vec4(result, 1.0);
44: }
Figure 3. A fragment shader
3. a) Two common approaches used to produce shadows in polygon mesh rendering
are shadow maps (i.e. the shadow z-buffer approach) and shadow volumes. Briefly
contrast the two approaches with respect to each of the following statements,
commenting on each approach in each of your answers (a minimal shadow-map
lookup is sketched after part (iv) for reference):
(i) The approach allows objects to cast shadows on themselves (i.e. self-
shadowing);
[10%]
(ii) The approach renders the scene geometry from the viewpoint of the light;
[10%]
(iii) The approach generates extra geometric primitives;
[10%]
(iv) The resolution of the intermediate representation used in the approach can
result in problems with aliasing.
[10%]
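For reference, the shadow-map test can be sketched as the following minimal fragment shader. It assumes the vertex shader has already transformed the fragment position into the light's clip space (lightSpacePos) and that a depth texture rendered from the light's viewpoint is bound as shadowMap; both names, and the bias value, are illustrative assumptions.

#version 330 core
// Minimal shadow-map (shadow z-buffer) lookup sketch.
in vec4 lightSpacePos;          // fragment position in the light's clip space
out vec4 fragColor;
uniform sampler2D shadowMap;    // depth buffer rendered from the light

void main() {
  // Perspective divide, then map from [-1, 1] to the [0, 1] texture/depth range.
  vec3 proj = lightSpacePos.xyz / lightSpacePos.w;
  proj = proj * 0.5 + 0.5;
  float closestDepth = texture(shadowMap, proj.xy).r;  // nearest surface seen by the light
  float bias = 0.005;                                  // small offset to reduce self-shadow acne
  float shadow = (proj.z - bias > closestDepth) ? 1.0 : 0.0;
  fragColor = vec4(vec3(1.0 - shadow), 1.0);           // white where lit, black where in shadow
}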
b) An animation sequence is required for a very old alien’s face represented as a
polygon mesh model. The skin of the face is green and also has blotches of
different colours. Some of these blotches are yellow and some are a metallic,
shiny colour that reflects objects in the surrounding scene. In the animation
sequence, the face makes a series of expressions, such as smiling and surprise,
which produce time-varying wrinkles on the skin. Choose three different texturing
approaches that would satisfy the requirements of this scenario. Discuss why
these approaches are appropriate, how combinations of them would be used to
realise the full requirements of the scenario, and any issues caused by the fact
that the skin is moving.
[60%]
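For reference, the mirror-like reflection of the surrounding scene needed for the metallic blotches is often produced with cube-map environment mapping, sketched minimally below; cubeMap and viewPos are assumed uniforms, and aPos/aNormal are as in Figures 2 and 3.

#version 330 core
// Minimal cube-map environment (reflection) mapping sketch.
in vec3 aPos;
in vec3 aNormal;
out vec4 fragColor;
uniform vec3 viewPos;           // camera position in world space
uniform samplerCube cubeMap;    // cube map of the surrounding scene

void main() {
  vec3 I = normalize(aPos - viewPos);        // direction from the eye to the fragment
  vec3 R = reflect(I, normalize(aNormal));   // mirror reflection about the surface normal
  fragColor = vec4(texture(cubeMap, R).rgb, 1.0);
}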
4. a) Figure 4 shows a set of numbered rays that are produced when tracing a ray from
the eye (or camera) through a single screen pixel in a 2D world for a standard naïve
ray tracing approach.
(i) Consider that a second light source is introduced in Figure 4. How would this
change the illustrated ray tracing process?
[5%]
(ii) Consider that s2 is an opaque sphere in Figure 4. How would this change the
illustrated ray tracing process? State any relevant ray numbers from Figure 4
in your answer.
[5%]
(iii) Ray 11 intersects object p1 in Figure 4. Give two reasons why a vector normal
to the object’s surface is required at the intersection point.
[10%]
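Note: as a reminder of the sphere test used in this question, a ray p(t) = o + t d (with d a unit vector) intersects a sphere with centre c and radius r where |p(t) − c|² = r², i.e. where
    t² + 2 t d · (o − c) + |o − c|² − r² = 0;
the smallest positive root t gives the visible intersection point, and the unit surface normal there is (p(t) − c) / r.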
b) Consider a room in an art gallery that contains a number of statues. An angle-poise
lamp jumps through the gallery stopping to view each statue in turn. Unfortunately,
it bumps into a statue, knocking it over and breaking it into pieces. Compare and
contrast the following two approaches for representing this scene and increasing
ray tracing speed: (i) a combination of bounding spheres and space partitioning;
(ii) a solution that only uses bounding spheres. Sketches can be used to help
illustrate your answer.
[35%]

Figure 4: The standard naïve ray tracing process in 2D. The figure shows the eye, a light source, a slice of the screen showing 5 pixels, s1 (an opaque sphere), s2 (a transparent sphere), p1 and p2 (single polygons), rays numbered 1 to 13, and a box that defines the extent of the world and is treated as the background.
c) Consider a ray tracer applied to a three-dimensional scene containing n teapots,
where each teapot is a polygon model composed of m polygons. The scene is
illuminated with 2 point light sources and the program is rendering onto a screen
of dimensions width w and height h. The program is to render ‘geometric’ or hard-
edged shadows. You must give your reasoning in each of your answers to the
following questions.
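Note: unless an acceleration structure is stated, assume the naïve approach, in which one initial ray is traced per screen pixel and a ray that is tested against the scene is tested against every polygon of every teapot, so that a set of R such rays costs R × n × m polygon intersection tests.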
(i) In terms of n, m, w and h, how many polygon intersection tests are invoked
for the first generation (i.e. initial) rays for the set of screen pixels (not including
shadow calculations)?
[5%]
(ii) Starting at 4(c)(i), assume 40% of the initial rays hit a teapot and then spawn
second generation rays. Assuming 70% of the second generation rays hit a
teapot, what is the minimum and maximum number of polygon intersection
tests that are invoked for shadow calculations for the second generation of
rays?
[10%]
(iii) Starting at 4(c)(i), the ray tracing approach is now changed so that up to 5
rays are spawned for each pixel. In terms of n, m, w and h, what is the
maximum number of polygon intersection tests that could be invoked for the
first generation (i.e. initial) rays for the set of screen pixels including rays for
shadow calculations?
[10%]
(iv) Starting at 4(c)(i), assume half of the teapots are now enclosed in bounding
spheres. Assume 60% of first generation rays hit bounding spheres. What are
the totals of bounding sphere and polygon intersection tests that are invoked
for the first generation rays (not including shadow calculations)?
[20%]