Due
05/06/2015
Daya Kern
43135123
COSC3000 PROJECT REPORT –
COMPUTER GRAPHICS
The following report documents the design processes and choices behind basic computer graphics
visualisations. These visualisations extend the concept of TV energy efficiency established
in a previous report and aim to put a new spin on the data analysed.
TABLE OF CONTENTS
LIST OF FIGURES ................................................................................................................... ii
1 INTRODUCTION .............................................................................................................. 1
2 METHODS ......................................................................................................................... 2
2.1 3D Model Creation & Texturing ......................................................................... 2
2.2 Camera .................................................................................................................... 5
2.3 3D Transformations ................................................................................................ 5
2.4 Lighting ................................................................................................................... 6
3 RESULTS/DISCUSSION .................................................................................................. 7
4 CONCLUSION .................................................................................................................. 9
5 REFERENCES ................................................................................................................. 10
6 APPENDIX ...................................................................................................................... 11
6.1 PERSONAL CONCLUSION ................................................................................... 12
LIST OF FIGURES
Figure 1 Modelling process....................................................................................................... 3
Figure 2 In-demo objects with the same frame origin .............................................................. 3
Figure 3 Reflective surface of the black television model ........................................................ 4
Figure 4 Window from the virtual lounge room ....................................................................... 4
Figure 5 (left) the envisioned texture mapping for the plant (right) the final version of the
plant ........................................................................................................................................... 5
Figure 6 Example of odd lighting ............................................................................................. 6
Figure 7 Main screen of Demo1 ............................................................................................... 7
Figure 8 Instructions for changing TV size .............................................................................. 7
Figure 9 (left) Initial view of the demo (right) Camera angled upwards to show that the black TV is
shiny, not white ........................................................................................................................ 8
Figure 10 Program controls for navigating the environment ..................................................... 8
Figure A1 Inspiration for lounge room design........................................................................ 11
Figure C1 Experimentation on reading in data from csvs and displaying the results. ........... 14
Figure C2 Original GUI layout for Demo 1 ........................................................................... 14
Figure D1 An initial idea of a CG visualisation for a TV Constructor ..................................... 17
1 INTRODUCTION
Computer Graphics (CG) –
The science and practice of creating or manipulating images with computers1
It is through this manipulation and generation that we are able to effectively communicate
and visualise concepts that lie outside the scope of our own perception (for example, aspects of the
universe or DNA strands). CG has been integrated into a variety of environments, ranging from
scientific applications to video games, and it is the central process used here to further illustrate
and extend the data collected and visualised graphically in Project 1.
While the previous preliminary investigation looked into influential factors and how they
affected TV energy efficiency, through graphical visualisations, this report documents the
paths taken to create an interactive ‘Television Constructor’. The CG visualisation
methods used to create this demo were implemented using the Python
programming language2 and associated libraries (the Python Imaging Library (PIL)3 and the
Open Graphics Library (OpenGL)4).
The idea of a ‘Television Constructor’ originates from the concept of our throw-away
society. As noted in government guidance, greenhouse gas emissions are becoming more
problematic with the increase in more “affordable electrical appliances” (McGee, 2013).
This is evident when browsing through electronic goods stores: the amount of choice
available makes it a challenge for consumers to make the right purchasing decision for a TV
that suits their lifestyle. The ‘Television Constructor’ aims to solve this problem in a
hypothetical universe by letting the consumer see what sort of television is available based on
customisable options (namely CEC, Star Rating, Screen Size and Screen Tech). The
television is displayed in a virtual lounge room so consumers can see how it looks in a virtual
environment setting before purchasing it (if it happens to be the right TV for them).
The original design of this CG visualisation is shown in Appendix D, Figure D1.
However, due to feedback, knowledge and time constraints, a less ambitious version was
extracted and reworked into the following demos:
Demo 1 The data displayed to the consumer is dynamically updated, providing feedback
as they change the size of the TV.
Demo 2 An observable virtual lounge room, which can be explored to some degree, with
the option of modifying the size of the TV (no data is displayed as the screen size
is changed).
1 (Hobson, 2015)
2 A widely used general-purpose, high-level programming language (Wikipedia, 2015)
3 Adds image processing capabilities to your Python interpreter (Python Imaging Library (PIL), 2005)
4 The computer industry’s standard application program interface (API) for defining 2D and 3D graphic images
(Gumbel & Yasko, 2011)
2 METHODS
While there are other programs capable of creating and rendering the demo in a more
realistic and efficient manner, the following points explain why the final decision was to use
the Python language with the associated PIL and OpenGL libraries.
• The hardware the CG programs were created on already had NVIDIA OpenGL driver
support and Python installed.
• The weekly learning labs could be conducted in either Python or MATLAB, so it
seemed logical to pick one of those two options. The deciding factor was previous
programming experience: knowledge of Python was stronger than knowledge of
MATLAB.
• The OpenGL graphics API for Python and the PIL library were referenced in the
examples provided in the weekly labs, so it seemed logical to choose the APIs that
were exemplified.
To create a project fit for demonstration purposes, the applications mentioned above were
used to visualise the following key areas:
• 3D rendered graphics (including textures and models)
• Camera
• Transformations (models, camera)
• Lighting
2.1 3D Model Creation & Texturing
2.1.1 Model Creation
The inspiration for the design of the lounge room and its contents came from Figure A1 in
Appendix A. It was thought to be a design that could encapsulate many of the techniques
learnt during the weeks following the start of this project; the geometric shapes of the
environment objects, together with the variety of specularity (shininess) and textures, were key
factors. However, the implementation of this environment changed direction many times
over the course of its creation. Because of the simple design of the environment, a low-polygon
style was chosen as the main visual approach. While a complex environment has a more
appealing visual ‘look’, it can consume a lot of computational time rendering highly detailed
textures and many vertices, which was not wanted in this demo scenario.
The original course of action, after modelling the environment in Blender, was to export each
individual model as an .obj file and import it into the main Python-generated space using an
existing program5. This program was modified to simply import and load the models into the
Python file without textures; the textures would then have been added using
commands from the OpenGL library. The reason behind this choice was that, although there was
previous experience in modelling, UV-unwrapping and texturing, it was decided on a
personal level that the lessons learned through this project would be lessened if that
comfort zone was never left.
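As a rough illustration of what this abandoned import path amounts to, the following is a minimal sketch of Wavefront .obj parsing; it is not the program referenced in footnote 5, and it handles vertex positions and face index lists only.

    def load_obj(path):
        """Read vertex positions and face index lists from a Wavefront .obj file."""
        vertices, faces = [], []
        with open(path) as f:
            for line in f:
                parts = line.split()
                if not parts:
                    continue
                if parts[0] == "v":                      # vertex position: v x y z
                    vertices.append(tuple(float(p) for p in parts[1:4]))
                elif parts[0] == "f":                    # face: 1-based vertex indices
                    faces.append([int(p.split("/")[0]) - 1 for p in parts[1:]])
        return vertices, faces

The returned lists could then be fed to glVertex3f calls, with textures applied separately as described above.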
The decision to model in Blender and texture using OpenGL became a problem as the project
advanced. The existing import program had not been modified to draw the texels as each vertex of the
model was created, and lag occurred when rotating the imported object in the world.
5 Reading a wavefront .obj file with PyOpenGL: http://youtu.be/di343umywFk
As a consequence, all of the models that appear in the demo are created in the Python file
with hard-coded vertex points (glVertex3f), using GL_QUADS and GL_POLYGON to
specify the type of primitive drawn. Appendix B contains snapshots of the original models
created in Blender.
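A minimal sketch of this hard-coded approach is shown below; the function name and vertex values are illustrative assumptions rather than the demo's actual code, and an OpenGL context (e.g. a GLUT window) is assumed to be current.

    from OpenGL.GL import glBegin, glEnd, glVertex3f, glColor3f, GL_QUADS

    def draw_floor_tile():
        """Draw a single 1 x 1 floor tile on the y = 0 plane from hand-typed vertices."""
        glColor3f(0.6, 0.5, 0.4)            # flat colour; texturing is handled separately
        glBegin(GL_QUADS)
        glVertex3f(-0.5, 0.0, -0.5)         # each corner is entered by hand,
        glVertex3f( 0.5, 0.0, -0.5)         # mirroring the positions read off
        glVertex3f( 0.5, 0.0,  0.5)         # the Blender model
        glVertex3f(-0.5, 0.0,  0.5)
        glEnd()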
Even though Blender was not used to directly create the demo models, it was still used (in
the first half of production) to model the objects. Each vertex was then selected and its
position hard-coded in the main program, as shown in Figure 1.
Figure 1 Modelling process
Towards the final stages of production, models used in class were transformed and duplicated
to create different objects as a way to save time (Figure 2). The texture coordinates were
the only values that needed to be updated.
Figure 2 In-demo objects with the same frame origin
2.1.2 Model Texturing
The PIL library was used to load image textures. This process was demonstrated in class,
and the same code was adapted to enable textures in the demo (image
name and image id variables are used to identify and bind the right texture to the right
model). 2D texture mapping6 was chosen over 3D texture mapping (namely bump
mapping/displacement mapping), as time and knowledge constraints
limited the texture type to 2D.
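A minimal sketch of the PIL-plus-OpenGL texture loading pattern described above is given below; the helper name, file handling and filter choices are assumptions, not the demo's actual code.

    from PIL import Image
    from OpenGL.GL import (glGenTextures, glBindTexture, glTexImage2D, glTexParameteri,
                           GL_TEXTURE_2D, GL_RGB, GL_UNSIGNED_BYTE,
                           GL_TEXTURE_MIN_FILTER, GL_TEXTURE_MAG_FILTER, GL_LINEAR)

    def load_texture(path):
        """Read an image with PIL and return the OpenGL texture id it was uploaded to."""
        image = Image.open(path).convert("RGB")
        data = image.tobytes("raw", "RGB", 0, -1)        # flip rows to match GL's origin
        tex_id = glGenTextures(1)
        glBindTexture(GL_TEXTURE_2D, tex_id)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, image.width, image.height,
                     0, GL_RGB, GL_UNSIGNED_BYTE, data)
        return tex_id

At draw time the texture is bound with glBindTexture and each glVertex3f call is paired with a glTexCoord2f call (see footnote 6).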
The glColor3f function takes red, green and blue (RGB) values to set the current object’s
colour. This function, in conjunction with glMaterialfv (which specifies material
parameters for the lighting model), was used as the final technique for the television model. A
television usually has reflective properties, since TV screens are manufactured from glass.
Figure 3 Reflective surface of the black television model
It was later discovered that if a texture is enabled for an object, the specularity of its material is
lost, so no texture was applied to the television, which was drawn using glutSolidCube.
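The television's shiny but untextured surface can be sketched along these lines; the colour, specular and shininess values are illustrative assumptions, not the demo's actual settings.

    from OpenGL.GL import (glEnable, glColorMaterial, glColor3f, glMaterialfv, glMaterialf,
                           GL_COLOR_MATERIAL, GL_FRONT, GL_AMBIENT_AND_DIFFUSE,
                           GL_SPECULAR, GL_SHININESS)
    from OpenGL.GLUT import glutSolidCube

    def draw_tv_body():
        glEnable(GL_COLOR_MATERIAL)                       # let glColor3f drive the material
        glColorMaterial(GL_FRONT, GL_AMBIENT_AND_DIFFUSE)
        glColor3f(0.05, 0.05, 0.05)                       # near-black base colour
        glMaterialfv(GL_FRONT, GL_SPECULAR, [1.0, 1.0, 1.0, 1.0])  # bright white highlight
        glMaterialf(GL_FRONT, GL_SHININESS, 100.0)        # tight, glass-like highlight
        glutSolidCube(1.0)                                # scaled elsewhere to TV proportions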
The glColor4f function also takes RGB values, with an added alpha value, and was used
together with GL_BLEND. This combination was used for the window model and attempted
for the pot plant. The original idea for the glass was for it to refract the environment behind it.
However, due to a lack of knowledge and time, this effect was ‘faked’: a frosted-glass
texture was used so that, when the transparency level was increased, the glass looked like it had more
depth and distortion.
Figure 4 Window from the virtual lounge room
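The blending behind this effect can be sketched as follows; the tint, alpha value and the draw_window_pane callable are assumptions, not the demo's actual code. Transparent geometry is generally drawn after the opaque scene, so the blend has something behind it to mix with.

    from OpenGL.GL import (glEnable, glDisable, glColor4f, glBlendFunc,
                           GL_BLEND, GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)

    def draw_glass(draw_window_pane):
        glEnable(GL_BLEND)
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)   # standard alpha blending
        glColor4f(0.8, 0.9, 1.0, 0.35)                      # pale tint, roughly 35% opaque
        draw_window_pane()                                  # caller supplies the quad geometry
        glDisable(GL_BLEND)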
6 glEnable(GL_TEXTURE_2D)/ glDisable(GL_TEXTURE_2D) / glTexCoord2f/ glBindTexture are
methods used to successfully map a 2D texture.
Furthermore, to make use of the transparency the alpha channel provides, the idea was
to use chroma keying to make part of the pot plant’s texture transparent. However, it
was found that shaders were required to achieve this sort of transparency, so the solution
was to model the plant using GL_POLYGON to create a more organic shape.
Figure 5 (left) the envisioned texture mapping for the plant (right) the final version of the plant 7
All the 2D textures were either modified from existing images or created from scratch. The
text present in the first demo was taken and modified from code provided in a weekly lab
and rendered using glutBitmapCharacter, glColor4f and glRasterPos3f.
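Rendering such a label can be sketched as follows; the font choice and position are assumptions, and a GLUT window is assumed to be current.

    from OpenGL.GL import glColor4f, glRasterPos3f
    from OpenGL.GLUT import glutBitmapCharacter, GLUT_BITMAP_HELVETICA_18

    def draw_label(text, x, y, z):
        glColor4f(1.0, 1.0, 1.0, 1.0)           # white, fully opaque
        glRasterPos3f(x, y, z)                  # anchor the first character here
        for ch in text:
            glutBitmapCharacter(GLUT_BITMAP_HELVETICA_18, ord(ch))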
2.2 Camera
The camera was interactive in Demo 2 and statically positioned in Demo 1, because the focus
of Demo 1 was the television and the data displayed rather than exploring the environment.
The camera formed the main user interactivity of Demo 2, with the user able to look up,
down, left and right on key-press events.
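A sketch of this kind of key-driven camera is given below; the key bindings, angle step and clamping are assumptions rather than the demo's actual handlers. The yaw and pitch angles would be applied at the start of each frame, before the scene is drawn.

    from OpenGL.GL import glRotatef
    from OpenGL.GLUT import (glutSpecialFunc, glutPostRedisplay,
                             GLUT_KEY_LEFT, GLUT_KEY_RIGHT, GLUT_KEY_UP, GLUT_KEY_DOWN)

    yaw, pitch = 0.0, 0.0                       # camera angles in degrees

    def on_special_key(key, x, y):
        global yaw, pitch
        if key == GLUT_KEY_LEFT:
            yaw -= 2.0
        elif key == GLUT_KEY_RIGHT:
            yaw += 2.0
        elif key == GLUT_KEY_UP:
            pitch = max(pitch - 2.0, -30.0)     # clamp how far the user can look up
        elif key == GLUT_KEY_DOWN:
            pitch = min(pitch + 2.0, 30.0)      # and down
        glutPostRedisplay()

    def apply_camera():
        """Call after glLoadIdentity() in the display callback."""
        glRotatef(pitch, 1.0, 0.0, 0.0)         # look up/down
        glRotatef(yaw, 0.0, 1.0, 0.0)           # look left/right

    # glutSpecialFunc(on_special_key) would be registered during GLUT setup.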
2.3 3D Transformations
As mentioned in Section 2.1.1, models were transformed and duplicated to create other
elements. Every asset in both Demo 1 and Demo 2 had at least one transformation applied.
Scaling an object, then rotating and translating it, moved the asset to the desired position and
size. This provided a quick method to change the size of a modelled object, which was
modelled as close to 1 by 1 by 1 dimensions as possible and then expanded to, for example,
4 by 8 by 4.
Another form of interactivity let the user press a key and have the
entire demo world rotate around the camera. This global transformation used hierarchical
transformations (push/pop) to apply distinct transformations to certain groups, i.e. the camera
staying still while the demo environment moves. The same technique was also used to
resize the TV model (also on key press) separately from the rest of the assets.
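The push/pop pattern described above can be sketched like this; world_angle, tv_scale, the placement values and the draw_* helpers are assumed names, not the demo's actual code.

    from OpenGL.GL import glPushMatrix, glPopMatrix, glRotatef, glTranslatef, glScalef

    def draw_scene(world_angle, tv_scale, draw_room, draw_tv):
        glPushMatrix()
        glRotatef(world_angle, 0.0, 1.0, 0.0)   # whole room rotates; the camera stays put
        draw_room()
        glPushMatrix()                          # nested transform affects only the TV
        glTranslatef(0.0, 1.0, -3.0)            # place the TV against the back wall
        glScalef(tv_scale, tv_scale, 0.2)       # resized on key press; kept thin in depth
        draw_tv()
        glPopMatrix()
        glPopMatrix()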
7 Sourced from http://opengameart.org/sites/default/files/preview_meshes.jpg
2.4 Lighting
Local shading was used to illuminate the environment to add further depth and realism.
Similar to setting the material properties of the TV, the same was done for the lighting8.
White light was chosen to maximise the colours of the textures reflected back, and multiple lights
were positioned around the scene to maximise the areas illuminated when the lounge
room was interacted with (i.e. rotated). However, this aspect of the demo was not as
successful as hoped. The light hit the planes of the models at odd angles, and no
amount of repositioning the lights would change the illumination oddity (Figure 6). One
theory is that, because the majority of the elements were duplicated, this might have
affected how the light interacted with the models. Another is that the lights
were simply not placed as effectively as they could have been.
Figure 6 Example of odd lighting
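A minimal sketch of the fixed-function light setup implied by footnote 8 is shown below; the light index and all values are illustrative assumptions. Two general points worth noting: GL_POSITION is transformed by the modelview matrix current at the time of the call, so setting it before or after the camera/world transforms gives different results; and with hand-typed glVertex3f geometry, omitting per-face glNormal3f calls is a common cause of lighting hitting surfaces at odd angles, since fixed-function lighting depends on correct normals.

    from OpenGL.GL import (glEnable, glLightfv, glLightf,
                           GL_LIGHTING, GL_LIGHT0, GL_AMBIENT, GL_DIFFUSE,
                           GL_SPECULAR, GL_POSITION, GL_SPOT_CUTOFF)

    def setup_light():
        glEnable(GL_LIGHTING)
        glEnable(GL_LIGHT0)
        glLightfv(GL_LIGHT0, GL_AMBIENT,  [0.2, 0.2, 0.2, 1.0])   # soft white fill
        glLightfv(GL_LIGHT0, GL_DIFFUSE,  [1.0, 1.0, 1.0, 1.0])   # white main light
        glLightfv(GL_LIGHT0, GL_SPECULAR, [1.0, 1.0, 1.0, 1.0])
        glLightfv(GL_LIGHT0, GL_POSITION, [0.0, 3.0, 0.0, 1.0])   # w = 1 makes it positional
        glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 45.0)                 # narrow the cone of light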
Global shading to cast shadows and reflections was a desired effect that could not be
achieved without prior knowledge of shaders. These operations are also expensive, and the
intention was for the user to interact with the environment in real time.
8 Setting Diffuse, Ambient, Specular, Position and Spot Cut off values to try and cast the right sort of light.
3 RESULTS/DISCUSSION
3.1 Demo 1
The visualisation displayed to the user is a front-on view of a television model with a vertical
column of text, as highlighted in Figure 7. The user is able to increase or
decrease the size of the television using specific key presses (Figure 8), and the text on the
screen updates as the keys are pressed. These values were taken arbitrarily from the
original data set used in Project 1, and the scaling of the TV is not consistent with the details
presented; however, for the purpose of this visualisation it was considered sufficient. There
are two lights: one gives the floor a fading pool of light, and one
illuminates the top right of the television and part of the wall. Of the two demos, this one
had the more successful lighting. There was material distinction (i.e. shiny TV, matte floor)
and the lighting illuminated the scene in a more natural manner. It was curious to see,
however, that while the light was specified as white (which can be seen on the
floor), the colour reflecting off the black TV was red.
Figure 7 Main screen of Demo1
Figure 8 Instructions for changing TV size
The original version of Demo 1 was imagined differently: it relied on Python’s GUI package
Tkinter9 as an additional dependency. This implementation was
dropped in favour of a less time-consuming and more visual route. Appendix C shows
snapshots of this abandoned attempt.
9 Python's de-facto standard GUI (Graphical User Interface) package (Athanasias, 2014).
3.2 Demo 2
Demo 2 starts with the user centred in the middle of the room facing the television. This
position was chosen because the TV is the central hub of this CG visualisation. The entirety of
the demo is contained inside a modelled and textured environment. The lighting on the floor
is, in this instance, the only occurrence where the lighting looks right. The television looks white
at first, but this colouring changes as the user either sets the world rotating or rotates the
camera themselves; the white colour is just the light illuminating a shiny surface. It was not
possible within the production time to have the light illuminate only a section of the
TV when shining directly on it. However, the rotation shows what is considered to be a
successful shiny surface.
Figure 9 (left) Initial view of the demo (right) Camera angled upwards to show that the black TV is shiny, not white
Figure 10 Program controls for navigating the environment
There are three lights in this scene, positioned in different places to cast light into different
corners of the room. The aim was to illuminate the room more evenly, but this was
unsuccessful.
In this demo the user may also alter the size of the TV; however, the main focus is on
displaying the texturing of the models and the lighting of the rest of the demo world. As for
looking around the ‘room’, the user cannot move around it; rather, the camera enables
them to look up or down to a certain degree, as well as horizontally around.
4 CONCLUSION
In an ideal situation, the models and environment would have been created entirely in Blender10. It is
a faster process to model objects in a program where the vertices and faces are drawn for
you dynamically as you create the shape, as opposed to manually entering each set of vertices
in a contained 3D space. The TV would have been a scaled model corresponding to an existing
screen size in the data set, so that as the user resizes it, the data presented remains accurate.
Global shading and the use of shaders would have been the best way to improve the lighting
problem and to create a more realistic scene. There were many ideas which, had time and
knowledge permitted, would have been carried out; the aspects just mentioned would be
the direction in which this project would head for further refinement.
In summary, the chosen methods used to realise this idea of a TV Constructor
were executed in a manner fit for the current level of knowledge and the programs
available. While the final product may not have implemented the different techniques
to the desired level, the personal growth and worthwhile experience far outweigh the
limitations. This project opened up a whole new area of exploration, and further projects
may benefit from the experience gained through the process behind the demo.
10 A professional free and open-source 3D CG software product used for creating animated films, visual effects,
art, 3D printed models, interactive 3D applications and video games (Wikipedia, 2015).
5 REFERENCES
Athanasias, D. (2014). TkInter. Retrieved from Python Wiki:
https://wiki.python.org/moin/TkInter
Gumbel, M., & Yasko, G. (2011). OpenGL (Open Graphics Library). Retrieved from Tech
Definition Web site: http://whatis.techtarget.com/definition/OpenGL-Open-Graphics-
Library
Hobson, T. (2015). Computer Graphics An Introduction week one notes [PowerPoint slides].
McGee, C. (2013). Energy. Retrieved from Your Home: http://www.yourhome.gov.au/energy
Python Imaging Library (PIL). (2005). Retrieved from pythonware:
http://www.pythonware.com/products/pil/
Wikipedia. (2015). Blender (software). Retrieved from Wikipedia The Free Encyclopedia:
http://en.wikipedia.org/wiki/Blender_%28software%29
Wikipedia. (2015). Python (programming language). Retrieved from Wikipedia The Free
Encyclopedia: http://en.wikipedia.org/wiki/Python_%28programming_language%29
APPENDIX A
Figure A1 Inspiration for lounge room design
Sourced: http://exeoinc.com/basic-living-room-apartment/awesome-basic-living-room-
apartment-on-living-room-ideas/
APPENDIX B
Original models created in Blender. Not all of these assets made it through to being
reconfigured in Python with GL_QUADS and GL_POLYGON.
APPENDIX C
The following images highlight a pathway originally thought to be the best way to create
Demo 1. These ideas were dropped primarily because of time constraints.
Figure C1 Experimentation on reading in data from csvs and displaying the results.
Figure C2 Original GUI layout for Demo 1
The original layout for the TV Constructor was to have a rotatable 3D TV with a light
above it somewhere (in the white area) and to have the values (next to ‘TV Options’)
dynamically change as the user changes the size of the TV. This interaction would be performed
either by typing the size into the text field or by resizing the TV using keyboard events.
Figure C1 shows the code created to populate the empty text fields with the corresponding
data, i.e. if the TV size equals 203:
• search through the list of TV sizes to find 203;
• get the index;
• for the same index, display the details for the other options.
This method of displaying details presents more accurate data values than the ones
in the final Demo 1.
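A sketch of this lookup using Python's csv module is shown below; the file name and column names are assumptions about how the Project 1 data was laid out, not the actual code from Figure C1.

    import csv

    def lookup_tv(path, size_cm):
        """Return the first row whose screen size matches, or None if absent."""
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if int(row["ScreenSize"]) == size_cm:
                    return row       # e.g. {"ScreenSize": "203", "CEC": "...", "Star": "..."}
        return None

    # tv = lookup_tv("tv_data.csv", 203)
    # if tv is not None:
    #     print(tv["CEC"], tv["Star"])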
APPENDIX D
Figure D1 An initial idea of a CG visualisation for a TV Constructor

