CIV4100: Autonomous Vehicle Systems

Assignment 2
This summative assignment consists of two parts, Part 1 (25%) and Part 2 (15%), and is due by Friday of Week 12 of the semester.
Overview
This assignment covers the development, integration and testing of perception modules in
automated driving systems (ADS). It draws on the knowledge and experience accumulated in the
course, especially in Weeks 6, 7, 8 and 10.
Tasks: The assignment comprises two parts: (i) developing a deep learning system for the
perception of signboards; and (ii) testing the developed perception system.
Time: These tasks can be commenced at any time during the semester and must be
submitted to the Moodle DropBox by the end of Week 12.
Resources
Dataset: The dataset traffic_dataset.zip will be loaded automatically in the Jupyter code or can
also be obtained separately (see Appendix A1).
Reference: Convolutional Neural Network, and Google Colab

Submissions
You must submit the following:
■ Your complete report using the Microsoft Word template provided (Appendix A3).
■ Your complete Jupyter (Python) scripts (Appendix A2).
■ A video recording showing your code, main results and comments (Appendix A4).
CIV4100: Autonomous Vehicle Systems Assignment 2

Page 2 of 14

Important Note
This assignment is an individual assignment.

Assignment Information
There are 2 Parts in this assignment:
● Part 1: Developing a deep learning system for perception of signboards
● Part 2: Testing the developed perception system
Supplementary information is provided in the Appendix:
● A1: Information on the dataset used.
● A2: The Jupyter code template for all tasks.
● A3: The report template for all tasks.
● A4: Note on the video preparation.
● A5: Instructions to complete the assignment.






PART 1: DEVELOPING A DEEP LEARNING SYSTEM FOR PERCEPTION OF SIGNBOARDS

In this Part, you will develop a working Convolutional Neural Network (CNN)-based perception
system for an Automated Driving System (ADS).
Refer to the instructions (Appendix A5), the code template (Appendix A2) and the report template
(Appendix A3) to complete this task.

Task 1.1: Data preparation for the CNN (5% marks)
Data preparation is the first step in building a machine learning or deep learning model. In data
preparation, you collect, combine, and organize data so that it can be used effectively to
train models. In this task, you are required to perform data visualization and
image processing to obtain a new dataset as the input for the CNN system.
Task 1.2: Developing a CNN-based system (10% marks)
With a new training dataset prepared from Task 1.1, you are required to build and train a CNN-
based model underpinned by Keras and TensorFlow. The input is the dataset on traffic signs that
you have just prepared. Please follow the instructions when developing your model.
Note that if you are unable to perform Task 1.1, you can use the given processed dataset
(traffic_dataset_processed.zip) to continue the assignment and perform this task. In that case,
no marks will be given for Task 1.1.
Task 1.3: Model assessment (10% marks)
You are required to evaluate the performance of your trained model on the testing dataset. First,
you need to record the testing accuracy, and then use your trained model in Task 1.2 to generate
predictions for some images in your test dataset and visualise the results.
You also need to discuss: (i) the performance of your CNN model; and (ii) the difference between
this model (in Step 7 of Task 1.2) and your mini VGG model (in Step 8 of Task 1.2).
Task 1.4: Developing report and recording video
You need to develop the report (using the template Appendix A3) that includes the outcomes of
Task 1.1, Task 1.2 and Task 1.3.
You are required to record a video (no more than 2 minutes) running through your code to
demonstrate that it works and generates the results included in your report, and to combine
it with the video from Task 2.4 (see Appendix A4).

PART 2: TESTING THE PERCEPTION SYSTEM

After developing a deep learning system for perception of signboards, you are required to
undertake the following tasks.
Refer to the instructions (Appendix A5), the code template (Appendix A2) and the report template
(Appendix A3) to complete this task.

Task 2.1: Test planning, execution and reporting with a traceability matrix (5% marks)
A traceability matrix is a useful tool supporting good test planning as well as efficient test
management and control. You are required to create and maintain a traceability matrix (in
tabular format) that maps and traces requirements to the test assets (including test conditions,
test cases, and results) created throughout the testing process.
Task 2.2: Functional testing with known test oracle (5% marks)
You are required to undertake all activities for dynamic testing, that is, executing the system
on a given test case.
In software testing, a test oracle is a mechanism for determining whether a test has passed or
failed. With a known test oracle, you can determine whether the system is faulty by
comparing the system's actual result with the expected result. For example, if the test case is an
image containing a "Stop" signboard, the classification model should predict a "stop" label
(or the class index linked to this label). Otherwise, the model/system is faulty or
inaccurate.
In this task, you will be given 20 unique images as 20 source test cases, each with a known label
(that is, test oracle). Now you are required to create follow-up (new) test cases by applying some
transformation to these images. Their labels can be reused as the test oracle for the new test cases.
If the deep learning model/system predicts wrongly against these labels, we know that it is faulty
or inaccurate. You are required to create 3 different sets of test cases using 3 different
transformations:
● Rotating the image (by 20 degrees)
● Zooming (1.3 times) into the centre of images
● Adding noise (normal distribution with mean 0 and sigma 0.6)
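The three transformations above can be sketched in pure NumPy with nearest-neighbour sampling. This is an illustrative stand-in only; the Jupyter code template presumably supplies its own (e.g. cv2-based) versions, and the exact interpolation used there may differ:

```python
import numpy as np

def rotate(img, deg=20):
    """Rotate an (H, W[, C]) image about its centre (nearest-neighbour sampling)."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    t = np.deg2rad(deg)
    ys, xs = np.indices((h, w))
    # inverse mapping: for each output pixel, find the source pixel
    sx = np.cos(t) * (xs - cx) + np.sin(t) * (ys - cy) + cx
    sy = -np.sin(t) * (xs - cx) + np.cos(t) * (ys - cy) + cy
    sx = np.clip(np.round(sx).astype(int), 0, w - 1)
    sy = np.clip(np.round(sy).astype(int), 0, h - 1)
    return img[sy, sx]

def zoom_center(img, factor=1.3):
    """Zoom into the image centre: crop 1/factor of each side, resize back."""
    h, w = img.shape[:2]
    ch, cw = int(h / factor), int(w / factor)
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = img[top:top + ch, left:left + cw]
    # nearest-neighbour resize back to the original (h, w)
    rows = np.arange(h) * ch // h
    cols = np.arange(w) * cw // w
    return crop[rows][:, cols]

def add_noise(img, sigma=0.6, seed=0):
    """Add Gaussian noise (mean 0, given sigma) to an image scaled to [0, 1]."""
    rng = np.random.default_rng(seed)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)
```

Each function preserves the image shape, so a transformed image can be fed straight back into the classifier as a follow-up test case.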
Once done, all test cases should also be put into the traceability matrix created in Task 2.1 to map
with test conditions as well as requirements. You are required to schedule and execute all test
cases designed. All test execution results (pass and/or fail) should be analysed, summarised, and
reported. They should also be recorded in the traceability matrix created in Task 2.1.
Task 2.3: Functional testing without test oracle by Metamorphic Testing (5% marks)
You are required to adopt the Metamorphic Testing (MT) approach for testing the system. MT is
a method that alleviates the need for a test oracle.

You will be given 20 unique images as 20 source test cases (which are different from the source
ones in Task 2.2). The challenge is that you are not given the source labels. You are required to
create 20 follow-up (new) test cases by applying some transformations to these images. Since
we do not have the labels, we cannot directly verify whether the outputs of the 20 follow-up test
cases are correct. Instead, you will use 3 Metamorphic Relations (MRs), as follows:
● MR1: The ADS system should be robust against small misalignment of images.
● MR2: The ADS system should be robust against a slight cropping of images.
● MR3: The ADS system should be robust against noisy images.
Once done, all test cases should be put into the traceability matrix created in Task 2.1, as was
done in Task 2.2.
You are required to discuss the differences in methodology and outcomes between Task 2.2 and
Task 2.3.
Task 2.4: Developing report and recording video
You need to develop the report (using the template Appendix A3) that includes the outcomes of
Task 2.1, Task 2.2 and Task 2.3.
You are required to record a video (no more than 2 minutes) running through your code to
demonstrate its working and generate the results that you have included in the report and combine
it with the video in Task 1.4 (see Appendix A4).

APPENDIX

A1. Information on the dataset used
The dataset used in this assignment is a traffic dataset with labels in JSON format. It consists of
images in *.jpg format and corresponding *.json files with the same names. Each *.json file
contains the class index and the bounding-box annotation of the traffic sign, for example:

{
  "class_index": "1",
  "x_center_norm": "0.6856617647058824",
  "y_center_norm": "0.469375",
  "box_width_norm": "0.05073529411764706",
  "box_height_norm": "0.08375"
}

Note that
● class_index: An integer value taking one value from {0, 1, 2, 3}. Each value represents
a traffic sign category:
● 0: prohibitory sign
● 1: danger sign
● 2: mandatory sign
● 3: other
● x_center_norm: The normalized x coordinate of the centre of the bounding box, which is
calculated by
x_center_norm = x_center / image_width
● y_center_norm: The normalized y coordinate of the centre of the bounding box, which is
calculated by
y_center_norm = y_center / image_height
● box_width_norm: The normalized width of the bounding box, which is calculated by
box_width_norm = box_width / image_width
● box_height_norm: The normalized height of the bounding box, which is calculated by
box_height_norm = box_height / image_height
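To recover pixel coordinates, multiply each normalized field by the corresponding image dimension. A minimal stdlib sketch using the sample record above; the 1360 x 800 frame size is an assumption (it happens to be consistent with the sample values), so in practice read each image's actual shape:

```python
import json

def denormalize(label, image_width, image_height):
    """Convert the normalized box fields of one label back to pixel units."""
    return {
        "class_index": int(label["class_index"]),
        "x_center": float(label["x_center_norm"]) * image_width,
        "y_center": float(label["y_center_norm"]) * image_height,
        "box_width": float(label["box_width_norm"]) * image_width,
        "box_height": float(label["box_height_norm"]) * image_height,
    }

sample = json.loads("""{
  "class_index": "1",
  "x_center_norm": "0.6856617647058824",
  "y_center_norm": "0.469375",
  "box_width_norm": "0.05073529411764706",
  "box_height_norm": "0.08375"
}""")
# assumed frame size for illustration; use the real image shape in your code
box = denormalize(sample, image_width=1360, image_height=800)
```

With these assumed dimensions the sample record decodes to a box centred at (932.5, 375.5) that is 69 x 67 pixels.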

The dataset traffic_dataset.zip will be loaded automatically in the Jupyter code. Alternatively, it
can be downloaded directly from one of the links:
● The original dataset (for Task 1.1 + Task 1.2): https://tinyurl.com/2a2vz2bs
● The processed dataset (for Task 1.2 only if needed): https://tinyurl.com/4ntv3vtw

A2. The Jupyter code template for all tasks.
Please refer to the Jupyter code template at the link here.
1. Open the Jupyter code template link
2. Select File -> Save a copy in your Drive

3. Complete your Assignment 2 using the code template you just copied in your Drive.

A3. The report template for all tasks.
You are required to use the provided template for the completion of this assignment. Please refer
to the CIV4100-Assignment2-ReportTemplate.docx file on Moodle.


A4. Note on the video preparation
You need to combine the two videos (from Part 1 and Part 2) into a single video for submission.
The combined video (no more than 4 minutes) should run through your code to demonstrate that
it works and generates the results included in your report.


A5. Instructions to complete the assignment.

Follow the steps below to complete the assignment.
Task 1.1
Step 1: Write a Python function displayImages(), which displays the first 5 images from the
traffic_dataset folder. For each image, display its shape as well. An output of this display may look
like the following:

Step 2: Write a Python function loadLabels(), which loads the first five label files in json format.
Extract x_center_norm, y_center_norm, box_width_norm, box_height_norm (please refer to the
Appendix A1). An output of this display may look like the following:

Step 3: Write a Python function displayImagesWithBoundingBox(), which displays the first 5 images
with bounding boxes of traffic signs on images. For each bounding box, display the corresponding
traffic sign class name on top. Please refer to the Appendix A1 for more information. An output of
this display may look like the following:


Hint: From the normalized bounding box information, you need to compute the original x_center,
y_center, box_width and box_height for each bounding box, then use cv2 to add the annotation
and draw a rectangle for each bounding box.
Step 4: Write a Python function createDataSet() to extract the traffic sign image from the dataset
by keeping only the image section inside the bounding box. Resize the traffic sign images to 32 x
32 pixels. Name the traffic sign images in ascending order, starting from 0.png. Save each
image file to the corresponding subfolder inside the traffic_dataset/traffic_sign folder.
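The crop-and-resize in Step 4 can be sketched in pure NumPy with nearest-neighbour resizing. This is an illustrative stand-in; the template presumably expects cv2 (e.g. cv2.resize) for the real implementation:

```python
import numpy as np

def crop_sign(img, x_c, y_c, bw, bh, size=32):
    """Keep only the bounding-box region of img (H, W, C), given pixel-unit
    centre and box size, and resize it to size x size (nearest neighbour)."""
    h, w = img.shape[:2]
    top = max(int(round(y_c - bh / 2)), 0)
    left = max(int(round(x_c - bw / 2)), 0)
    bottom = min(int(round(y_c + bh / 2)), h)
    right = min(int(round(x_c + bw / 2)), w)
    crop = img[top:bottom, left:right]
    # nearest-neighbour resize to (size, size)
    rows = np.arange(size) * crop.shape[0] // size
    cols = np.arange(size) * crop.shape[1] // size
    return crop[rows][:, cols]
```

The clamping against the image borders guards against boxes that touch the edge of the frame.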
Note that: If you are unable to complete Task 1.1, you can continue with Task 1.2 using the
processed dataset. The details are given in the Jupyter code template.

Task 1.2
Step 1: Import necessary packages and modules (Tensorflow, Keras, Matplotlib, NumPy). Set a
fixed NumPy and TF random seed with your student ID.
Step 2: Use cv2 to load all traffic sign images one by one and append them to a
training_data list. For each image, append its corresponding label index to a training_labels list.
Step 3: Convert both the training_data list and the training_labels list to NumPy arrays, namely
X_train_full and y_train_full. Reshape the training set to ensure X_train_full has shape
[total_images, image_height, image_width, color_channel] and y_train_full has shape [total_images, ].
Step 4: Perform the image normalization to make sure all pixels are within the range from 0 to 1.

Step 5: Split the training data and training labels into 80% for training, 10% for validation and 10%
for testing.
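Steps 4 and 5 can be sketched together in NumPy: scale pixels to [0, 1], then split via a shuffled index. The template may instead use sklearn's train_test_split (an assumption); the seed below merely stands in for a student ID:

```python
import numpy as np

def normalize_and_split(X, y, seed=12345678):
    """Scale pixels to [0, 1] and split 80/10/10 into train/val/test.
    The seed is a placeholder for a student ID."""
    X = X.astype("float32") / 255.0
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_tr = int(0.8 * len(X))
    n_va = int(0.1 * len(X))
    tr, va, te = idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]
    return (X[tr], y[tr]), (X[va], y[va]), (X[te], y[te])
```

Shuffling before splitting matters here because the images were saved class by class in Task 1.1, so a straight slice would give unbalanced splits.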
Step 6: Use the provided visualise_data() Python function to visualise the first 20 images in the
training set, validation set and testing set. The expected output may look like the following:

Step 7: Run the provided code to get the result for a very simple CNN model
Step 8: Build a mini VGG model and train the model using the training set. Validate the model
performance with validation dataset.
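A "mini VGG" is a small stack of VGG-style blocks: repeated 3x3 convolutions followed by max pooling, then a dense classifier. The Keras sketch below is illustrative only; the layer counts and widths are assumptions, not the architecture required for marking:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def build_mini_vgg(input_shape=(32, 32, 3), n_classes=4):
    """A small VGG-style CNN: two conv blocks, then a dense classifier.
    Layer sizes are illustrative assumptions."""
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        # block 1: two stacked 3x3 convs, then downsample
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        # block 2
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_mini_vgg()
```

Training with `history = model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=30)` then yields `history.history["accuracy"]` and `history.history["val_accuracy"]` for the Step 9 plot.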
Step 9: Plot the training accuracy and validation accuracy versus the number of epochs in 1 plot.
The expected output may look like the following:



Task 1.3
Step 1: Evaluate the performance of your mini VGG model on the testing dataset using the
evaluate() function of keras.
Step 2: Generate predictions for the first 20 images in your testing dataset and visualise the results
using the visualise_data() function.
Step 3: Discuss the fitting of the CNN model, that is, whether it is overfitted, underfitted or
appropriately fitted, and why. The discussion should be entered in Section 1.3 of the provided template.
Step 4: Compare the CNN model (in Step 7 of Task 1.2) and your mini VGG model (in Step 8 of
Task 1.2), e.g., accuracy, training time, and quality of the input training dataset.

Task 2.1
Step 1. Prepare the traceability matrix table (as given in the task template).
Step 2. In the matrix, you need to fill in or create the new information for the testing that you are
performing.
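As a minimal illustration, a traceability matrix can be maintained as a CSV table. The column names and row values below are illustrative assumptions; the actual columns are defined in the task template:

```python
import csv
import io

# illustrative columns; use the ones defined in the task template
FIELDS = ["Requirement", "Test condition", "Test case ID", "Transformation", "Result"]

# one hypothetical row per executed test case
rows = [
    {"Requirement": "Classify prohibitory signs",
     "Test condition": "Rotated input",
     "Test case ID": "TC-R-01",
     "Transformation": "Rotate 20 deg",
     "Result": "Pass"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)
matrix_csv = buf.getvalue()
```

Keeping the matrix as data (rather than a hand-edited table) makes it easy to append a row automatically each time a test case is executed in Task 2.2 and Task 2.3.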

Task 2.2
Step 1. You are required to create 3 different sets of test cases using 3 different transformations.
Each set contains 20 new test cases.
(a) Rotating the image (by 20 degrees)
(b) Zooming (1.3 times) into the centre of images
(c) Adding noise (normal distribution with mean 0 and sigma 0.6)
For each set, the tasks are
+ Create a function for transforming the test case
+ Execute and evaluate the test cases
Step 2. You are required to execute the system under test with a total of 60 test cases.
The system under test is the final system (after tuning). Each test case consists of an image as
the input and a label as the output, so you will have 60 output labels. You can put these
outputs into the traceability matrix.
The sample code is given in the Jupyter template. Sample processed images are given as follows.


Source (original) images and their correct labels (oracle).

Follow-up (new) images and their labels. Texts in red colour highlight the mistaken labels after
zooming into the image.
Step 3. You are required to prepare the violation summary table. Enter "Failed" in the table where
a test case fails; otherwise, put "Passed". Describe the insights from your findings using the template provided.

Task 2.3
You are required to test the system without knowing the original (correct) labels. Instead, you
compare the labels predicted for the follow-up test cases against those predicted for the source
test cases to determine whether the program is faulty or inaccurate.

Step 1. You are required to create 3 different sets of test cases using 3 different transformations.
In total, you will have 20 pairs of test cases for each MR:
(a) MR1: 20 metamorphic groups (i.e., pairs) of the source (original) and follow-up (rotated) images.
(b) MR2: 20 metamorphic groups (i.e., pairs) of the source (original) and follow-up (cropped)
images.
(c) MR3: 20 metamorphic groups (i.e., pairs) of the source (original) and follow-up (noisy) images.
Step 2. You are required to execute the system under test with a total of 80 test cases (20
source + 60 follow-ups). For each group (pair), compare the label predicted for the source image
against the label predicted for the follow-up image. If they differ, the MR is violated, and hence
the program is faulty or inaccurate.
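Checking an MR then reduces to comparing the two prediction lists position by position. A minimal sketch, assuming the prediction lists hold class indices (e.g. from model.predict(...).argmax(axis=1)):

```python
def mr_violations(source_preds, followup_preds):
    """Return the indices of metamorphic groups whose follow-up prediction
    differs from the source prediction (i.e. where the MR is violated)."""
    return [i for i, (s, f) in enumerate(zip(source_preds, followup_preds))
            if s != f]

# hypothetical example: source predictions vs predictions on the rotated follow-ups
violations = mr_violations([0, 1, 2, 3], [0, 2, 2, 3])  # → [1]
```

The returned indices identify exactly which rows of the traceability matrix should be marked "Failed" for that MR.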
The sample code is also given in the Jupyter code template.
Step 3. You are required to prepare the violation summary table. Enter "Failed" in the table where
an MR is violated; otherwise, put "Passed". Describe the insights from your findings using the template provided.

Once completed, remember to submit all required documents for assessment.

End of Assignment 2


UPLOAD YOUR ASSIGNMENT VIA THE UNIT MOODLE WEBSITE

Students MUST upload their Assignment(s) via the Unit Moodle website (not via email to the
lecturer). Please visit the following Moodle website for CIV4100:
http://moodle.vle.monash.edu/my/

and locate the section “Assignment Submissions via Moodle”.

Click the “Add submission” button.

Upload your Assignment consisting of
● one report file named CIV4100_Assignment2_Report_YourSURNAME.docx/pdf
● one video file named CIV4100_Assignment2_Video_YourSURNAME.mp4
● one zip file containing your final (working) Jupyter code
CIV4100_Assignment2_Code_YourSURNAME.zip

(Submissions under different file name formats will NOT be accepted or marked!)

Click "Save changes".
Click the "Submit Assignment" button.
The Plagiarism/Collusion Student Statement will then appear.
If you agree to the Student Statement, tick the box (shown in red) towards the lower part of the page.

Click the “Continue” button.

Your Assignment will then be submitted, and you should receive a confirmation email (that
the Assignment has been submitted).
