UMKC SSE
CS5567 Deep Learning Project 3 -- FS 24


IMPORTANT: (please read and follow all steps!)
Instructions:
1. Take this step-by-step.
2. Don’t get overwhelmed.
3. Remember that we’re here to support you, so please ASK for assistance!

❖ FILES:
➢ Download “DL Project 3.zip” – it contains (.ipynb) files.
➢ Where appropriate, load the scripts given for each experiment in your Kaggle (preferred) / Colab
or local Python workspace. If you need help setting up a local workspace, we’ll be happy to assist!
➢ Most of the data files you’ll need will be loaded as part of the scripts, but some will require a bit
of integration (many examples and links are provided).

❖ Loading the dataset into the notebook:
➢ Copy the URL of the dataset provided
➢ Open the notebook; on the right panel → Add Input → paste the URL → click the ‘+’ button at
the bottom → done, it’s there!
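A quick way to confirm the dataset actually attached is the minimal sketch below. It assumes a Kaggle notebook, where attached inputs are mounted read-only under /kaggle/input; adjust the path if you work in Colab or locally.

# Sanity check (Kaggle): list the attached datasets and a few of their files.
import os

for root, dirs, files in os.walk("/kaggle/input"):
    print(root, files[:5])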

❖ Instructions:
➢ There are 2 Parts, A & B.
➢ The Experiments table (below) includes guidelines/instructions.

❖ Results:
➢ Save a record of the values you achieved along with EACH experimental configuration you tested.
➢ Both the TRAIN and TEST metrics are important to the assessment of each task. There are a variety
of ways these data may be displayed and reported, but always remember that a VISUAL
representation {plot} will be superior to a simple numerical representation {table}.
➢ You will provide a 1-2 page formal REPORT on your findings from the experiments in PARTS
A and B, using appropriate terminology and references* (where applicable).
*Reminder: All generated material must provide reference to the source model.

❖ Project 3 Report: Generate an explanation of your efforts and findings, as well as your
responses to the questions under “Discussion” at the end of the document.
This will be in the form of 2 documents:
➢ ONE report document (.docx or .pdf)
▪ Notes, observations, results, and responses to the Discussion questions – your
observations when implementing the requested processes.
▪ Accessible links to all 5 notebooks and an accessible link to the Panopto video. Please
re-check the accessibility of the links before submitting so your efforts can be graded.
▪ The video highlights all your RESULTS, efforts, and understanding of all the topics. Do not
exceed 7 minutes (5-point penalty for every 10 seconds under 6:30 or over 7:30).
➢ ONE report file (.xlsx)
▪ Save the output of your training & testing processes as part of this file.
Experiments



Part A – Object Detection and Segmentation
Experiment 1: Localization [ROI]
Data Set: Dataset used in the notebook - Dataset (similar format for new datasets)
For new datasets, use https://public.roboflow.com/object-detection
Look for the file format → "YOLOv8"
Modifications: Modify the hyperparameters
➢ Epochs
➢ Batch size
➢ Optimizer
➢ Learning rate
➢ Regularization techniques
Process: Utilize the supplied (Object_detection_using_yolov8.ipynb)
❖ Modify the existing code from the notebook and replicate the process and
results on the dataset. The target validation mAP50-95 value is 0.75 (a minimal
training sketch follows this table).
Reporting: Add the results of your training &/or testing (output) to your .xlsx file
➢ Save the images and video generated from your experiments
➢ Save the metrics provided for final performance (IoU, etc.)
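For orientation, here is a minimal training sketch, not the notebook’s exact code. It assumes the ultralytics package (the library behind YOLOv8) and a dataset in YOLOv8 format described by a data.yaml file; the dataset path and the hyperparameter values are placeholders you are expected to vary.

# A minimal YOLOv8 training sketch; path and hyperparameter values are examples only.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # start from a pretrained checkpoint

# epochs, batch, optimizer, lr0, and weight_decay are the knobs this
# experiment asks you to vary.
results = model.train(
    data="/kaggle/input/your-dataset/data.yaml",  # placeholder path
    epochs=50,
    batch=16,
    optimizer="AdamW",
    lr0=1e-3,
    weight_decay=5e-4,
)

metrics = model.val()    # validation metrics for the trained weights
print(metrics.box.map)   # mAP50-95 -- compare against the 0.75 target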

Experiment 2: UNet Segmentation
Data Set: ❖ Dataset used in the notebook - Dataset
Process: Utilize the supplied (UNet Segmentation.ipynb)
❖ Modify the existing code from the notebook and replicate the process and
results on the dataset. Target Test IoU: 0.67 and Dice: 0.75 (a sketch of these two
metrics follows this table).
Modifications: Modify the hyperparameters
➢ Batch size
➢ Optimizer
➢ Learning rate
➢ Regularization techniques
Reporting: Add the results of your training &/or testing (output) to your .xlsx file
➢ Save the images generated from your experiments
➢ Save the metrics provided for final performance (IoU, Dice_coef, etc.)
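For the IoU and Dice values you report, the sketch below shows one common way to compute them, assuming binary masks stored as NumPy arrays of 0s and 1s; the notebook may compute them slightly differently (e.g. on soft predictions with a smoothing term).

import numpy as np

def iou(y_true, y_pred, eps=1e-7):
    # intersection over union of two binary masks
    inter = np.logical_and(y_true, y_pred).sum()
    union = np.logical_or(y_true, y_pred).sum()
    return (inter + eps) / (union + eps)

def dice_coef(y_true, y_pred, eps=1e-7):
    # Dice = 2*|A ∩ B| / (|A| + |B|), closely related to IoU
    inter = np.logical_and(y_true, y_pred).sum()
    return (2 * inter + eps) / (y_true.sum() + y_pred.sum() + eps)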

Experiment 3: SegFormer Segmentation
Data Set: Included
❖ Feel like testing SegFormer on a custom dataset? Feel free to import any
image dataset and generate results.
Process: Utilize the (Fine_tune_SegFormer.ipynb)
Follow the steps indicated in the script to load and refine the model (a minimal
loading sketch follows this table).
Modifications: Select 3 new sample images from the dataset (at random) and assess the
performance of the task-optimized network on those images.
Reporting: Add the results of your training &/or testing (output) to your .xlsx file
➢ Save the images generated from your experiments
➢ Save the metrics provided for final performance (IoU, etc.)
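As rough orientation for what the notebook does, the sketch below shows how a SegFormer model is typically loaded for fine-tuning with the Hugging Face transformers library; the nvidia/mit-b0 checkpoint and the two-class label map are illustrative assumptions -- follow the notebook’s own choices.

from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

# Illustrative label map -- replace with the classes used in the notebook's dataset.
id2label = {0: "background", 1: "object"}
label2id = {name: idx for idx, name in id2label.items()}

processor = SegformerImageProcessor()
model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b0",              # assumed lightweight SegFormer backbone
    num_labels=len(id2label),
    id2label=id2label,
    label2id=label2id,
)

# inputs = processor(images=pil_image, return_tensors="pt")
# logits = model(**inputs).logits   # shape: (batch, num_labels, H/4, W/4)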



Part B: Transfer Learning

Experiments 1 & 2: Transfer Learning
Data Set: CIFAR-100 – colab import example
Pre-Trained Models:
❖ Included (vgg19_transfer_learning.ipynb) Reference → VGG_19
❖ Included (resnet50_transfer_learning.ipynb) Reference → RESNET
Parameters: Utilize the Keras API TRANSFER LEARNING documentation:
https://keras.io/guides/transfer_learning/
➢ Look first at the section under the heading →
“The typical transfer-learning workflow”
➢ Load the model and weights for each pre-trained model above.
➢ Add the classification portion for the dataset referenced above.
➢ Train your classification layers using your preferred topology and
optimization functions.
Modifications: You should “freeze” the model after importing, e.g. →
base_model.trainable = False
Get familiar with Resizing → Resizing (important for model training)

Make whatever hyperparameter changes you need to arrive at a reasonable level
of performance [10-15% below industry standard] in classification for the listed Data
Set. Make sure the model performance (testing accuracy) does not fall below 60% for
vgg19 and 65% for resnet50, respectively (a minimal workflow sketch follows this table).

❖ Utilize knowledge gained thus far to determine which topology and
hyperparameters are likely to provide reliable performance for this task and
don’t forget about regularization methods.
Process: For each pre-trained model → “Train the top layer” (search term in reference)
[add the DENSE elements and build the model appropriate for each Data Set]
(2 models × 1 dataset) = 2 total outcomes/models
Reporting: Add the results of your training &/or testing (output) to your .xlsx file
➢ Track each of your model’s performance during training and testing.
➢ Plot Training and Testing performance (accuracy)
➢ Plot Metrics (ROC, mAP, etc.)
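To make the workflow concrete, here is a minimal sketch of the “typical transfer-learning workflow” from the Keras guide, applied to VGG19 on CIFAR-100. The top-layer topology, optimizer, and hyperparameters are illustrative choices, not required ones; swapping in keras.applications.ResNet50 (with keras.applications.resnet50.preprocess_input) gives the second model.

import tensorflow as tf
from tensorflow import keras

# CIFAR-100: 32x32 RGB images, 100 classes
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar100.load_data()

# Pre-trained feature extractor without its ImageNet classification head
base_model = keras.applications.VGG19(weights="imagenet", include_top=False)
base_model.trainable = False  # freeze the imported model, as instructed

inputs = keras.Input(shape=(32, 32, 3))
x = keras.layers.Resizing(224, 224)(inputs)        # resize CIFAR images to the VGG input size
x = keras.applications.vgg19.preprocess_input(x)
x = base_model(x, training=False)                  # run the frozen base in inference mode
x = keras.layers.GlobalAveragePooling2D()(x)
x = keras.layers.Dropout(0.3)(x)                   # example regularization
outputs = keras.layers.Dense(100, activation="softmax")(x)
model = keras.Model(inputs, outputs)

model.compile(optimizer=keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# history.history holds the accuracy curves to plot for the report
history = model.fit(x_train, y_train, epochs=10, batch_size=64,
                    validation_data=(x_test, y_test))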

Discussion:
1. Compare the results of your experiments for Object Detection.
a. Display and discuss the results of object detection on images and video using YOLOv8.
b. Which setting (images or videos) provides the best object localization and classification
performance?
c. What challenges did you encounter in detecting partially visible objects, and how did the
model perform in these cases?
d. In which scenarios did you observe false positives or false negatives, and what might be
the reason behind these misclassifications?



2. Compare the results of your experiments for Segmentation.
a. Display and discuss the results of the image segmentation using the UNet model.
b. Display and discuss the results of the image segmentation using SegFormer.
c. Which model do you feel provides the better structural understanding of geometric
structures?
d. Did you observe any issues with overlapping objects?

3. Compare the results of your experiments for Transfer Learning.
a. Display and discuss the selected classification layer training performance.
b. Display and discuss the results of the test performance for each experiment.
c. Which model do you think provided the better set of features for training between the
VGG and RESNET? How did you come to this conclusion?

4. Discuss in a few sentences your observations with each of the experiments. How do the things
you observed relate to the things we’ve covered in this course? Do not simply include the
definition of these topics – instead speak about how you feel your efforts relate to the outcomes.


