ENG6 Final Project
Up to now in ENG6, we have focused on teaching you the “how” of programming. In the team
project, you will be taking your own ideas and bringing them to fruition through your knowledge
of computer programming. Software development is rarely a one-person effort, so the project
will be team-based. Beyond the core part of the final project, we ask you to implement several
special features, called Reach elements, to make your project different from your
classmates'.
Important: We discourage using code written by someone else; however, if you do so, you
must first obtain permission from the teaching staff, and you must reference the code creator.
Project Topic 1: Image Classifier
Project Description: Deep learning has become a recent hot topic in the world of computing.
Various disciplines have leveraged this tool to open up a plethora of new research and
applications. Deep learning is a subfield of machine learning, which often uses labeled data to
teach or “train” a program to make predictions. As a simple example, a set of images labeled as
either a cat or a dog can be used to train a model. After training, the model can be shown a
picture it has never seen before and predict whether the picture depicts a cat or a dog. In
particular, what separates deep learning from other forms of machine learning is its use of
artificial neural networks (ANNs). An ANN is a collection of connected nodes called artificial
neurons, which operate similarly to biological neurons. These neural networks are designed to
mimic the behavior of the human brain, giving them the ability to "learn" from numerous sources
of information. Within the interior of the network, many hidden layers serve to create
more accurate models.
Figure 1. Deep Learning neural networks use multiple layers of artificial neurons.
Image recognition and classification are mature fields that have successfully leveraged deep
learning. There are many potential applications. A self-driving car can recognize obstacles and
traffic signs. A farmer can use a robot to visually survey and detect diseases in crops. A security
system can verify identity with facial recognition. Within the specific field of image classification,
convolutional neural networks (CNNs) have made great strides in increasing the accuracy of
predictions. CNNs learn abstract features and concepts from raw image pixels. An image is fed
into the network in its raw form (pixels). The image goes through many convolutional layers, and
within each layer, the network learns new and increasingly complex features.
Figure 2. Each higher-level layer of a CNN can recognize higher-level features.
Transfer Learning: One drawback of CNNs is that they can take a long time to train, especially
when the network consists of many layers. To avoid long training times, transfer learning is a
practical way to use deep learning by modifying an existing deep network (usually trained by an
expert) to work with a new set of data. Transfer learning, in the context of image classification,
operates on the notion that if a model is trained on a set of images that is large and
representative, this model can have a generic, visual understanding of all objects in the world.
One can exploit these existing, learned feature maps without having to spend the time training a
completely new model from a new large dataset. To adapt the model to a new set of data, one
can choose to retrain only the last (top) layers of the existing model. Doing so
saves time while maintaining the accuracy of the existing model.
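As a rough illustration (not the exact provided example), transfer learning can be sketched in a few lines of Matlab. In this sketch the 'Foods' folder name, the category count, and the use of default training parameters are assumptions; the layer indices match the ones replaced in the provided AlexNet example.

    % Minimal transfer-learning sketch (assumes the Deep Learning Toolbox and the
    % AlexNet support package are installed; 'Foods' is a hypothetical folder of
    % labeled images organized into one subfolder per category).
    net = alexnet;                                    % pretrained 25-layer CNN
    layers = net.Layers;
    numCategories = 5;
    layers(23) = fullyConnectedLayer(numCategories);  % replace the last learnable layer
    layers(25) = classificationLayer;                 % and the classification output layer
    imds = imageDatastore('Foods', 'IncludeSubfolders', true, 'LabelSource', 'foldernames');
    imds.ReadFcn = @(f) imresize(imread(f), [227 227]);   % AlexNet expects 227x227 input
    opts = trainingOptions('sgdm');                   % default training parameters
    newNet = trainNetwork(imds, layers, opts);        % retrain on the new categories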
Augmentation: Another difficult task is collecting enough images to use for training. For certain
applications, it can be impractical for users to collect the volume of labeled images needed to
generate a highly accurate classifier. One way to solve this problem is to augment the dataset
with images you currently have. Each image can be altered in some way to become a new
datum. For example, an image rotated 90 degrees is now a different image that can be added to
the training set. More training data can be added without collecting more images. In addition,
training a model with such augmented images can make it "smarter," giving it the ability to
recognize an object regardless of its orientation in different images.
Project Description: In this project, you will use App Designer to implement an interactive
program that trains a neural network and classifies images based on categories selected by the
user. You will implement options that the user can select to potentially increase the accuracy of
their models. These include image preprocessing, augmentation, and various parameters that
are passed to the training function. The goal is to implement not just an image classifier but a
tool that allows the user to experiment with different approaches for increasing the accuracy of a
trained network. Your UI must be built with App Designer and not other UI tools.
In the next part, you will use your tool to solve an image classification problem of your choosing.
This is an opportunity to apply your knowledge to a field you are interested in. Finally, your
classification tool must support remote classification. That is, you should be able to run your
program with a fully trained network in “server” mode. Then, using a device in a remote location,
you can communicate the image to your program, which will then send back its prediction of the
category for that image. In this part of the project, you will be using ThingSpeak and the
Dropbox API.
Transfer Learning Provided Code: To help you get started, we provide you with a transfer
learning example. This example can be found in Canvas->Final Project->Spring 2022->Image
Classifier Project->TransferLearningExample.zip. If you are undecided about whether or not to
do this project, we encourage you to download and run this code to gain a quick understanding
of the general project goals. While you may be new to deep learning and it might sound
complicated, you can see from the provided example that transfer learning can be implemented
in just a few lines of Matlab code (excluding the code used to read the images) using the Deep
Learning Toolbox. In fact, most of the programming you will do in this project will instead be
developing the UI, manipulating images, and implementing remote functionality. The example
code was developed from code written by Mathworks employee Joe Hicklin. In the example, we
use the famous trained CNN, AlexNet, for transfer learning. AlexNet is a CNN with 25 layers
and has been trained to recognize 1000 categories. We will “re-train” some parts of AlexNet to
recognize 5 categories of our choosing, namely different junk foods: hamburgers, hot dogs, ice
cream, apple pie, cupcakes. AlexNet expects images to be 227x227 pixels, and your program,
like the example, will need to resize images before they can be used by the network. For each
category, we provide a set of 30 images. Typically, for good accuracy, each category should
have at least 1000 images. We do not go to this extreme in this example, and we do not expect
you to for this project either. The example shows how AlexNet can be adjusted to recognize
new categories. The example, using the default training parameters, takes around 5 minutes to
finish training on one of the lab machines.
To do this project, you will need Matlab’s Deep Learning Toolbox. To check if the toolbox is
installed, go to the Home tab in Matlab->Add-Ons->Manage Add-Ons. If the toolbox is not
installed, click Get Add-Ons in the top right, and enter “Deep Learning” in the search bar. The
toolbox should appear with a link to download and install. Another toolbox you will need is the
Image Processing Toolbox. This toolbox is necessary to run the example code, and you will
likely find it useful for your project as well, since it contains many functions for performing image
operations. Like the previous step, you can obtain the Image Processing Toolbox by searching
Matlab Add-Ons. Finally, you will need to download and install the pretrained AlexNet model,
which you can obtain from here: https://www.mathworks.com/matlabcentral/fileexchange/59133-deep-learning-toolbox-model-for-alexnet-network
Grading Criteria: The project requires certain criteria to be met but is open-ended in other
aspects. When it comes to designing a GUI, there is certainly some level of subjectivity in terms
of ease of use. However, some solutions are more acceptable and attractive than
others. The final project will be graded in multiple parts:
Project proposal:
Each team submits a 1-3 page project proposal via Canvas describing the project they have
selected and giving a general description of how they will implement the main components of
their project. It should also state the chosen problem domain for classification.
Essentially, the scope of the project should be both challenging enough to merit full credit
and doable within the timeline. An Appendix should contain a breakdown of programming
tasks, and who will be responsible for what, along with a timeline that will meet the
submission deadline (we suggest you make use of a Gantt chart). The expectation is that
each team member must take responsibility for a specific aspect of the project and grading
for each member will be adjusted according to how the project tasks were delegated and
who was responsible for what aspects of the project. The more specific you can be in
defining the programming tasks, what functions should exist, and what each function should
accomplish, the better.
Project Core Component: 40% of grade.
Complete the basic project as outlined in the project description. This is the basic
functionality of your program and will be graded earlier as a milestone.
Train a new Neural Network with user-defined options: This is part of the main core
of your program. Your GUI should provide options to a user to create new categories.
These will be used to define a new image classifier that is created by retraining parts of
AlexNet. The training is done via transfer learning, which is demonstrated in the example
code. The user should be allowed to specify the directories containing the category
names and the labeled images. They should also be allowed to specify the subset of
images that will be used as test images to later measure accuracy, as is done in the
provided example. A user will set certain options before hitting a “train network” button
that retrains AlexNet into a new CNN that recognizes their specified categories. For
training options (the values passed to Matlab's trainingOptions function), the user can choose
values for at least these three parameters (a brief sketch follows this list):
● initial learning rate - the step size used during function minimization. Setting this
larger can speed up training but make the final result less accurate.
● maximum number of epochs - You can think of a for-loop over the number of
epochs where each loop proceeds over the training dataset. Within this for-loop
is another nested for-loop that iterates over each batch of samples, where one
batch has the specified “batch size” number of samples. A higher value usually
leads to a more accurate model.
● minibatch size - the number of samples per batch to use to make predictions
during training. Typically, it is a multiple of 32.
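As referenced above, a minimal sketch of how these three parameters might map onto a trainingOptions call; the specific values are only placeholders a user might choose.

    % Illustrative mapping of the three user-selectable parameters (values are placeholders).
    opts = trainingOptions('sgdm', ...
        'InitialLearnRate', 0.0001, ...   % step size used during function minimization
        'MaxEpochs', 20, ...              % passes over the full training set
        'MiniBatchSize', 64);             % samples per batch, typically a multiple of 32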
Augmentation: The user has the option to augment his/her training set by processing
the training images in some way. Some of these options should have a range of possible
values. Augmentation should only be applied to images used for training and not
testing. Furthermore, each augmentation should only be done on the original images,
not on images generated by a different augmentation. You are required to implement the
following three:
● Rotation - duplicate each image multiple times in the dataset, but rotate each
copy by a certain amount. Example: rotating each image in the training set by 90,
180 and 270 degrees would quadruple the size of the training set.
● Scaling - duplicate each image multiple times in the dataset, but each copy is
zoomed in by a certain amount
● Flipping - flip each image horizontally, vertically, or both
You will find that Matlab provides various functions to perform these image manipulations; a brief sketch follows.
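For instance, a hedged sketch of the three required augmentations applied to one training image, assuming the Image Processing Toolbox; the file name, angles, and zoom factor are placeholders, and the crop logic is only one simple way to "zoom in."

    % Sketch of the three required augmentations applied to one original training image.
    img = imread('burger01.jpg');                 % hypothetical training image
    rotated  = imrotate(img, 90);                 % rotation copy (repeat for 180, 270, ...)
    zoomed   = imresize(img, 1.5);                % enlarge, then crop back to the original size
    [h, w, ~] = size(img);
    r0 = floor((size(zoomed, 1) - h) / 2) + 1;
    c0 = floor((size(zoomed, 2) - w) / 2) + 1;
    zoomCopy = zoomed(r0:r0+h-1, c0:c0+w-1, :);   % centered "zoom in" copy
    flippedH = flip(img, 2);                      % horizontal flip copy
    flippedV = flip(img, 1);                      % vertical flip copy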
Run an accuracy test on a set of labeled test images: Once the network has been
trained, it must be measured for accuracy. In the “options” step, the user could specify
which labeled images in the directory could be used for testing. In this tab, the user can
now run the trained model on these images. The user should also have the option of
running an accuracy test automatically as soon as training is finished. Since the data is
labeled, you can determine the images that the model predicted correctly and
incorrectly. Just like in the provided example, the accuracy is the number of correct
predictions divided by the total number of test images. The results should be displayed
in this tab. The test images and their correct/incorrect prediction do not all need to be
displayed at once. For example, the UI can show one result at a time using a “next” and
“previous” button.
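A minimal sketch of the accuracy calculation, assuming hypothetical names newNet for the retrained network and testImds for an imageDatastore of the held-out test images:

    % Sketch: measure accuracy on the labeled test set.
    predicted = classify(newNet, testImds);       % predicted category for each test image
    actual    = testImds.Labels;                  % true labels taken from the folder names
    accuracy  = sum(predicted == actual) / numel(actual);
    correct   = (predicted == actual);            % per-image correct/incorrect flags for the UI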
Classify a single, unlabeled image using the trained network: Once the user has
pressed the “train network” button, and the network has finished training, the user can
now select a yet unseen, unlabeled image from disk to classify. In this case, there is one
image, and the program should load the image and display it on the UI. This can be
done in a separate tab. From here, the user should also have the option of modifying the
image before trying to classify it. These modifications should happen before the image is
resized to 227x227. The modifications should update the image displayed in the UI.
They can be done successively. The required options include (a short sketch follows this list):
● Cropping - The user can crop the image using four values, which are the
amounts to clip on the top, bottom, left, and right of the image.
● Rotation - The user can rotate the image by a certain amount.
● Scaling - The user can zoom in on the image by a certain amount.
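As referenced above, a short sketch of applying these modifications and then classifying the result; trainedNet, app.UIAxes (as in an App Designer app), the file name, and the amounts are all hypothetical.

    % Sketch: modify a single unlabeled image, update the UI, then classify it.
    img = imread('mystery.jpg');                          % image chosen by the user
    top = 10; bottom = 10; left = 20; right = 20;         % user-chosen crop amounts (pixels)
    img = img(1+top : end-bottom, 1+left : end-right, :); % cropping
    img = imrotate(img, 15);                              % rotation
    img = imresize(img, 1.2);                             % scaling ("zoom")
    imshow(img, 'Parent', app.UIAxes);                    % show the modified image in the app
    label = classify(trainedNet, imresize(img, [227 227]));  % resize only at the end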
Classify a set of unlabeled images using the trained network: In this part, the user
selects multiple unlabeled images to be classified by the trained network. Unlike the
case for a single test image, you do not need to allow the user to modify the set of
images before classifying them. After the images have been classified, the images and
the prediction for each image needs to be displayed in some way. They do not all need
to be displayed at once. For example, the UI can show one result at a time using a “next”
and “previous” button.
Project Reach Component: 35% of grade.
These are the additional features you need to implement to have a completed project. The
remote functionality should be done in a separate part of the UI (such as in a different tab).
Choose your own classification problem: Choose an image classification domain for
running your program. You are encouraged to pick something in a field that interests
you. There should be a minimum of 3 categories. A more interesting problem will receive
a better grade. For example, determining if an image is of a cat, dog, or cow is not very
interesting. Determining if a picture of a certain species of plant has 3 possible diseases
or no disease at all is more interesting. For this part, we do not prioritize the potential
accuracy that can be obtained. Having a more interesting, but challenging problem is
more important than being able to train a highly accurate network for that problem.
Therefore, we only require a minimum of 60% accuracy. Once you have chosen your
problem domain and collected your images, experiment with the different options
supported by your app to try to maximize the accuracy of your classification. For this
part, we will once again not prioritize obtaining high accuracy, but will instead focus on your
methodology in attempting to obtain it. Therefore, it is important that you
keep a record of your experiments and be able to explain clearly why you chose certain
values for certain options. Make sure that in all your tests, the set of test images are the
same so that you can fairly measure and compare accuracy across different networks.
Note down all the options and parameters that give you the highest accuracy for your
problem, so that another person can recreate your network. In this part of the project,
you are running your own program, so you’ll likely discover bugs in your program, which
gives you a good opportunity to detect and fix them.
Remote Classification: This part of the project involves remote programming and is
required. A decent percentage of the grade will depend on its completion, so be sure not
to start late. In this part, you will make your UI program train a neural network and then
idle in “server” mode. Next, you will take a mobile device (a laptop, tablet, or phone) that
contains an image from the problem domain you selected for the previous part. The
device will run a Matlab program that can transfer the image on the device remotely to
your server program. The server will then classify the image using the network and send
back its prediction of the category. This remote interaction will be implemented using
ThingSpeak and Dropbox. We will provide you with instructions and code, using Matlab,
to upload an image to Dropbox and download the same image. The ThingSpeak portion
handles the sender/receiver logic and determines when to send or receive a response.
You must implement this part yourself. The remote interaction is required to run
smoothly. For example, once the remote program has sent its image, it should simply
wait until it receives a response from the server. No additional user interaction should be
necessary to receive a response. You are not required to program a UI using App
Designer for the remote app. If your problem domain allows for it, we encourage you to
use your phone’s camera to take a picture and send it to your trained network remotely.
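One possible shape for the server-side signaling loop is sketched below. This is only a sketch under assumptions: the channel ID, keys, and field assignments are placeholders, the downloadImageFromDropbox helper stands in for the Dropbox code provided by the teaching staff, and your own sender/receiver protocol may differ.

    % Hedged sketch of the ThingSpeak signaling logic on the server side.
    chID     = 1234567;          % hypothetical ThingSpeak channel
    writeKey = 'XXXXXXXXXXXX';
    readKey  = 'YYYYYYYYYYYY';
    lastSeen = 0;
    while true
        % Field 1: the client writes a new request number after uploading an image.
        req = thingSpeakRead(chID, 'Fields', 1, 'NumPoints', 1, 'ReadKey', readKey);
        if ~isempty(req) && req > lastSeen
            lastSeen = req;
            img = downloadImageFromDropbox();                 % placeholder for the provided code
            label = classify(trainedNet, imresize(img, [227 227]));
            % Field 2: the server posts the predicted category back for the client.
            thingSpeakWrite(chID, 'Fields', 2, 'Values', {char(label)}, 'WriteKey', writeKey);
        end
        pause(15);   % respect ThingSpeak's free-tier update rate limit
    end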
Figure 3. Example UI for an Image Classifier
Project Topic 2: Vessel Traffic System
Project Description: Implement a MATLAB computer program that simulates a Vessel Traffic
System (VTS) that monitors virtual ships that periodically send their positions. There are two
components in this project: core and reach. The core component, which must be completed by
all teams, implements the Vessel Traffic System using a locally generated database with the
information of the ships and their positions. The reach component must implement at least 3
virtual ships that share their information and position periodically through ThingSpeak.
Additional functionalities, decided by each team, should also be implemented.
Project Restrictions:
1. MATLAB toolboxes: Only MATLAB toolboxes included in the student license may be used. In
particular, the MATLAB Mapping Toolbox may not be used.
2. Graphical User Interface programming: All GUI implementations must be programmed using
App Designer. You are not allowed to implement the project using Guide; submissions
programmed with Guide will receive zero points without exception.
3. Collaboration Policy: Once teams are formed you are only allowed to talk and collaborate with
members within your team. Team members are expected to equally participate, and
collaboratively work towards the completion of the project. Other than contacting the teaching
assistants for clarification, you may not seek the assistance of other persons to complete your
team's project. Of course, general discussions about how to implement GUI, OOP and other
programming constructs could be discussed with other students, but your team must be totally
responsible for the implementation of code used in your project.
Project proposal: 10% of grade
Each team submits, via Canvas, a 2-3 page project proposal describing the project, a general
description of how you will implement the main components of your project, and a clear
description of the Reach features that your team proposes. Essentially, the scope of the project should be
challenging enough to merit full credit and doable within the timeline. An Appendix should
contain a breakdown of programming tasks, and who will be responsible for what, along with a
timeline that will meet the submission deadline (we suggest you make use of a Gantt chart). The
expectation is that each team member must take responsibility for a specific aspect of the
project and grading for each member will be adjusted according to how the project tasks were
delegated and who was responsible for what aspects of the project. The more specific you can
be in defining the programming tasks, what functions should exist, and what each function
should accomplish, the better.
Core component: 40% of grade.
The objective is to use App Designer to recreate a Vessel Traffic System (VTS) with
functionality similar to Marine Traffic. The computer program that you design should show a map with
traffic information from ships. For the core component the functionalities required for the VTS
are the following:
- Display a fixed section of a map.
- Generate a local database (Matlab table) of at least 3 ships that must include the
following properties: MMSI, time, latitude, longitude, speed over ground, vessel name,
vessel type, length, width. The database should have at least 20 entries for each ship
(a small table sketch follows this list).
- Plot the generated database to simulate the real-time operation of the system, i.e., ship
positions should be displayed in chronological order, and their information should be kept
even if a transmission is not made for some time.
- Configure a speed alarm, e.g., if a ship exceeds a certain speed, an alarm should be
activated.
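As referenced in the list above, a hedged sketch of such a local database and a simple speed-alarm check; every value, name, and threshold here is invented, and in the real project each ship would have at least 20 entries.

    % Hedged sketch of the local ship database as a Matlab table.
    MMSI       = [367001110; 367001110; 367002220];
    Time       = datetime({'2022-05-20 10:00'; '2022-05-20 10:05'; '2022-05-20 10:00'}, ...
                          'InputFormat', 'yyyy-MM-dd HH:mm');
    Latitude   = [38.58; 38.59; 38.60];
    Longitude  = [-121.50; -121.49; -121.55];
    SOG        = [10.2; 10.5; 7.8];                 % speed over ground, in knots
    VesselName = ["Aggie Queen"; "Aggie Queen"; "Putah Creek"];
    VesselType = ["Cargo"; "Cargo"; "Tanker"];
    Length     = [180; 180; 120];                   % meters
    Width      = [28; 28; 20];                      % meters
    ships = table(MMSI, Time, Latitude, Longitude, SOG, VesselName, VesselType, Length, Width);

    % Speed alarm: flag every entry whose speed exceeds a user-defined threshold.
    speedLimit = 10;                                % knots, user-configurable
    alarmRows  = ships(ships.SOG > speedLimit, :);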
Fig. 1. Sample interface for VTS application using App Designer.
Fig. 2. Real marine traffic information from local port processed using Matlab.
Reach component: 35% of grade.
The main objective of the Reach component is to use ThingSpeak as a medium of information
exchange with the VTS implemented in the Core component. For this purpose, an additional
computer program will be created. This program is going to generate information for at least 3
virtual ships and upload it periodically to ThingSpeak. The shared information would be similar
to the locally generated database from the Core component but will allow the VTS to work in a
more realistic fashion, i.e., virtual real time.
The team should create at least 1 public ThingSpeak channel. The virtual-ship program should
perform the following tasks (a brief upload sketch follows this list):
- Configure each ship to report the following information: MMSI, time, latitude, longitude,
speed over ground, vessel name, vessel type, length, width.
- Generate at least 3 ships that update their information and position periodically to
ThingSpeak. Also include custom report intervals for every ship.
- Operate each ship with the following customizable information: start position, stop
position, speed, start/stop transmission.
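As referenced above, a minimal sketch of one virtual ship publishing its state periodically; the channel ID, write key, field layout, course, and report interval are placeholders.

    % Hedged sketch: one virtual ship reporting its position to ThingSpeak periodically.
    chID = 1234567; writeKey = 'XXXXXXXXXXXX';
    lat = 38.58; lon = -121.50; sog = 10.0;        % made-up starting state for this ship
    dLat = 0.002;                                  % crude movement per report
    reportEverySec = 20;                           % custom interval (>= 15 s on free ThingSpeak)

    for report = 1:20
        lat = lat + dLat;                          % advance the ship on a straight course
        thingSpeakWrite(chID, 'Fields', [1 2 3], ...
            'Values', {lat, lon, sog}, 'WriteKey', writeKey);
        pause(reportEverySec);                     % wait until the next scheduled report
    end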
Fig. 3. Sample interface for Vessel application using App Designer.
As additional features, the following are presented as examples:
- Configure position alarm. E.g., if a ship enters a defined boundary an alarm should be
activated.
- Configure an alarm that will activate when a specific number of ships are in the same
area.
- Add a mechanism to select which ships to show and which to hide depending on the
available information.
- Filter ships by some parameter, e.g., size, flag, etc.
- Include a functionality to show the historic location information of a particular ship or a
set of ships.
- Add more ships with additional properties.
- Process incoming alarms from the VTS and act on them, e.g., reply, change course or speed.
- Configure variable speed.
- Enable a round trip for circuit voyage, e.g. ABCA.
Project Topic 3: Final Project: Flight Simulator using Genetic
Algorithms
Project Description:
In this project we will implement a MATLAB App to simulate the communication between a
Primary Computer and a Secondary Computer to help a vehicle navigate through a 3D scene,
as in a mobile travel simulator. The process used to generate the 3D figure, represented in
Figure 1, is as follows:
● The Primary Computer, which could be, for example, a computer in a flying vehicle,
processes 2D terrain cross-section images captured by a camera on the vehicle.
● The cross-section information has a limited number of data points, which are sent to an
(assumed more powerful) Secondary Computer located on the ground.
● All data exchange between the Primary and Secondary Computers will be done using
ThingSpeak.
● The Secondary Computer will use a Genetic Algorithm (GA) to generate a set of
sample points from the cross-section data received from the Primary Computer. The
TA will demonstrate a Genetic Algorithm that is easy to implement (a minimal GA sketch
also appears after Figure 1 below).
● The Secondary Computer will then send the points generated by the GA back to the
Primary Computer. Using these points, the Primary Computer will create a simple 3D
terrain image.
Figure 1. Terrain cross-section data points obtained by the Primary Computer are sent to the
Secondary Computer. A genetic algorithm in the Secondary Computer creates a set of image
cross-section sample points, which are sent to the Primary Computer to generate a
low-resolution 3D image.
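To make the GA step concrete, below is a hedged sketch of one very simple genetic algorithm written against synthetic data: it evolves 18 interior sample indices (plus the two endpoints) out of a 1000-point cross-section so that linear interpolation through the chosen samples approximates the full cross-section. It is not necessarily the algorithm the TA will demonstrate, and all parameter values are illustrative.

    % Hedged GA sketch: select ~20 sample points that summarize a 1000-point cross-section.
    x = linspace(0, 10, 1000);
    y = sin(x) + 0.3*sin(3*x);                       % synthetic cross-section data
    nGenes = 18; popSize = 60; nGen = 300; mutRate = 0.05;
    pop = randi([2 999], popSize, nGenes);           % each row is one candidate index set

    % Fitness: negative mean squared error of the interpolated cross-section.
    fitnessOf = @(idx) -mean((interp1(x(unique([1 idx 1000])), ...
                                      y(unique([1 idx 1000])), x) - y).^2);

    for gen = 1:nGen
        scores = zeros(popSize, 1);
        for i = 1:popSize, scores(i) = fitnessOf(pop(i, :)); end
        [~, order] = sort(scores, 'descend');
        parents = pop(order(1:popSize/2), :);        % selection: keep the better half
        children = parents;
        for i = 1:size(children, 1)
            mate = parents(randi(size(parents, 1)), :);
            cut = randi(nGenes - 1);                 % single-point crossover
            children(i, cut+1:end) = mate(cut+1:end);
            genes = rand(1, nGenes) < mutRate;       % mutation: perturb a few genes
            children(i, genes) = randi([2 999], 1, nnz(genes));
        end
        pop = [parents; children];
    end

    scores = zeros(popSize, 1);
    for i = 1:popSize, scores(i) = fitnessOf(pop(i, :)); end
    [~, best] = max(scores);
    fittedIdx = sort(unique([1 pop(best, :) 1000])); % about 20 fitted sample points
    plot(x, y, x(fittedIdx), y(fittedIdx), 'o-')     % compare the data with the fitted points

In the actual project, x and y would be the cross-section data received from the Primary Computer over ThingSpeak, and the fitted sample points would be sent back to it.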
You have the freedom to simulate a navigation of your choosing, such as an airplane
navigating through terrain, a motorcycle, etc. As you can see, the simulator will involve more
than one computer, and the users should be able to coordinate the computers to process the
3D surface's animation.
[Figure 1 diagram: the Primary Computer sends cross-sections of about 1000 points to the
Secondary Computer, whose Genetic Algorithm returns about 20 fitted points used to build the
3D surface.]
As a final result, the main computer will show the navigation of a 3D
surface (terrain) with an inside view of the terrain representing hills, mountains, etc., while the
secondary computer provides the data points for the 3D surface, using a Genetic Algorithm to
optimize a set of data points representing the surface. Please see the examples of Graphical
User Interfaces (GUIs) created as Apps simulating a flight in Figures 2 and 3. Livescripts with
explanations and examples will be provided for this project.
Figure 2. Example of a design view of the GUI representing the Main computer
creating a Flight Simulator
Figure 3. Example of a design view of the GUI representing Secondary computer
performing the GA analysis.
Each team makes decisions on what programming elements to use to interact with the users.
Special attention should be paid to:
● Clarity on how to use the simulator and how the users should interact with the program.
● The visual and auditory cues and special effects (e.g., animations, a sound selection, etc.).
● The users' ability to coordinate two computers over the internet.
All projects should have the following elements:
● A graphical user interface.
● Genetic Algorithms.
● Navigation of a 3D surface.
● A sound effect.
● Use of a user-defined OOP class in at least one programming element.
● One or more tables.
Clearly indicate in your code and your video where these elements are implemented. In
your YouTube video (more about this below), please point out how you implemented
some features (especially in the Core and the Reach) inside your code. What functions
did you use? Did you use any data structures such as structs etc? What was challenging
about implementing a certain feature and why?
Useful Resources: MATLAB supporting files - zipped directory
Links to external resources:
· Play Audio
· MathSoft Guide
· GA function
· 3D surface, campos, camtarget
Project proposal: 10% of grade.
Each team submits a 2-3 page project proposal via Canvas describing the project they have
selected, a general description of how they will implement the main components of their
project, and a clear description of the Reach features that their team proposes. Essentially, the
scope of the project should be both challenging enough to merit full credit and doable within the
timeline. An Appendix should contain a breakdown of programming tasks, and who will be
responsible for what, along with a timeline that will meet the submission deadline (we suggest
you make use of a Gantt chart). The expectation is that each team member must take
responsibility for a specific aspect of the project and grading
for each member will be adjusted according to how the project tasks were delegated and who
was responsible for what aspects of the project. The more specific you can be in defining the
programming tasks, what functions should exist, and what each function should accomplish,
the better.
Main programming task suggestions:
● GA optimization functions
● 3D surface and navigation: the surf, campos, and camtarget functions
● Masks for Apps and remote coordination
Core Component: 40% of grade.
Complete the basic project as outlined in the project description.
For example, create a story about a pilot and show your design. You can show a cabin
view of a flight traveling through mountains, as shown in Figure 2, a motorcycle traveling
between hills, a submarine traveling in a deep ocean with seamounts, etc.
● Main Computer – Animations
  ○ Create a 3D surface with data points provided by the TA
  ○ Create an App using images related to your story
  ○ Use the campos and camtarget functions to navigate the 3D surface (a navigation
    sketch follows this list)
  ○ See a Matlab example: https://www.mathworks.com/help/matlab/ref/camtarget.html
● Secondary Computer
  ○ Use the GA to find the points that represent the 3D surface (the mountain in Figure 2)
  ○ Create the App with inputs and outputs for your GA (see Figure 3 as an example)
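As referenced in the list above, a hedged sketch of camera navigation over a synthetic surface using surf, campos, and camtarget; in the project, the surface would instead be built from the points returned by the Secondary Computer's GA.

    % Hedged sketch: fly the camera over a synthetic terrain with campos/camtarget.
    [X, Y] = meshgrid(linspace(0, 10, 50));
    Z = 2*exp(-((X - 5).^2 + (Y - 2).^2)/3) + exp(-((X - 3).^2 + (Y - 7).^2)/2);  % two hills
    figure; surf(X, Y, Z); shading interp
    axis vis3d off
    camproj('perspective')
    flightPath = linspace(0, 9, 120);               % advance the camera along the y-axis
    for k = 1:numel(flightPath)
        campos([5, flightPath(k), 2.5]);            % camera position
        camtarget([5, flightPath(k) + 1.5, 0.5]);   % look slightly ahead and down
        drawnow
    end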
Reach Component: 35% of grade.
Implement the project enhancements described in your proposal but be sure to include the
following:
● ThingSpeak: Use ThingSpeak to coordinate the main and the secondary computers. The
main computer should be able to generate a surface with the points provided by the
secondary computer.
● Sound effects (optional; a short audio sketch follows this list), such as:
○ Turbulence
○ Music
○ Alarms
○ Others
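As referenced above, a minimal sketch of playing a sound effect; the file name is a placeholder.

    % Hedged sketch: play a short sound effect.
    [clip, Fs] = audioread('turbulence.wav');   % audio samples and sample rate
    sound(clip, Fs);                            % play the clip asynchronously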
The following requirements and deadlines are common to all project topics.
Code and UI Requirements: Your code should be well-organized with properly named
variables. The code should be well-documented, especially the part for remote functionality.
Also consider whether certain data structures are better for implementing certain features:
classes, structs, tables etc. The UI should run smoothly, be user-friendly, and be aesthetic.
The UI should not crash or generate bugs/errors when it is tested. All aspects of the
program should run without errors.
Grading Criteria: The projects are open ended. As long as your program can perform the
assigned tasks, there will be no correct or incorrect approaches. Certainly, there will be
more acceptable and attractive solutions, and that will be judged in comparison with
competing solutions submitted by your classmates.
YouTube Video Requirements: 15% of grade. YouTube has several examples of ENG6
videos (search ENG6). The format of the video is entirely up to your team as long as the
following criteria are met:
● The maximum length of the video is 10 minutes. Do not speed up the video of
yourselves explaining the project to fit the time to 10 minutes. You can skip the parts
where you are waiting for your program to finish running.
● Each team member must be seen in the video to present their work and contributions
● There should be a clear and easy-to-follow demonstration that shows the correct
functionality of your program (show your program actually working in the video – not
screen shots of before and after.) This is especially important for the remote
functionality, and your team members should demonstrate this part working in real-
time. The video should also go over in some detail how your code for remote
functionality works.
● Be honest if a certain feature is not working 100%. More leeway is given for honestly
reporting a broken feature than for presenting a feature as working in the video when it
actually does not work when the application is run.
Team Evaluations: Each member must provide a brief personal summary of their involvement
and contributions. Each team member is required to submit evaluations of their teammates’
contribution. For example, if your team has members A, B, C, your evaluation can be similar to
the following for a single member.
Team Member A: was in charge of coding the UI elements for the various options. For the
Reach features, A was in charge of programming and displaying the plot. Team Members B, C
agree that A performed these tasks for the project.
Project Deadlines:
Deadline #1: Friday, May 20, 9:00 PM: Submit the Project Proposal: A team member must
submit your team name to Canvas with the project proposal. Only one team member should do
this! This will be 10% of your grade.
The submission must contain an image of the design view with the main components of your
program. The image must show the buttons, axes, images, edit fields, etc.
Deadline #2: Saturday, May 28, 9:00 PM: Submit the Core Component. A single team
member should submit all relevant code files to Canvas in a zip file. The zip file should have the
mlapp, all .m files, all image files, and any other files that are needed for the program to run.
Meeting this deadline will be 20% of your grade.
Deadline #3: Friday, June 3, 9:00 PM: Submit the Final Project. The final submission should
include the Reach features, and the remote part should be working. A single member of each
team will submit all relevant coding files, all collected images in your problem domain, a link to
the Youtube video, and team evaluation materials. That person should also submit a zip file of
all the code, the zip file will have all .mlapp files, all image files and any other files that are
needed for the program to run. The remote program should also be included and named
appropriately to indicate that it’s the remote program. In addition, the submission should contain
a PDF of the team evaluation document. The YouTube link should be accessible to anyone
who has it. This will be the remaining 70% of your grade.
Code Integrity: Other than the code we provide, you are not allowed to use Matlab code written
by anyone else without first obtaining permission from the teaching staff. If the purpose of the
request is primarily to reduce the amount of programming you have to do, then it will likely not
be granted. However, if using existing code can turn a good project into a great one, then the
request will be considered. Using external code in your project without explicit approval from the
teaching staff is a violation of the Academic Code of Conduct and may result in disciplinary
action. Furthermore, do not use projects done in previous quarters, as we have a record of
these.
Collaboration Policy: Once the teams are formed you are only allowed to collaborate with
members within your team. Team members are expected to equally participate, and
collaboratively work towards the completion of the project. Other than contacting the teaching
assistants for clarification, you may not seek the assistance of other persons to complete your
team's project. Of course, general discussions about how to implement a GUI, OOP, and other
programming constructs could be discussed with other students, but your team must be totally
responsible for the implementation of code used in your project. Teams that share code in
violation of the rules may be subject to disciplinary action.
Team Issues: Issues involving team members should be resolved early in the project timeline.
These can include: team members not communicating/responding, team members not attending
meetings, team members not doing their assigned portion of work, etc. These cases need to be
brought up with your TA or instructor early in the project timeline, so that the necessary
adjustments and interventions can be made to address them. Issues with your team members
do not provide a valid excuse to turn in inadequate work, as these issues should be
brought up and resolved early in the project timeline.