CSC3065 Cloud Computing 2021/22
Assessment 2: Editor
Assessment Briefing Version: 2 (11/11/2021)
Weighting: 60%
Set By: David Cutting
Moderated By: Neil Anderson

Date Released: 12/11/2021
Submission Due: 1700 on 13/12/2021

Late submission penalties and rules will be applied in accordance with the QUB policy on
late submission. For more information on this or any other QUB policy with regards to
assessment please see:

https://www.qub.ac.uk/directorates/AcademicStudentAffairs/AcademicAffairs/ExaminationsandAssessment/MarkSchemesandClassifications/

Please be aware that this is an individual assignment and as such will be checked for
plagiarism. You should ensure that the work is yours and yours alone, citing any third-party
sources as applicable. Plagiarism is a serious academic offense. Submission of your work
implies your claim that it is your individual work and has not previously been submitted for
academic credit.

If you have any questions about the assignment please see the module organiser David
Cutting in the first instance or use any of the support options listed in this document.
1. Assessment Details

The QUBeditoron3000 (the editor) is the world's most over-engineered yet deeply
unreliable text editor. It has been implemented using several cloud technologies,
including stateless services, containers, and CI, and is currently deployed on the QPC.

Your assignment is to start with this as a base and add more functionality as specified in
this document. There is some flexibility in what specifically you choose to implement, so
you must make your own decisions about how to make these improvements.

The editor is basically a text box with some buttons which perform operations (such as word
counts) and report back the result.

editor-frontend: this is a simple container that contains some static HTML and Javascript
that implements a frontend. When operations are required it makes an XMLHttpRequest in
Javascript to one of the worker services. The endpoints it uses for these requests are
configured within the Javascript.
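
As an illustration only (this is not the actual frontend source), such a request might look
like the sketch below, using the wordcount endpoint listed later in this brief; the function
and callback names are invented for the example:

    // Illustrative sketch only - the real editor-frontend code may differ.
    // Calls the wordcount worker with the editor text and passes the parsed
    // JSON result to a callback.
    function callWordcount(text, onResult) {
        var url = "http://wordcount.editor.qpc.hal.davecutting.uk/?text=" +
                  encodeURIComponent(text);
        var xhr = new XMLHttpRequest();
        xhr.open("GET", url);
        xhr.onload = function () {
            onResult(JSON.parse(xhr.responseText));
        };
        xhr.send();
    }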

Repository: http://gitlab.hal.davecutting.uk/root/editor-frontend
Live Demo: http://frontend.editor.qpc.hal.davecutting.uk/


editor-wordcount: this is a PHP-based container that takes an HTTP GET parameter (text) and
returns some JSON including the answer (the number of words) and a description string. There
is some rudimentary CI testing (unit testing of the function alone, but not as a web service).

Repository: http://gitlab.hal.davecutting.uk/root/editor-wordcount
Live Demo: http://wordcount.editor.qpc.hal.davecutting.uk/?text=goodbye+cruel+world
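
For illustration, a request like the live demo above might return JSON along the following
lines; the exact field names are not specified in this brief and may differ in the real
service:

    {
        "answer": 3,
        "description": "There are 3 words in your text."
    }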

editor-charcount: this is a Node/Express-based container that takes an HTTP GET parameter
(text) and returns some JSON including the answer (the number of characters) and a description
string. There is some rudimentary CI testing (unit testing of the function alone, but not as a
web service).

Repository: http://gitlab.hal.davecutting.uk/root/editor-charcount
Live Demo: http://charcount.editor.qpc.hal.davecutting.uk/?text=goodbye+cruel+world

You are required to modify and extend this editor as described in the following tasks.

Make sure you read and understand the instructions carefully, especially those on
submission! The report is what is marked (the code and the video will only be sample-checked
as needed to verify a claim made in the report, or for moderation – anything in the code or
video that is not in the report will not be marked).


1.1 Tasks

A. New Functions

Add a minimum of four new functions to the editor, including the changes to the front-end
as well as back-end implementations for each. See the assessment criteria for more
information, but in general higher marks are given for the use of new languages (beyond those
provided) and things such as CI pipelines. Points can also be awarded for the use of novel
technologies, e.g. deployment using FaaS for one function (new technology or language points
are only awarded once, i.e. if you use FaaS twice it only counts as novel once, even if you
use different languages or providers).

Example functions to add could include: number of vowels, comma counter, number of
spelling errors, average word length, number of palindromes, number of instances of “and”.
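
As a sketch of what one such addition might look like (this is not a required design), a
vowel counter could be written as a Node/Express service in the same style as the provided
editor-charcount container; the JSON field names below are illustrative only:

    // Sketch of one possible new function (a vowel counter) as a Node/Express
    // service, mirroring the style of the provided editor-charcount container.
    // Field names in the JSON response are illustrative only.
    const express = require("express");
    const app = express();

    function countVowels(text) {
        const matches = text.match(/[aeiou]/gi);
        return matches ? matches.length : 0;
    }

    app.get("/", (req, res) => {
        const text = req.query.text || "";
        const answer = countVowels(text);
        res.json({ answer: answer, description: answer + " vowels found" });
    });

    app.listen(80);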

Note that only four functions will be marked; handing in work with more than four functions
will result in the first four being marked and the others ignored.


B. Improvement to Current Implementation

There are a number of issues with how the current editor works which include (but are likely
not limited to):
• Lack of handling for asynchronous calls being blocked or actions happening while
waiting for a response in the front end (for example, if I click to get a word count but
then quickly click to get a character count before the first one has completed, they
will overwrite each other)
• No error handling by the front end (what if a backend service returns an error result?)
• No error handling by either of the provided backend example services (neither copes,
for example, with empty strings or with the text variable not being passed)
• Static configuration of routes to services in the frontend (they are lines of code
inside the source; it would be much better to have some external configuration file or
service)
• CI testing for the provided backend services consists of unit tests on the function alone
(i.e. they do not actually perform an HTTP request to check web API functionality)

Plan and implement improvements to address some or all of the above, or other
shortcomings or issues you perceive within the provided system. See the assessment criteria
and submission instructions for more information on what may be assessed and how to
show this working.
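
For example, one possible way to address the backend error-handling shortcoming (sketched
here in Node/Express, although the provided wordcount service is PHP; the status code and
field names are only suggestions) would be to validate the text parameter before computing
a result:

    // One possible approach to backend error handling: check the "text"
    // parameter before doing any work, and return a clear error result
    // instead of failing on a missing or empty value.
    const express = require("express");
    const app = express();

    app.get("/", (req, res) => {
        const text = req.query.text;
        if (typeof text !== "string" || text.trim() === "") {
            res.status(400).json({ error: "No text supplied" });
            return;
        }
        const words = text.trim().split(/\s+/).length;
        res.json({ answer: words, description: words + " words found" });
    });

    app.listen(80);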



C. Custom Proxy Router

Currently each function endpoint has a custom URL configured in the frontend client. For
this challenge you will build a self-contained reverse web proxy, as a container, to which
the frontend will pass all calls (including any variables).

This proxy will then itself hold the specific endpoints for the different methods, make an
HTTP call to the end service, and return the answer to the frontend.

For full marks this service should be highly configurable (how do we update the endpoints
or add new ones? can this happen dynamically while the service is running, for instance?) and
support some service discovery options (for example, what methods are available, or automatic
registration of new services).

Note you are required to build this yourself rather than using a third-party service or
container.
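
A minimal hand-rolled sketch of such a proxy (no third-party proxy library) might look like
the following, assuming a hypothetical routes.json file that maps method names to backend
base URLs and is re-read on each request so endpoints can change while the service runs:

    // Minimal reverse proxy sketch. routes.json is an assumed config file,
    // e.g. { "wordcount": "http://wordcount.editor.qpc.hal.davecutting.uk/" }.
    const http = require("http");
    const fs = require("fs");

    function loadRoutes() {
        // Re-read on every request so endpoints can be changed at runtime
        return JSON.parse(fs.readFileSync("routes.json", "utf8"));
    }

    http.createServer((clientReq, clientRes) => {
        const url = new URL(clientReq.url, "http://proxy");
        const method = url.pathname.replace("/", "");   // e.g. "wordcount"
        const routes = loadRoutes();
        if (!routes[method]) {
            clientRes.writeHead(404);
            clientRes.end(JSON.stringify({ error: "Unknown method" }));
            return;
        }
        const target = routes[method] + "?" + url.searchParams.toString();
        http.get(target, (backendRes) => {
            clientRes.writeHead(backendRes.statusCode);
            backendRes.pipe(clientRes);                  // relay the answer
        }).on("error", () => {
            clientRes.writeHead(502);
            clientRes.end(JSON.stringify({ error: "Backend unavailable" }));
        });
    }).listen(80);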

D. Frontend Service Failure Handler

The frontend currently stores a single URL for each function (after C this will be a single URL
for the proxy). This challenge is to update the frontend to support multiple URLs for each
function/proxy. If a specific URL falls offline or errors the frontend should be able to
continue using other endpoints instead.

This may also include load balancing while multiple endpoints are available.

As mentioned, in combination with C this would refer to multiple URLs for the proxy service
provided by C. Please note that "frontend", as always, refers to the HTML+JS component
running in the browser; should you implement this handler in the proxy it will not count
for marks here, as this task relates only to the browser.
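
One possible shape for this failover logic in the browser is sketched below (shown with
fetch for brevity, although the provided frontend uses XMLHttpRequest; the function name
and the idea of a URL list are illustrative only):

    // Sketch of frontend failover: try a list of endpoint URLs in order and
    // fall back to the next one if a request fails or returns an error.
    async function callWithFailover(urls, text) {
        for (const base of urls) {
            try {
                const res = await fetch(base + "?text=" + encodeURIComponent(text));
                if (res.ok) {
                    return await res.json();   // first healthy endpoint wins
                }
            } catch (err) {
                // network error: fall through and try the next endpoint
            }
        }
        throw new Error("All endpoints failed");
    }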

E. Monitoring and Metrics

Monitoring and testing service – all testing currently happens during the build cycle via
CI. Implement a separate service (this could be a container or another kind of service) that,
when used, makes HTTP requests (perhaps with random text strings) to the functions and
checks (a) the correctness of results against expectations and (b) overall performance in time
(in an ideal world this service should periodically check performance and record results, in
addition to supporting on-demand operation). Note you are required to build this yourself
rather than using a third-party service or container.

For full marks you are expected to have some form of monitoring periodically test and alert
on failure of your editor services (this can be an external service or container as you like as
long as you have done the first part from scratch yourself).
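
A from-scratch check of this kind might look like the sketch below (assuming Node 18+ where
fetch is available globally; the endpoint URL, the expected answer, and the "answer" field
name are assumptions for the example):

    // Sketch of a from-scratch monitoring check: periodically call an endpoint
    // with a known input, verify the result, and record the response time.
    const ENDPOINT = "http://wordcount.editor.qpc.hal.davecutting.uk/";

    async function checkOnce() {
        const start = Date.now();
        try {
            const res = await fetch(ENDPOINT + "?text=one+two+three");
            const body = await res.json();
            const elapsed = Date.now() - start;
            const correct = body.answer === 3;       // expected word count
            console.log(`ok=${correct} time=${elapsed}ms`);
        } catch (err) {
            console.log("FAIL: service unreachable: " + err.message);
        }
    }

    setInterval(checkOnce, 60 * 1000);               // run every minute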


F. Stateful Saving of Current Value

Currently there is no way to store the current text in the editor. For this section you should
implement, using any technology you like (containers, FaaS, a database, etc.), a way to:
• Save the current text and return (and display) an identifier for this saved text
• Provide the option to enter an identifier and thus recall the saved text to the
frontend
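
One minimal sketch of such save/recall endpoints uses an in-memory store (a real
implementation would want a database or other persistent storage; the routes, field names,
and identifier scheme below are illustrative only):

    // Sketch of stateful save/recall endpoints backed by an in-memory Map.
    const express = require("express");
    const crypto = require("crypto");
    const app = express();
    const store = new Map();

    app.get("/save", (req, res) => {
        const id = crypto.randomUUID();          // identifier to display
        store.set(id, req.query.text || "");
        res.json({ id: id });
    });

    app.get("/load", (req, res) => {
        if (!store.has(req.query.id)) {
            res.status(404).json({ error: "Unknown identifier" });
            return;
        }
        res.json({ text: store.get(req.query.id) });
    });

    app.listen(80);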


G. Multi-Vendor Architecture

Complete a justified design (you are not required to implement it) for how the editor could be
fully deployed onto multiple cloud vendors, perhaps with a common broker/router. Include
diagrams and text as appropriate, demonstrating how your design would ensure ultimate
resilience and continue to (badly) count words even after a zombie uprising.

1.2 Deployment Environment

As it stands the current editor is deployed on the QPC Kubernetes system with source code
and docker registry in the QPC gitlab. It is suggested that you continue to use this
architecture for your project, or at least most of it, but this isn’t a requirement.

You are free to use any provider(s) you like and their technology stacks but be aware that
(a) less support will be available and (b) you must make any code accessible when you
submit (see submission section for details). It’s strongly suggested you discuss with the
teaching team before selecting a platform other than QPC Kubernetes/gitlab.

If you use another platform and then run out of credit so that the project cannot be completed
or demonstrated, you will lose marks.
2. Assessment Criteria

The following are the criteria against which your submission will be marked and their
conceptual marking equivalents.

General Marking Criteria (note: this will be applied holistically to each section; these are examples of the standard required). The bands are: Outstanding (85%+), Excellent (70%-85%), Very Good (60%-70%), Good (50%-60%), Acceptable (40%-50%), Unacceptable (< 40%).

Task A. Additional Functions – 4 x 7% (total of 28% for A)
Outstanding: Perfect implementation of the function in a new language or paradigm with excellent CI tests fully covering functionality.
Excellent: Excellent implementation of the function in a new language or paradigm with excellent CI tests. Some very minor weaknesses in implementation, such as a lack of sensible error conditions.
Very Good: Good implementation of a function, usually in a new language, with good CI tests. Some weaknesses in some aspects of implementation.
Good: Function implemented well in a new language or with significant updates to the provided models, with CI tests. Some aspects of the implementation missing or lacking.
Acceptable: Function implemented and working but little extension shown beyond the provided examples.
Unacceptable: Function not fully or at all implemented; significant errors may be present in the submission.

Task B. Addressing Shortcomings – 12%
Outstanding: Excellent understanding of the key shortcomings shown, with an exemplary, well demonstrated implementation leading to a robust functional system.
Excellent: Very good understanding of the key shortcomings shown, with a strong, well demonstrated implementation leading to a robust functional system.
Very Good: Good understanding of the key shortcomings shown, with a well demonstrated implementation leading towards a robust functional system.
Good: Demonstrates understanding of some key shortcomings which are well addressed in a suitable fashion, leading towards an improved system.
Acceptable: Some shortcomings addressed correctly, or showing good intention and nearly functional solutions.
Unacceptable: Shortcomings addressed in minor part or not at all.

Task C. Proxy Router – 10%
Outstanding: Excellent and perfectly demonstrated implementation of the proxy to a near industry standard, incorporating dynamic configuration, service discovery, and advanced features.
Excellent: Excellent and very well demonstrated implementation of the proxy to a very high standard, incorporating some advanced features well implemented, such as dynamic configuration and service discovery.
Very Good: Very good solution, well demonstrated, incorporating some significant best practices and advanced features but with some limitations.
Good: Good proxy, clearly demonstrated and meeting the main goals set well (and going beyond simple hard-coded requests), but with some weaknesses in design or implementation.
Acceptable: Proxy meeting (or almost meeting) the challenge set but with major weaknesses in design or implementation.
Unacceptable: Proxy failing to meet the requirements, containing significant errors or lacking in functionality.

Task D. Frontend Failure Handling – 10%
Outstanding: Excellent and perfectly demonstrated implementation of the challenge to a near industry standard, incorporating best practices and documentation as appropriate.
Excellent: Excellent and very well demonstrated implementation of the challenge to a very high standard, incorporating best practices and documentation as appropriate.
Very Good: Very good solutions, well demonstrated, incorporating some significant best practices and documentation as appropriate.
Good: Good solutions, clearly demonstrated and meeting the goals set well, but with some weaknesses in design or implementation.
Acceptable: Solutions meeting (or almost meeting) the challenge set but with major weaknesses in design or implementation.
Unacceptable: Solutions failing to meet the requirements, containing significant errors or lacking in functionality.

Task E. Monitoring & Metrics – 15%
Outstanding: Excellent industry-quality monitoring with all aspects considered and full periodic testing and alerting.
Excellent: Excellent monitoring showing near industry quality with nearly all aspects considered and full periodic testing and alerting.
Very Good: Very good monitoring showing strong consideration of nearly all aspects, with periodic testing and alerting.
Good: Good monitoring showing promise but with some significant weaknesses and/or some failings of periodic testing and alerting.
Acceptable: Basic monitoring in place with simple connections but with major weaknesses and/or no periodic alerting.
Unacceptable: Fails to perform any effective monitoring or to be close to operational.

Task F. Stateful Saving – 15%
Outstanding: Excellent fully-featured saving and recall, including error handling and exceptions.
Excellent: Excellent saving and recall, including error handling.
Very Good: Save and recall working well with some quality features.
Good: Saving and recall working correctly.
Acceptable: Saving and recall almost working, with some errors.
Unacceptable: Fails to meet the requirement to save or recall, with no or little attempt.

Task G. Architectural Design – 10%
Outstanding: Excellent fully justified designs showing an appreciation for the core issues of high availability over multiple vendors.
Excellent: Excellent designs with strong justification showing appreciation for the core issues.
Very Good: Generally excellent designs and justification with some issues or weaknesses.
Good: Good designs and justification with some weaknesses throughout or significant weaknesses in one area.
Acceptable: Fair designs and justification but with weaknesses or issues throughout.
Unacceptable: Poor quality or badly justified designs showing no real understanding of the core issues.
3. Feedback

Feedback in the form of marks will be provided as soon as practicable after submission with
the expectation that marking will be complete (and marks provided) within two working
weeks.

Individual feedback will take the form of a numeric score against each of the assessment
criteria (there may be brief comments for these criteria if appropriate), a total made from
these scores weighted by section, and an overall textual comment on the totality of the
submission.

Generalised feedback will be provided to the class as a whole including overall trends and
areas of particular concern.

Anyone wishing to discuss their marks in more detail is welcome to do so using any of the
support arrangements outlined in this document.



4. Submission

Submission will be via Canvas and also through repository access. Please read this section
carefully to avoid any mistakes which could lead to marks being lost.

Note: three uploads are required – unless you complete all three, this assignment will
be regarded as not submitted. It is your responsibility to check that any uploads are valid
and not corrupted.

Late penalties will be applied to this assignment based on the latest timestamp of any
uploaded item.

You are required to:

1. Provide repository access if you are not using the QPC gitlab (gitlab.hal.davecutting.uk –
if you are using this then you don't need to do anything other than provide the links in your
report). If you are using, for example, the EEECS gitlab or any other repository service, it is
your responsibility to discuss this with a member of the teaching team so they can be given
access (or to provide a link to a public repository).

2. Upload three items to the relevant assignment option on Canvas:
• A completed report based on the template provided (you must use the template)
named as .pdf to the REPORT assignment on Canvas.
• A demonstration video showing the operation of your system (see 4.1 for more
details) to the VIDEO assignment on Canvas.
• A zip file containing a copy of all source code you have written (this is required in
addition to the repository access, for the external examiner) to the CODE assignment
on Canvas.

3. Validate your uploads by viewing them via Canvas and making sure they are correct and
uncorrupted. Previously students have uploaded the wrong assignments or an early version
etc – it is your responsibility to ensure the correct item is uploaded.


4.1 Video Submission

To demonstrate your system working you are required to submit a video, usually a screen
capture video, ideally as short as possible. This should show each of the key elements of
your solution working (with inspection mode or similar to show network traffic to the
endpoints).

You should use this video to highlight any specifics you wish to, for example any particularly
detailed functionality included.

You are not required to specifically present or explain your work, but this allows an examiner
to see it running! Be sure to cover all the features.

Please be mindful of file size and compress/re-encode if needed. Previously we've had
submissions of 1GB or more for a short video! Such files take a long time to upload, and the
upload (and check) must be completed by the deadline to avoid penalties.

Please note: the primary purpose of the video is for validation and for external
examination. The report is the marked component, so you must ensure you evidence
everything in the report; anything included in the video but not in the report cannot be
marked. The marker may not even view the video.

5. Support Available

A number of support avenues are available throughout this project. It’s suggested you try
them in this order, but this is your choice and you should feel free to avail yourself of one or
all.

Canvas Discussion – you can ask any questions (in general terms please, don't include your work
as everyone can see it!) on the Canvas Discussion Forum. This is very useful as everyone can
see the question and the answer, and it's possible students can help each other out.

Practical Sessions – there is a practical session Tuesdays 1100-1300 where you are welcome
to ask questions about anything including this assignment. Just pop up on the Teams
question channel and say hi.

Module Drop-in – CSC3065 offers a virtual drop-in session every Thursday 1400-1500. Our
module demonstrators (who will be involved in the marking) are guaranteed to be there, and
the module lecturer will most often be there as well. Just pop up on the Teams question
channel and say hi.

Office Hours – David Cutting has office hours available every week. Appointments can be
booked via the Office Hours link on Canvas.

