MAY 2021
48 HOUR ASSESSMENT
SCHOOL OF COMPUTER SCIENCE
MODULE CODE: CS5012
MODULE TITLE: Language and Computation
TIME TO HAND IN: 48 hours
INSTRUCTIONS: (a) Answer all questions.
(b) Each question indicates the number of marks it carries.
The paper carries a total of 60 marks.
This assessment consists of exam-style questions and you should answer as you would
in an exam. As such, citations of sources are not expected, but your answers should be
from your own memory and understanding and significant stretches of text should not be
taken verbatim from sources. Any illustrations or diagrams you include should be original
(hand or computer drawn). You may word-process your answers, or hand-write and scan
them. In either case, please return your answers as a single PDF. If you handwrite, please
make sure the pages are legible, the right way up and in the right order. Your submission
should be your own unaided work. While you are encouraged to work with your peers to
understand the content of the course while revising, once you have seen the questions you
should avoid any further discussion until you have submitted your results. You must submit
your completed assessment on MMS within 48 hours of it being sent to you. Assuming
you have revised the module contents beforehand, answering the questions should take no
more than three hours.
1. Regular languages
(a) Give a regular expression for the language accepted by the following finite automaton:
[Figure: a finite automaton with a designated start state; the transition labels a, b, a survive from the original diagram, which is not recoverable from the text.]
[3 marks]
(b) Let Σ be the alphabet {a, b}. Let L be the language described by
Σ∗ − (ε|Σ∗b)aa(ε|bΣ∗), where the hyphen denotes set difference and ε denotes
the empty string. (Hint: L consists of the strings over Σ in which no maximal
substring consisting solely of a's is of length 2.) Draw a finite automaton
that accepts L. [7 marks]
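One way to double-check a candidate automaton for part (b) is to compare it against a brute-force membership test for L. The sketch below simply implements the hint directly; the name in_L is illustrative and not part of the paper:

```python
import re

def in_L(s: str) -> bool:
    """Brute-force membership test for L: strings over {a, b} in which
    no maximal run of a's has length exactly 2."""
    assert set(s) <= {"a", "b"}
    # re.findall(r"a+", s) returns exactly the maximal runs of a's.
    return all(len(run) != 2 for run in re.findall(r"a+", s))
```

Enumerating all strings up to some small length and comparing in_L against a hand-drawn automaton is then a quick sanity check.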
(c) We define a sequential transduction to be a transduction that can be described
by a sequential transducer. Is the class of sequential transductions closed under
intersection? If no, justify your answer by a counter-example. If yes, outline a
proof. [10 marks]
[Total marks 20]
2. Morphology, syntax
(a) In German, the ending of a regular verb in present tense is normally t for third
person singular, unless the stem ends in t or d, in which case the ending is et.
Examples are:
stem 3rd sing.
lern lernt
arbeit arbeitet
red redet
Draw a finite-state transducer that maps a stem to this verb form. To be precise,
the input is the stem of a regular verb followed by the symbol #, and the output
is the third person singular present tense form of that stem, without the final #.
As a transition label you can use the abbreviation other, which is short for a:a, b:b,
c:c, e:e, . . . , that is, a mapping from any letter (other than t or d) to that same
letter. [7 marks]
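The required input/output behaviour can be pinned down in a few lines of Python before drawing the transducer. The helper below (third_sg is an illustrative name; it is a specification of the mapping, not the transducer itself) mirrors the rule and the three examples:

```python
def third_sg(stem_hash: str) -> str:
    """Map 'stem#' to the third person singular present tense form.
    Illustrative specification of the transducer's input/output relation."""
    assert stem_hash.endswith("#")
    stem = stem_hash[:-1]
    # Stems ending in t or d take the ending 'et'; all others take 't'.
    ending = "et" if stem.endswith(("t", "d")) else "t"
    return stem + ending
```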
(b) Consider the following context-free grammar:
S → NP VP
VP → V NP | VP PP
PP → P NP
NP → I | D N | NP PP
V → saw
P → with
N → man | telescope
D → a
Given this grammar, list the triples constructed by the Cocke-Kasami-Younger
(CKY) algorithm for the sentence “I saw a man with a telescope”. Assume
the formulation of the first CKY algorithm discussed in the lectures, without
probabilities. [7 marks]
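Hand-computed triples can be cross-checked with a brute-force recognizer. The sketch below assumes one common triple convention, (i, A, j) meaning that nonterminal A derives words i+1 through j (0-based fenceposts); the lecture formulation may index differently:

```python
# Lexical rules and binary rules of the grammar from question 2(b).
LEX = {
    "I": {"NP"}, "saw": {"V"}, "with": {"P"},
    "man": {"N"}, "telescope": {"N"}, "a": {"D"},
}
BINARY = [
    ("S", "NP", "VP"), ("VP", "V", "NP"), ("VP", "VP", "PP"),
    ("PP", "P", "NP"), ("NP", "D", "N"), ("NP", "NP", "PP"),
]

def cky(words):
    """Return the set of triples (i, A, j) derivable for the sentence."""
    chart = set()
    n = len(words)
    for i, w in enumerate(words):        # lexical triples, span 1
        for a in LEX[w]:
            chart.add((i, a, i + 1))
    for span in range(2, n + 1):         # wider spans, bottom up
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):    # split point
                for lhs, b, c in BINARY:
                    if (i, b, k) in chart and (k, c, j) in chart:
                        chart.add((i, lhs, j))
    return chart

chart = cky("I saw a man with a telescope".split())
```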
(c) What is the order of the time complexity of determining whether there is a
structurally ambiguous constituent in a sentence, given a context-free grammar
and a table of triples constructed by the CKY algorithm for that sentence?
Express this complexity in terms of the sentence length and the number of
nonterminals in the grammar. You may assume that existence of a given triple
in the table can be determined in constant time, and similarly that existence of a
given rule in the grammar can be determined in constant time. Motivate your
answer by describing an algorithm in words, or by giving a snippet of code in
Java or in self-explanatory pseudo code. [4 marks]
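One possible reading of such an algorithm, sketched in Python under the question's constant-time-lookup assumption: a constituent (i, A, j) is structurally ambiguous when it can be built in more than one way, that is, by more than one (binary rule, split point) pair. Scanning all triples, all rules, and all split points is then cubic in the sentence length and linear in the grammar size.

```python
def has_ambiguous_constituent(chart, rules):
    """Sketch (one reading of the question): report whether some triple
    (i, A, j) in the chart can be built from a binary rule A -> B C and
    a split point k in more than one way."""
    for i, a, j in chart:
        ways = 0
        for lhs, b, c in rules:
            if lhs != a:
                continue
            for k in range(i + 1, j):    # each (rule, k) is one way
                if (i, b, k) in chart and (k, c, j) in chart:
                    ways += 1
                    if ways > 1:
                        return True
    return False
```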
(d) Is there any constituent in the sentence from question 2(b) that is structurally
ambiguous? Explain how your algorithm from question 2(c) detects this. [2 marks]
[Total marks 20]
3. Feature grammars, semantics
(a) Consider the following context-free grammar:
S → NP VP
VP → V NP
NP → he | she | him | her
V → likes
Augment this grammar to become a unification-based (feature-based) grammar,
by adding a ‘case’ feature, and constraints to enforce case agreement, to restrict
the language to only those sentences that are correct English. You can use the
notation used in the lectures or the notation of NLTK. [5 marks]
(b) Consider the following context-free grammar, partially augmented with semantic
rules:
S → NP VP {NP.sem(VP.sem)}
NP → Det Nominal {Det.sem(Nominal.sem)}
Nominal → Adj Nominal {Adj.sem(Nominal.sem)}
Det → every { . . . }
Det → a { . . . }
Adj → small { . . . }
Nominal → dog { . . . }
VP → barks { . . . }
Now consider the following two sentences, with their desired meaning representations:
A1 every dog barks
A2 ∀d Dog(d) ⇒ ∃e Barking(e) ∧ Barker(e, d)
B1 a small dog barks
B2 ∃d Dog(d) ∧ Small(d) ∧ ∃e Barking(e) ∧ Barker(e, d)
Complete the above augmented grammar, by giving the semantics of ‘every’,
‘a’, ‘small’, ‘dog’, and ‘barks’, so that the desired meaning representations are
obtained for A1 and B1. Show that your solution is correct, by first giving
the lambda expressions obtained from A1 and B1 before any beta reductions,
and then showing step-by-step that the beta reductions lead to A2 and B2.
[15 marks]
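As a way to sanity-check candidate lexical semantics, Python closures can stand in for lambda terms, with formulas built as strings. This is only an illustration, not the notation expected in an answer: parentheses are added for readability, and variable capture is side-stepped by fixing the variable names d and e.

```python
# Candidate lexical semantics, encoded as closures that build formula
# strings. These are one possible assignment, not the official answer.
every = lambda P: lambda Q: f"∀d ({P('d')} ⇒ {Q('d')})"
a     = lambda P: lambda Q: f"∃d ({P('d')} ∧ {Q('d')})"
small = lambda N: lambda x: f"{N(x)} ∧ Small({x})"
dog   = lambda x: f"Dog({x})"
barks = lambda x: f"∃e (Barking(e) ∧ Barker(e, {x}))"

# S.sem = NP.sem(VP.sem); for A1, NP.sem = every(dog):
a1 = every(dog)(barks)
# For B1, Nominal.sem = small(dog) and NP.sem = a(small(dog)):
b1 = a(small(dog))(barks)
```

Printing a1 and b1 reproduces (up to parenthesization) the shapes of A2 and B2, which makes it easy to experiment with alternative choices before writing the beta reductions out by hand.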
[Total marks 20]
*** END OF PAPER ***