T1 2021 Final Exam Study Guide

1. Vector Spaces:
   a. Definition of a vector space over a field (R or C).
   b. A set of vectors with two associated operations: addition and scalar multiplication.
   c. Closed under both operations.
   d. Has an additive identity (the zero vector) and a scalar multiplicative identity.
2. Signal Representation and Signal Spaces:
   a. Useful to extend the concept above to any collection of elements, whether finite or infinite.
   b. Can define signal spaces for continuous and discrete functions.
   c. Signals are points in the appropriate signal space. Understand the significance of this representation.
   d. Examples of signal spaces and their bases.
3. Inner product spaces:
   a. An inner product space is a space that has an inner product defined on it.
   b. The inner product is a function from the space to the set of real numbers R (or complex numbers C). Be aware of the definition of the inner product for various spaces.
   c. You should be aware of the various properties of the inner product (e.g. the Cauchy-Schwarz inequality, orthogonality, the interpretation of the inner product as the cosine of the angle…).
   d. The norm of an element v of the space is a function from the space to the non-negative real numbers [0, ∞) with certain useful properties. Be aware of the definitions and properties of norms.
4. Linear, time-invariant systems and operators:
   a. Definition of linearity.
   b. Given an operator, can you determine if it is linear, time-invariant, linear and time-invariant, or not LTI?
5. Sampling:
   a. The interpretation of sampling.
   b. The mathematics of sampling (you should be very familiar with these by now. Sampling leads to a spectrum that is periodic, where each period is the superposition of the blocks of the original spectrum.)
   c. Aliasing: the superposition interpretation above explains what aliasing is.
   d. The Nyquist theorem.
6. Reconstruction:
   a. Reconstruction involves the interpolation of the signal to give the continuous-time original.
   b. The repeated spectra that arise in the sampling process can be viewed as potential solutions to the reconstruction problem.
   c. Therefore, the reconstruction problem can be seen as choosing one of these solutions.
   d. Reconstruction kernels, ideal reconstruction: How does reconstruction work? Can you write the reconstructed form of a signal in terms of the interpolation kernel? (See the sketch below.)
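To make item 6d concrete, here is a minimal Python sketch of ideal reconstruction, where the reconstructed signal is the superposition of shifted sinc kernels weighted by the samples, x_r(t) = sum_n x[n] sinc((t - nT)/T). The sampling rate, test sinusoid and truncation window are illustrative assumptions, not values from the course notes.

```python
import numpy as np

# Sketch of ideal (sinc-kernel) reconstruction from uniform samples:
#   x_r(t) = sum_n x[n] * sinc((t - n*T) / T)
# All parameter values below are illustrative assumptions.

fs = 8.0                       # assumed sampling frequency (Hz), above 2 * f0
T = 1.0 / fs                   # sampling period
f0 = 3.0                       # frequency of the test sinusoid (Hz)

n = np.arange(-32, 33)         # finite window of an ideally infinite sample set
x_samples = np.cos(2 * np.pi * f0 * n * T)

t = np.linspace(-1.0, 1.0, 1001)                 # dense grid of reconstruction times
# Column k of the kernel matrix is sinc((t - n[k]*T)/T); np.sinc(x) = sin(pi x)/(pi x)
kernel = np.sinc((t[:, None] - n[None, :] * T) / T)
x_rec = kernel @ x_samples                       # superposition of shifted sinc kernels

# Compare with the underlying continuous-time signal on the same grid
err = np.max(np.abs(x_rec - np.cos(2 * np.pi * f0 * t)))
print(f"max reconstruction error on the grid: {err:.2e}")   # small; limited by truncation
```

Truncating the (in principle infinite) sum is what limits the accuracy here; practical reconstruction kernels trade off this tail behaviour against filter length.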
7. The FT, DTFT, FS, and DFT:
   a. Definitions of each expansion.
   b. Where does each apply? (For instance, the FT is the most general and applies to continuous-time signals. We require the signal to have finite energy.)
   c. Relationships between the Fourier Transform, Fourier Series, Discrete-Time Fourier Transform, and Discrete Fourier Transform.
   d. The role of the FT and DTFT in understanding sampling and reconstruction (see diagram in the notes).
   e. The formulation of these transforms in terms of the bases of appropriate spaces.
   f. The orthonormality property. Expansion of elements of the signal space in terms of the basis elements using the inner product.
   g. Parseval's theorem.
8. Convolution and polynomial multiplication:
   a. Formulation of the convolution.
   b. How to calculate a convolution.
   c. Convolution: linear vs circular convolution.
   d. Relationship between time and frequency domains.
9. The z-transform:
   a. Sampling, delay units.
   b. Power series expansion. What is the z-transform? (A mathematical manipulation tool.)
   c. Can you write the z-transform of a discrete sequence and a discrete operator?
10. Relationship between the z-transform and the Fourier Transform. Radius of convergence.
11. Manipulating the z-transform:
   a. What is the transfer function of an LTI filter?
   b. Can you factorise the transfer function?
   c. Can you find the poles and zeros?
   d. Under what condition is the filter stable?
   e. What is BIBO stability?
   f. What is the stability triangle for a second order system?
12. Filter properties:
   a. Group delay.
   b. Linear phase, minimum phase, all-pass: what are the properties of these filters?
   c. Can you write a filter as the product of a minimum-phase and an all-pass transfer function?
13. Filters:
   a. FIR, IIR.
   b. Can you derive the transfer function of the filters?
   c. Can you find their impulse response given a filter transfer function?
   d. Remember the impulse response-transfer function relationship for first and second order sections. The filter impulse response can be found by factorization and partial fractions.
   e. What is the filter gain at dc and at the Nyquist frequency? (See the sketch below.)
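As a numerical check on items 11 and 13, the sketch below takes an arbitrary illustrative second-order section (the coefficients are not taken from the notes), reads the poles off the denominator to test BIBO stability, runs the difference equation to obtain the impulse response, and evaluates the gain at dc (z = 1) and at the Nyquist frequency (z = -1).

```python
import numpy as np

# Second-order IIR section H(z) = (b0 + b1 z^-1 + b2 z^-2) / (1 + a1 z^-1 + a2 z^-2).
# Coefficients below are arbitrary illustrative values.
b = np.array([1.0, 0.5, 0.25])     # numerator (feed-forward) coefficients
a = np.array([1.0, -0.9, 0.81])    # denominator (feedback) coefficients, a[0] = 1

poles = np.roots(a)
print("poles:", poles)
print("stable (all |poles| < 1):", np.all(np.abs(poles) < 1.0))

# Impulse response h[n] from the difference equation
#   y[n] = b0 x[n] + b1 x[n-1] + b2 x[n-2] - a1 y[n-1] - a2 y[n-2]
N = 20
x = np.zeros(N); x[0] = 1.0        # unit impulse
h = np.zeros(N)
for n in range(N):
    acc = 0.0
    for k in range(len(b)):
        if n - k >= 0:
            acc += b[k] * x[n - k]
    for k in range(1, len(a)):
        if n - k >= 0:
            acc -= a[k] * h[n - k]
    h[n] = acc
print("h[0:6] =", np.round(h[:6], 4))

# Gain at dc (z = 1) and at the Nyquist frequency (z = -1), cf. item 13e
signs = (-1.0) ** np.arange(3)
print("gain at dc:", np.sum(b) / np.sum(a))
print("gain at Nyquist:", np.sum(b * signs) / np.sum(a * signs))
```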
14. Filter implementation and structures:
   a. Direct Form, Canonical Form, Cascade, Parallel.
   b. Lattice filters.
   c. Given a transfer function, can you find the relevant implementation?
   d. Given an implementation, can you find the transfer function?
15. Filter Design:
   a. FIR: windowing, frequency sampling, least squares.
   b. IIR: impulse invariance, the derivative based method, the bilinear transformation, Padé's method, least squares.
   c. Relative advantages and disadvantages of FIR and IIR filters and their design methods.
16. Fixed point arithmetic:
   a. Can you write the two's complement fixed point representation of a number?
   b. Can you work out the scaling factor of a fixed point representation?
   c. Can you work out the BIBO gain?
   d. Can you carry out the dynamic range optimization?
   e. Remember that filters are made up of accumulators, multipliers and delay elements (shift registers or memory elements).
   f. Do you understand the behaviour of quantization and round-off errors?
17. The DFT, its relationship to the DTFT and its properties:
   a. Basis, orthogonality of the Fourier vectors.
   b. The DFT as an inner product.
   c. Can you derive the properties?
   d. Can you calculate the DFT?
   e. Do you know how to obtain the DFT from the DTFT?
   f. Do you know the relationships between the two (especially the effects of aliasing in the frequency and time domains)?
18. Filter implementation using the DFT:
   a. Linear vs circular convolution.
   b. Overlap-add (see the sketch below).
   c. Overlap-save.
19. Fundamental statistical concepts:
   a. The mean, variance, correlation, covariance.
   b. Statistical independence. Uncorrelated variables and the relationship to statistical independence.
   c. The Gaussian distribution and its properties. The multidimensional and single dimensional cases.
   d. Understanding the statistics from the perspective of information.
20. Random processes:
   a. Mean and covariance.
   b. Stationarity: strict and wide sense stationarity.
   c. Auto- and cross-correlation.
   d. Ensemble versus time averages (first and second order statistics).
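For item 18b, here is a minimal sketch of overlap-add filtering with the FFT; the block length, filter and signal are illustrative assumptions. The key condition is that the FFT size N satisfies N >= L + P - 1, so each block's circular convolution coincides with the linear one.

```python
import numpy as np

# Overlap-add: cut the input into non-overlapping blocks of length L, convolve
# each block with the length-P filter via an N-point FFT, and add the block
# outputs at the appropriate offsets. All lengths below are illustrative.

rng = np.random.default_rng(2)
x = rng.standard_normal(1000)
h = np.hamming(32)                      # stand-in FIR filter, length P = 32

L = 97                                  # block length (deliberately not a power of two)
P = len(h)
N = 1 << int(np.ceil(np.log2(L + P - 1)))   # FFT size, here 128 >= L + P - 1
H = np.fft.rfft(h, N)

y = np.zeros(len(x) + P - 1)
for start in range(0, len(x), L):
    block = x[start:start + L]
    yb = np.fft.irfft(np.fft.rfft(block, N) * H, N)[:len(block) + P - 1]
    y[start:start + len(yb)] += yb      # overlapping tails add up

print("max error vs direct convolution:",
      np.max(np.abs(y - np.convolve(x, h))))   # essentially zero
```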
21. Power spectral density and the cross power spectral density:
   a. The relationship between the two.
   b. The periodogram and its relationship to the PSD.
   c. LTI systems, the identities involving LTI systems and the PSD and cross-PSD.
22. The interpretation of the PSD.
23. Estimation of the correlation and Power Spectral Density:
   a. The implication of ergodicity.
   b. The time average as an estimator of the ensemble mean.
   c. The time-average estimates of the auto-correlation and cross-correlation (biased and unbiased).
   d. Biased and unbiased estimators of the PSD.
   e. The PSD as the DFT of the auto-correlation and its equivalence to the (suitably normalised) periodogram (or square of the magnitude of the DTFT).
   f. The Bartlett method of estimating the PSD: applying a sliding window and averaging.
24. Linear Prediction (see the sketch below):
   a. Write the sample to be predicted as a linear combination of past samples.
   b. Minimise the error between the true sample and the linear combination of past samples.
   c. Differentiate and set to 0 to obtain the coefficients in terms of the autocorrelation.
   d. The orthogonality principle: the error is orthogonal (in a statistical sense) to the samples that are used in the prediction. That is, the error is uncorrelated with the samples that are used in the prediction.
   e. Geometric interpretation of the orthogonality principle.
   f. Information interpretation of the orthogonality principle: the error contains the additional information in the new sample that cannot be obtained from the other samples.
   g. The prediction error power.
   h. The whiteness of the prediction error process.
25. The Wiener filter: extension of linear prediction.
   a. Predict one random process based on another.
   b. The two processes must be correlated in order for this to work.
   c. Similar concepts to linear prediction are involved: the orthogonality principle is a crucial one.
   d. The matrix expression of the solution for the Wiener filter coefficients.
26. Unconstrained Wiener filter: allow the filter extent to go to infinity. Relationship to least squares FIR filter design.
27. Signal Detection:
   a. The likelihood function: concept of likelihood vs pdf.
   b. The Likelihood Ratio Test.
   c. ML detection: maximise the likelihood under each hypothesis, take the ratio, and compare to a suitable threshold.
28. Parameter Estimation:
   a. Least squares estimation (LSE).
   b. Maximum likelihood estimation (MLE).
   c. Condition for equivalence between LSE and MLE.
   d. Derivation of the MLE: find the likelihood and maximise. Alternatively, maximise the log-likelihood.
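To illustrate item 24 (and the matrix solution mentioned in item 25d), the sketch below fits an order-2 linear predictor by solving the normal equations R a = r built from the estimated autocorrelation, then checks the orthogonality principle and the prediction error power numerically. The AR(2) test process, its coefficients and the predictor order are illustrative assumptions.

```python
import numpy as np

# Order-p linear prediction from the estimated autocorrelation.
# The AR(2) process and p = 2 are illustrative choices.
rng = np.random.default_rng(0)
N = 20000
x = np.zeros(N)
w = rng.standard_normal(N)
for n in range(2, N):                       # synthetic AR(2) process to predict
    x[n] = 1.2 * x[n - 1] - 0.6 * x[n - 2] + w[n]

p = 2
# Biased autocorrelation estimate r[k] = (1/N) sum_n x[n] x[n-k]
r = np.array([np.dot(x[k:], x[:N - k]) / N for k in range(p + 1)])

R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])  # Toeplitz matrix
a = np.linalg.solve(R, r[1:p + 1])          # predictor: x_hat[n] = a1 x[n-1] + a2 x[n-2]
print("predictor coefficients:", a)          # close to [1.2, -0.6]

# Orthogonality principle: the prediction error is (nearly) uncorrelated
# with the samples used in the prediction.
e = x[p:] - a[0] * x[p - 1:-1] - a[1] * x[p - 2:-2]
print("sample correlation of e with x[n-1]:", np.dot(e, x[p - 1:-1]) / len(e))
print("sample correlation of e with x[n-2]:", np.dot(e, x[p - 2:-2]) / len(e))
print("prediction error power:", np.mean(e ** 2))   # approx. the driving-noise variance
```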
29. Multi-rate signal processing:
   a. General concept of resampling: start from the sampled sequence, reconstruct the continuous signal, sample at the new sampling frequency. This rests on your understanding of reconstruction (requires an interpolation filter) and sampling (anti-aliasing filter).
   b. Upsampling: make sure you understand the steps (insert zeroes and filter) and what the spectrum looks like at each point.
   c. Downsampling: as with upsampling, make sure you understand the steps (filter then decimate) and what the spectrum looks like at each point.
   d. Resampling by a rational factor and the combination of the reconstruction and anti-aliasing filters.
   e. Polyphase sequences and their use for resampling (computational savings): make sure that you know how to obtain the polyphase sequences of a signal (see the sketch below).
30. Subband filters:
   a. Perfect reconstruction.
   b. Quadrature Mirror Filter (QMF) banks.
31. Subband Transform:
   a. Tree transform.
   b. Polyphase representation.
   c. Alias-free (perfect) reconstruction (matrix formulation): make sure you understand how to break the filter into its polyphase components, and how to solve the matrix equation.
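For item 29e, here is a minimal sketch of the polyphase identity behind efficient decimation: filtering at the full rate and then keeping every M-th sample gives the same result as summing M low-rate branch convolutions between the polyphase components of the filter and of the input. The filter, the input and the factor M are illustrative choices.

```python
import numpy as np

# Polyphase implementation of "filter then decimate by M".
# Decimating the full-rate convolution, y[m] = sum_j h[j] x[mM - j], equals the
# sum over branches k of the low-rate convolutions h_k * u_k, where
# h_k[n] = h[nM + k] and u_k[r] = x[rM - k]. All lengths below are illustrative.

M = 3
rng = np.random.default_rng(1)
x = rng.standard_normal(90)             # length chosen as a multiple of M for tidy indexing
h = np.hamming(12)                      # stand-in lowpass (anti-aliasing) FIR filter

# Direct approach: convolve at the full rate, then keep every M-th sample
y_direct = np.convolve(x, h)[::M]

# Polyphase approach: each branch works only on every M-th sample
y_poly = np.zeros(len(y_direct))
for k in range(M):
    hk = h[k::M]                                    # k-th polyphase component of h
    if k == 0:
        uk = x[::M]                                 # u_0[r] = x[rM]
    else:
        uk = np.concatenate(([0.0], x[M - k::M]))   # u_k[r] = x[rM - k], zero for r = 0
    yk = np.convolve(uk, hk)                        # low-rate convolution in branch k
    y_poly[:len(yk)] += yk[:len(y_poly)]            # robust to differing branch lengths

print("max |y_direct - y_poly|:", np.max(np.abs(y_direct - y_poly)))  # essentially zero
```

The computational saving comes from every branch convolution running at 1/M of the input rate.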






