
MATH 235 Final ANSWERS

Date: 2020-12-08

1. (10 points)

Fix positive integers m,n and consider the vector space V of all m×n matrices with entries

in the real numbers R.

(a) Find the dimension of V and prove your answer. Please carry out all the steps of your

proof.

(b) Let P be the subset of V consisting of m×n matrices each of whose row sum is 1. Prove

or disprove: P is a subspace of V.

(c) Assume m ≥ 2 and n ≥ 2. Find a subspace of V of dimension 2. Please explain your

answer, but you don’t have to give a proof.

Answer:

(a) V has dimension mn. To prove this statement, it is enough to exhibit a basis β of V

with mn elements. For 1 ≤ i ≤ m and 1 ≤ j ≤ n, let eij denote the matrix in V with a 1 in

the (i, j) position and zeros elsewhere. Note that there are mn such elements eij, and let β

be the set of these mn matrices. If β forms a basis of V, then V has dimension mn and we

are done.

First we show that the span of β is V. Indeed, if A ∈ V, then

A = ∑_{i=1}^{m} ∑_{j=1}^{n} Aij eij,

so β spans V.

Next we show that β is a linearly independent set, which finishes the proof that β is a basis for V, and hence that the dimension of V is mn. Suppose that

0 = ∑_{i=1}^{m} ∑_{j=1}^{n} aij eij

for scalars aij. By the definition of the eij, the matrix on the right-hand side has (i, j) entry aij. But a matrix is the zero matrix if and only if all of its entries are 0, so every aij = 0. This shows that β is a linearly independent set, and hence the dimension of V is mn.


(b) P is not a subspace of V. Indeed, let A ∈ P. By the definition of P, each of the rows of A sums to 1. Now consider B = 2A, and note that each of the rows of B sums to 2. Therefore B ∉ P, so P is not closed under scalar multiplication and hence is not a subspace of V.

(c) It suffices to find a linearly independent subset S ⊆ V with 2 elements, since then the span of S is a two-dimensional subspace of V. There are many such choices. For example, using the notation from part (a) above, we could choose S = {e11, e12}.
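The decomposition used in the spanning argument of part (a) can be checked numerically. A minimal plain-Python sketch, not part of the exam solution; the 2×3 matrix A below is a made-up example:

```python
# Check of the spanning argument: any matrix A equals the sum of
# A[i][j] * e_ij over all positions (i, j). Hypothetical 2x3 example.
m, n = 2, 3

def e(i, j):
    """The basis matrix e_ij: a 1 in position (i, j), zeros elsewhere."""
    return [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(m)]

A = [[5, -2, 7],
     [0, 3, 1]]

recon = [[0] * n for _ in range(m)]
for i in range(m):
    for j in range(n):
        E = e(i, j)
        for r in range(m):
            for c in range(n):
                recon[r][c] += A[i][j] * E[r][c]

assert recon == A  # A = sum_{i,j} A_ij e_ij, exactly as in the proof
```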

2. (10 points)

P2(R) is the real vector space of real polynomials of degree at most 2. Let W be the following

subset of P2(R):

W = {f ∈ P2(R) | f(2) = f(1)}.

(a) Prove that W is a vector subspace of P2(R).

(b) Write down a basis for W . You do not need to prove that the set given is a basis, though

justification of how you found it must be given.

(c) W is isomorphic to Rd for what value of d? Justify your answer.

Answer:

(a) W is a subspace because it satisfies the three standard properties.

1. The zero polynomial O(x) belongs to W: O(2) = 0 = O(1).

2. W is closed under addition: let f and g be two polynomials in W , their sum (f + g)

also belongs to W because

(f + g)(2) = f(2) + g(2) = f(1) + g(1) = (f + g)(1).

3. W is closed under scalar multiplication: let f be a polynomial in W and λ a real

number, their scalar product (λf) also belongs to W because

(λf)(2) = λf(2) = λf(1) = (λf)(1).

(b) Let f(t) = at² + bt + c ∈ W. Then

4a + 2b + c = f(2) = f(1) = a + b + c.


Therefore, 3a + b = 0. This means that c is a free variable and a, b are related by b = −3a. Therefore,

f(t) = a(t² − 3t) + c

and a basis for W is {t² − 3t, 1}.

(c) The dimension of W is 2 and so W is isomorphic to R2 because all 2-dimensional real

vector spaces are isomorphic to R2.
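The claimed basis can be sanity-checked by evaluating each basis polynomial at 1 and 2; a quick plain-Python check, not part of the exam solution:

```python
# Both proposed basis polynomials of W satisfy the defining
# condition f(2) = f(1).
def p1(t):
    return t**2 - 3*t  # t^2 - 3t

def p2(t):
    return 1  # the constant polynomial 1

assert p1(2) == p1(1) == -2   # 4 - 6 = 1 - 3 = -2
assert p2(2) == p2(1) == 1
```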

3. (10 points)

Let V denote the linear span of the following functions from R to R: e−2x, 1, e2x. Also

suppose that these functions form an ordered basis β for V. Let T : V → V be the linear

transformation defined by (Tf)(x) = f(−x), and let D : V → V be the linear transformation defined by (Df)(x) = df(x)/dx.

For the following questions, you must show your calculations, but you need not give a proof.

(a) Find the matrix [T ]β.

(b) Find the matrix [D]β.

(c) Find the matrix [TD]β.

Answer:

(a) We find that T maps e−2x, 1, e2x to e2x, 1, e−2x respectively. Therefore

[T ]β =

0 0 1
0 1 0
1 0 0 .

(b) We find that D maps e−2x, 1, e2x to −2e−2x, 0, 2e2x respectively. Therefore

[D]β =

−2 0 0
 0 0 0
 0 0 2 .

(c) Using parts (a) and (b), we find

[TD]β = [T ]β[D]β =

0 0 1     −2 0 0      0 0 2
0 1 0  ·   0 0 0  =   0 0 0
1 0 0      0 0 2     −2 0 0 .
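The matrix product above can be verified directly; a quick plain-Python check, not part of the exam solution:

```python
# Verify [TD]_beta = [T]_beta [D]_beta with a plain-Python matrix product.
T = [[0, 0, 1],
     [0, 1, 0],
     [1, 0, 0]]
D = [[-2, 0, 0],
     [0, 0, 0],
     [0, 0, 2]]

def matmul(X, Y):
    """Standard matrix multiplication for matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))]
            for i in range(len(X))]

TD = matmul(T, D)
assert TD == [[0, 0, 2],
              [0, 0, 0],
              [-2, 0, 0]]
```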


4. (10 points)

Let T : R2 → R2 be linear and suppose T² ≠ 0, where T² = T ◦ T and 0 denotes the zero map.

(a) Show that 1 ≤ rank(T²) ≤ rank(T).

(b) By considering the possible values of rank(T) separately, deduce that R(T) = R(T²), where, say, R(T) is the range of T.

Answer:

(a) T² ≠ 0 means that R(T²) ≠ {0}, so rank(T²) ≥ 1. Moreover,

R(T²) = T[T[R2]] ⊆ T[R2] = R(T),

which implies rank(T²) ≤ rank(T). Putting the two inequalities together gives

1 ≤ rank(T²) ≤ rank(T).

(b) The possible values of rank(T) are 1 and 2. Let us consider them separately.

• If rank(T) = 1, then rank(T²) is bounded above and below by 1, so rank(T²) = 1. By part (a), R(T²) is a subspace of R(T), and the two spaces have the same dimension. Therefore, R(T) = R(T²).

• If rank(T) = 2, then T is onto. T² is also onto because

T²[R2] = T[T[R2]] = T[R2] = R2.

Therefore, R(T) = R2 = R(T²).

In either case, R(T) = R(T²).
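A concrete instance of the rank-1 case is easy to check by hand or by machine. A hypothetical example, not part of the exam solution; the matrix M below is made up:

```python
# A hypothetical rank-1 example for part (b): M represents a T with
# rank(T) = 1 and T^2 != 0, and indeed R(T) = R(T^2).
M = [[1, 0],
     [1, 0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))]
            for i in range(len(X))]

M2 = matmul(M, M)
assert M2 == M   # here T^2 = T, so T^2 != 0
# Both ranges are spanned by the column (1, 1), so R(T) = R(T^2).
```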

5. (10 points)

Use elementary row and/or column operations to find the determinant of

A =

1 1 2 0

1 0 1 3

2 1 1 2

0 2 1 3

.


You must use the method of row and column operations to get any credit for this problem.

It’s also the easiest way.

Answer:

We can solve the problem using just row operations. First we add multiples of the first row

to the other rows, in order to clear the first column (except for its first entry). Such row

operations will not affect the determinant. We get:

1 1 2 0

0 −1 −1 3

0 −1 −3 2

0 2 1 3

.

Next, add multiples of the second row to the third and fourth rows, to get:

1 1 2 0

0 −1 −1 3

0 0 −2 −1

0 0 −1 9

.

Again, the determinant is unchanged. Finally, subtract 1/2 times the third row from the

fourth row to get

1 1 2 0

0 −1 −1 3

0 0 −2 −1

0 0 0 19/2

.

Once again, the determinant is unchanged. Now we have an upper triangular matrix whose

determinant is the product of the diagonal entries. Thus

det(A) = 1 · (−1) · (−2) · (19/2) = 19.
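An independent check of this value, not part of the exam solution, using cofactor (Laplace) expansion instead of row operations:

```python
# Independent check of det(A) = 19 via cofactor expansion along the
# first row (recursive, exact integer arithmetic).
def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

A = [[1, 1, 2, 0],
     [1, 0, 1, 3],
     [2, 1, 1, 2],
     [0, 2, 1, 3]]
assert det(A) == 19
```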

6. (10 points)

Let T : R2 → R2 be a linear map and β the following basis for R2:

β = {(1, 3), (2, 4)}, written as column vectors.

Suppose that T is represented by the following matrix A in β:

A := [T ]β =

1 2
3 4 .


(a) Find the nullity of T showing all your work.

(b) Find the matrix representing T in the standard basis for R2 showing all your work.

Answer:

(a) det(A) = −2 ≠ 0, so rank(T) = rank(A) = 2. By Rank-Nullity,

nullity(T) = dim(R2) − rank(T) = 2 − 2 = 0.

(b) Let α be the standard basis for R2, and let Q denote the change of coordinates matrix from β to α. The change of coordinates process gives

[T ]α = Q [T ]β Q⁻¹ = Q A Q⁻¹.

The matrix Q has the elements of β as its columns, and so Q = A. Therefore

[T ]α = A A A⁻¹ = A =

1 2
3 4 .
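The similarity computation can be confirmed in exact arithmetic; a quick plain-Python check, not part of the exam solution:

```python
from fractions import Fraction

# Check of part (b): with Q the matrix whose columns are the beta
# vectors (here Q equals A), Q A Q^{-1} comes back to A itself.
A = [[Fraction(1), Fraction(2)],
     [Fraction(3), Fraction(4)]]
Q = [row[:] for row in A]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    """Inverse of a 2x2 matrix by the adjugate formula."""
    a, b = M[0]
    c, d = M[1]
    d_ = a * d - b * c
    return [[d / d_, -b / d_], [-c / d_, a / d_]]

T_alpha = matmul(matmul(Q, A), inv2(Q))
assert T_alpha == A
```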

7. (10 points)

P1(R) is the real vector space of real polynomials of degree at most 1. Consider the linear

maps T : P1(R)→ P1(R) and S : P1(R)→ P1(R) given by

T (p)(t) = 2p(t) + p′(t) and S(p)(t) = p(t) + (t + 1)p′(t).

Answer the following questions by performing calculations. You need not give a proof.

(a) Find the eigenvalues of T and S.

(b) Find the corresponding eigenvectors for T and S.

(c) Which of the linear transformations T, S are diagonalizable?

Answer:

The matrices representing T and S respectively in the standard basis β = {1, t} are

[T ]β := A =

2 1
0 2

[S]β := B =

1 1
0 2 .

We answer the questions for A and B and note what this implies about T and S.

(a) We begin by finding the eigenvalues of the two matrices.


Setting the characteristic polynomials to 0, we get (first for A)

0 = det(A − tI) = det

2 − t    1
  0    2 − t

= (2 − t)².

So A has only one eigenvalue, 2. The same holds for T.

For B, we get

0 = det(B − tI) = det

1 − t    1
  0    2 − t

= (1 − t)(2 − t),

so B has eigenvalues 1 and 2. The same holds for S.

(b) We begin by finding the eigenvectors of the two matrices.

First we deal with A, which has only one eigenvalue, λ = 2. If v = (x, y) is an eigenvector for this eigenvalue, we must have Av = 2v, in other words

2x + y = 2x
2y = 2y.

The second equation is always satisfied, so we need only consider the first equation. It simplifies to y = 0, while x can be anything, so an eigenvector has the form (x, 0) for any nonzero value of x. Therefore, the non-zero constant polynomials of the form p(t) = x are the only eigenvectors of T, with eigenvalue 2.

Next consider B. For the eigenvalue λ = 1, if v = (x, y) is an eigenvector then Bv = v, which gives

x + y = x
2y = y.

The second equation gives y = 0, and then any value of x satisfies the first equation. Thus for any nonzero value of x, the vector (x, 0) is an eigenvector corresponding to λ = 1. Therefore, the non-zero constant polynomials of the form p(t) = x are the only eigenvectors of S with eigenvalue 1.

For λ = 2, we get the equations

x + y = 2x
2y = 2y.

The second equation is always true, and the first equation gives x = y. Therefore, for any nonzero value of x, the vector (x, x) is an eigenvector corresponding to λ = 2. Therefore, the non-zero polynomials of the form p(t) = x + xt are the only eigenvectors of S with eigenvalue 2.

(c) dim(P1(R)) = 2. T has only one eigenvector (up to scalar multiples), so T is not

diagonalizable. S has two linearly independent eigenvectors, so S is diagonalizable.
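The eigenvalue and eigenvector claims can be verified by direct matrix-vector multiplication; a quick plain-Python check, not part of the exam solution:

```python
# Direct check of the claims for [T]_beta and [S]_beta.
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

A = [[2, 1],
     [0, 2]]   # [T]_beta
B = [[1, 1],
     [0, 2]]   # [S]_beta

assert matvec(A, [1, 0]) == [2, 0]   # eigenvalue 2 for T (constant polys)
assert matvec(B, [1, 0]) == [1, 0]   # eigenvalue 1 for S (constant polys)
assert matvec(B, [1, 1]) == [2, 2]   # eigenvalue 2 for S (p(t) = x + xt)
assert matvec(A, [0, 1]) == [1, 2]   # (0, 1) is NOT an eigenvector of A
```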

8. (10 points)

Let A and B be real n× n square matrices.

(a) Suppose that AB is not invertible. Is it true that at least one of A and B is not invertible?

Provide a proof or counter example.

(b) Suppose that A has at most n − 1 nonzero entries, that is, at most n − 1 of the Aij ≠ 0.

Is it true that det(A) = 0? Provide a proof or counter example.

(c) Suppose that A and B commute, that is, AB = BA. Is it true that det(A² − B²) = det(A − B) det(A + B)? Provide a proof or counter example.

(d) Suppose that Ak = In for some positive integer k > 0. What are the possible values of

det(A)? Justify your answer.

Answer:

(a) True. AB is not invertible and so

0 = det(AB) = det(A) det(B).

Therefore, at least one of det(A) and det(B) is zero. Consequently, at least one of A and B

is not invertible.

(b) True. Use the definition of det:

det(A) = ∑_{σ∈Sn} sgn(σ) A1σ(1) · · · Anσ(n).

Each of the n! products has n factors Aiσ(i), taken from n distinct positions of A. Since at most n − 1 positions of A hold a nonzero entry, at least one of the factors Aiσ(i) is zero, and so each product is 0. Therefore, det(A) = 0.
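This claim is also easy to probe with random tests; a plain-Python sketch for n = 4, not part of the exam solution:

```python
import random

# Randomized check of part (b) for n = 4: placing at most n - 1 nonzero
# entries anywhere always gives determinant 0.
def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

random.seed(0)
n = 4
for _ in range(200):
    M = [[0] * n for _ in range(n)]
    cells = [(i, j) for i in range(n) for j in range(n)]
    for (i, j) in random.sample(cells, n - 1):   # at most n - 1 nonzeros
        M[i][j] = random.randint(-5, 5)
    assert det(M) == 0
```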

(c) True. The commutativity of A and B implies

(A − B)(A + B) = A² + AB − BA − B² = A² − B².


The multiplicativity of det implies the result:

det(A² − B²) = det((A − B)(A + B)) = det(A − B) det(A + B).

(d) The possible values are det(A) = ±1. Indeed,

1 = det(In) = det(A^k) = det(A)^k.

Since all the entries of A are real numbers, det(A) is a real number. Therefore, det(A) is a real root of unity, and so det(A) = ±1.

9. (10 points)

Recall that we say an n × n matrix A over the complex numbers is self-adjoint if A∗ = A,

where A∗ is the complex conjugate of the transpose of A.

We call an n×n matrix A over the complex numbers a Mueller-Petridis matrix if A∗ = 3A.

(a) Give an example of a 2× 2 Mueller-Petridis matrix.

(b) Give a complete list of n× n Mueller-Petridis matrices, and prove your answer.

(c) Suppose A is an n× n matrix over the complex numbers, and A∗ = λA for some scalar

λ. What are the possible values of λ, and how does λ depend on the matrix A?

Answer:

(a) The only example is the 2× 2 zero matrix.

(b) Since (A∗)∗ = A, it follows that A = (A∗)∗ = (3A)∗ = 3(A∗) = 9A. Therefore, 9Aij = Aij, i.e. 8Aij = 0, for every entry Aij of A. So A must be the n × n zero matrix.

(c) As in part (b), we have A = (A∗)∗ = (λA)∗ = λ̄(A∗) = λ̄λA. Recall that for a complex number λ, we have λ̄λ = |λ|². So we can conclude that either A = 0, in which case all values of λ are allowed, or A ≠ 0 and |λ|² = 1, i.e. |λ| = 1.
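A non-real λ of modulus 1 really can occur; a hypothetical 2×2 example (not from the exam) with λ = i, checked in plain Python:

```python
# Hypothetical example for part (c): A* = lam * A with lam = i, so lam
# need not be real; only |lam| = 1 is forced when A != 0.
lam = 1j
A = [[0j, 1 + 0j],
     [-1j, 0j]]

def star(M):
    """Conjugate transpose of a complex matrix given as a list of rows."""
    return [[M[j][i].conjugate() for j in range(len(M))]
            for i in range(len(M[0]))]

lamA = [[lam * A[i][j] for j in range(2)] for i in range(2)]
assert star(A) == lamA       # A* = i A
assert abs(lam) ** 2 == 1    # |lam|^2 = 1, as the argument predicts
```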

10. (10 points)

Consider the following basis for R4.

β = {w1 = (1, 0, 0, 0), w2 = (1, −1, 0, 0), w3 = (1, −1, 1, 0), w4 = (1, −1, 1, −1)},

written as column vectors.


(a) Apply the Gram-Schmidt orthonormalisation process to obtain an orthonormal basis

{v1,v2,v3,v4}. Show all your work.

(b) Find an orthonormal basis for the orthogonal complement of

span{w1 = (1, 0, 0, 0), w2 = (1, −1, 0, 0)}.

Answer:

(a) With v1 = w1/‖w1‖ = e1, the process gives

u2 = w2 − ⟨w2, v1⟩v1 = (0, −1, 0, 0), so v2 = −e2;
u3 = w3 − ⟨w3, v1⟩v1 − ⟨w3, v2⟩v2 = (0, 0, 1, 0), so v3 = e3;
u4 = w4 − ⟨w4, v1⟩v1 − ⟨w4, v2⟩v2 − ⟨w4, v3⟩v3 = (0, 0, 0, −1), so v4 = −e4.

Each ui already has norm 1, so the orthonormal basis is

{v1 = e1, v2 = −e2, v3 = e3, v4 = −e4},

where ei is the ith standard basis vector.

(b) By construction, v3 and v4 are orthonormal vectors that lie in the orthogonal complement, which has dimension 4 − 2 = 2. They are 2 linearly independent vectors in a space of dimension 2, and so {v3, v4} is an orthonormal basis for it.
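The Gram-Schmidt run in part (a) can be reproduced mechanically; a plain-Python sketch, not part of the exam solution:

```python
import math

# Classical Gram-Schmidt on the beta vectors reproduces the answer
# {e1, -e2, e3, -e4}.
def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        u = list(v)
        for b in basis:  # b is already a unit vector
            proj = sum(x * y for x, y in zip(v, b))   # <v, b>
            u = [x - proj * y for x, y in zip(u, b)]
        norm = math.sqrt(sum(x * x for x in u))
        basis.append([x / norm for x in u])
    return basis

w = [[1, 0, 0, 0],
     [1, -1, 0, 0],
     [1, -1, 1, 0],
     [1, -1, 1, -1]]

v = gram_schmidt(w)
assert v == [[1, 0, 0, 0],
             [0, -1, 0, 0],
             [0, 0, 1, 0],
             [0, 0, 0, -1]]
```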
