
Eigenvalues

College: College of Education for Pure Sciences     Department: Department of Mathematics     Stage: 2
Course instructor: Aqeel Kattab Mizal Al-Khafaji       4/17/2011 9:25:59 AM

1 Eigenvectors and Eigenvalues
Last lecture we saw that in order to find the vectors "stretched" by the operator with matrix A,
we need to solve the characteristic equation
det(A − λI) = 0, (1)
which gives us the values λ_i — the coefficients showing how the vectors are scaled after
applying the operator. Now we will give the following definition.
Definition 1.1. Let V be a vector space, and let A be a linear operator in the vector space V.
Then the vector x is called an eigenvector of the operator A if there exists a number λ, called
an eigenvalue, such that
A(x) = λx.
So, our goal is to find eigenvectors, since the following proposition holds:
Proposition 1.2. Let V be an n-dimensional vector space, and let A be a linear operator. If
there are n linearly independent eigenvectors, then the matrix of A is diagonal in the basis
consisting of those eigenvectors.
So far we know how to find the λ_i's — the eigenvalues of the operator. In order to find the
eigenvectors, we need to solve the system
(A − λ_i I)x = 0 (2)
for every eigenvalue λ_i found.
We will give an example of computing eigenvalues and eigenvectors.
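The promised computation can be sketched numerically. Below is a minimal illustration using NumPy (the 2 × 2 matrix is a hypothetical example, not one taken from the lecture): each eigenvalue returned is a root of equation (1), and the matching eigenvector solves the homogeneous system (2).

```python
# A minimal numerical sketch using NumPy; the matrix A is an
# illustrative choice, not one taken from the lecture.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the roots of det(A - lambda*I) = 0 and,
# as columns, one eigenvector for each root.
eigenvalues, eigenvectors = np.linalg.eig(A)

# For A above the characteristic equation is lambda^2 - 4*lambda + 3 = 0,
# so the eigenvalues are 1 and 3.
for lam, x in zip(eigenvalues, eigenvectors.T):
    # x solves the homogeneous system (A - lam*I) x = 0.
    assert np.allclose((A - lam * np.eye(2)) @ x, 0)
```

The same two-step recipe — roots of (1), then null vectors of (2) — is what a hand computation would follow.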
2 Properties of eigenvectors of an operator
Before we start studying properties of eigenvectors and eigenvalues, we recall some definitions.
Let V be a vector space, and let A be a linear operator.
Definition 2.1. The vector x is called an eigenvector of A if there exists a number λ such that
A(x) = λx.
Such a number λ is called an eigenvalue.
To determine eigenvalues we used the characteristic polynomial.
Definition 2.2. Let A be a square n × n matrix. The characteristic polynomial of A is
p_A(λ) = (−1)^n det(A − λI) = det(λI − A).
We will prove in Theorem 2.4 that it is uniquely determined by the operator, i.e. if we
take two different matrices of the operator, the characteristic polynomial will be the same for
both of them.
Remark 2.3. The factor (−1)^n before the determinant is needed to make the coefficient of λ^n
positive. Sometimes we will omit this factor: we only need the roots of the polynomial, and a
change of sign does not affect them.
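Definition 2.2 can be checked numerically. In the sketch below (NumPy; the matrix is again a hypothetical example), `np.poly` returns the coefficients of det(λI − A) with leading coefficient +1, which matches the sign convention of the definition, and its roots are the eigenvalues, illustrating the remark that the overall sign does not affect them.

```python
# Sketch: characteristic polynomial coefficients via NumPy.
# The matrix A is an illustrative example.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Coefficients of det(lambda*I - A), highest degree first:
# lambda^2 - 4*lambda + 3, i.e. [1, -4, 3].
coeffs = np.poly(A)

# The eigenvalues are exactly the roots of this polynomial.
roots = np.roots(coeffs)
assert np.allclose(np.sort(roots), np.sort(np.linalg.eigvals(A)))
```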
If we have an operator, we may wish to define its characteristic polynomial as the characteristic
polynomial of its matrix in some basis. The problem is that we do not know which basis we
should choose. The following theorem shows that the choice of basis does not matter.
Theorem 2.4. If A and B are two matrices of a linear operator, i.e. there exists an invertible
matrix C such that B = C^{−1}AC, then p_A(λ) = p_B(λ).
This theorem allows us to define a characteristic polynomial of the operator without choosing
a particular basis.
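Theorem 2.4 can be illustrated numerically. In this sketch (NumPy; both matrices are hypothetical choices), B = C^{−1}AC represents the same operator in another basis, and the two characteristic polynomials coincide coefficient by coefficient.

```python
# Sketch: similar matrices share the characteristic polynomial.
# A and C are illustrative choices; C is invertible (det C = 1).
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
C = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# B is the matrix of the same operator in the basis given by C.
B = np.linalg.inv(C) @ A @ C

# Same characteristic polynomial, hence the same eigenvalues.
assert np.allclose(np.poly(A), np.poly(B))
```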
Now our goal is to understand whether the operator is diagonalizable or not. Of course we
can compute its eigenvalues. If there are n different eigenvalues, then the following theorem
shows that there are n linearly independent eigenvectors, and the basis with respect to which
the operator is diagonal is simply the basis consisting of those eigenvectors.
Theorem 2.5. Eigenvectors corresponding to different eigenvalues are linearly independent.
Proof. The proof goes by induction on the number of eigenvalues. Let λ_1, λ_2, ..., λ_k be
eigenvalues whose corresponding eigenvectors are linearly independent, i.e. if e_1, e_2, ..., e_k
are eigenvectors such that A(e_i) = λ_i e_i for all i = 1, ..., k, and
d_1 e_1 + d_2 e_2 + ... + d_k e_k = 0,
then d_i = 0 for all i's.
Now add another eigenvalue λ_{k+1} with corresponding eigenvector e_{k+1}, such that
A(e_{k+1}) = λ_{k+1} e_{k+1}. We will prove that the vectors e_1, e_2, ..., e_k, e_{k+1} are still
linearly independent. Consider a linear combination of them which is equal to 0:
c_1 e_1 + c_2 e_2 + ... + c_k e_k + c_{k+1} e_{k+1} = 0. (3)
Now we can apply the linear operator to both sides of this equality:
A(c_1 e_1) + A(c_2 e_2) + ... + A(c_k e_k) + A(c_{k+1} e_{k+1}) = 0.
This is equivalent to
c_1 A(e_1) + c_2 A(e_2) + ... + c_k A(e_k) + c_{k+1} A(e_{k+1}) = 0,
and since they are eigenvectors, i.e. A(e_i) = λ_i e_i, we have
c_1 λ_1 e_1 + c_2 λ_2 e_2 + ... + c_k λ_k e_k + c_{k+1} λ_{k+1} e_{k+1} = 0. (4)
Now multiply equality (3) by λ_{k+1} and subtract it from (4). We get
c_1(λ_1 − λ_{k+1}) e_1 + c_2(λ_2 − λ_{k+1}) e_2 + ... + c_k(λ_k − λ_{k+1}) e_k = 0
(note that there is no term with e_{k+1} anymore!). But λ_{k+1} ≠ λ_i for i = 1, ..., k. So if
c_i ≠ 0 for some i, we get a nontrivial linear combination of e_1, e_2, ..., e_k equal to zero,
and the vectors e_1, e_2, ..., e_k are not linearly independent. But they are linearly independent!
Thus c_1 = c_2 = ... = c_k = 0, and then (3) reduces to c_{k+1} e_{k+1} = 0; since an eigenvector
is nonzero, c_{k+1} = 0 as well. Hence the vectors e_1, e_2, ..., e_k, e_{k+1} are linearly independent.
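Theorem 2.5 can be sanity-checked numerically. In this sketch (NumPy; the upper triangular matrix is a hypothetical choice with three distinct eigenvalues), the matrix whose columns are the eigenvectors has full rank, which is exactly linear independence.

```python
# Sketch: eigenvectors for distinct eigenvalues are independent.
# The upper triangular matrix below is an illustrative choice;
# its eigenvalues are its diagonal entries 1, 2, 3.
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])

eigenvalues, vecs = np.linalg.eig(A)

# Three distinct eigenvalues, so by Theorem 2.5 the eigenvector
# matrix (one eigenvector per column) must have full rank.
assert np.allclose(np.sort(eigenvalues), [1.0, 2.0, 3.0])
assert np.linalg.matrix_rank(vecs) == 3
```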
We can now state the main corollary of this theorem.
Corollary 2.6. Let A be a linear operator in an n-dimensional space V. If the characteristic
polynomial of A has n different roots, then A is diagonalizable with respect to the basis
consisting of the eigenvectors.
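Corollary 2.6 in action, as a sketch (NumPy; the matrix is an illustrative choice): with n distinct roots, the matrix P whose columns are the eigenvectors is invertible, and P^{−1}AP is diagonal with the eigenvalues on the diagonal.

```python
# Sketch: diagonalization in a basis of eigenvectors.
# A is an illustrative matrix with two distinct eigenvalues.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, P = np.linalg.eig(A)   # distinct roots: 1 and 3

# Columns of P are n independent eigenvectors, so P is invertible
# and the operator's matrix in that basis is diagonal.
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag(eigenvalues))
```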

Now we will see that even if there are not n different roots, there may still exist a basis
consisting of eigenvectors.

