Page 2 :
Engineering Mathematics-I
B.Tech. Semester-I, U.P. Technical University, Lucknow

BABU RAM
Formerly Dean, Faculty of Physical Sciences, Maharshi Dayanand University, Rohtak
Page 3 :
Associate Acquisitions Editor: Anita Yadav
Associate Production Editor: Jennifer Sargunar
Composition: White Lotus Infotech Pvt. Ltd, Pondicherry

Copyright © 2010 Dorling Kindersley (India) Pvt. Ltd

This book is sold subject to the condition that it shall not, by way of trade or otherwise, be lent, resold, hired out, or otherwise circulated without the publisher's prior written consent in any form of binding or cover other than that in which it is published and without a similar condition including this condition being imposed on the subsequent purchaser and without limiting the rights under copyright reserved above, no part of this publication may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording or otherwise), without the prior written permission of both the copyright owner and the publisher of this book.

ISBN: 978-81-317-3335-6

Published by Dorling Kindersley (India) Pvt. Ltd, licensees of Pearson Education in South Asia.
Head Office: 7th Floor, Knowledge Boulevard, A-8 (A), Sector 62, Noida 201309, UP, India.
Registered Office: 11 Community Centre, Panchsheel Park, New Delhi 110 017, India.
Page 6 :
Contents

Preface  viii
Symbols and Basic Formulae  ix

Unit I  DIFFERENTIAL CALCULUS

1  Successive Differentiation and Leibnitz's Theorem  1.3
  1.1  Successive Differentiation  1.3
  1.2  Leibnitz's Theorem and its Applications  1.8
  1.3  Miscellaneous Examples  1.11
  Exercises  1.12

2  Asymptotes and Curve Tracing  2.1
  2.1  Determination of Asymptotes When the Equation of the Curve in Cartesian Form is Given  2.1
  2.2  The Asymptotes of the General Rational Algebraic Curve  2.2
  2.3  Asymptotes Parallel to Coordinate Axes  2.3
  2.4  Working Rule for Finding Asymptotes of Rational Algebraic Curve  2.3
  2.5  Intersection of a Curve and its Asymptotes  2.7
  2.6  Asymptotes by Expansion  2.9
  2.7  Asymptotes of the Polar Curves  2.9
  2.8  Circular Asymptotes  2.11
  2.9  Concavity, Convexity and Singular Points  2.12
  2.10  Curve Tracing (Cartesian Equations)  2.16
  2.11  Curve Tracing (Polar Equations)  2.21
  2.12  Curve Tracing (Parametric Equations)  2.23
  Exercises  2.25

3  Partial Differentiation  3.1
  3.1  Continuity of a Function of Two Variables  3.1
  3.2  Differentiability of a Function of Two Variables  3.1
  3.3  The Differential Coefficients  3.2
  3.4  Distinction between Derivatives and Differential Coefficients  3.2
  3.5  Higher-Order Partial Derivatives  3.2
  3.6  Envelopes and Evolutes  3.7
  3.7  Homogeneous Function and Euler's Theorem  3.9
  3.8  Differentiation of Composite Functions  3.13
  3.9  Transformation from Cartesian to Polar Coordinates and Vice Versa  3.17
  3.10  Taylor's Theorem for Functions of Several Variables  3.19
  3.11  Extreme Values  3.23
  3.12  Lagrange's Method of Undetermined Multipliers  3.29
  3.13  Jacobians  3.33
  3.14  Properties of Jacobian  3.33
  3.15  Necessary and Sufficient Conditions for Jacobian to Vanish  3.35
  3.16  Differentiation Under the Integral Sign  3.36
  3.17  Approximation of Errors  3.40
  3.18  General Formula for Errors  3.40
  3.19  Miscellaneous Examples  3.43
  Exercises  3.47

Unit II  MATRICES

4  Matrices  4.3
  4.1  Concepts of Group, Ring, Field and Vector Space  4.3
  4.2  Matrices  4.9
  4.3  Algebra of Matrices  4.10
  4.4  Multiplication of Matrices  4.11
Page 7 :
  4.5  Associative Law for Matrix Multiplication  4.12
  4.6  Distributive Law for Matrix Multiplication  4.12
  4.7  Transpose of a Matrix  4.14
  4.8  Symmetric, Skew-symmetric, and Hermitian Matrices  4.14
  4.9  Lower and Upper Triangular Matrices  4.18
  4.10  Adjoint of a Matrix  4.18
  4.11  The Inverse of a Matrix  4.19
  4.12  Methods of Computing Inverse of a Matrix  4.21
  4.13  Rank of a Matrix  4.25
  4.14  Elementary Matrices  4.27
  4.15  Row Reduced Echelon Form and Normal Form of Matrices  4.28
  4.16  Equivalence of Matrices  4.29
  4.17  Row and Column Equivalence of Matrices  4.33
  4.18  Row Rank and Column Rank of a Matrix  4.34
  4.19  Solution of System of Linear Equations  4.34
  4.20  Solution of Non-homogeneous Linear System of Equations  4.35
  4.21  Consistency Theorem  4.36
  4.22  Homogeneous Linear Equations  4.40
  4.23  Characteristic Roots and Characteristic Vectors  4.44
  4.24  The Cayley-Hamilton Theorem  4.47
  4.25  Algebraic and Geometric Multiplicity of an Eigenvalue  4.48
  4.26  Minimal Polynomial of a Matrix  4.48
  4.27  Orthogonal, Normal, and Unitary Matrices  4.50
  4.28  Similarity of Matrices  4.53
  4.29  Diagonalization of a Matrix  4.54
  4.30  Triangularization of an Arbitrary Matrix  4.59
  4.31  Quadratic Forms  4.61
  4.32  Diagonalization of Quadratic Forms  4.62
  4.33  Miscellaneous Examples  4.64
  Exercises  4.77

Unit III  INTEGRAL CALCULUS

5  Beta and Gamma Functions  5.3
  5.1  Beta Function  5.3
  5.2  Properties of Beta Function  5.3
  5.3  Gamma Function  5.7
  5.4  Properties of Gamma Function  5.7
  5.5  Relation Between Beta and Gamma Functions  5.7
  5.6  Dirichlet's and Liouville's Theorems  5.13
  5.7  Miscellaneous Examples  5.15
  Exercises  5.16

6  Multiple Integrals  6.1
  6.1  Double Integrals  6.1
  6.2  Properties of a Double Integral  6.2
  6.3  Evaluation of Double Integrals (Cartesian Coordinates)  6.2
  6.4  Evaluation of Double Integral (Polar Coordinates)  6.7
  6.5  Change of Variables in Double Integral  6.9
  6.6  Change of Order of Integration  6.13
  6.7  Area Enclosed by Plane Curves (Cartesian and Polar Coordinates)  6.17
  6.8  Volume and Surface Area as Double Integrals  6.21
  6.9  Triple Integrals and their Evaluation  6.27
  6.10  Change to Spherical Polar Coordinates from Cartesian Coordinates in a Triple Integral  6.32
  6.11  Volume as a Triple Integral  6.35
  6.12  Miscellaneous Examples  6.40
  Exercises  6.42

Unit IV  VECTOR CALCULUS

7  Vector Calculus  7.3
  7.1  Differentiation of a Vector  7.5
  7.2  Partial Derivatives of a Vector Function  7.12
Page 8 :
  7.3  Gradient of a Scalar Field  7.13
  7.4  Geometrical Interpretation of a Gradient  7.13
  7.5  Properties of a Gradient  7.13
  7.6  Directional Derivatives  7.14
  7.7  Divergence of a Vector-Point Function  7.20
  7.8  Physical Interpretation of Divergence  7.20
  7.9  Curl of a Vector-Point Function  7.21
  7.10  Physical Interpretation of Curl  7.21
  7.11  The Laplacian Operator ∇²  7.22
  7.12  Properties of Divergence and Curl  7.24
  7.13  Integration of Vector Functions  7.29
  7.14  Line Integral  7.30
  7.15  Work Done by a Force  7.33
  7.16  Surface Integral  7.36
  7.17  Volume Integral  7.41
  7.18  Gauss's Divergence Theorem  7.42
  7.19  Green's Theorem in a Plane  7.48
  7.20  Stoke's Theorem  7.52
  7.21  Miscellaneous Examples  7.57
  Exercises  7.64

Examination Papers with Solutions  Q.1
Index  I.1
Page 10 :
Preface

All branches of Engineering, Technology and Science require mathematics as a tool for the description of their contents. Therefore, a thorough knowledge of various topics in mathematics is essential to pursue courses in Engineering, Technology and Science. The aim of this book is to provide students with sound mathematics skills and their applications. Although the book is designed primarily for use by engineering students, it is also suitable for students pursuing bachelor degrees with mathematics as one of the subjects and also for those who prepare for various competitive examinations. The material has been arranged to ensure the suitability of the book for class use and for individual self-study. Accordingly, the contents of the book have been divided into seven chapters covering the complete syllabus prescribed for B.Tech. Semester-I of Uttar Pradesh Technical University, Lucknow. A number of examples, figures, tables and exercises have been provided to enable students to develop problem-solving skills. The language used is simple and lucid. Suggestions and feedback on this book are welcome.

Acknowledgements

I am extremely grateful to the reviewers for their valuable comments. My family members provided moral support during the preparation of this book. My son, Aman Kumar, software engineer, Adobe India Ltd, offered wise comments on some of the contents of the book. I am thankful to Sushma S. Pradeep for excellently typing the manuscript. Special thanks are due to Thomas Mathew Rajesh, Anita Yadav, M. E. Sethurajan and Jennifer Sargunar at Pearson Education for their constructive support.

BABU RAM
Page 11 :
Symbols and Basic Formulae

1  Greek Letters
   α alpha, β beta, γ gamma, δ delta, ε epsilon, ζ zeta, η eta, θ theta, ι iota, κ kappa, λ lambda, μ mu, ν nu, π pi, ρ rho, σ sigma, τ tau, φ phi, χ chi, ψ psi, ω omega;
   Γ capital gamma, Δ capital delta, Σ capital sigma, Φ capital phi, Ψ capital psi, Ω capital omega

2  Algebraic Formulae
   (i) Arithmetic progression a, a + d, a + 2d, ...:
       nth term Tn = a + (n − 1)d; sum of n terms = (n/2)[2a + (n − 1)d]
   (ii) Geometrical progression a, ar, ar², ...:
       nth term Tn = ar^(n−1); sum of n terms = a(1 − rⁿ)/(1 − r)
   (iii) Arithmetic mean of two numbers a and b is (a + b)/2
   (iv) Geometric mean of two numbers a and b is √(ab)
   (v) Harmonic mean of two numbers a and b is 2ab/(a + b)
   (vi) If ax² + bx + c = 0 is quadratic, then
       (a) its roots are given by [−b ± √(b² − 4ac)]/2a
       (b) the sum of the roots is equal to −b/a
       (c) the product of the roots is equal to c/a
       (d) b² − 4ac = 0 ⇒ the roots are equal
       (e) b² − 4ac > 0 ⇒ the roots are real and distinct
       (f) b² − 4ac < 0 ⇒ the roots are complex
       (g) if b² − 4ac is a perfect square, the roots are rational

3  Properties of Logarithm
   (i) loga 1 = 0; loga 0 = −∞ for a > 1; loga a = 1
   (ii) loge 2 = 0.6931; loge 10 = 2.3026; log10 e = 0.4343
   (iii) loga p + loga q = loga (pq)
   (iv) loga p − loga q = loga (p/q)
   (v) loga p^q = q loga p
   (vi) loga n = loga b · logb n = logb n / logb a

4  Angle Relations
   (i) 1 radian = 180°/π
   (ii) 1° = 0.0174 radian

5  Algebraic Signs of Trigonometrical Ratios
   (a) First quadrant: all trigonometric ratios are positive
   (b) Second quadrant: sin θ and cosec θ are positive, all others negative
   (c) Third quadrant: tan θ and cot θ are positive, all others negative
   (d) Fourth quadrant: cos θ and sec θ are positive, all others negative

6  Commonly Used Values of Trigonometrical Ratios
   sin π/2 = 1, cos π/2 = 0, tan π/2 = ∞; cosec π/2 = 1, sec π/2 = ∞, cot π/2 = 0
   sin π/6 = 1/2, cos π/6 = √3/2, tan π/6 = 1/√3; cosec π/6 = 2, sec π/6 = 2/√3, cot π/6 = √3
   sin π/3 = √3/2, cos π/3 = 1/2, tan π/3 = √3; cosec π/3 = 2/√3, sec π/3 = 2, cot π/3 = 1/√3
   sin π/4 = 1/√2, cos π/4 = 1/√2, tan π/4 = 1; cosec π/4 = √2, sec π/4 = √2, cot π/4 = 1

7  Trigonometric Ratios of Allied Angles
   (a) sin(−θ) = −sin θ; cos(−θ) = cos θ; tan(−θ) = −tan θ;
       cosec(−θ) = −cosec θ; sec(−θ) = sec θ; cot(−θ) = −cot θ
   (b) Any trigonometric ratio of (n · 90° ± θ) equals the same trigonometric ratio of θ (with the appropriate sign) when n is even, and the corresponding co-ratio of θ (with the appropriate sign) when n is odd.
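The root relations in item 2(vi) are easy to confirm with a computer algebra system. The following short SymPy sketch is an added illustration, not part of the original text; the symbol names are arbitrary.

```python
import sympy as sp

a, b, c, x = sp.symbols('a b c x')

# Roots of the quadratic a*x**2 + b*x + c = 0, as in item 2(vi)
r1, r2 = sp.solve(a*x**2 + b*x + c, x)

print(sp.simplify(r1 + r2))   # -b/a : sum of the roots
print(sp.simplify(r1*r2))     #  c/a : product of the roots
```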
Page 14 :
UNIT I

Differential Calculus

1  Successive Differentiation and Leibnitz's Theorem
2  Asymptotes and Curve Tracing
3  Partial Differentiation
Page 27 :
2  Asymptotes and Curve Tracing

The aim of this chapter is to study the shape of a plane curve y = f(x). For this purpose, we must investigate the variation of the function f in the case of unlimited increase in the absolute value of x or y, or both, of a variable point (x, y) on the curve. The study of such variation of the function requires the concept of an asymptote. Before defining an asymptote to a curve, let us define finite and infinite branches of a plane curve as follows.

Consider the equation of the ellipse x²/a² + y²/b² = 1. Solving this equation, we get
    y = b√(1 − x²/a²)  or  y = −b√(1 − x²/a²).
The first equation represents the upper half of the ellipse while the second equation represents the lower half of the ellipse. Thus, the earlier equation represents two branches of the ellipse. Further, both these branches lie within the finite part of the xy-plane bounded by x = ±a and y = ±b. Hence, both these branches of the ellipse are finite.

Consider now the equation of the hyperbola x²/a² − y²/b² = 1. Its solution is
    y = (b/a)√(x² − a²)  or  y = −(b/a)√(x² − a²).
Therefore, y tends to ±∞ as x → ±∞. Hence, both branches of this hyperbola extend to infinity and are therefore called the infinite branches of the hyperbola.

A variable point P(x, y) moves along a curve to infinity if the distance of the point from the origin increases without bound. In other words, a point P(x, y) on an infinite branch of a curve is said to tend to infinity along the curve if either x or y, or both, tend to infinity as P(x, y) moves along the branch of the curve.

Now we are in a position to define an asymptote to a curve.

A straight line, at a finite distance from the origin, is said to be a rectilinear asymptote (or simply asymptote) of an infinite branch of a curve if the perpendicular distance of a point P on that branch from this straight line tends to zero as P tends to infinity along the branch of the curve.

For example, the line AB will be an asymptote of the curve in the following figure if the perpendicular distance PM from the point P to the line AB tends to zero as P tends to infinity along the curve.

[Figure: an infinite branch of a curve approaching the straight line AB, with PM the perpendicular from a point P of the curve to AB.]

2.1  DETERMINATION OF ASYMPTOTES WHEN THE EQUATION OF THE CURVE IN CARTESIAN FORM IS GIVEN

Let
    y = mx + c                                   (1)
be the equation of a straight line. Let P(x, y) be an arbitrary point on the infinite branch of the curve f(x, y) = 0. We wish to find the values of m and c so that (1) is an asymptote to the curve. Let PM = p be the perpendicular distance of the point P(x, y) from (1). Then
    p = (y − mx − c)/√(1 + m²).
The abscissa x must tend to infinity as the point P(x, y) recedes to infinity along this line. Thus,
Page 29 :
    (p²/2!) φn″(m) + p φ′n−1(m) + φn−2(m) + (1/x)[… + …] = 0.
As x → ∞, p → c, and we have
    (c²/2) φn″(m) + c φ′n−1(m) + φn−2(m) = 0.
If φn″(m) ≠ 0, then this last quadratic in c gives two values of c. Therefore, there are two asymptotes
    y = mx + c1  and  y = mx + c2
corresponding to the slope m. Thus, in this case, we have two parallel asymptotes.

Remark 2.1
(i) Since the degree of φn(m) = 0 is n at the most, the number of asymptotes, real or imaginary, which are not parallel to the y-axis, cannot exceed n. In case the curve has asymptotes parallel to the y-axis, then the degree of φn(m) is smaller than n by at least the number of asymptotes parallel to the y-axis. Thus, the total number of asymptotes cannot exceed the degree n of the curve.
(ii) Asymptotes parallel to the y-axis cannot be found by the said method, as the equation of a straight line parallel to the y-axis cannot be put in the form y = mx + c.

2.3  ASYMPTOTES PARALLEL TO COORDINATE AXES

(i) Asymptotes parallel to the y-axis of a rational algebraic curve

Let f(x, y) = 0 be the equation of any algebraic curve of the mth degree. Arranging the equation in descending powers of y, we get
    y^m φ0(x) + y^(m−1) φ1(x) + y^(m−2) φ2(x) + … + φm(x) = 0,      (1)
where φ0(x), φ1(x), φ2(x), … are polynomials in x. Dividing the equation (1) by y^m, we get
    φ0(x) + (1/y) φ1(x) + (1/y²) φ2(x) + … + (1/y^m) φm(x) = 0.     (2)
If x = c is an asymptote of the curve parallel to the y-axis, then lim_{y→∞} x = c, where (x, y) lies on the curve (1). Therefore,
    lim_{y→∞} [φ0(x) + (1/y) φ1(x) + (1/y²) φ2(x) + …] = 0
or φ0(c) = 0, so that c is a root of the equation φ0(x) = 0. If c1, c2, … are the roots of φ0(x) = 0, then (x − c1), (x − c2), … are the factors of φ0(x). Also φ0(x) is the coefficient of the highest power of y, that is, of y^m, in equation (1). Thus, we have the following simple rule to determine the asymptotes parallel to the y-axis:

The asymptotes parallel to the y-axis are obtained by equating to zero the coefficient of the highest power of y in the given equation of the curve. In case the coefficient of the highest power of y is a constant or if its linear factors are imaginary, then there will be no asymptote parallel to the y-axis.

(ii) Asymptotes parallel to the x-axis of a rational algebraic curve

Proceeding exactly as in case (i) mentioned earlier, we arrive at the following rule to determine the asymptotes parallel to the x-axis:

The asymptotes parallel to the x-axis are obtained by equating to zero the coefficient of the highest power of x in the given equation of the curve. In case the coefficient of the highest power of x is a constant or if its linear factors are imaginary, then there will be no asymptote parallel to the x-axis.

2.4  WORKING RULE FOR FINDING ASYMPTOTES OF RATIONAL ALGEBRAIC CURVE

In view of the above discussion, we arrive at the following working rule for finding the asymptotes of rational algebraic curves:

1. A curve of degree n may have at most n asymptotes.
2. The asymptotes parallel to the y-axis are obtained by equating to zero the coefficient of the highest power of y in the given equation of the curve. In case the coefficient of the highest power of y is a constant or if its linear factors are imaginary, then there will be no asymptotes parallel to the y-axis. The asymptotes parallel to the x-axis are obtained by equating to zero the coefficient of the highest power of x in the given
Page 30 :
equation of the curve. In case the coefficient of the highest power of x is a constant or if its linear factors are imaginary, then there will be no asymptotes parallel to the x-axis.

If y = mx + c is an asymptote not parallel to the y-axis, then the values of m and c are found as follows:
(i) Find φn(m) by putting x = 1, y = m in the highest-degree terms of the given equation of the curve. Solve the equation φn(m) = 0 for the slope m. If some values are imaginary, reject them.
(ii) Find φn−1(m) by putting x = 1, y = m in the next lower-degree terms of the equation of the curve. Similarly, φn−2(m) may be found by taking x = 1, y = m in the next lower-degree terms in the curve, and so on.
(iii) If m1, m2, … are the real roots of φn(m) = 0, then the corresponding values of c, that is, c1, c2, …, are given by
    c = −φn−1(m)/φ′n(m),  m = m1, m2, ….
Then the required asymptotes are
    y = m1x + c1,  y = m2x + c2, ….
(iv) If φ′n(m) = 0 for some m but φn−1(m) ≠ 0, then there will be no asymptote corresponding to that value of m.
(v) If φ′n(m) = 0 and φn−1(m) = 0 for some value of m, then the value of c is determined from
    (c²/2!) φ″n(m) + (c/1!) φ′n−1(m) + φn−2(m) = 0.
This equation will yield two values of c and thus, we will get at most two parallel asymptotes corresponding to this value of m, provided φ″n(m) ≠ 0.
(vi) Similarly, if φ″n(m) = φ′n−1(m) = φn−2(m) = 0, then the value of c is determined from
    (c³/3!) φ‴n(m) + (c²/2!) φ″n−1(m) + (c/1!) φ′n−2(m) + φn−3(m) = 0.
In this case, we get at most three parallel asymptotes corresponding to this value of m.

EXAMPLE 2.1
Find the asymptotes of the curve
    y²(x² − a²) = x²(x² − 4a²).

Solution. The equation of the curve is
    y²(x² − a²) = x²(x² − 4a²)
or
    y²x² − x⁴ − a²y² + 4a²x² = 0.
Since the degree of the curve is 4, it cannot have more than four asymptotes. Equating to zero the coefficient of the highest power of y, the asymptotes parallel to the y-axis are given by x² − a² = 0. Thus, the asymptotes parallel to the y-axis are x = ±a.
Since the coefficient of the highest power of x in the given equation is constant, there is no asymptote parallel to the x-axis.
To find the oblique asymptotes, we put x = 1 and y = m in the highest-degree terms, that is, the fourth-degree terms y²x² − x⁴, in the given equation and get φ4(m) = m² − 1. Therefore, the slopes of the asymptotes are given by
    φ4(m) = m² − 1 = 0.
Hence, m = ±1. Again putting y = m and x = 1 in the next highest-degree terms, that is, the third-degree terms, we have φ3(m) = 0 (since there is no term of degree 3). Now c is given by
    c = −φ3(m)/φ′4(m) = −0/(2m) = 0.
Therefore, the oblique asymptotes are y = x + 0 and y = −x + 0.
Hence, all the four asymptotes of the given curve are x = ±a and y = ±x.

EXAMPLE 2.2
Find all the asymptotes of the curve
    f(x, y) = y³ − xy² − x²y + x³ + x² − y² − 1 = 0.

Solution. The given curve is of degree 3 and so it may have at most three asymptotes. Since the coefficients of the highest powers of x and y are constants, the curve has no asymptote parallel to the coordinate axes.
To find the oblique asymptotes, we put x = 1 and y = m in the expression containing the third-degree terms of f(x, y). Thereby we get
    φ3(m) = m³ − m² − m + 1 = 0.
This equation yields m = 1, 1, −1. Further, putting x = 1, y = m in the next highest-degree terms, we get
    φ2(m) = 1 − m².
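The working rule above is mechanical enough to be carried out with a computer algebra system. The following SymPy sketch is an added illustration, not part of the original text; it reproduces the asymptotes found in Example 2.1 by reading off the leading coefficient in y for the vertical asymptotes and forming φ4(m) and φ3(m) for the oblique ones. The symbol names are arbitrary.

```python
import sympy as sp

x, y, m, t, a = sp.symbols('x y m t a', real=True)

# Curve of Example 2.1:  y^2 (x^2 - a^2) - x^2 (x^2 - 4 a^2) = 0
f = y**2*(x**2 - a**2) - x**2*(x**2 - 4*a**2)

# Asymptotes parallel to the y-axis: equate the coefficient of the
# highest power of y to zero.
print(sp.solve(sp.Poly(f, y).LC(), x))             # [-a, a]  ->  x = a and x = -a

# Oblique asymptotes y = m x + c: substitute x -> t, y -> m t and read
# phi_4(m), phi_3(m) off as the coefficients of t^4 and t^3.
g = sp.expand(f.subs({x: t, y: m*t}))
phi4, phi3 = g.coeff(t, 4), g.coeff(t, 3)          # m**2 - 1  and  0
for m0 in sp.solve(phi4, m):                       # slopes m = -1, 1
    c0 = (-phi3/sp.diff(phi4, m)).subs(m, m0)      # c = -phi_3(m)/phi_4'(m) = 0
    print(sp.Eq(y, m0*x + c0))                     # y = -x  and  y = x
```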
Page 37 :
Asymptotes and Curve Tracing, , or, a½1 þ ð1Þn 2, ¼ a½1 þ ð1Þn :, r cos h ¼, 1 þ ð1Þn, Putting n ¼ 0, 1, 2,…, the asymptotes of the curve, are given by, r cos h ¼ 2a and r cos h ¼ 0:, Thus, we note that there are only two asymptotes of, the given curve., EXAMPLE 2.17, Find the asymptotes of the curve r ¼ a tan h., Solution. The equation of the given curve may be, written as, 1 1 cos h, ¼, ¼ f ðhÞ:, r a sin h, Therefore, f (h) ¼ 0 implies cos h ¼ 0 and so,, h ¼ ð2n þ 1Þ 2. Also, 1, f 0 ðhÞ ¼ cosec2 h:, a, Therefore,, h, i, 1, 1, :, f 0 ð2n þ 1Þ ¼ , ¼, 2, 2, a, ð, 1, Þ2n, a sinð2n þ 1Þ, 2, , Thus,, 1, ¼ að1Þ2n1 ¼ a:, f 0 ð2n þ 1Þ 2, The asymptotes are now given by, , , r sin h ð2n þ 1Þ, ¼ a:, 2, Proceeding as in the earlier example, we get the, asymptotes as, r cos h ¼ a and r cos h ¼ a:, EXAMPLE 2.18, Find the asymptotes of the following curves:, (i) rh ¼ a, (ii) r ¼ 122acos h, (iii) r sin nh ¼ a., Solution. (i) From the given equation, we get, 1 h, ¼ ¼ f ðhÞ:, r a, Therefore, f (h) ¼ 0 yields ha ¼ 0 or h ¼ 0., Also, 1, 1, ¼ a:, f 0 ðhÞ ¼ and so; 0, a, f ð hÞ, , n, , 2.11, , Thus, the asymptotes are given by, 1, ¼ a or r sin h ¼ a:, r sinðh 0Þ ¼ 0, f ð 0Þ, (ii) From the given equation, we get, 1 1 2 cos h, ¼, ¼ f ðhÞ:, r, 2a, Therefore, f (h) ¼ 0 gives 1 2cosh ¼ 0 or cos h ¼ 12, and so, h ¼ 2n 3, where n is an integer. Further,, 1, sin h, :, f 0 ðhÞ ¼ ð2 sin hÞ ¼, 2a, a, This gives, , , 1 , , 1, , ¼ sin 2n , ¼ sin, f 0 2n , 3, a, 3, a, 3, pffiffiffi, 3, :, ¼, 2a, Hence, the asymptotes are given by, h, , i, 1, 2a, ¼ pffiffiffi, r sin h 2n , ¼ 0, 3, f 2n 3, 3, or on simplification,, , , , 2a, , 2a, ¼ pffiffiffi and r sin h þ, ¼ pffiffiffi :, r sin h , 3, 3, 3, 3, (iii) The equation of the curve may be written as, 1 sin nh, ¼, ¼ f ðhÞ:, r, a, Therefore, f (h) ¼ 0 implies that sin nh ¼ 0 and so,, nh ¼ m, where m is an integer. Thus, h ¼ m, n . Also,, n, cos, nh, f 0 ð hÞ ¼, a, and so,, m, n cos m, :, ¼, f0, n, a, Hence, the asymptotes are given by, , m, 1, a, ;, ¼ 0 m ¼, r sin h , n, n cos m, f n, where m is an integer., , 2.8, , CIRCULAR ASYMPTOTES, , Let the equation of a curve be r ¼ f (h). If, lim f ðhÞ ¼ a, then the circle r ¼ a is called the, h!1, , circular asymptote of the curve r ¼ f (h), EXAMPLE 2.19, Find the circular asymptotes of the curves, , , , , (i) r eh 1 ¼ a eh þ 1 :
Page 38 :
(ii) r(θ + sin θ) = 2θ + cos θ.
(iii) r = aθ/(θ − 1).

Solution. (i) The given equation is
    r(e^θ − 1) = a(e^θ + 1)
or
    r = a(e^θ + 1)/(e^θ − 1) = f(θ).
Now
    lim_{θ→∞} a(e^θ + 1)/(e^θ − 1) = a lim_{θ→∞} (1 + e^(−θ))/(1 − e^(−θ)) = a.
Hence, r = a is the circular asymptote.

(ii) The equation of the given curve is
    r = (2θ + cos θ)/(θ + sin θ) = f(θ).
Further,
    lim_{θ→∞} f(θ) = lim_{θ→∞} (2θ + cos θ)/(θ + sin θ)
                  = lim_{θ→∞} [2 + (1/θ) cos θ]/[1 + (1/θ) sin θ] = 2/(1 + 0) = 2.
Hence, r = 2 is the required circular asymptote.

(iii) The given equation is
    r = aθ/(θ − 1)
and
    lim_{θ→∞} aθ/(θ − 1) = a lim_{θ→∞} 1/(1 − 1/θ) = a.
Hence, r = a is the circular asymptote of the given curve.

2.9  CONCAVITY, CONVEXITY AND SINGULAR POINTS

Consider the curve y = f(x), which is the graph of a single-valued differentiable function in a plane. The curve is said to be convex upward (or concave downward) on the interval (a, b) if all points of the curve lie below any tangent to it on this interval. We say that the curve is convex downward (or concave upward) on the interval (c, d) if all points of the curve lie above any tangent to it on this interval. Generally, a curve which is convex upward is called a convex curve and a curve which is convex downward is called a concave curve. For example, the curves in figures (a) and (b) are, respectively, convex and concave curves.

[Figure: (a) a convex curve on (a, b), lying below its tangent at P; (b) a concave curve on (c, d), lying above its tangent at P.]

The following theorems tell us whether the given curve is convex or concave in some given interval.

Theorem 2.1. If at all points of an interval (a, b) the second derivative of the function f(x) is negative, that is, f″(x) < 0, then the curve y = f(x) is convex on that interval.

Theorem 2.2. If at all points of an interval (c, d) the second derivative of the function f(x) is positive, that is, f″(x) > 0, then the curve y = f(x) is concave on that interval.

A point P on a continuous curve y = f(x) is said to be a point of inflexion if the curve is convex on one side and concave on the other side of P with respect to any line, not passing through the point P. In other words, the point that separates the convex part of a continuous curve from the concave part is called the point of inflexion.

The following theorem gives sufficient conditions for a given point of a curve to be a point of inflexion.

Theorem 2.3. Let y = f(x) be a continuous curve. If f″(p) = 0 or f″(p) does not exist, and if the derivative f″(x) changes sign when passing through x = p, then the point of the curve with abscissa x = p is a point of inflexion.

Thus, at a point of inflexion P, f″(x) is positive on one side of P and negative on the other side. The above theorem implies that at a point of inflexion, f″(x) = 0 and f‴(x) ≠ 0.
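Theorem 2.3 can be checked with a computer algebra system. The following SymPy sketch is an added illustration, not part of the original text; it locates the points of inflexion of the Gaussian curve y = e^(−x²), which is traced in Example 2.23 below, by solving f″(x) = 0 and testing for a sign change.

```python
import sympy as sp

x = sp.symbols('x', real=True)
y = sp.exp(-x**2)                      # the Gaussian curve of Example 2.23

d2 = sp.diff(y, x, 2)                  # second derivative
candidates = sp.solve(d2, x)           # f''(x) = 0  ->  x = -sqrt(2)/2, sqrt(2)/2

for p in candidates:
    left = d2.subs(x, p - sp.Rational(1, 10))
    right = d2.subs(x, p + sp.Rational(1, 10))
    if (left*right).evalf() < 0:       # sign change across x = p (Theorem 2.3)
        print('inflexion at x =', p, ', y =', y.subs(x, p))   # x = +-1/sqrt(2), y = exp(-1/2)
```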
Page 39 :
For example, the point P in the figure shown below is a point of inflexion.

[Figure: a curve with a point of inflexion at P.]

A point through which more than one branch of a curve passes is called a multiple point on the curve. If two branches of a curve pass through a point, then that point is called a double point. If r branches of a curve pass through a point, then that point is called a multiple point of order r.

If the two branches of a curve through a double point are real and have different tangents, then the double point is called a node. For example, the curve in the figure below has a node at the origin.

[Figure: origin as a node.]

If the two branches through a double point P are real and have coincident tangents, then P is called a cusp. For example, the curve in the figure below has a cusp at the origin.

[Figure: origin as a cusp.]

Let P(x, y) be any point on the curve f(x, y) = 0. The slope dy/dx of the tangent at P is given by
    dy/dx = −(∂f/∂x)/(∂f/∂y)   or   ∂f/∂x + (∂f/∂y)(dy/dx) = 0,
which is a first-degree equation in dy/dx. Since at a multiple point the curve must have at least two tangents, dy/dx must have at least two values at a double point. This is possible if and only if
    ∂f/∂x = 0  and  ∂f/∂y = 0.
Hence, the necessary and sufficient conditions for the existence of multiple points are
    ∂f/∂x = 0  and  ∂f/∂y = 0.

EXAMPLE 2.20
Find the points of inflexion of the curve
    y(a² + x²) = x³.
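The conditions ∂f/∂x = 0 and ∂f/∂y = 0 lend themselves to a direct symbolic search. The following SymPy sketch is an added illustration, not part of the original text; it locates the double point of the curve y² = (x − 2)²(x − 1) that is examined in Example 2.25 below, by solving the two conditions and then keeping only the solutions that lie on the curve.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Curve of Example 2.25:  f(x, y) = y^2 - (x - 2)^2 (x - 1) = 0
f = y**2 - (x - 2)**2*(x - 1)

# Candidate multiple points satisfy  f_x = 0  and  f_y = 0 ...
candidates = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)

# ... and must also lie on the curve itself.
double_points = [s for s in candidates if f.subs(s) == 0]
print(double_points)        # [{x: 2, y: 0}]  ->  the double point at (2, 0)
```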
Page 41 :
Asymptotes and Curve Tracing, , the curve, we have y ¼ 3p4 ffiffi3. Hence the points, of inflexion on the curve are, , , , , 1 4, 1, 4, ; pffiffiffi and, ; pffiffiffi :, 3 3 3, 3, 3 3, EXAMPLE 2.23, Find the points of inflexion and the intervals of convexity and concavity of the Gaussian curve, 2, y ¼ ex ., Solution. The equation of the Gaussian curve is, 2, y ¼ ex . Therefore, dy, 2, ¼ 2xex ;, dx, , Solution. The given exponential curve is y ¼ ex ., Then, , Hence the curve is everywhere concave., EXAMPLE 2.25, Determine the existence and nature of the double, points on the curve, , Solution. We have, f ðx; yÞ ¼ y2 ðx 2Þ2 ðx 1Þ ¼ 0;, @f, ¼ ðx 2Þð3x 4Þ;, @x, , d2y, <0, dx2, , 1, for x > pffiffiffi ;, 2, , d2y, we have 2 > 0;, dx, , d2y, ¼ ex > 0 for the all values of x:, dx2, , dy, ¼ ex ;, dx, , f ðx; yÞ ¼ y2 ðx 2Þ2 ðx 1Þ ¼ 0:, , For the existence of points of inflexion, we must, 2, have ddx2y ¼ 0, which yields x ¼ p1ffiffi2., Now, since, we have, , 2.15, , EXAMPLE 2.24, Determine whether the curve y ¼ ex is concave or, convex., , d2y, 2, ¼ 2ex ½2x2 1:, 2, dx, , 1, for x < pffiffiffi ;, 2, , n, , therefore the point of inflexion exists for x ¼ p1ffiffi2., , @f, ¼ 2y:, @y, Now for the existence of double points, we must, have, , Putting x ¼ p1ffiffi2 in the given equation, y ¼ e2 ., , 1, Therefore p1ffiffi2 ; e2 is a point of inflexion on the, 1, , @f, @f, ¼, ¼ 0:, @x @y, Hence, , curve., Also, , ðx 2Þð3x 4Þ ¼ 0 and 2y ¼ 0;, 1, for x < pffiffiffi ;, 2, , 2, , we have, , d y, >0, dx2, , which yield, 4, and y ¼ 0:, 3, , Thus the possible double points are (2, 0) and 43 ; 0 ., But, only (2, 0) satisfies the equation of the curve., To find the nature of the double point (2, 0), we shift, the origin to (2, 0). The equation reduces to, x ¼ 2;, , 1, for x > pffiffiffi ;, 2, , we have, , d2y, < 0:, dx2, , Thus another point of inflexion exists for the value, x ¼ p1ffiffi2. Putting x ¼ p1ffiffi2 in the equation of the, 2, Gaussian curve, we get, y ¼ e 1 . Hence the second, point of inflexion is p1ffiffi2 ; e2 ., 1, , y2 ¼ ðx þ 2 2Þ2 ðx þ 2 1Þ ¼ x2 ðx þ 1Þ, ¼ x3 þ x2 :
Page 42 :
Equating to zero the lowest-degree terms, we get y² − x² = 0, which gives y = ±x as the tangents at (2, 0). Therefore, at the double point (2, 0) there are two real and distinct tangents. Hence, the double point (2, 0) is a node on the given curve.

EXAMPLE 2.26
Does the curve x⁴ − ax²y + axy² + a²y² = 0 have a node at the origin?

Solution. Equating to zero the lowest-degree terms in the equation of the given curve, we have
    a²y² = 0, which yields y = 0, 0.
Therefore, there are two real and coincident tangents at the origin. Hence, the given curve has a cusp or a conjugate point at the origin, and not a node.

2.10  CURVE TRACING (CARTESIAN EQUATIONS)

The aim of this section is to find the approximate shape of a curve whose equation is given. We shall examine the following properties of a curve in order to trace it.

1. Symmetry: (i) If the equation of a curve remains unaltered when y is changed to −y, then the curve is symmetrical about the x-axis. In other words, if the equation of a curve consists of only even powers of y, then the curve is symmetrical about the x-axis. For example, the parabola y² = 4ax is symmetrical about the x-axis.
(ii) If the equation of a curve remains unaltered when x is changed to −x, then the curve is symmetrical about the y-axis. Thus, a curve is symmetrical about the y-axis if its equation consists of only even powers of x. For example, the curve x² + y² = a² is symmetrical about the y-axis.
(iii) If the equation of a curve remains unchanged when x is replaced by −x and y is replaced by −y, then the curve is symmetrical in the opposite quadrants. For example, the curve xy = c² is symmetrical in the opposite quadrants.
(iv) If the equation of a curve remains unaltered when x and y are interchanged, then the curve is symmetrical about the line y = x. For example, the folium of Descartes x³ + y³ = 3axy is symmetrical about the line y = x.

2. Origin: (i) If the equation of a curve does not contain a constant term, then the curve passes through the origin. In other words, a curve passes through the origin if (0, 0) satisfies the equation of the curve.
(ii) If the curve passes through the origin, find the equations of the tangents at the origin by equating to zero the lowest-degree terms in the equation of the curve. In case there is only one tangent, determine whether the curve lies below or above the tangent in the neighbourhood of the origin. If there are two tangents at the origin, then the origin is a double point; if the two tangents are real and distinct, then the origin is a node; if the two tangents are real and coincident, then the origin is a cusp; if the two tangents are imaginary, then the origin is a conjugate point or an isolated point.

[Figure: origin as a node; origin as a cusp.]

3. Intersection with the Coordinate Axes: To find the points where the curve cuts the coordinate axes, we put y = 0 in the equation of the curve to find where the curve cuts the x-axis. Similarly, we put x = 0 in the equation to find where the curve cuts the y-axis.

4. Asymptotes: Determine the asymptotes of the curve parallel to the axes and the oblique asymptotes.

5. Sign of the Derivative: Determine the points where the derivative dy/dx vanishes or becomes infinite. This step will yield the points where the tangent is parallel or perpendicular to the x-axis.
6. Points of Inflexion: A point P on a curve is said to be a point of inflexion if the curve is concave on one side and convex on the other side of P with
Page 43 :
respect to any line AB, not passing through the point P.

[Figure: a curve with a point of inflexion at P.]

There will be a point of inflexion at a point P on the curve if d²y/dx² = 0 but d³y/dx³ ≠ 0.

7. Region Where the Curve Does Not Exist: Find out if there is any region of the plane such that no part of the curve lies in it. This is done by solving the given equation for one variable in terms of the other. The curve will not exist for those values of one variable which make the other variable imaginary.

EXAMPLE 2.27
Trace the curve
    a²y² = x²(a² − x²).

Solution. The equation of the curve is a²y² = x²(a² − x²). We observe the following:
(i) Since the powers of both x and y are even, the curve is symmetrical about both the axes.
(ii) Since the equation does not contain a constant term, the curve passes through the origin. To find the tangents at the origin, we equate to zero the lowest-degree terms in the given equation. Thus, the tangents at the origin are given by
    a²y² − a²x² = 0  or  y = ±x.
Since the tangents are distinct, the origin is a node.
(iii) Putting y = 0 in the given equation, we get x = 0 and x = ±a. Therefore, the curve crosses the x-axis at (0, 0), (a, 0), and (−a, 0).
(iv) Shifting the origin to (a, 0), the given equation reduces to
    a²y² = (x + a)²[a² − (x + a)²]
or
    a²y² = −(x + a)²(2ax + x²).
Equating to zero the lowest-degree term, the tangent at the new origin is given by x = 0. Thus, the tangent at (a, 0) is parallel to the y-axis.
(v) The given equation can be written as
    y² = x²(a² − x²)/a².
When x = 0, y = 0, and when x = ±a, y = 0. When 0 < x < a, y is real and so the curve exists in this region. When x > a, y² is negative and so y is imaginary. Hence, the curve does not exist in the region x > a.
(vi) The given curve has no asymptote.
Hence, the shape of the curve is as shown in the following figure:

[Figure: the curve a²y² = x²(a² − x²), with a node at the origin and two loops meeting the x-axis at (a, 0) and (−a, 0).]
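Steps 1 and 7 of the tracing procedure can also be verified symbolically. The following SymPy sketch is an added illustration, not part of the original text; it checks the symmetry and the region of existence found in Example 2.27. The choice a = 2 in the last lines is arbitrary, made only because the inequality solver needs a single variable.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
a = sp.symbols('a', positive=True)

# Curve of Example 2.27:  a^2 y^2 - x^2 (a^2 - x^2) = 0
f = a**2*y**2 - x**2*(a**2 - x**2)

# Step 1 (symmetry): the equation is unchanged under y -> -y and under x -> -x
print(sp.simplify(f.subs(y, -y) - f) == 0)   # True -> symmetric about the x-axis
print(sp.simplify(f.subs(x, -x) - f) == 0)   # True -> symmetric about the y-axis

# Step 7 (region): from y^2 = x^2 (a^2 - x^2) / a^2, the right side must be
# non-negative; with a = 2 the curve exists only for -2 <= x <= 2, i.e. |x| <= a.
y2 = (x**2*(a**2 - x**2)/a**2).subs(a, 2)
print(sp.solve_univariate_inequality(y2 >= 0, x))   # (-2 <= x) & (x <= 2)
```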
Page 44 :
2.18, , n, , Engineering Mathematics-I, , (iv) Equating to zero the coefficient of highest power of y, the asymptote parallel to, the y-axis is x ¼ 0, that is, the y-axis., Further, the curve has no other real, asymptote., (v) The equation of the given curve can be, written as, 4a2 ð2a xÞ, :, y2 ¼, x, Therefore, when x ! 0, y approaches 1, and so, the line x ¼ 0 is an asymptote., When x ¼ 2a, y ¼ 0. When 0 < x < 2a, the, value of y is real and so, the curve exists in, the region 0 < x < 2a. When x > 2a, y is, imaginary and so, the curve does not exists, for x > 2a. Similarly, when x is negative,, again y is imaginary. Therefore, the curve, does not exist for negative x., In view of the mentioned points, the shape, of the curve is as shown in the following, figure:, y, , coincident tangents at the origin. Hence,, the origin is a cusp., (iii) Putting x ¼ 0 in the equation, we get y ¼ 0, and similarly, putting y ¼ 0, we get x ¼ 0., Therefore, the curve meets the coordinate, axes only at the origin., (iv) Equating to zero the highest power of y in, the equation of the curve, the asymptote, parallel to the y-axis is x ¼ 2a. The curve, does not have an asymptote parallel to the, x-axis or any other oblique asymptote., (v) The given equation can be written as, x3, :, y2 ¼, 2a x, When x ! 2a, y2 ! 1 and so, x ¼ 2a is an asymptote. If x > 2a, y is imaginary. Therefore, the curve, does not exist beyond x ¼ 2a. When 0 < x < 2a, y2 is, positive and so, y is real. Therefore, the curve exists, in the region 0 < x < 2a. When x < 0, again y is, imaginary. Therefore, the curve does not exist for a, negative x., In view of the said observations, the shape of, the curve is as shown in the following figure:, y, x = 2a, , 0, , (2a, 0), , x, , 0, , x, , EXAMPLE 2.29, Trace the curve, y2 ð2a xÞ ¼ x3 ðCissoidÞ:, Solution. We note that, (i) Since the powers of y in the given equation, of the curve are even, the curve is symmetrical about the x-axis., (ii) Since the equation of the curve does not, contain a constant term, the curve passes, through the origin. Equating to zero the, lowest-degree term in the equation, the tangent at the origin is given by 2ay2 ¼ 0., Thus, y ¼ 0, y ¼ 0 and so, there are two, , EXAMPLE 2.30, Trace the curve, x3 þ y3 ¼ 3axy (Folium of Descartes):, Solution. We observe that, (i) The curve is not symmetrical about the, axes. However, the equation of the curve, remains unaltered if x and y are interchanged. Hence, the curve is symmetrical, about, 3a 3athe line y ¼ x. It meets this line at, 2 ; 2 .
Page 45 :
Asymptotes and Curve Tracing, , 2 ðmÞ ¼ 3am, and further,, 03 ðmÞ ¼ 3m2 :, Therefore,, c¼, , 2 ðmÞ 3am a, ¼, ¼ :, 03 ðmÞ 3m2 m, , For m ¼ 1, we have c ¼ a. Hence, the oblique, asymptote is, y ¼ x a, , or, , x þ y þ a ¼ 0:, , ( 3a , 3a ), 2 2, , 0, , x, , 0, , The slope of the asymptotes are given by m3 þ 1 ¼ 0., The real root of this equation is m ¼ 1. Also,, putting x ¼ 1, y ¼ m in the second-degree term, we, have, , y=x, , a=, , 3 ðmÞ ¼ m3 þ 1, , y, , y+, , (iii) The curve intersects the coordinate axes, only at the origin., (iv) If, in the equation of the curve, we take, both x and y as negative, then the righthand side becomes positive while the, left-hand side is negative. Therefore, we, cannot take both x and y as negative., Thus, the curve does not lie in the third, quadrant., (v) There is no asymptote parallel to the axes., Further, putting x ¼ 1, y ¼ m in the highest-degree term, we have, , 2.19, , In view of the earlier facts, the shape of the curve is, as shown in the following figure:, , x+, , (ii) Since the equation does not contain a constant term, the curve passes through the origin. Equating to zero the lowest-degree, term, we get 3axy ¼ 0. Hence, x ¼ 0,, y ¼ 0 are the tangents at the origin. Thus,, both y- and x-axis are tangents to the curve, at the origin. Since there are two real and, distinct tangents at the origin, the origin is a, node of the curve., , n, , EXAMPLE 2.31, Trace the curve, y2 ða þ xÞ ¼ x2 ða xÞ:, Solution. We note that, (i) The equation of the curve does not alter if y, is changed to y. Therefore, the curve is, symmetrical about the x-axis., (ii) Since the equation does not contain a constant term, the curve passes through the origin. The tangents at the origin are given by, ay2 ax2 ¼ 0 or y ¼ x:, Thus, there are two real and distinct tangents, at the origin. Therefore, the origin is a node., (iii) Putting y ¼ 0, we have x2(a x) ¼ 0 and, so, the curve intersects the x-axis at x ¼ 0, and x ¼ a, that is, at the points (0, 0) and, (a, 0). Putting x ¼ 0, we get y ¼ 0. Thus,, the curve intersects the y-axis only at (0, 0)., Shifting the origin to (a, 0), the equation of, the curve reduces to, , , y2 ð2a þ xÞ ¼ x x2 þ 2ax þ a2 :, Equating to zero the lowest-degree term,, we get a2x ¼ 0.Hence, at the new origin,, x ¼ 0 is the tangent. Thus, the tangent at, (a, 0) is parallel to the y-axis., (iv) The equation of the curve can be written as, x2 ða x Þ, :, y2 ¼, aþx, 2, When x lies in 0 < x < a, y is positive and so,, the curve exists in this region. But when x >, a, y2 is negative and so, y is imaginary.
Page 46 :
2.20, , n, , Engineering Mathematics-I, , Thus, the curve does not exist in the region, x > a. Further, if x ! a, then y2 ! 1 and, so, x ¼ a is an asymptote of the curve. If, a < x < 0, y2 is positive and therefore, the, curve exists in a < x < 0. When x < a, y2, is negative and so, the curve does not lie in, the region x < a., (v) Equating to zero the coefficient of the highest power of y in the equation of the curve,, we have x þ a ¼ 0. Thus, x þ a ¼ 0 is the, asymptote parallel to the y-axis. To see, whether oblique asymptotes are there or, not, we have 3(m) ¼ m2 þ 1. But the roots, of m2 þ 1 ¼ 0 are imaginary. Hence, there, is no oblique asymptote., Thus, the shape of the curve is as shown in, the following figure:, y, y=x, , (v) When 0 < y <1, then all the factors are, negative and so, x is negative. When 1 < y <, 2, x is positive. Similarly, when 2 < y <3,, then x is negative. At y ¼ 3, x ¼ 0. When, y > 3, x is positive. When y < 0, x is, negative. Hence, the shape of the curve is, as shown in the following figure:, y (3, 0), , (2, 0), , (1, 0), (–6, 0), , 0, , x, , EXAMPLE 2.33, Trace the curve, x3 þ y3 ¼ a2 x:, , x=a, , Solution. We note the following characteristics of the, given curve:, x = –a, , x, 0, , y = –x, , EXAMPLE 2.32, Trace the curve, x ¼ ðy 1Þ ðy 2Þ ðy 3Þ:, Solution. We note that, (i) The equation of the curve has odd powers, of x and y. Therefore, the curve is not, symmetrical about the axes. It is also not, symmetrical about y ¼ x or in the opposite, quadrants., (ii) The curve does not pass through the origin., (iii) Putting x ¼ 0 in the given equation, we get, y ¼ 1, 2, and 3. Thus, the curve cuts the, y-axis at y ¼ 1, 2, and 3. Similarly, putting, y ¼ 0, we see that the curve cuts the x-axis, at x ¼ 6., (iv) The curve has no linear asymptotes since, y ! ± 1, x ! ± 1., , (i) Since the equation of the curve contains, odd powers of x and y, the curve is not, symmetrical about the axes. But if we, change the sign of both x and y, then the, equation remains unaltered. Therefore, the, curve is symmetrical in the opposite, quadrants., (ii) Since the equation of the curve does not have, a constant term, the curve passes through the, origin. The tangent at the origin is given by, a2x ¼ 0. Thus, x ¼ 0, that is, y-axis is tangent to the curve at the origin., (iii) Putting y ¼ 0 in the equation, we get, x(x2a2) ¼ 0 or x(x a) (x þ a) ¼ 0., Hence, the curve cuts the x-axis at x ¼ 0,, x ¼ a, and x ¼ a, that is, at the points, (0, 0), (a, 0), and (a, 0). On the other hand, putting x ¼ 0 in the equation, we get y ¼ 0., Therefore, the curve cuts the y-axis only at, the origin (0, 0)., (iv) The curve does not have any asymptote, parallel to the axes. But, 3 ðmÞ ¼ m3 þ 1; 2 ðmÞ ¼ 0:
Page 47 :
Thus, the slope of the oblique asymptotes is given by m³ + 1 = 0, whose real root is m = −1. Also,
    c = −φ2(m)/φ′3(m) = 0.
Therefore, the curve has an oblique asymptote y = −x.

(v) From the equation of the curve, we have
    y³ = a²x − x³.
Differentiating with respect to x, we get
    3y² dy/dx = a² − 3x²  or  dy/dx = (a² − 3x²)/(3y²).
Thus, dy/dx becomes infinite at (a, 0), and so the tangent at (a, 0) is perpendicular to the x-axis. Similarly, dy/dx is infinite at (−a, 0), and so the tangent at (−a, 0) is also perpendicular to the x-axis.
Also we note that dy/dx = 0 implies x = ±a/√3. Therefore, the tangents at these points are parallel to the x-axis.

(vi) Also y³ = a²x − x³ = x(a² − x²) implies that y³ is positive in the region 0 < x < a, while y³ is negative in the region x > a. The earlier facts imply that the shape of the given curve is as shown in the following figure:

[Figure: the curve x³ + y³ = a²x, passing through (−a, 0), (0, 0), and (a, 0), with the asymptote y = −x.]

2.11  CURVE TRACING (POLAR EQUATIONS)

To trace a curve given by an equation in polar form, we adopt the following procedure:

1. Symmetry: If the equation of the curve does not change when θ is changed to −θ, the curve is symmetrical about the initial line. If the equation of the curve remains unchanged when r is changed to −r, then the curve is symmetrical about the pole and the pole is the centre of the curve. If the equation of the curve remains unchanged when θ is changed to −θ and r is changed to −r, then the curve is symmetrical about the line θ = π/2.

2. Pole: By putting r = 0, if we find some real value of θ, then the curve passes through the pole, which otherwise it does not. Further, putting r = 0, the real value of θ, if it exists, gives the tangent to the curve at the pole.

3. Asymptotes: Find the asymptotes using the method for determining the asymptotes of a polar curve.

4. Special Points on the Curve: Solve the equation of the curve for r and find how r varies as θ increases from 0 to ∞ and also as θ decreases from 0 to −∞. Form a table with the corresponding values of r and θ. The points so obtained will help in tracing the curve.

5. Region: Find the region where the curve does not exist. If r is imaginary for α < θ < β, then the curve does not exist in the region bounded by the lines θ = α and θ = β.

6. Value of tan φ: Find tan φ, that is, r dθ/dr, which indicates the direction of the tangent at any point. If for θ = α, φ = 0, then θ = α will be tangent to the curve at the point θ = α. On the other hand, if for θ = α, φ = π/2, then at the point θ = α the tangent will be perpendicular to the radius vector θ = α.

7. Cartesian Form of the Equation of the Curve: It is sometimes useful to convert the given equation from polar form to Cartesian form using the relations x = r cos θ and y = r sin θ.

EXAMPLE 2.34
Trace the curve r = a sin 3θ.

Solution. We note that
(i) The curve is not symmetrical about the initial line. But if we change θ to −θ and r to −r, then the equation of the curve remains unchanged. Therefore, the curve is symmetrical about the line θ = π/2.
(ii) Putting r = 0, we get sin 3θ = 0. Thus, 3θ = 0, π, or θ = 0, π/3. Thus, the curve passes
Page 48 :
2.22, , n, , Engineering Mathematics-I, , through the pole, and the lines h ¼ 0 and h ¼, , 3 are tangents to the curve at the pole., , and the line h ¼ 0 is tangent to the curve at, the pole., , (iii) r is maximum when sin3h ¼ 1 or 3h ¼ 2 or, h ¼ 6. The maximum value of r is a., , (iii) The curve cuts the line h ¼ at (2a, )., dr, r, ¼ a sin h and so, tan ¼ r dh, (iv) dh, dr ¼ a sin h ¼, að1cos hÞ, ˚, h, h, , a sin h ¼ tan 2. If 2 ¼ 2, then ¼ 90 ., Thus, at the point h ¼ , the tangent to the, curve is perpendicular to the radius vector., , dr, (iv) We have dh, ¼ 3a cos 3h and so,, dh, 1, tan ¼ r dr ¼ 3 tan 3h. Thus, ¼ 2 when, 3h ¼ 2 or h ¼ 6, and the tangent is perpendicular to the radius vector h ¼ 6., , (v) Some points on the curve are given below:, , 2 5, h : 0 6 3, , 2, 3, 6, r : 0 a 0 a 0 a 0, One loop of the curve lies in the region, 0 < h < 3. The second loop lies in the, region 3 < h < 2, 3 in the opposite direction, because r is negative there. The third loop, lies in the region 2, 3 < h < as r is positive, (equal to a) there., When h increases from to 2, we get, again the same branches of the curve., Hence, the shape of the curve is shown in, the following figure:, , y, , π, u 2, 3, π, 5, u, 6, , π, u, 3, , a, , u, , (v) The values of h and r are:, h : 0 3 2 2, , 3, r : 0 a2 a 3a, 2a, 2, We observe that as h increases from 0 to ,, r increases from 0 to 2a. Further, r is never, greater than 2a. Hence, no portion of the, curve lies to the left of the tangent at, (2a, 0). Since | r | 2a, the curve lies, entirely within the circle r ¼ 2a., (vi) There is no asymptote to the curve because, for any finite value of h, r does not tend to, infinity., Hence, the shape of the curve is as shown, in the following figure:, y, , π, 6, θ =0, , θ =π, , a, , 0, , uπ, , u0, , 0, , x, , x, , a, u, , 4π, 3, , u, , 5π, 3, , EXAMPLE 2.35, Trace the curve r ¼ a (1cos h) (Cardioid)., Solution. The equation of the given curve is r ¼ a, (1cos h). We note the following characteristics of, the curve:, (i) The equation of the curve remains unchanged, when h is changed to h. Therefore, the, curve is symmetrical about the initial line., (ii) When r ¼ 0, we have 1 cosh ¼ 0 or h ¼ 0., Hence, the curve passes through the pole, , EXAMPLE 2.36, Trace the curve r ¼ a þ b cos h, a < b (Limacon)., Solution. The given curve has the following, characteristics:, (i) Since the equation of the curve remains, unaltered when h is changed to h, it follows that the curve is symmetrical about, the initial line., (ii) r ¼ 0 when, a þ b cos h ¼ 0 or, h ¼ cos1 ab . Since ab < 1, cos1 ab is, real. Therefore, the curve passes through, the, pole and the radius vector h ¼ cos1 ab is, tangent to the curve at the pole.
Page 49 :
Asymptotes and Curve Tracing, , (iii) We note that r is maximum when cos h ¼ 1,, that is when h ¼ 0. Thus, the maximum, value of r is a þ b. Thus, the entire curve, lies within the circle r ¼ a þ b. Similarly, r, is minimum when cos h ¼ 1, that is when, h ¼ . Thus, the minimum value of r is a b,, which is negative., dr, (iv) dh, ¼ b sin h and so, tan ¼ r dh, dr ¼, r, aþb cos h, b sin, ¼, , ., Thus,, , ¼, 90˚, when, h, h, b sin h, ¼ 0, . Hence, at the points h ¼ 0 and h ¼ ,, the tangent is perpendicular to the radius, vector., (v) The following table gives the value of r, corresponding to the value of, h:, , , 2, r : aþb a, h: 0, , cos1 , , a, b, , a, <h< , b, negative, ab, , cos1 , , 0, , (vi) Since r is not infinite for any value of h, the, given curve has no asymptote:, Hence, the shape of the curve is as shown, in the following, figure:, Y, , n, , 2.23, , (iii) It cuts the x-axis at (a, 0) and (a, 0). But it, does not meet y-axis., (iv) Shifting the origin to (a, 0), we get, ðx þ aÞ2 y2 ¼ a2 or x2 y2 þ 2ax ¼ 0:, Therefore, the tangent at (a, 0) is given by, 2ax ¼ 0 and so, the tangent at (a, 0) is, x ¼ 0, the line parallel to the y-axis., (v) The curve has no asymptote parallel to, coordinate axes. The oblique asymptote, (verify) are y ¼ x and y ¼ x., (vi) The equation of the curve can be written as, y 2 ¼ x 2 a2 :, When 0 < x < a, the y2 is negative and so, y is, imaginary. Therefore, the curve does not lie in the, region 0 < x < a. But when x > a, y2 is positive and, so, y is real. Thus, the curve exists in the region, x > a. Further, when x ! 1, y2 ! 1., In view of the mentioned facts, the shape of the, curve is as shown in the following figure:, y, y x, , 0, , (a–b, π ), , (a+b, 0), , X, , (–a, 0), , 0, , (a, 0), , y x, , 2.12, EXAMPLE 2.37, Trace the curve r2 cos 2h ¼ a2., Solution. The equation of the given curve can be, written as, , , r2 cos2 h sin2 h ¼ a2, or, x2 y2 ¼ a2 since x ¼ r cos h; y ¼ r sin h:, Therefore, the given curve is a rectangular hyperbola. We note that, (i) The curve is symmetrical about both the, axes., (ii) It does not pass through the origin., , CURVE TRACING (PARAMETRIC EQUATIONS), , If the equation of the curve is given in a parametric, form, x ¼ f (t) and y ¼ (t), then eliminate the, parameter and obtain a cartesian equation of the, curve. Then, trace the curve as dealt with in case of, cartesian equations., In case the parameter is not eliminated easily, a, series of values are given to t and the corresponding, values of x, y, and dy, dx are found. Then we plot the, different points and find the slope of the tangents at, these points by the values of dy, dx at the points., EXAMPLE 2.38, Trace the curve, x ¼ aðt þ sin tÞ;, , y ¼ að1 þ cos tÞ:
Page 50 :
Solution. We note that
(i) Since y = a(1 + cos t) is an even function of t, the curve is symmetrical about the y-axis.
(ii) We have y = 0 when cos t = −1, that is, when t = ±π. When t = π, we have x = aπ. When t = −π, x = −aπ. Thus, the curve meets the x-axis at (aπ, 0) and (−aπ, 0).
(iii) Differentiating the given equations, we get
    dx/dt = a(1 + cos t),  dy/dt = −a sin t.
Therefore,
    dy/dx = (dy/dt)/(dx/dt) = −a sin t / [a(1 + cos t)] = −[2a sin(t/2) cos(t/2)] / [2a cos²(t/2)] = −tan(t/2).
Now dy/dx at t = π equals −tan(π/2), which is infinite. Thus, at the point (aπ, 0) the tangent to the curve is perpendicular to the x-axis. Similarly, dy/dx is infinite at t = −π, and hence at the point (−aπ, 0) the tangent to the curve is also perpendicular to the x-axis.
(iv) y is maximum when cos t = 1, that is, when t = 0. When t = 0, x = 0 and y = 2a. Thus, the curve cuts the y-axis at (0, 2a). Further, dy/dx at t = 0 is 0, and so at the point (0, 2a) the tangent to the curve is parallel to the x-axis.
(v) It is clear from the equation that y cannot be negative. Further, no portion of the curve lies in the region y > 2a.
(vi) There is no asymptote parallel to the axes.
(vii) The values of x and y corresponding to some values of t are as follows:

    t :  −π         −π/2            0      π/2            π
    x :  −aπ        −a(π/2 + 1)     0      a(π/2 + 1)     aπ
    y :   0          a              2a     a              0

Hence, the shape of the curve is as shown in the following figure:

[Figure: one arch of the curve, through (−aπ, 0), (0, 2a), and (aπ, 0).]

EXAMPLE 2.39
Trace the curve x^(2/3) + y^(2/3) = a^(2/3).

Solution. (i) The parametric equations of the curve are
    x = a cos³t,  y = a sin³t.
Therefore, |x| ≤ a and |y| ≤ a. This implies that the curve lies within the square bounded by the lines x = ±a and y = ±a.
(ii) The equation of the curve can be written as
    (x/a)^(2/3) + (y/a)^(2/3) = 1.
This equation shows that the curve is symmetrical about both the axes. Also, it is symmetrical about the line y = x, since interchanging x and y does not change the equation of the curve.
(iii) The given curve has no asymptote.
(iv) The curve cuts the x-axis at (a, 0) and (−a, 0). It meets the y-axis at (0, a) and (0, −a). For x = a, we have cos³t = 1, or t = 0. Therefore,
    dy/dx at t = 0 = [(dy/dt)/(dx/dt)] at t = 0 = (−tan t) at t = 0 = 0.
Hence, at the point (a, 0), the x-axis is the tangent to the curve. Similarly, at the point (0, a), the y-axis is the tangent to the curve.
Hence, the shape of the curve is as shown in the following figure.

[Figure: the curve x^(2/3) + y^(2/3) = a^(2/3), with cusps at (±a, 0) and (0, ±a).]
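For parametric curves such as the one in Example 2.38, the slope analysis and the table of points can be generated mechanically. The following SymPy sketch is an added illustration, not part of the original text; it reproduces the tangent behaviour and the table of step (vii).

```python
import sympy as sp

t = sp.symbols('t', real=True)
a = sp.symbols('a', positive=True)

# Parametric equations of Example 2.38 (one arch of the curve)
x = a*(t + sp.sin(t))
y = a*(1 + sp.cos(t))

# Slope of the tangent, dy/dx = (dy/dt)/(dx/dt)
slope = sp.cancel(sp.diff(y, t)/sp.diff(x, t))     # -sin(t)/(cos(t) + 1), i.e. -tan(t/2)
print(slope.subs(t, 0))                            # 0   : horizontal tangent at (0, 2a)
print(sp.limit(slope, t, sp.pi, dir='-'))          # -oo : vertical tangent at (a*pi, 0)

# Table of points used in step (vii)
for tv in [-sp.pi, -sp.pi/2, 0, sp.pi/2, sp.pi]:
    print(tv, x.subs(t, tv), y.subs(t, tv))
```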
Page 55 :
Division by Δx yields
    [u(x + Δx, y) − u(x, y)]/Δx = A + ε.
Taking the limit as Δx → 0, we get
    ∂u/∂x = A, since ε → 0 as Δx → 0.
Similarly, we can show that ∂u/∂y = B. Thus, if u = u(x, y) is differentiable, the partial derivatives ∂u/∂x and ∂u/∂y are, respectively, the differential coefficients A and B. Hence,
    Δu = (∂u/∂x) Δx + (∂u/∂y) Δy + ε ρ.
The differential of the dependent variable, du, is defined to be the principal part of Δu. Hence,
    Δu = du + ε ρ.
Now, as in the case of functions of one variable, the differentials of the independent variables are identical with the arbitrary increments of these variables, that is, dx = Δx and dy = Δy. Therefore,
    du = (∂u/∂x) dx + (∂u/∂y) dy.

3.4  DISTINCTION BETWEEN DERIVATIVES AND DIFFERENTIAL COEFFICIENTS

We know that the necessary and sufficient condition for a function y = f(x) to be differentiable at a point x is that it possesses a finite derivative at that point. Thus, for functions of one variable, the existence of the derivative f′(x) implies the differentiability of f at that point. But for a function of more than one variable this is not true. We have seen earlier that if f(x, y) is differentiable at (x, y), then the partial derivatives of f with respect to x and y exist and are equal to the differential coefficients A and B, respectively. However, the partial derivatives may exist at a point when the function is not differentiable at that point. In other words, the partial derivatives need not always be the differential coefficients.

EXAMPLE 3.1
Show that the function f(x, y) = (x³ − y³)/(x² + y²), where x and y are not simultaneously zero, and f(0, 0) = 0, is not differentiable at (0, 0) but the partial derivatives at (0, 0) exist.

Solution. Suppose that the given function is differentiable at the origin. Then, by definition,
    f(h, k) − f(0, 0) = Ah + Bk + ε ρ,    (1)
where ρ = √(h² + k²) and ε → 0 as ρ → 0. Putting h = ρ cos θ, k = ρ sin θ in (1) and dividing throughout by ρ, we get
    cos³θ − sin³θ = A cos θ + B sin θ + ε.
Since ε → 0 as ρ → 0, we take the limit as ρ → 0 and get
    cos³θ − sin³θ = A cos θ + B sin θ,
which is impossible, since θ is arbitrary. Hence, the function is not differentiable at (0, 0). On the other hand,
    fx(0, 0) = lim_{h→0} [f(h, 0) − f(0, 0)]/h = lim_{h→0} h/h = 1  and
    fy(0, 0) = lim_{k→0} [f(0, k) − f(0, 0)]/k = lim_{k→0} (−k)/k = −1.
Hence, the partial derivatives at (0, 0) exist.

3.5  HIGHER-ORDER PARTIAL DERIVATIVES

The partial derivatives are also, in general, functions of x and y which may possess partial derivatives with respect to both independent variables. Thus,
    (i) ∂/∂x (∂u/∂x) = lim_{Δx→0} [ux(x + Δx, y) − ux(x, y)]/Δx  and
    (ii) ∂/∂y (∂u/∂x) = lim_{Δy→0} [ux(x, y + Δy) − ux(x, y)]/Δy,
provided that each of these limits exists. These second-order partial derivatives are denoted by ∂²u/∂x² or uxx and ∂²u/∂y∂x or uyx. Similarly, we may define higher-order partial derivatives of ∂u/∂y.

EXAMPLE 3.2
If
    f(x, y) = xy(x² − y²)/(x² + y²),  f(0, 0) = 0,
show that fxy(0, 0) ≠ fyx(0, 0).

Solution. When (x, y) is not the origin, then
    ∂f/∂x = y[(x² − y²)/(x² + y²) + 4x²y²/(x² + y²)²],    (1)
    ∂f/∂y = x[(x² − y²)/(x² + y²) − 4x²y²/(x² + y²)²].    (2)
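The claim of Example 3.2 can be confirmed by computing the two iterated partial derivatives at the origin directly from their limit definitions, since the function is defined separately at (0, 0). The following SymPy sketch is an added illustration, not part of the original text; whether one labels the two results fxy or fyx is a matter of notation, the point being only that they differ.

```python
import sympy as sp

x, y, h, k = sp.symbols('x y h k', real=True)

# Function of Example 3.2 away from the origin; f(0, 0) = 0 by definition
f = x*y*(x**2 - y**2)/(x**2 + y**2)

# First-order partials along the axes, from the limit definition
fx_0y = sp.limit(f.subs(x, h)/h, h, 0)   # f_x(0, y) = -y  (using f(0, y) = 0)
fy_x0 = sp.limit(f.subs(y, k)/k, k, 0)   # f_y(x, 0) =  x  (using f(x, 0) = 0)

# Iterated second-order partials at the origin
print(sp.limit(fx_0y/y, y, 0))           # -1 : derivative of f_x(0, y) w.r.t. y at (0, 0)
print(sp.limit(fy_x0/x, x, 0))           #  1 : derivative of f_y(x, 0) w.r.t. x at (0, 0)
```

The two printed values differ, which is exactly the conclusion of Example 3.2.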
Page 60 :
Partial Differentiation, , Since, , P, , x2, a2 þu, , X, , ¼ 1 (given), we have, P 2, @u 4 a2xþu, 4, 2x, ¼ P x 2 ¼ P x2 :, @x, 2, 2, 2, 2, ða þuÞ, , ð5Þ, , ða þuÞ, , From (4) and (5), we get, 2 2 2, , , @u, @u, @u, @u, @u, @u, þ, þ, ¼2 x þy þz, :, @x, @y, @z, @x, @y, @z, EXAMPLE 3.14, If r2 ¼ x2 þ y2 þz2 and V ¼ rm, show that, Vxx þ Vyy þ Vzz ¼ mðm þ 1Þrm2 :, Solution. We have r2 ¼ x2 þ y2 þz2. Differentiating, @r, ¼ 2x or, partially with respect to x, we get 2r @x, @r, x, ¼, ., @x, r, It is given that V ¼ rm. Therefore,, x, @V @V @r, ¼, : ¼ mrm1, ¼ mxrm2, @x, @r @x, r, and, @2V, @r, ¼ m rm2 þ xðm 2Þrm3, @x2, @x, ¼ m½rm2 þ ðm 2Þx2 rm4 :, Similarly, due to symmetry, we have, , @2V, ¼ m rm2 þ ðm 2Þy2 rm4 ;, @y2, , @2V, ¼ m rm2 þ ðm 2Þz2 rm4 :, @z2, Hence,, @2V @2V @2V, þ 2 þ 2 ¼ 3mrm2 þ mðm 2Þ, @x2, @y, @z, ðx2 þ y2 þ z2 Þrm4, ¼ 3mrm2 þ mðm 2Þrm2, ¼ rm2 ½3m þ m2 2m, ¼ mðm þ 1Þrm2 :, , 3.6, , ENVELOPES AND EVOLUTES, , Let a be a parameter which can take all real values, and let f (x, y, a) ¼ 0 be a family of curves., Suppose that P is a point of intersection of two, members f (x, y, a) ¼ 0 and f (x, y, a þ a) ¼ 0 of this, family. As a ?0, let P tends to a definite point Q on, the member a. The locus of Q (for varying value of, a) is called the envelope of the family. Thus,, , n, , 3.7, , \The envelope of a one parameter family of, curves is the locus of the limiting positions of the, points of intersection of any two members of the, family when one of them tends to coincide with, the other which is kept fixed.", The coordinates of the points of intersection of, the curves f (x, y, a) ¼ 0 and f (x, y, a þ a) ¼ 0, satisfy the equations, f ðx; y; aÞ ¼ 0 and f ðx; y; a þ aÞ f ðx; y; aÞ ¼ 0, and therefore, they also satisfy, f ðx; y; a þ aÞ f ðx; y; aÞ, ¼ 0:, a, Taking limit as a ?0, it follows that the coordinates of the limiting positions of the point of, intersection of the curves f (x, y, a) ¼ 0 and f (x, y,, a þ a) satisfy the equations, @f, ¼ 0:, f ðx; y; aÞ ¼ 0 and, @a, Hence, the equation of the envelope of the family, of curves f (x, y, a) ¼ 0, where a is a parameter, is, determined by eliminating a between the equa@, tions f (x, y, a) ¼ 0 and @a, f ðx; y; aÞ ¼ 0., f ðx; y; aÞ ¼ 0 and, , The evolute of a curve is the envelope of the normals to that curve., EXAMPLE 3.15, Find the envelope of the family of straight lines, y ¼ mx þ ma , the parameter being m., Solution. We have, , a, :, ð1Þ, m, Differentiating with respect to m, we obtain, a 12, a, 0 ¼ x 2 or m ¼, :, m, x, Putting this value of m in (1), we get, a 12, a 12 xa12 ax12, 1, 1, xþa, ¼ 1 þ 1 ¼ 2a2 :x2 ;, y¼, x, x, 2, 2, x, a, and so, the parabola y2 ¼ 4ax is the envelope of the, family., y ¼ mx þ, , EXAMPLE 3.16, Find the envelope of the straight lines x cos a þ ysin, a ¼ l sin a cos a, where the parameter is the angle a.
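Returning to Example 3.15 for a moment, the elimination of the parameter can be carried out mechanically: solve df/dm = 0 for m and substitute back into f = 0. A minimal sympy sketch, added here as an illustration (the positivity assumptions only keep the square roots single-valued):

import sympy as sp

x, y, m, a = sp.symbols('x y m a', positive=True)
F = y - m*x - a/m                        # the family y = m*x + a/m of Example 3.15

m_star = sp.solve(sp.diff(F, m), m)[0]   # dF/dm = 0 gives m = sqrt(a/x)
envelope = sp.simplify(F.subs(m, m_star))
print(envelope)                          # y - 2*sqrt(a)*sqrt(x) = 0, i.e. y**2 = 4*a*x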
Page 62 :
Partial Differentiation, , Hence, the equation of the normal to the ellipse at, (a cos h, b sin h) is, a sin h, ðx a cos hÞ, y b sin h ¼, b cos h, or, ax, by, , ¼ a2 b2 :, ð1Þ, cos h sin h, Differentiating (1) partially with respect to h, we get, ax sin h by cos h, þ, ¼ 0:, ð2Þ, cos2 h, sin2 h, The equation (2) gives, 1, by, ðbyÞ3, or tan h ¼ , tan3 h ¼ , 1 :, ax, ðaxÞ3, Therefore,, 1, , ðbyÞ3, sin h ¼ qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi and, 2, 2, ðaxÞ3 þðbyÞ3, 1, , ðaxÞ3, cos h ¼ qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi, 2, 2, ðaxÞ3 þðbyÞ3, or, 1, , ðbyÞ3, sin h ¼ qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi and, 2, 2, ðaxÞ3 þðbyÞ3, , n, , 3.9, , , or xn yx . Thus, we can define a homogeneous, function as follows:, , A function f(x, y), which can be expressed as xn yx ,, is called a homogeneous function of degree n in x, and y., To check whether a function f (x, y) is homogeneous or not, we put tx for x and ty for y in it. If f, (tx, ty) ¼ tn f (x, y), then the function f (x, y) is a, homogeneous function of degree n which is otherwise a nonhomogeneous, function., , Let u ¼ xn f yx be a homogeneous function of x, and y of degree n. Then,, y, y y, @u, ¼ nxn1 f, þ xn f 0, : 2, @x, h xy, y x y xi, y, ¼ xn1 n f, , f0, ¼ xn1 , ;, x, x, x, x, which is a homogeneous function of degree n – 1., Similarly,, y 1, y, y, @u, ¼ xn f 0, : ¼ xn1 f 0, ¼ xn1 ł, ;, @y, x x, x, x, which is a homogeneous function of degree n – 1. It, follows, therefore, that \If u is a homogeneous, function of x and y of degree n, then ux and uy are also, homogeneous functions of x and y of degree n – 1.", , An expression of the form, a0 xn þ a1 xn1 y þ a2 xn2 y2 þ . . . þ an yn, , Theorem 3.3. (Euler’s Theorem). If u is a homogeneous function of x and y of degree n, then, @u, @u, ¼ nu:, x þy, @x, @y, Proof: Since u is a homogeneous function of x and y, of degree n, it can be expressed as, y, u ¼ xn f, :, x, Differentiating partially with respectto x,we have, y, y, @u, 1, ¼ nxn1 f, þ xn f 0, 2, @x, x, x, x, y, , n1, n2 0 y, yx f, :, ¼ nx f, x, x, Similarly, the differentiation with respect to y yields, y 1, y, @u, ¼ xn f 0, ¼ xn1 f 0, :, @y, x, x, x, , in which every term is of nth degree is called a, homogeneous function of degree n. This can also be, rewritten as, y, y 2, y n1, y n, þ...þan1, þ an, xn a0 þa1, þa2, x, x, x, x, , Therefore,, y, y, y, @u, @u, x þ y ¼ nxn f, yxn1 f 0, þ xn1 yf 0, @x, @y, x, x, x, y, n, ¼ nx f, ¼ nu:, x, , 1, , ðaxÞ3, cos h ¼ qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi :, 2, 2, ðaxÞ3 þðbyÞ3, Substituting these values of sin h and cos h in (1) we, get, h, i3, 2, , 2, , 2, , ðaxÞ3 þðbyÞ3 ¼ a2 b2, , or, 2, , 2, , 2, , ðaxÞ3 þ ðbyÞ3 ¼ ða2 b2 Þ3 :, , 3.7, , HOMOGENEOUS FUNCTIONS AND EULER’S, THEOREM
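Euler's theorem proved above lends itself to a quick symbolic check. The block below is an added illustration, not part of the text; the particular homogeneous function of degree 3 is an arbitrary choice of the form x^n f(y/x).

import sympy as sp

x, y = sp.symbols('x y', positive=True)
u = x**3*sp.sin(y/x) + x*y**2            # homogeneous of degree n = 3

euler = x*sp.diff(u, x) + y*sp.diff(u, y) - 3*u
print(sp.simplify(euler))                # 0, confirming x*u_x + y*u_y = n*u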
Page 64 :
Partial Differentiation, , Using (1) and (2), the last expression reduces to, @2u, @2u, @2u, þ y 2 2 þ u1 ¼ u1, x2 2 þ 2xy, @x, @x@y, @y, or, @2u, @2u, @2u, þ y2 2 ¼ 0:, x2 2 þ 2xy, @x, @x@y, @y, EXAMPLE 3.23, If u is a homogeneous function of x and y of degree n,, show that, @2u, @2u, @2u, þ y2 2 ¼ nðn 1Þu:, x2 2 þ 2xy, @x, @x@y, @y, Solution. By Euler’s theorem, we have, @u, @u, ¼ n u:, ð1Þ, x þy, @x, @y, Differentiating (1) partially with respect to x, we get, x, , @ 2 u @u, @2u, @u, þ, y, ¼n, þ, 2, @x, @x, @x@y, @x, , or, @2u, @u, @u @u, @u, ¼n , ¼ ðn 1Þ : ð2Þ, þy, @x2, @x@y, @x @x, @x, Again differentiating (1) partially with respect to y,, we have, @2u, @ 2 u @u, @u, þy 2 þ, ¼n, x, @y@x, @y, @y, @y, or, @2u, @2u, @u, þ y 2 ¼ ðn 1Þ :, ð3Þ, x, @y@x, @y, @y, x, , Multiplying (2) by x and (3) by y and then adding, both, we have, @2u, @2u, @2u, þ y2 2, x2 2 þ 2xy, @x, @x@y, @y, , , @u, @u, ¼ ðn 1Þ x þ y, @x, @y, ¼ ðn 1Þnu ½using (1):, EXAMPLE 3.24, 2 2, y, If sin u ¼ xxþy, , show that, @u, @u, ¼ 3 tan u:, x þy, @x, @y, Solution. We have, , y 2, x2 y2, 3 x, sin u ¼, ¼x, ¼ v:, xþy, 1 þ yx, , n, , 3.11, , We observe that v is a homogeneous function of, degree 3. Therefore, by Euler’s theorem,, @v, @v, ð1Þ, x þ y ¼ 3v:, @x, @y, Since v ¼ sin u, we have, @v, @u, @v, @u, ¼ cos u, and, ¼ cos u :, @x, @x, @y, @y, Hence, (1) reduces to, @u, @u, x cos u þ y cos u, ¼ 3 sin u, @x, @y, or, @u, @u, ¼ 3 tan u:, x þy, @x, @y, EXAMPLE 3.25, If u ¼ sin1 xy þ tan1 yx, show that, @u, @u, ¼ 0:, x þy, @x, @y, Solution. We have, y, x, y, 1, y, u ¼ sin1 þ tan1 ¼ sin1 y þ tan1 ¼ x0 f, :, y, x, x, x, x, Therefore, u is a homogeneous function of degree 0, in x and y. Hence by Euler’s theorem, we have, @u, @u, ¼ 0:u ¼ 0:, x þy, @x, @y, For verification of the result, see Example 3.6., EXAMPLE 3.26, 3, þy3, If u ¼ tan1 xxy, , show that, x2, , @2u, @2u, @2u, þ y2 2 ¼ sin 4u sin 2u:, þ 2xy, 2, @x, @x@y, @y, , Solution. We have, y 3, x3 þ y3, 21þ x, ¼x, tan u ¼, ¼ v:, xy, 1 yx, Then v is a homogeneous function of degree 2 in x, and y. Hence, by Euler’s theorem,, @v, @v, x þ y ¼ 2v:, @x, @y, But, @v, @u @v, @u, ¼ sec2 u ;, ¼ sec2 u :, @x, @x @y, @y, Therefore,, x sec2 u, , @u, @u, þ y sec2 u, ¼ 2 tan u, @x, @y
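The assertion of Example 3.24, namely x u_x + y u_y = 3 tan u when sin u = x²y²/(x + y), can be confirmed the same way. The following lines are an added check, not part of the book.

import sympy as sp

x, y = sp.symbols('x y', positive=True)
u = sp.asin(x**2*y**2/(x + y))           # sin u = x^2*y^2/(x + y), homogeneous of degree 3

lhs = x*sp.diff(u, x) + y*sp.diff(u, y)
print(sp.simplify(lhs - 3*sp.tan(u)))    # 0, as asserted in Example 3.24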
Page 71 :
3.18, , n, , Engineering Mathematics-I, , Now,, @V @V @r @V @h, @V sin h @V, ¼, : þ, : ¼ cos h, , and, @x, @r @x @h @x, @r, r @h, @V @V @r @V @h, @V cos h @V, ¼, : þ, : ¼ sin h, þ, :, @y, @r @y @h @y, @r, r @h, Therefore,, , , @, @ sin h @, ¼ cos h , and, @x, @r, r @h, , , @, @ cos h @, ¼ sin h þ, :, @y, @r, r @h, Hence,, , , , @2V, @ sin h @, @V sin h @V, , , cos, h, ¼, cos, h, @x2, @r, r @h, @r, r @h, , , @, @V sin h @V, ¼ cos h, cos h, , @r, @r, r @h, , , sin h @, @V sin h @V, , cos h, , r @h, @r, r @h, , , 2, @ V sin h @V sin h @ 2 V, , ¼ cos h cos h 2 þ 2, @r, r @h, r @r@h, , 2, sin h, @ V, @V, , cos h, sin h, r, @h@r, @r, , 2, cos h @V sin h @ V, , , r @h, r @h2, 2, @ V, sin h cos h @ 2 V, ¼ cos2 h 2 2, @r, r2, @r@h, 2, 2, sin h @ V sin h @V 2 sin h @V, þ 2, ð1Þ, þ, þ 2, r @h2, r @r, r @h, and, , , , @2V, @ cos h @, @V cos h @V, þ, ¼ sin h þ, sin h, @y2, @r, r @h, @r, r @h, , , @, @V cos h @V, sin h, þ, ¼ sin h, @r, @r, r @h, , , cos h @, @V cos h @V, sin h, þ, þ, r @h, @r, r @h, , , @ 2 V cos h @V cos h @ 2 V, þ, ¼ sin h sin h 2 2, @r, r @h, r @r@h, , cos h, @2V, @V, sin h, þ cos h, þ, r, @h@r, @r, , sin h @V cos h @ 2 V, þ, , r @h, r @h2, , @ 2 V sin h cos h @ 2 V cos h sin h @V, þ, þ, @r2, r, @r@h, r2, @h, 2, 2, 2, cos h sin h @ V cos h @ V cos2 h @V, þ 2 : 2þ, þ, r, @h@r, r, r @r, @h, sin h cos h @V, :, ð2Þ, , r2, @h, Adding (1) and (2), we obtain, @ 2 V @ 2 V @ 2 V 1 @V 1 @ 2 V, þ, þ 2 ¼ 2 þ :, ;, @x2, @y, @r, r @r r2 @h2, which is the required result., ¼ sin2 h, , EXAMPLE 3.42, Transform the expression (, , , 2 2 ), @V @V 2, @V, @V, 2, 2, 2, x þy, þða x y Þ, þ, @x, @y, @x, @y, by substituting x ¼ r cos h and y ¼ r sin h., Solution. If V is a function of x, y, then, @V @V @x @V @y x @V y @V, ¼, : þ, : ¼, þ, @r, @x @r @y @r r @x r @y, or, , , @V, @V, @V, @, @, ¼x, þy, ¼ x þy, r, V:, @r, @x, @y, @x, @y, Thus,, @, @, @, r ¼x þy :, @r, @x, @y, Similarly,, @, @, @, ¼x y :, @h, @y, @x, Now,, @V @V @r @V @h, @V sin h @V, ¼, : þ, : ¼ cos h, , and, @x @r @x @h @x, @r, r @h, @V, @V cos h @V, ¼ sin h, þ, :, @y, @r, r @h, Therefore,, 2 2 2, , @V, @V, @V, 1 @V 2, þ, ¼, þ 2, ;, @x, @y, @r, r @h, and the given expression is equal to, " , , , 2 #, 2, @V 2, @V, 1, @V, r, þða2 r2 Þ, þ 2, @r, @r, r @h, 2 2, 2, a, @V, 2 @V, ¼a, þ 2 1, :, @r, @h, r
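The identity just derived can be spot-checked on a concrete function: after substituting x = r cos h, y = r sin h, the Cartesian Laplacian of V must agree with V_rr + (1/r)V_r + (1/r²)V_hh. A small sympy sketch is added below; the test function V = x⁴ + y⁴ is an arbitrary choice.

import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x, y = sp.symbols('x y', real=True)

V = x**4 + y**4                                   # arbitrary test function
lap_cart = sp.diff(V, x, 2) + sp.diff(V, y, 2)    # 12*x**2 + 12*y**2

polar = {x: r*sp.cos(th), y: r*sp.sin(th)}
W = V.subs(polar)                                 # the same function written in polars
lap_polar = sp.diff(W, r, 2) + sp.diff(W, r)/r + sp.diff(W, th, 2)/r**2

print(sp.simplify(lap_polar - lap_cart.subs(polar)))   # 0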
Page 72 :
Partial Differentiation, , EXAMPLE 3.43, If x ¼ r cos, r sin h, prove that, h 2and y ¼, , 2, @, u, @, u, @2u, 2 þ 4xy, ðx2 y2 Þ, 2, @x, @y, @x@y, @2u, @u @ 2 u, ¼ r2 2 r 2 ;, @r, @r @h, where u is any twice-differentiable function of x, and y., Solution. We have, @u @u @x @u @y, ¼, þ :, @r @x @r @y @r, @u, @u x @u y @u, ¼, þ, ¼ cos h þ sin h, @x, @y r @x r @y, and so,, @u, @u, @u, ¼x þy :, ð1Þ, r, @r, @x, @y, Therefore,, , , @, @u, r, r, @r @r, , , , @, @, @u, @u, ¼ x þy, x þy, @x, @y, @x, @y, , , , , @, @u, @u, @, @u, @u, x þy, x þy, ¼x, þy, @x @x, @y, @y @x, @y, 2, 2, 2, 2, @ u, @ u, @ u, @ u, @u, @u, þ xy, þ y2 2 þ x þ y, ¼ x2 2 þ xy, @x, @x@y, @y@x, @y, @x, @y, 2, 2, 2, @ u, @ u, @ u, @y, @u, þ y2 2 þ x þ y :, ¼ x2 2 þ 2xy, @x, @x@y, @y, @x, @y, Therefore,, @2u, @u, @2u, @2u, r2 2 r, ¼ x2 2 þ 2xy, @r, @r, @x, @x@y, 2, @ u, þ y2 2 ; using (1): ð2Þ, @y, Again,, @u @u @x @u @y, @u, @u, ¼, þ, ¼x y :, @h @x @h @y @h, @y, @x, Therefore,, , , , @2u, @, @, @u, @u, ¼ x y, x y, @y, @x, @y, @x, @h2, , , , , @, @u, @u, @, @u, @u, x y, x y, ¼x, y, @y @y, @x, @x @y, @x, 2, 2, 2, @ u, @ u, @ u, @u, @u, ¼ x2 2 2xy, þ y2 2 x y : ð3Þ, @y, @y@x, @x, @x, @y, Subtracting (3) from (2), we get the required result., , n, , 3.19, , EXAMPLE 3.44, If x ¼ r cos h, y ¼ r sin h, and z ¼ f (x, y), prove that, @z @z, 1 @z, ¼ cos h , sin h:, @x @r, r @h, Solution. Since x ¼ r cos h and y ¼ r sin h, we have, pffiffiffiffiffiffiffiffiffiffiffiffiffiffi, , r ¼ x2 þ y2 and h ¼ tan1 xy . Therefore,, @r x r cos h, ¼ ¼, ¼ cos h;, @x r, r, @r y r sin h, ¼ ¼, ¼ sin h;, @y r, r, @h, y, r sin h sin h, ¼ 2, ; and, ¼, ¼, @x, x þ y2, r2, r, @h, y, r cos h cos h, ¼, ¼, ¼, :, @y x2 þ y2, r2, r, Now,, @z @z @r @z @h, @z sin h @z, ¼ : þ :, ¼ cos h , :, @x @r @x @h @x, @r, r @h, , 3.10, , TAYLOR’S THEOREM FOR FUNCTIONS OF, SEVERAL VARIABLES, , In view of Taylor’s theorem for functions of one, variable, it is not unnatural to expect the possibility, of expanding a function of more than one variable, f (x þ h, y þ k, zþ l,. . .) in a series of ascending, powers of h, k, l,. . .. To fix the ideas, we consider, here a function of two variables only, the reasoning, in the general case being precisely the same., Theorem 3.5. (Taylor). If f (x, y) and all its partial, derivatives of order n are finite and continuous for all, points (x,y) in the domain a x a þ h and b , y b þ k,, then, f ða þ h; b þ kÞ ¼ f ða; bÞ þ, 1 2, 1, d, f, ða;, bÞ, þ, ., . . þ ðn1Þ, d n1 f ða; bÞ þ Rn ;, þRn ;, 2!, where, 1, Rn ¼ d n f ða þ hh; b þ hkÞ; 0 < h < 1, n!, and, @f, @f, df ¼ h þ k :, @x, @y, Proof: Consider a circular domain of center (a,b) and, radius large enough for the point (a þ h, b þk) to be, also with in the domain. The partial derivatives of, the order n of f (x, y) are continuous in the domain., Write, x ¼ a þ ht and y ¼ b þ kt
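Before the proof is taken up, the expansion asserted by Theorem 3.5 can be illustrated symbolically. The sketch below is an addition to the text; the function f = e^x sin y and the expansion point (0, 0) are arbitrary choices, and the terms are built as (1/n!)(h d/dx + k d/dy)^n f.

import sympy as sp

x, y, h, k = sp.symbols('x y h k')
f = sp.exp(x)*sp.sin(y)                  # sample function, expanded about (a, b) = (0, 0)

def d_n(expr, n):
    # apply the operator (h*d/dx + k*d/dy) to expr n times
    for _ in range(n):
        expr = h*sp.diff(expr, x) + k*sp.diff(expr, y)
    return expr

# terms of Taylor's theorem up to third order
series = sum(d_n(f, n).subs({x: 0, y: 0})/sp.factorial(n) for n in range(4))
print(sp.expand(series))                 # k + h*k + h**2*k/2 - k**3/6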
Page 93 :
3.40, , n, , Engineering Mathematics-I, , or, , , , , 1, mþ1, , 2, , and occur due to the limitation of computing, aids. However, this type of error can be, minimized by, , Z1, ¼, , xm log xdx:, 0, , Applying again the Leibnitz’s Rule, we get, ! Z1, d, 1, @ m, ðx log xÞdx, ¼, 2, dm ðm þ 1Þ, @m, 0, , or, ð1Þð2Þ, ðm þ 1Þ3, , Z1, xm ðlog xÞ2 dx:, , ¼, 0, , Repeated use of Leibnitz’s Rule yields, Z1, ð1Þð2Þð3Þ, ¼ xm ðlog xÞ3 dx, ðm þ 1Þ4, 0, , :::::::::::::::::::::::::::::::::::::::::::::::::::, :::::::::::::::::::::::::::::::::::::::::::::::::::, ð1Þð2Þð3Þ . . . ðnÞ, ðm þ 1Þnþ1, , ðm þ 1Þnþ1, , 3.17, , Z1, xm ðlog xÞn dx:, , ¼, , (ii) Retaining at least one more significant, figure at each step of calculation., 3. Truncation Error. If is the error caused by, using approximate formulas during computation such as the one that arise when a, function f ðxÞ is evaluated from an infinite, series for x after truncating it at certain stage., For example, we will see that in, Newton – Raphson Method for finding the, roots of an equation, if x is the true value of, the root of f ðxÞ ¼ 0 and x0 and h are approximate value and correction respectively, then, by Taylor’s Theorem,, , xm ðlog xÞn dx, , ¼, 0, , or, ð1Þn n!, , Z1, , (i) Avoiding the subtraction of nearly equal, numbers or division by a small number., , 0, , APPROXIMATION OF ERRORS, , In numerical computation, the quantity [True value –, Approximate Value] is called the error., We come across the following types of errors, in numerical computation., 1. Inherent Error (initial error). Inherent error is, the quantity which is already present in the, statement (data) of the problem before its, solution. This type of error arises due to the, use of approximate value in the given data, because there are limitations of the mathematical tables and calculators. This type of, error can also be there due to mistakes by, human. For example, some one can write, by, mistake, 67 instead of 76. The error in this, case is called transposing error., 2. Round – off Error. This error arises due to, rounding off the numbers during computation, , f ðx0 þ hÞ ¼ f ðx0 Þ þ hf 0 ðx0 Þ, h2 00, f ðx0 Þ þ þ ¼ 0:, 2!, To find the correction h, we truncate the series, just after first derivative. Therefore some error, occurs due to this truncation., 4. Absolute Error. If x is the true value of a, quantity and x0 is the approximate value, then, jx x0 j is called the absolute error., 5. Relative Error. If x is the true value of a, quantity, xx0 and x0 is the approximate value, then, is called the relative error., x, 6. Percentage Error. If x is the true value of, quantity, and x0 is the approximate value, then, xx0, 100 is called the percentage error., x, Thus, percentage error is 100 times the relative error., þ, , 3.18, , GENERAL FORMULA FOR ERRORS, , Let, u ¼ f ðu1 ; u2 ; . . . ; un Þ, , ð1Þ, , be a function of u1 ; u2 ; . . . ; un which are subject, to the errors u1 ; u2 ; . . . ; un respectively.
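The general formula developed from (1) on the next page is, to first order, du = sum of (df/du_i) times the error in u_i. A small sympy illustration is added below; the form V = (π/4)x²y is inferred from the partial derivatives that appear in Example 3.88 overleaf, and the numerical values are those used there.

import sympy as sp

x, y, dx, dy = sp.symbols('x y delta_x delta_y')
V = sp.pi*x**2*y/4                       # assumed form of V in Example 3.88

# first-order (linearised) error formula
dV = sp.diff(V, x)*dx + sp.diff(V, y)*dy
print(dV)                                        # pi*x*y*delta_x/2 + pi*x**2*delta_y/4
print(dV.subs({x: 4, y: 6, dx: 0.1, dy: 0.1}))   # 1.6*pi cubic units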
Page 95 :
3.42, , n, , Engineering Mathematics-I, , Therefore the error formula yields, V ¼, , For the second part of the question,, , @V, @V, , , x þ, y ¼ ðxyxÞ þ ðx2 yÞ:, @x, @y, 2, 4, , a¼, , , ;, 6, , h ¼, , h, :, 100, , Therefore, Putting x ¼ 4; y ¼ 6; x ¼ y ¼ 0:1; we get, V ¼ ð1:2Þ þ ð0:4Þ ¼ 1:6 cm :, 3, , Further, the lateral surface is given by S ¼ xy, and so, @S, ¼ y and, @x, , @S, ¼ x:, @y, , 1 2 h, þ, 3 3 100, , , 2 4, 8, 2, þ h2 pffiffiffi, þ pffiffiffi þ pffiffiffi a, 3 3, 3 3 3 3, 2, p, ffiffi, ffi, h, þ 2 3 h2 a, ð1Þ, ¼, 50, But after compensation A ¼ 0. Therefore (1), implies, A ¼ 2h, , Therefore, , a ¼ , S ¼ ðyx þ xyÞ:, , Putting the values of x; y; x and y, we get, S ¼ ð0:6 þ 0:4Þ ¼ cm2 :, EXAMPLE 3.89, The height h and the semi-vertical angle a of a cone, are measured and from them the total area A of the, cone (including the base) is calculated. If h and a, are in error by small quantities h and a respectively, find corresponding error in the area. Show, further that a ¼ 6, an error of 1 percent in h will be, approximately compensated by an error of 0.33, degree in a., Solution. Radius of the base ¼ r ¼ h tan a. Further,, slant height ¼ l ¼ h sec a. Therefore, , EXAMPLE 3.90, The time T of a complete oscillation of a simple, pendulum, of length L is governed by the equation, pffiffiffiffiffiffiffiffi, T ¼ 2 L=g, g is constant, find the approximate, error in the calculated value of T corresponding to, the error of 2% in the value of L., Solution. We have, sffiffiffi, l, :, T ¼ 2, g, Taking logarithm, we get, 1, 1, log T ¼ log 2 þ log l log g, 2, 2, Differentiating (1), we get, 1, 1 l 1 g, T ¼, , T, 2l 2 g, , Total area ¼ r2 þ r l ¼ r ðr þ lÞ, ¼ h tan aðh tan a þ h sec aÞ, ¼ h2 ðtan2 a þ sec a tan aÞ:, Then the error in A is given by, @A, @A, hþ, a, @h, @a, ¼ 2 hðtan2 a þ sec a tan aÞ h, , A ¼, , þ h2 ð2 tan a sec2 a þ sec3 a þ sec a tan2 aÞ a, , 1, 57:3, pffiffiffi radians ¼ , ¼ 0:33, 173:2, 100 3, , or, T, 1 l, 1g, 100 ¼, 100 , 100, T, 2 l, 2 g, 1, ¼ ½2 0 ¼ 1:, 2, Hence the approximate error is 1%., , ð1Þ
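Example 3.90 can be verified in a couple of lines (an added check): logarithmic differentiation of T = 2π√(L/g) gives δT/T = (1/2) δL/L, so a 2% error in L produces roughly a 1% error in T.

import sympy as sp

L, g = sp.symbols('L g', positive=True)
T = 2*sp.pi*sp.sqrt(L/g)

rel = sp.simplify(sp.diff(T, L)/T)        # (dT/dL)/T = 1/(2*L)
print(rel*L)                              # 1/2: the relative error in T is half that in L
print(rel*L*sp.Rational(2, 100))          # 1/100, i.e. about 1% for a 2% error in L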
Page 104 :
UNIT II
MATRICES

4 Matrices
Page 106 :
4, , Matrices, , There are many situations in pure and applied, mathematics, theory of electrical circuits, aerodynamics, nuclear physics, and astronomy in which, we have to deal with algebraic structures and, rectangular array of numbers or functions. These, arrays will be called matrices. The aim of this chapter is to study algebra of matrices along with its, application to solve system of linear equations., , 4.1, , CONCEPTS OF GROUP, RING, FIELD AND, VECTOR SPACE, , Definition 4.1 Let S be a non-empty set. Then a, mapping f : S S ? S is called a binary operation in S., A non-empty set along with one or more binary, operations defined on it is called an algebraic, structure., Definition 4.2 A non-empty set G together with a, binary operation f : G G ? G defined on it and, denoted by is called a group if the following, axioms are satisfied:, , Definition 4.4 The number of elements in a group G is, called the order of the group G and is denoted by O, (G). A group having a finite number of elements is, called a finite group., EXAMPLE 4.1, Let ℤ be the set of all integers and let f: ℤ ℤ ? ℤ, defined by f (a, b) ¼ a, b ¼ a þ b be binary, operation in ℤ. Then, (i) a þ (b þ c) ¼ (a þ b) þ c for all a, b, c 2 ℤ, (ii) a þ 0 ¼ 0 þ a ¼ a for all a 2 ℤ and so 0 acts, as an additive identity., (iii) a þ ( a) ¼ ( a) þ a ¼ 0 for a 2 ℤ and so, ( a) is the inverse of a., (iv) a þ b ¼ b þ a, a, b 2 ℤ (Commutativity)., Hence, (ℤ, þ) is an infinite additive abelian group., , (G1) Associativity: For a, b, c, 2 G,, ða bÞ c ¼ a ðb cÞ, , EXAMPLE 4.2, The set of all integers ℤ cannot be a group under, multiplication operation f (a, b) ¼ ab. In fact, ±1 are, the only two elements in ℤ which have inverses., , (G2) Existence of Identity: There exists an element e in, G such that for all a 2 G,, a e¼e a¼a, , EXAMPLE 4.3, The set of even integers [0, ±2, ±4, …] is an additive, abelian group under addition., , (G3) Existence of Inverse Element: For each element, a 2 G, there exists an element b 2 G, such that, a b ¼ b a ¼ e:, Definition 4.3 Let G be a group. If for every pair a,, b 2 G,, a b ¼ b a;, then G is called a commutative (or abelian) group., If a b 6¼ b a, then G will be called nonabelian or non-commutative group., , EXAMPLE 4.4, The set of vectors V form an additive abelian group, under addition., EXAMPLE 4.5, We shall note in article 13.10 on matrices that the set, of all m n matrices form an additive abelian, group.
Page 107 :
4.4, , n, , Engineering Mathematics-I, , EXAMPLE 4.6, The set {1, 1} is a multiplicative abelian group of, order 2., Definition 4.5 Let S be a set with binary operation, f (m, n) ¼ mn, then an element a 2 S is called, (i) Left cancellative if, ax ¼ ay ) x ¼ y for all x; y 2 S;, (ii) Right cancellative if, xa ¼ ya ) x ¼ y for all x; y 2 S:, If any element a is both left- and right cancellative,, then it is called cancellative (or regular). If every, element of a set S is regular, then we say that cancellation law holds in S., Theorem 4.1 If G is a group under the binary operation f (ab) ¼ a b ¼ ab then for a, b, c 2 G,, ab ¼ ac ) b ¼ c, , (left cancellation law), , ba ¼ ca ) b ¼ c, , (right cancellation law), , (Thus cancellation law holds in a group)., Proof: Since G is a group and a 2 G, there exists an, element c 2 G such that ac ¼ ca ¼ e. Therefore,, ab ¼ ac ) cðabÞ ¼ cðacÞ, ) ðcaÞb ¼ cðacÞ, ) eb ¼ ce, ) b ¼ c:, Similarly, we can show that, ba ¼ ca ) b ¼ c:, Theorem 4.2 Let G be a group. Then,, (a) The idetity element of G is unique., (b) Every a 2 G has a unique inverse., 1, (c) For every a 2 G, ða1 Þ ¼ a, (d) For all a, b 2 G, ðabÞ1 ¼ b1 a1 :, Proof: (a) Suppose that there are two identity elements e and e 0 in G. Then,, ee0 ¼ e since e0 is an identity element;, , and, ee0 ¼ e0 since e is an identity element:, Hence e ¼ e 0 ., (b) Suppose that an arbitrary element a in G has two, inverses b and c. Then, ab ¼ ba ¼ e and ac ¼ ca ¼ e., Therefore,, ðbaÞc ¼ ec ¼ c, and, bðacÞ ¼ be ¼ b:, But, by associativity in G,, ðbaÞc ¼ bðacÞ:, Hence b ¼ c., (c) Since G is a group, every element a 2 G has its, inverse a1. Then, a1a ¼ e. Now, 1, a1 a1 ¼ e ¼ a1 a, 1, , By left cancellation law, it follows that ða1 Þ ¼ a., (d) We have, , , , , ðabÞ b1 a1 ¼ a bb1 a1 ¼ aea1, ¼ aa1 ¼ e:, Similarly, 1 1 , , , b a ðabÞ ¼ b1 a1 a b, ¼ b1 eb ¼ b1 b ¼ e:, Thus, , , , , ðabÞ b1 a1 ¼ b1 a1 ðabÞ ¼ e:, , Hence, by the definition of inverse,, ðabÞ1 ¼ b1 a1 :, Definition 4.6 A subset H of a group G is said to a, subgroup of G, if under the binary operation in G, H, itself forms a group., Every group G has two trivial subgroups, G, itself and the identity group {e}., The non-trivial subgroups of G are called, proper subgroups of G., EXAMPLE 4.7, The additive group ℝ of real numbers is a subgroup, of the additive group C of complex numbers., Regarding subgroups, we have
Page 108 :
Matrices, , Theorem 4.3 A non-empty subset H of a group G is a, subgroup of G if and only if, (i) a; b 2 H ) ab 2 H;, (ii) a 2 H ) a1 2 H:, Conditions (i) and (ii) may be combined into a single, one and we have “A non-empty subset H of a group G is, a subgroup of G if and only if a; b 2 H ) ab1 2 H.”, Theorem 4.4 The intersection of two subgroups of a, group is again a subgroup of that group., Definition 4.7 Let G and H be two groups with binary, operations : G G ? G and ł: H H ? H,, respectively, then a mapping f: G ? H is said to be a, group homomorphism if for all a, b 2 G,, f ðða; bÞÞ ¼ łð f ðaÞ; f ðbÞÞ:, , ð1Þ, , Thus if G is additive group and H is multiplicative, group, then (1) becomes, f ða þ bÞ ¼ f ðaÞ: f ðbÞ:, If, in addition f is bijective, then f is called an, isomorphism., EXAMPLE 4.8, Let Z be additive group of integers. Then the mapping f: Z ? H, where H is the additive group of, even integers defined by f (a) ¼ 2a for all a 2 Z is a, group homomorphism. In fact, for a, b 2 Z, f ða þ bÞ ¼ 2ða þ bÞ ¼ 2a þ 2b ¼ f ðaÞ þ f ðbÞ:, Also, f ðaÞ ¼ f ðbÞ ) 2a ¼ 2b ) a ¼ b;, and so f is one-one homomorphism (called, monomorphism)., Definition 4.8 Let G and H be two groups. If f: G ? H, is a homomorphism and eH denotes the identity, element of H, then the subset, K ¼ f x: x 2 G; f ð xÞ ¼ eH g, of G is called the kernel of the homomorphism f., Definition 4.9 A non-empty set R with two binary, operation ‘þ’ and ‘.’ is called a ring if the following, conditions are satisfied., , n, , 4.5, , (i) Associativity of ‘þ’: if a, b, c 2 R, then, a þ ðb þ cÞ ¼ ða þ bÞ þ c, (ii) Existence of Identity for ‘þ’: There exists an, element 0 in R such that, a þ 0 ¼ 0 þ a ¼ a for all a 2 R, (iii) Existence of inverse with respect to ‘þ’: To each, element a 2 R, there exists an element b 2 R, such that, aþb¼bþa¼0, (iv) Commutativity of ‘þ’: If a, b 2 R, then, aþb¼bþa, (v) Associativity of ‘.’: If a, b, c 2 R, then, a ð b c Þ ¼ ð a bÞ c, (vi) Distributivity of ‘þ’ over ‘.’: If a, b, c 2 R, then, a ðb þ cÞ ¼ a b þ a c, and, , (Left distributive law), , ð a þ bÞ c ¼ a c þ b c, (Right distributive law), Let R be a ring, if there is an element 1 in R such that, a.1 ¼ 1. a ¼ a for every a 2 R, then R is called a ring, with unit element., If R is a ring such that a.b ¼ b.a for every, a, b 2 R, then R is called commutative ring., A ring R is said to be a ring without zero, divisors if a:b ¼ 0 ) a ¼ 0 or b ¼ 0., EXAMPLE 4.9, We have seen that (ℤ, þ) is an abelian group., Further, if a, b, c 2 ℤ then, a ðb cÞ ¼ ða bÞ c, a ðb þ cÞ ¼ a b þ a c, ð a þ bÞ c ¼ a c þ b c, a1¼1a¼a, ab¼ba, a b ¼ 0 ) a ¼ 0 or b ¼ 0:, Hence ℤ is commutative ring with unity which is, without zero divisor.
Page 109 :
4.6, , n, , Engineering Mathematics-I, , EXAMPLE 4.10, The set of even integers is a commutative ring but there, does not exist any element b satisfying b · a ¼ a · b ¼ a, for a 2 R, Hence, it is a ring without unity., EXAMPLE 4.11, We shall see later on that the set of n n matrices, form a non-commutation ring with unity. This ring, is a ring with zero divisors. For example, if, and, , 0 0, , then their product is, 1 0, , 1 0, 0 0, , 0 0, . But, 0 0, , none of the given matrix is zero., Definition 4.10 A commutative ring with unity is, called an integral domain if it has no zero divisor., For example, ring of integers is an integral, domain., Definition 4.11 A ring R with unity is said to be a, division ring (or skew field) if every non-zero element of R has a multiplicative inverse., Definition 4.12 A commutative division ring is called, a field., For example, the set of rational number Q, under addition and multiplication forms a field., Similarly, ℝ and C are also fields. Every field is an, integral domain but the converse is not true. For, example, the set of integers form an integral domain, but is not a field. An important result is that, “Every finite integral domain is a field.”, Definition 4.13 A subset S of ring R is called a subring, of R if S is a ring under the binary operations in R., Thus, S will be a subring of R if, (i) a; b 2 S ) a b 2 S;, (ii) a; b 2 S ) ab 2 S:, For example, the set of real numbers is a subring of, the ring of complex numbers., Definition 4.14 A mapping f : R ? R 0 from the ring R, into the ring R 0 is said to be a ring homomorphism if, (i) f (a þ b) ¼ f (a) þ f (b),, (ii) f (ab) ¼ f (a). f (b),, for all a, b 2 R., , If, in addition, f is one-to-one and onto then f is, called ring isomorphism., Definition 4.15 A non-empty set V is said to be a, Vector Space over the field F if, (i) V is an additive abelian group., (ii) If for every a 2 F, v 2 V, there is defined, an element av, called scalar multiple of a, and v, in V subject to, aðv þ vÞ ¼ av þ av;, ða þ bÞv ¼ av þ bv;, aðbvÞ ¼ ðabÞ v;, 1v ¼ v;, for all a, b 2 F, v, w 2 V, where 1 represents the unit, elements of F under multiplication., In the above definition, the elements of V are, called vectors whereas the elements of F are called, scalars., EXAMPLE 4.12, Let V2 ¼ {(x, y): x, y 2 ℝ} be a set of ordered pairs., Define addition and scalar multiplication on V2 by, ð x; yÞ þ ðx0 ; y0 Þ ¼ ð x þ x0 ; y þ y0 Þ;, and, að x; yÞ ¼ ð ax; ayÞ:, Then V2 is an abelian group under addition operation defined earlier and, a½ðx; yÞ þ ðx0 ; y0 Þ ¼ aðx þ x0; y þ y0 Þ, ¼ ðax þ ax0; ay þ ay0 Þ, ¼ ðax þ ayÞ þ ðax0 þ ay0 Þ, ¼ aðx; yÞ þ aðx0; y0 Þ;, ða þ bÞðx; yÞ ¼ ðða þ bÞx; ða þ bÞyÞ, ¼ ðax þ bx; ay þ byÞ, ¼ ðax; ayÞ þ ðbx; byÞ, ¼ aðx; yÞ þ bðx; yÞ;, aðbðx; yÞÞ ¼ ðabÞ ðx; yÞ;, 1:ðx; yÞ ¼ ð x; yÞ:, Hence, V2 is a vector space over ℝ. It is generally, denoted by ℝ2.
Page 113 :
4.10, , n, , Engineering Mathematics-I, , Definition 4.28 A matrix in which the number of rows, is equal to the number of columns is called a square, matrix., If A is a square matrix having n rows and n, columns then it is also called a matrix of order n., Definition 4.29 If the matrix A is of order n,, the elements a11 ; a22 ; . . . ; ann are said to constitute, the main diagonal of A and the elements, an1 ; an12 ; . . . ; a1n constitute its secondary, diagonal., Definition 4.30 A square matrix A ¼ [aij] is said to be, a diagonal matrix if each of its non-diagonal element is zero, that is, if aij ¼ 0 whenever i 6¼ j., A diagonal matrix whose diagonal elements, in, order, are d1, d2,…, dn is denoted by Diag ½d1 ;, d2 ; ... ; dn or Diag ½a11 ; a22 ; ... ; ann if A ¼ ½aij :, Definition 4.31 A diagonal matrix, whose diagonal, elements are all equal is called a scalar matrix., For example, the matrix, 2, 3, 2 0 0, 40 2 0 5, 0 0 2, is a scalar matrix of order 3., Definition 4.32 A scalar matrix of order n, each, of whose diagonal element is equal to 1 is called, a unit matrix or identity matrix of order n and is, denoted by In., For example, the matrix, 3, 2, 1 0 0 0, 6 0 1 0 0 7, 7, 6, 4 0 0 1 0 5, 0 0 0 1, is a unit matrix of order 4., Definition 4.33 A matrix, rectangular or square, each, of whose entry is zero is called a zero matrix or a, null matrix and is denoted by 0., Definition 4.34 A matrix having 1 row and n column, is called a row matrix (or a row vector). For, example, the matrix, ½2, is a row matrix., , 3, , 5 6, , 2, , Definition 4.35 If a matrix has m rows and 1 column,, it is called a column matrix (or a column vector)., For example, the matrix, 2, 3, 2, 6 1 7, 6, 7, 4 0 5, 3, is a column matrix., Definition 4.36 A submatrix of a given matrix A is, defined to be either A or any array obtained on deleting some rows or columns or both of the matrix A., Definition 4.37 A square submatrix of a square matrix is, called a principal submatrix if its diagonal elements, are also the diagonal elements of the matrix A., Thus to obtain principal submatrix, it is necessary to delete corresponding rows and columns. For, example, the matrix, 3 1, 4 3, is a principal submatrix of the matrix, 3, 2, 1 2 3 4, 6 2 3 1 0 7, 7, 6, 4 6 4 3 2 5:, 1 2 4 1, Definition 4.38 A principal square submatrix is called, leading submatrix if it is obtained by deleting only, some of the last rows and the corresponding columns. For example,, 1 2, 2 3, is the leading principal submatrix of the matrix, 3, 2, 1 2 3 4, 6 2 3 1 0 7, 7, 6, 4 6 4 3 2 5:, 1 2 4 1, , 4.3, , ALGEBRA OF MATRICES, , Matrices allow the following basic operations:, (a), , Multiplication of a matrix by a scalar., , (b) Addition and subtraction of two matrices., (c) Product of two matrices., However, the concept of dividing a matrix by another, matrix is undefined.
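These three operations are exactly what a numerical library provides; the brief numpy illustration below is an addition to the text, with arbitrarily chosen matrices.

import numpy as np

A = np.array([[3, 1, 2],
              [2, 1, 0]])
B = np.array([[1, 0, 4],
              [2, 2, 2]])

print(4*A)        # (a) scalar multiple: every entry of A is multiplied by 4
print(A + B)      # (b) sum of two matrices of the same order (entrywise)
print(A @ B.T)    # (c) product: the columns of A match the rows of B.T
# there is no corresponding notion of dividing one matrix by another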
Page 114 :
Matrices, , Definition 4.39 Let a be a scalar (real or complex) and, A be a given matrix. Then the multiplication of, A ¼ [aij] by the scalar a is defined by, aA ¼ a½aij ¼ ½aaij ;, that is, each element of A is multiplied by the scalar, a. The order of the matrix so obtained will be the, same as that of the given matrix A., For example, 4, , 3 1, 2 1, , 2, 0, , ¼, , 12, 8, , 4 8, 4 0, , :, , Definition 4.40 Two matrices A ¼ [aij] and B ¼ [bij], are said to be comparable (conformable) for addition/subtraction if they are of the same order., Definition 4.41 Let A and B be two matrices of the, same order, say m x n. Then, the sum of the matrices, A and B is defined by, C ¼ ½cij ¼ A þ B ¼ ½aij þ ½ bij ¼ ½aij þ bij :, Thus,, cij ¼ aij þ bij ; 1 i m; 1 j n;, The order of the new matrix C is same as that of, A and B. Similarly,, C ¼ A B ¼ ½aij ½bij ¼ ½aij bij , Thus,, cij ¼ aij bij for 1 i m; 1 j n, Definition 4.42 If A1, A2,…, An are n matrices which, are conformable for addition and l1 ; l2 ; . . . ln are, scalars, then l1 A1 þ l2 A2 þ . . . þ ln An is called a, linear combination of the matrices A1, A2, …, An., Let A ¼ ½aij ; B ¼ ½bij ; C ¼ ½cij ,be m n matrices with entries from the complex numbers. Then, the following properties hold:, (a) A þ B ¼ B þ A (Commutative law for, addition), (b) (A þ B)þ C ¼ A þ (B þ C) (Associative, law for addition), (c) A þ 0 ¼ 0 þ A ¼ A (Existence of additive, identity), (d) Aþ(A)¼(A) þ A ¼ 0 (Existence of, inverse), Thus the set of matrices form an additive commutative group., , 4.4, , n, , 4.11, , MULTIPLICATION OF MATRICES, , Definition 4.43 Two matrices A ¼ ½aij m n and B ¼, ½bij p q are said to comparable or conformable for, the product AB if n ¼ p, that is, if the number of, columns in A is equal to the number of rows in B., Definition 4.44 Let A ¼ ½aij m n and B ¼ ½bij p q be, two matrices. Then, the product AB is the matrix, C ¼ ½cij m q such that, cij ¼ ai1 b1j þ ai2 b2j þ . . . þ a1n bnj, n, X, ¼, aik bkj for 1 i m; 1 j n:, k¼1, , Note that the cij [the (i, j)th element of AB] has, been obtained by multiplying the ith row of A,, namely ðai1 ; ai2 ; . . . ; ain Þ with the jth column of B,, namely, 3, 2, b1j, 6 b2j 7, 7, 6, 6 ... 7, 7, 6, 4 ... 5, bnj, Remark 4.1 In the product AB, the matrix A is called, prefactor and B is called postfactor., EXAMPLE 4.17, Construct an example to show that product of two, non-zero matrices may be a zero matrix., Solution. Let, x 0, 0 0, B¼, :, y 0, a b, Then A and B are both 2 2 matrices. Hence, they, are conformable for product. Now,, x 0, 0 0, AB ¼, y 0, a b, 0þ0 0þ0, 0 0, ¼, ¼, :, 0þ0 0þ0, 0 0, A¼, , Definition 4.45 When a product AB ¼ 0 such that, neither A nor B is 0 then the factors A and B are, called divisors of zero., The above example shows that in the algebra, of matrices, there exist divisors of zero, whereas in, the algebra of complex numbers, there is no zero, divisor.
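A concrete numerical instance of Example 4.17 and of divisors of zero is sketched below (an added illustration; the entries x = 2, y = 3, a = 1, b = 5 are arbitrary).

import numpy as np

A = np.array([[2, 0],
              [3, 0]])        # x = 2, y = 3 in Example 4.17
B = np.array([[0, 0],
              [1, 5]])        # a = 1, b = 5

print(A @ B)                  # [[0 0], [0 0]]: a product of two non-zero matrices is zero
print(B @ A)                  # [[0 0], [17 0]]: incidentally, AB != BA in general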
Page 115 :
4.12, , n, , Engineering Mathematics-I, , EXAMPLE 4.18, Taking 2, 1, A ¼ 4 1, 0, , and, 3, 2, 0, 2, 1 5; B ¼ 4 1, 2, 1, , 3, 2, 0, , 3, 3 4, 2 3 5;, 1 2, , show that matrix multiplication is not, in general,, commutative., Solution. Both A and B are 3 3 matrices. Therefore,, both AB and BA are defined. We have,, 2, 32, 3, 1 3 0, 2 3 4, 7, 6, 76, AB ¼ 4 1 2 1 5 4 1 2 3 5, 2, , 0, , 0, , 2, , 9, 2, 2, , 13, 7, 4 5, 4, , 3, , 4, , 5, 6, ¼ 4 1, 2, and, , 2, , 2, , 3, , 32, , 1, , 1, , 2, , 1, , 3, , 0, , 76, 5 4 1, 0, 3, 11, 7, 8 5:, 5, , 6, BA ¼ 4 1 2 3, 1 1 2, 2, 1 12, 6, ¼ 4 1 7, 2 1, , 2, 0, , 3, , 7, 1 5, 2, , AC ¼, , 0, 0, , 4, 5, , 1, 0, , 2, 0, , ¼, , 0, 0, , 0, 0, , :, , Hence AB ¼ AC, A 6¼ 0 without having B ¼ C and, so we cannot ordinarily cancel A from AB ¼ AC, even if A 6¼0., Remark 4.2 The above examples show that in matrix, algebra, The commutative law AB ¼ BA does not, hold true., (b) There exist divisors of zero, that is, there, exists matrices A and B such that AB ¼0, but neither A nor B is zero., (c) The cancellation law does not hold in, general, that is, AB ¼ AC, A 6¼ 0 does not, imply in general that B ¼ C., (a), , 4.5, , ASSOCIATIVE LAW FOR MATRIX, MULTIPLICATION, , If A ¼ ½aij mn B ¼ ½bjk n p , and C ¼ ½ckl n p are, three matrices with entries from the set of complex, numbers, then, ðABÞC ¼ AðBCÞ:, (Associative Law for Matrix Multiplication):, , Hence AB 6¼ BA., EXAMPLE 4.19, Give an example to show that cancellation law does, not hold, in general, in matrix multiplication., Solution. Let, 0, 0, , A¼, C¼, , 4, 5, 1, 0, , ;, 2, 0, , 5, 0, , B¼, , 4, 0, , ;, , 0 4, 0 5, , 5, 0, , DISTRIBUTIVE LAW FOR MATRIX, MULTIPLICATION, , If A ¼ ½aij m n ; B ¼ ½bjk n p and C ¼ ½ckl p q are, three matrices with elements from the set of, complex numbers, then, AðB þ CÞ ¼ AB þ AC, (Distributive Law for Matrix Multiplication):, , :, , Then A and B are conformable for multiplication., Similarly, A and C are also conformable for multiplication. Thus,, AB ¼, , 4.6, , 4, 0, , ¼, , 0, 0, , 0, 0, , Definition 4.46 The matrices A and B are said to be, anticommutative or anticommute if AB ¼ BA., For example, each of the Pauli Spin matrices (used, in the study of electron spin in quantum mechanics), x ¼, , 0 1, ; y ¼, 1 0, , 0, i, , i, ; z ¼, 0, , 1, 0, , 0, ;, 1
Page 116 :
Matrices, , where i2 ¼ 1 anticommute with the others. In fact,, x y ¼, y x ¼, , 0, , 1, , 0 i, , 1, 0, i, , 0, i, 0, , i, , i, , ¼, , 0, 0 1, 1 0, , ¼, , and so x y ¼ y x ., , Definition 4.48 The sum of the main diagonal elements aii ; i ¼ 1; 2; . . . ; n of a square matrix A is, called the trace or spur of A., Thus,, tr A ¼ a11 þ a22 þ . . . þ ann :, Theorem 4.10 Let A and B be square matrices of order, n and l be a scalar. Then, (a), , tr (lA) ¼ l tr A,, , (b) tr (A þ B) ¼ tr A þ tr B,, (c) tr (AB) ¼ tr (BA)., Proof: Let, A ¼ ½aij nn and B ¼, (a) We have, lA ¼ l aij, and so, tr ðlAÞ ¼, , n, X, , laii ¼ l, , I¼1, , bij, , , nn, , nn, , n, X, , aii ¼ l tr A:, , (b) We have, A þ B ¼ ½aij þ bij n n, trðA þ BÞ ¼, ¼, , n, X, i¼1, n, X, , :, , , , i¼1, , and so, , cij ¼, , n, X, , ½aii þ bii , aii þ, , i¼1, , n, X, , bii, , i¼1, , ¼ tr A þ tr B:, (c) We have, AB ¼ ½cij n n, , aik bkj ;, , k¼1, , and, , BA ¼ ½dij n n ;, , where, , n, X, , dij ¼, , Definition 4.47 If A and B are matrices of order n, then, the matrix AB BA is called the commutator of A, and B., , 4.13, , where, , 0, , ;, 0 i, i 0, ;, 0 i, , n, , bik akj :, , k¼1, , Then, tr ðABÞ ¼, , n, X, i¼1, , ¼, , n, n, X, X, k¼1, , cii ¼, !, ¼, , aik bki, , i¼1, , ¼, , n, n, X, X, i¼1, n, X, , k¼1, n, X, , k¼1, , i¼1, , n, X, , !, aik bki, !, bki aik, , dkk ¼ tr ðBAÞ:, , k¼1, , EXAMPLE 4.20, If A and B are matrices of the same order say n,, show that the relation AB BA ¼ In does not hold, good., Solution. Suppose on the contrary that the relation, AB – BA ¼ In holds true. Since A and B are of same, order, AB and BA are also of order n. Therefore,, trðAB BAÞ ¼ tr In, ) tr AB tr BA ¼ tr In :, Since tr AB ¼ tr BA, we have, 0 ¼ tr In ¼ 1 þ 1 þ 1 . . . þ 1 ¼ n;, which is absurd. Hence, the given relation does not, hold good., Definition 4.49 An n n matrix A is said to be nilpotent if An ¼0 for some positive integer n., The smallest positive integer n, for which, An ¼ 0, is called the degree of nilpotence of A., For example, the matrix, 2, 3, 0 1 2 1, 6 0 0 1 2 7, 6, 7, 4 0 0 0 1 5, 0 0 0 0, is nilpotent and the degree of nilpotence is 4., Similarly, the matrix, 6 9, A¼, 4 6
Page 117 :
4.14, , n, , Engineering Mathematics-I, , is nilpotent with degree of nilpotence 2. In fact,, ", #", #, 6 9, 6 9, 2, A ¼, 4 6, 4 6, ", # ", #, 36 36 54 54, 0 0, ¼, ¼, :, 24 þ 24 36 þ 36, 0 0, It can be shown that every 2 2 nilpotent matrix A, such that A2 ¼ 0 may be written in the form, lm, m2, ;, 2, l lm, where l, m are scalars. If A is real then l, m are also, real., Definition 4.50 A square matrix A is said to be, involutory if A2 ¼ I, For example, the matrix, 1, 0, , 4.7, , TRANSPOSE OF A MATRIX, , Definition 4.52 A matrix obtained by interchanging, the corresponding rows and columns of a matrix A, is called the transpose matrix of A., The transpose of a matrix A is denoted by AT (or, by A 0 ). Thus, if A ¼ [aij]mn, then AT ¼ [aji]nm is, an n m matrix. For example, the transpose of the, matrix, 2, 3, 1 0 2, 4 3 7 4 5, 1 2 8, is, 2, 3, 1 3 1, 4 0 7 2 5:, 2 4 8, Further,, (i), , The transpose of a row matrix is a column, matrix. For example,2if A ¼, 3 [1 2 4 3 ], then, 1, 6 2 7, 7, AT ¼ 6, 4 4 5:, 3, , (ii), , The transpose of a column matrix is a row, matrix. For example,, 2 if 3, 3, A ¼ 4 8 5;, 3, , 0, 1, , is involutory., Theorem 4.11 A matrix A is involutory if and only if, (I þ A) (I A) ¼ 0., Proof: Suppose first that A is involutory, then, A2 ¼ I, or, , I A2 ¼ 0, , then, , or, I A ¼ 0 since I ¼ I, 2, , or, , 2, , 2, , ðI þ AÞ ðI AÞ ¼ 0 since AI ¼ IA:, , Conversely, let, ðI þ AÞ ðI AÞ ¼ 0, Then,, I 2 IA þ AI A2 ¼ 0, or, or, or, , I 2 A2 þ 0 ¼ 0, , AT ¼ ½ 3 8 3 , (iii) If A is m n matrix, then AT is an n m, matrix. Therefore, the product AAT, AT A, are both defined and are of order m m, and n n, respectively., If A ¼ [aij]m n and B ¼ [bij]m n are matrices of, the same order and if l is a scalar, then the transpose, of matrix has the following properties:, (a) (AT)T ¼ A, (b) (lA)T ¼ lAT, (c) (A þ B)T ¼ AT þ BT, (d) (AB)T ¼ BT AT (Reversal law)., , I A ¼0, 2, , 2, , A2 ¼ I 2 ¼ I:, , Definition 4.51 A square matrix A is said to be, idempotent if A2 ¼ A., For example, In is idempotent., , 4.8, , SYMMETRIC, SKEW-SYMMETRIC, AND, HERMITIAN MATRICES, , Definition 4.53 A square matrix A is said to be symmetric if A ¼ AT.
Page 118 :
Matrices, , Thus, A ¼ [aij]nn is symmetric if aij ¼ aji for, i i n, 1 j n., Definition 4.54 A square matrix A ¼ ½aij nn is said to, be skew symmetric if aij ¼ – aji for all i and j., Thus square matrix is skew-symmetrical if A ¼ – AT., For example,, 2, 3, a h g, 4 h b f 5, g f c, is symmetric matrix whereas the matrix, 2, 3, 0, 1 2, 4 1 0 3 5, 2 3 0, , Then, T 1, T, 1, A þ AT, ¼ A þ AT, 2, 2, , 1 T T T, 1, A þ A, ¼ AT þ A, ¼, 2, 2, , 1, T, ¼ A þ A ¼ P:, 2, and so P is symmetric. Further,, PT ¼, , T, T, 1, 1, A AT, A AT, ¼, 2, 2, , 1 T T T, 1, A A, ¼, ¼ AT A, 2, 2, , 1, T, ¼ Q;, ¼ AA, 2, , QT ¼, , Properties of Symmetric and Skew-Symmetric, Matrices, In a skew-symmetric matrix A, all diagonal elements are zero. In fact, if A is, skew-symmetric, then, aij ¼ aji for all i and j:, ) aii ¼ aii, ) aii ¼ 0:, (b), , The matrix which is both symmetric and, skew-symmetric must be a null matrix. In, fact, if A ¼ [aij] is symmetric, then, aij ¼ aji for all i and j:, , Further, if A ¼ [aij] is skew-symmetric, then, aij ¼ – aji for all i and j. Adding, we get 2aij ¼ 0 for, all i and j and so aij ¼ 0 for all i and j. Hence, A is a, null matrix. Thus, “Null matrix is the only matrix, which is both symmetric and skew-symmetric.”, For any square matrix A, A þ AT is a, symmetric matrix and A AT is a skewsymmetric matrix. In fact, we note that, , , , (i) AþAT T ¼ AT þ AT T ¼ AT þA ¼ A þAT, T, and so A þ A is symmetric., , , , (ii) A AT T ¼ AT AT T ¼ AT A, ¼ ðA A T Þ, T, and so A – A is skew-symmetric., (c), , (d), , Every square matrix A can be expressed, uniquely as the sum of a symmetric and a, , 4.15, , skew-symmetric matrix. To show it, set, , , 1, 1, P ¼ A þ AT and Q ¼ A AT :, 2, 2, , is a skew-symmetric matrix., , (a), , n, , and so Q is skew-symmetric. Also P þ Q ¼ A. Thus, A can be expressed as the sum of a symmetric and a, skew-symmetric matrix., To establish the uniqueness of the expression,, let A ¼ P1 þ Q1, where P1 is symmetric and Q1 is, skew-symmetric. It is sufficient to show that P1 ¼ P, and Q1 ¼ Q. We have,, AT ¼ ðP1 þ Q1 ÞT ¼ PT1 þ QT1 ¼ P1 Q1, Thus, A þ AT ¼ 2P1 or P1 ¼, , , 1, A þ AT ¼ P:, 2, , Also,, , , 1 , A þ AT, 2, , 1, ¼ A AT ¼ Q :, 2, Hence, the expression is unique., (e) If A is a square matrix, then A þ AT and, AAT are symmetric matrices. These facts, follow from, T, T, (i) ðA þ AT Þ ¼ AT þ ðAT Þ ¼ AT þ A, T, ¼AþA, and, T, T, (ii) ðAAT Þ ¼ ðAT Þ AT ¼ AAT :, (f) If A and B are two symmetric matrices,, then AB-BA is a skew-symmetric matrix., Q 1 ¼ A P1 ¼ A
Page 119 :
4.16, , n, , Engineering Mathematics-I, , and, , In fact,, ðAB BAÞT ¼ ðABÞT ðBAÞT, ¼ BT AT AT BT ðReversal LawÞ, ¼ BA AB, ¼ ðAB BAÞ:, (g), , If A is a symmetric (skew-symmetric), then, BT ABis a symmetric (skew-symmetric), matrix. In fact, if A is symmetric, AT ¼ A, and so, , , 20, 13, 1, 0, 1 3 7, 1 2 4, , , 1, 1 6B, C7, C, B, A AT ¼ 4@ 3 0 2 A @ 2 0 2 A5, 2, 2, 4 2 5, 7 2 5, 0, 1, 0, 1, 0 12 32, 0 1 3, 1B, C, B, C, ¼ @ 1 0 0 A ¼ @ 12 0 0 A:, 2, 3, 0 0, 3 0 0, 2, Hence, 0, , T, T, BT AB ¼ BT AT BT ¼ BT AT B, , 1 2 4, , 1, , 0, , 1, , 5, 2, , 11, 2, , 1, , 0, , C, C, B, B, B, @ 3 0 2 A ¼ @ 52 0 2 A þ @, 11, 7 2 5, 2 2 5, , ¼ BT AB, , 0 12 32, 1, 2, 3, 2, , 1, , C, 0 0 A:, 0 0, , and if A is skew-symmetric, then, , , T, T, BT AB ¼ BT AT BT ¼ BT ðAÞB ¼ BT AB:, , EXAMPLE 4.21, Express the matrix, 2, , 1 2, A¼4 3 0, 7 2, , 3, 4, 2 5, 5, , Definition 4.55 A matrix obtained from a given matrix, A by replacing its elements by the corresponding, conjugate complex numbers is called the conjugate, of A and is denoted by A., Thus if A ¼ ½aij m n , then A ¼ ½aij mn , where, aij denotes the complex conjugate of aij., Definition 4.56 A matrix whose all elements are real is, called a real matrix., If A is a real matrix, then obviously A ¼ A, Further if A and B are two matrices, then, , as the sum of a symmetric matrix and a skewsymmetric matrix., , (a), (b), , Solution. We know that every square matrix A, can be expressed as the sum of symmetric, matrix 12 ðA þ AT Þ and a skew-symmetric matrix, 1, T, 2 ðA A Þ. In the present case, 20, 1, 2, , AþA, , , T, , 1 2 4, , 1 6B, 4@ 3, 2, 7, 0, 2, 1B, ¼ @5, 2, 11, , ¼, , 1, , 0, , 1 3 7, , 13, , C, B, C7, 0 2 A þ @ 2 0 2 A5, 2 5, , 4 2 5, 1, 0, 1, 1 52 11, 5 11, 2, C, B5, C, C, 0 4 A ¼ B, @2 0 2A, 11, 2 5, 4 10, 2, , (c), (d), , ð, AÞ¼ A, , ðA þ BÞ ¼ A þ B, ðlAÞ ¼ l A, ðABÞ ¼ A B, where l a complex number., , Definition 4.57 The transpose of the conjugate of a, matrix A is called transposed conjugate or tranjugate of A and is denoted by Ah or sometimes by A ., We observe that, T, ðAÞ ¼ ðAT Þ :, , For example, let, 2, 2, A¼4 1þi, 3 þ 2i, , 3, 1 þ 2i 3 þ 4i, 7, 2 þ i 5;, 4 þ i 3 þ 3i
Page 120 :
Matrices, , then, , 2, , 2, A ¼ 4 1 i, 3 2i, and, , 3, , 3, 2, 1 i 3 2i, Ah ¼ 4 1 2i, 7, 4 i 5:, 3 4i 2 i 3 3i, Let A and B the matrices, then the tranjugate of, the matrix possesses the following properties:, h, (a) Ah ¼ A, (b) ðA þ BÞh ¼ Ah þ Bh , A and B being of the, same order., lAh , l being a complex number., (c) ðlAÞh ¼ , h, (d) ðABÞ ¼ Bh Ah , A and B being conformable, to multiplication., Definition 4.58 A square matrix A ¼ [aij] is said to be, Hermitian if aij ¼ aji for all i and j., Thus, a matrix is Hermitian if and only if A ¼ Ah., We note that, (a), , A real Hermitian matrix is a real symmetric, matrix., (b) If A is Hermitian, then, aii ¼ aii for all i;, and so aii is real for all i. Thus, every diagonal, element of a Hermitian matrix must be real., Definition 4.59 A square matrix A ¼ [aij] is said to be, Skew-Hermitian if aij ¼ aji for all i and j. Thus, a, matrix is Skew-Hermitian if A ¼ – Ah. We observe that, (a), , A real Skew-Hermitian matrix is nothing, but a real Skew-symmetric matrix., (b) If A is Skew-Hermitian matrix, then aii ¼, aii or aii þ aii ¼ 0 and so aii is either a pure, imaginary number or must be zero. Thus the, diagonal element of a Skew-Hermitian matrix, must be a pure imaginary number or zero., For example,, 2, 3, 2, 3 4i 2 þ 3i, 4 3 þ 4i, 0, 7 5i 5, 2 3i 7 þ 5i, 4, , 4.17, , is an Hermitian matrix, whereas, the matrix, , 1 2i 3 4i, 7, 2i 5, 4 i 3 3i, , 2, , n, , 0, 3 þ 4i, , 3 þ 4i, i, , is Skew-Hermitian., It can be shown easily that if A is any square matrix,, then A þ Ah, AAh, AhA are Hermitian and A – Ah is, Skew-Hermitian., EXAMPLE 4.22, Show that every square matrix can be uniquely, expressible as the sum of a Hermitian matrix and, Skew-Hermitian matrix., Solution. As mentioned above, if A is any square matrix,, and A – Ah is Skewthen A þ Ah is Hermitian, , , 1, Hermitian. Therefore, 2 A þ Ah and 12 A Ah are, Hermitian and Skew- – Hermitian, respectively, so that, ¼, , A, , , , 1, 1, A þ Ah þ, A Ah ;, 2, 2, , which proves first part of our result. The uniqueness, can be proved easily and is left to the reader., EXAMPLE 4.23, Show that every square matrix A can be uniquely, expressed as P þ i Q where P and Q, are Hermitian, matrices., Solution. We take, P, , ¼, , , , 1, 1, A þ Ah and Q ¼, A Ah :, 2, 2i, , Then A ¼ P þ iQ. Further,, , h, h, 1, 1, A þ Ah, A þ Ah, ¼, 2, 2, 1 h 1 h h 1 h, ¼ A þ A ¼ ðA þ AÞ, 2, 2, 2, 1, h, ¼ ðA þ A Þ ¼ P;, 2, , Ph ¼, ,
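Examples 4.22 and 4.23 are easy to check numerically for any complex matrix. The numpy sketch below is an added illustration; the test matrix is the one used earlier to illustrate the conjugate and tranjugate.

import numpy as np

A = np.array([[2,      1 + 2j, 3 + 4j],
              [1 + 1j, 7,      2 + 1j],
              [3 + 2j, 4 + 1j, 3 + 3j]])
Ah = A.conj().T                       # the tranjugate A^h

P = (A + Ah)/2                        # Hermitian part
Q = (A - Ah)/(2j)                     # also Hermitian
print(np.allclose(P, P.conj().T))     # True
print(np.allclose(Q, Q.conj().T))     # True
print(np.allclose(P + 1j*Q, A))       # True: A = P + iQ, as in Example 4.23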
Page 123 :
The matrix B is then called the inverse of A. If there exists no such matrix B, then A is called non-invertible (singular). The inverse of A is denoted by A⁻¹. For example, if

    A = [ 1  1 ]          B = [ 1  -1 ]
        [ 0  1 ]    and       [ 0   1 ],

then direct multiplication gives

    AB = [ 1  0 ]    and    BA = [ 1  0 ].
         [ 0  1 ]                [ 0  1 ]

Thus AB = BA = I₂. Hence A is invertible and its inverse is B.

Theorem 4.13. The inverse of a square matrix is unique.

Proof: Suppose, on the contrary, that B and C are two inverses of a matrix A. Then

    AB = BA = Iₙ                                   (18)
and
    AC = CA = Iₙ.                                  (19)

Thus, we have

    B = B Iₙ     (property of the identity matrix)
      = B(AC)    [using (19)]
      = (BA)C    (associative law)
      = Iₙ C     [using (18)]
      = C.

Hence, the inverse of A is unique.

Definition 4.66. A square matrix A is called non-singular if |A| ≠ 0. The square matrix A is called singular if |A| = 0.

Theorem 4.14. A square matrix A is invertible if and only if it is non-singular.

Proof: The condition is necessary. Let A be invertible and let B be the inverse of A, so that AB = I = BA. Therefore

    |A| |B| = |I| = 1.

Hence |A| ≠ 0.

The condition is sufficient. Let A be non-singular, so that |A| ≠ 0. Let B = (1/|A|)(adj A). Then

    AB = A [ (1/|A|) adj A ] = (1/|A|)(A adj A) = (1/|A|) [ |A| I ] = I.

Similarly, BA = I. Hence AB = BA = I, and so B = (1/|A|)(adj A) is the inverse of A.

Theorem 4.15. Let A and B be two non-singular matrices of the same order. Then AB is non-singular and

    (AB)⁻¹ = B⁻¹A⁻¹.

Proof: Since

    |AB| = |A| |B| ≠ 0,

it follows that AB is non-singular and so invertible. Moreover,

    (AB)(B⁻¹A⁻¹) = A(BB⁻¹)A⁻¹ = A I A⁻¹ = AA⁻¹ = I
and
    (B⁻¹A⁻¹)(AB) = B⁻¹(A⁻¹A)B = B⁻¹ I B = B⁻¹B = I.

Hence

    (AB)(B⁻¹A⁻¹) = I = (B⁻¹A⁻¹)(AB),

which proves that B⁻¹A⁻¹ is the inverse of AB, that is,

    (AB)⁻¹ = B⁻¹A⁻¹.

Theorem 4.16. If A is a non-singular matrix, then

    (Aᵀ)⁻¹ = (A⁻¹)ᵀ.

(Thus the operations of transposing and inversion commute.)

Proof: We note that

    Aᵀ(A⁻¹)ᵀ = (A⁻¹A)ᵀ = Iᵀ = I
and
    (A⁻¹)ᵀAᵀ = (AA⁻¹)ᵀ = Iᵀ = I.

Hence

    Aᵀ(A⁻¹)ᵀ = I = (A⁻¹)ᵀAᵀ
Page 124 :
n, , Matrices, , and so, , , , AT, , 1, , and so, , , T, ¼ A1 :, , Theorem 4.17 If a matrix A is invertible, then Ah is, invertible and, h 1 1 h, ¼ A, :, A, Proof: We have, h, , h , Ah A1 ¼ A1 A ¼ I h ¼ I, and, 1 h h , h, A, A ¼ A A1 ¼ I h ¼ I:, 1 h, , h, , Thus, ðA Þ is the inverse of the A ., , 4.12, , METHODS OF COMPUTING INVERSE, OF A MATRIX, , 1. Method of an Adjoint Matrix, If A is non-singular square matrix, then we have, , , , , 1, 1, adj A ¼ I ¼, adj A A:, A, jAj, jAj, This relation yields, 1, A1 ¼, adj A:, jAj, EXAMPLE 4.25, Find the inverse of the matrix, 2, 3 3, A ¼ 4 2 3, 0 1, Solution. We have, , 3 3, , jAj ¼ 2 3, 0 1, , 3, 4, 4 5:, 1, , , 4 , 4 , 1 , , ¼ 3 ð3 þ 4Þ þ 3 ð 2 Þ þ 4 ð2Þ ¼ 1:, Cofactor of the entries are, A11 ¼ 1;, A12 ¼ 2, A21 ¼ 1; A22 ¼ 3, , A13 ¼ 2, A23 ¼ 3, , A31 ¼ 0, , A33 ¼ 3:, , A32 ¼ 4, , Therefore, the cofactor matrix is, 2, 3, 1 2 2, , 3, 35, Aij ¼ 4 1, 0 4 3, , 2, , 1, adj A ¼ 4 2, 2, Hence, A1, , 1, 3, 3, , 2, 1, 1, adj A ¼ 4 2, ¼, jAj, 2, , EXAMPLE 4.26, Find A–1 if, , 4.21, , 3, 0, 4 5 :, 3, 3, 0, 4 5:, 3, , 1, 3, 3, , 2, , 3, 1, 2 1, A ¼ 43, 0, 2 5:, 4 2, 5, Solution. We have |A| ¼ –4. The cofactor matrix is, 2, 3, 4 7 6, , Aij ¼ 4 8, 9 10 5, 4 5 6, and so, 2, 3, 4 8, 4, adj A ¼ 4 7, 9 5 5:, 6 10 6, Hence, 3, 2, 2, 3, 1 2 1, 4 8, 4, 1, 6, 57, A1 ¼ 4 7, 9 5 5 ¼ 4 74 9, 4, 45, 4, 3 5, 3, 6 10 6, 2, , 2, , 2, , 2. Method Using Definition of Inverse, Let B be the inverse of matrix A, which is nonsingular. Then, AB ¼ I, that is,, 2, 32, 3, a11 a12 . . . . . . a1n, b11 b12 . . . . . . b1n, 6 a21 a22 . . . . . . a2n 7 6 b21 b22 . . . . . . b2n 7, 6, 76, 7, 6 ... ... ... ... ...7 6 ... ... ... ... ...7, 6, 76, 7, 4 ... ... ... ... ...5 4 ... ... ... ... ...5, an1 an2 . . . . . . ann, bn1 bn2 . . . . . . bnn, 3, 2, 1 0 ... ... 0, 6 0 1 ... ... 07, 7, 6, 7, ¼6, 6... ... ... ... ...7, 4... ... ... ... ...5, 0 0 ... ... 1, Multiplying the matrices on the left and then comparing the corresponding entries we can find b11,, b12,…, bnn. Then, B will be the inverse of A.
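Returning to the adjoint method for a moment, Example 4.25 (and the reversal law of Theorem 4.15) can be reproduced with sympy, which computes the adjugate (adjoint) directly. This block is an added check, not part of the original.

import sympy as sp

A = sp.Matrix([[3, -3, 4],
               [2, -3, 4],
               [0, -1, 1]])             # the matrix of Example 4.25
B = sp.Matrix([[1,  2, -1],
               [3,  0,  2],
               [4, -2,  5]])            # the matrix of Example 4.26

print(A.det())                          # 1
print(A.adjugate()/A.det())             # equals A.inv(), the inverse found in Example 4.25
print((A*B).inv() == B.inv()*A.inv())   # True: (AB)**(-1) = B**(-1)*A**(-1)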
Page 125 :
4.22, , n, , Engineering Mathematics-I, , EXAMPLE 4.27, Find the inverse of 2, , 1, A ¼ 40, 0, , 2, 1, 0, , be a set of n equations in n variables x1, x2, …, xn In, matrix form, we can represent these equations by, , 3, 1, 3 5:, 1, , AX ¼ B;, where, , Solution. The given matrix is upper triangular matrix., The readers may prove that inverse of an upper, triangular matrix is also upper triangular matrix., Similarly, the inverse of a lower-triangular matrix is, again a lower-triangular matrix. So, let, 2, 3, a b c, 40 d e5, 0 0 f, be the inverse of the given matrix. Then, by definition of the inverse, we must have, 2, 32, 3, 2, 3, 1 2 1, a b c, 1 0 0, 40 1, 35 40 d e5 ¼ 40 1 05, 0 0, 1, 0 0 f, 0 0 1, or, a b þ 2d, 40, d, 0, 0, 2, , 3, 2, c þ 2e f, 1, e þ 3f 5 ¼ 4 0, f, 0, , 0, 1, 0, , 3, 0, 0 5:, 1, , a11 a12 . . ., 6 a21 a22 . . ., 6, A¼6, 6 ... ... ..., 4 ... ... ..., a, ..., a, 2 n1 3 n2, 2, x1, 6 x2 7, 6, 6, 7, 6, 7; B ¼ 6, X ¼6, ., ., ., 6, 6, 7, 4 ...5, 4, xn, , or, , or, , b þ 2d ¼ 0 so that b ¼ 2d ¼ 2, , or, , Remark 4.3 We can also find the inverse of a lower, triangular matrix by the above method., , 3. Method of Matrix Equation., Let, a11 x1 þ a12 x2 þ . . . þ a1n xn ¼ b1, a21 x1 þ a22 x2 þ . . . þ a2n xn ¼ b2, ..., ..., ..., ..., ..., , ..., , ..., , ..., , an1 x1 þ an2 x2 þ . . . þ ann xn ¼ bn, , ..., ..., ..., ..., ..., 3, b1, b2 7, 7, ...7, 7, ...5, , 3, a1n, a2n 7, 7, ... 7, 7;, ... 5, ann, , bn, , and A is called the coefficient matrix. If A is nonsingular matrix, then A–1 exists. Premultiplying the, matrix equation by A–1 we get, A1 ðAX Þ ¼ A1 B, , Equating corresponding entries, we get, a ¼ 1; d ¼ 1; f ¼ 1, e þ 3f ¼ 0 so that e ¼ 3f ¼ 3, c þ 2e f ¼ 0 so that c ¼ f 2e ¼ 1 þ 6 ¼ 7:, Hence,, 2, 3, 1 2, 7, A1 ¼ 4 0, 1 3 5:, 0, 0, 1, , 2, , , , , A1 A X ¼ A1 B, IX ¼ A1 B, X ¼ A1 B:, , Hence, if we can represent x1, x2,…, xn in terms of, b1, b2,…, bn, then the coefficient matrix of this, system will be the inverse of A., EXAMPLE 4.28, Find the inverse of, 2, , 1, A¼ 4 0, 1, , 3, 0 4, 1, 2 5:, 2, 1, , Solution. We observe that jAj ¼ 1 6¼ 0: Thus, A is, non-singular and so the inverse of A exists. We, consider the matrix equation, 2 3, 32 3, 2, b1, 1, 0 4, x1, 4 0 1, 2 5 4 x2 5 ¼ 4 b2 5;, x3, b3, 1, 2, 1
Page 126 :
Matrices, , 4.23, , Solution. Consider the augmented matrix, , which yields, x1 4x3 ¼ b1, x2 þ 2x3 ¼ b2, x1 þ 2x2 þ x3 ¼ b3 :, , 2, , x1 ¼ 5b1 þ 8b2 þ 4b3, x2 ¼ 2b1 þ 3b2 þ 2b3, x3 ¼ b1 þ 2b2 þ b3 :, In matrix form, we have, 2, 32 3, 2 3, b1, x1, 5 8 4, 4 x2 5 ¼ 4 2 3 2 5 4 b2 5;, 1 2 1, x3, b3, that is, X ¼ A1 B:, Hence, 2, 3, 5 8 4, A1 ¼ 4 2 3 2 5:, 1 2 1, , 4. Method of Elementary Transformation, (Gauss-Jordan Method), The following transformations are called elementary transformation of a matrix:, (a) Interchanging of rows (columns)., (b) Multiplication of a row (column) by a nonzero scalar., Adding/subtracting k multiple of a row, (column) to another row (column)., , Definition 4.67 A matrix B is said to be row (column), equivalent to a matrix A if it is obtained from A by, applying a finite number of elementary row (column) transformations. In such case, we write B , A. In Gauss–Jordan Method, we perform the, sequence of elementary row transformations on, A and I simultaneously, keeping them side-by-side., EXAMPLE 4.29, Using elementary row transformations, find A–1 if, 2, 3, 1, 0 2, A ¼ 4 2 1 3 5:, 4, 1 5, , 1, , 6, ½AjI ¼ 6, 42, , Solving these equations for x1, x2, and x3, we have, , (c), , n, , 2, , 4, , 1, 6, 6, 40, 0, 2, 1, 6, 6, 40, 0, 2, 1, 6, 6, 40, 0, 2, 1, 6, 6, 40, 0, 2, 1, 6, 6, 40, 0, 2, 1, 6, 6, 40, 0, Hence, , , 3, 0 2 1 0 0, , 7, 1 3 0 1 0 7, 5, , 1 50 0 1, , 3, 0, 2 1 0 0, , 7 R2 ! R2 2R1, 1 1 2 1 0 7, 5, R3 ! R3 4R1, , , 1 3 4 0 1, , 3, 0 0, 0, 2 1, , 7, 1, 1 2 1 0 7, 5R2 ! R2, , 0 1, 1 3 4, , 3, 0 0, 0, 2 1, , 7, 1, 1 2 1 0 7, 5R3 ! R3 R2, , 1 1, 0 4 6, , 3, 0, 0, 0 2 1, , 7, 1, 07, 1 1 2 1, 5 R3 ! 4 R 3, , 0 1 32 14 14, , 3, 0, 0, 0 2 1, , 7, 17, 1 0 12 34, 4 5R2 ! R2 R3, , 0 1 32 14 14, , 3, 1, 1, 0 0 2, 2, 2, , 7, 17, 1 0 12 34, 4 5R1 ! R1 2R3, , 0 1 32 14 14, 2, 6, A1 ¼ 4, , 2, 1, 2, 3, 2, , 1, 2, 34, 14, , 3, , 1, 2, 17, 4 5:, 14, , EXAMPLE 4.30, Using elementary row transformation, find the, inverse of the matrix, 2, 3, 1 3 3, A ¼ 4 1 4 3 5:, 1 3 4
Page 127 :
4.24, , n, , Engineering Mathematics-I, , Solution. Consider, 2, 1 3, 6, [AjI ] ¼ 6, 41 4, 1 3, 2, 1 3, 6, 6, 40 1, 1 3, 2, 1 3, 6, 6, 40 1, 1 3, 2, 1 0, 6, 6, 40 1, 0 0, 2, 1 0, 6, 6, 40 1, 0 0, 2, 1 0, 6, 6, 40 1, 0 0, Hence, , the augmented matrix, , 3, 3 1 0 0, , 7, 3 0 1 0 7, 5, , 40 0 1, , 3, 0, 3 1 0, , 7, 1 0 1 1 7, 5 R2 ! R 2 R 3, , 1, 40 0, , 3, 0, 3 1 0, , 7, 1 0 1 1 7, 5R3 ! R3 R1, , 1, 1 1 0, , 3, 3, 6 1 3, , 7, 1 1 7, 1 0, 5R1 ! R1 3R2, , , 0, 1, 1 1, , 3, 6 1 3 3, , 7, 1 07, 0 1, 5 R 2 ! R 2 þ R3, , 0 1, 1 1, , 3, 0 7 3 3, , 7, 1, 07, 0 1, 5R1 ! R1 6R3 :, , 1 1, 0, 1, 2, , A1, , 7 3, ¼ 4 1, 1, 1, 0, , 3, 3, 0 5:, 1, , Now we reduce the matrix A to identity matrix I3 by, elementary row transformation keeping in mind that, each such row transformation will apply to the, prefactor I3 on the right hand side., Performing R2 ? R2 – R1 and R3 ? R3 þ 2R1,, we get, 2, 3 2, 3, 1, 1, 3, 1 0 0, 40, 2 6 5 ¼ 4 1 1 0 5A:, 0 2, 2, 2 0 1, Performing R2 ! 12 R2 ; we get, 2, , 1, 40, 0, , 3 2, 3, 1, 3 5 ¼ 4 12, 2, 2, , 3, 0, 0 5A:, 1, , 0, 1, 2, , 0, , Performing R1 ? R1 – R2 and R3 ? R3 þ R2, we get,, 2, , 1, 40, 0, , 0, 1, 0, , 3 2 3, 6, 2, 3 5 ¼ 4 12, 4, 1, , 3, 12 0, 1, 5A:, 2 0, 1 1, , Performing R3 ! 14 R3 ; we get, 2, , 1, 40, 0, , 3 2 3, 1, 0, 6, 2 2, 1, 1 3 5 ¼ 4 12, 2, 14 14, 0, 1, , 3, 0, 0 5A:, 14, , Performing R1 ? R1 þ 6R3 and R2 ? R2 þ 3R3, we, get, 3, 2, 3 2, 3, 3, 1, 1 0 0, 2, 7, 40 1 05 ¼ 6, 4 54 14 34 5A:, 0 0 1, 14 14 14, , EXAMPLE 4.31, Find the inverse of the matrix, 2, 3, 1, 1, 3, A¼4 1, 3 3 5;, 2 4 4, , Thus,, , 2, , 3, , 3, , 3, 2, 7, 34 5A:, 14, , 1, , 6, I3 ¼ 4 54 14, 14 14, , by using elementary transformations., Solution. Write A ¼ I3A, that is,, 2, 3 2, 1 0, 1, 1, 3, 4 1, 3 3 5 ¼ 4 0 1, 0 0, 2 4 4, , 1, 1, 2, , Hence, 3, , 0, 0 5A:, 1, , 2, , 3, , 6, A1 ¼ 4 54, 14, , 1, 14, 14, , 3, , 3, 2, 7, 34 5:, 14
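The reduction of [A | I] used in Examples 4.29 and 4.30 is straightforward to mechanize. The sketch below is an addition to the text; it performs partial pivoting, which the hand computations above do not need.

import numpy as np

def gauss_jordan_inverse(A):
    # row-reduce the augmented matrix [A | I] to [I | inverse of A]
    A = np.array(A, dtype=float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])
    for col in range(n):
        pivot = col + np.argmax(np.abs(aug[col:, col]))   # partial pivoting
        if np.isclose(aug[pivot, col], 0.0):
            raise ValueError("matrix is singular")
        aug[[col, pivot]] = aug[[pivot, col]]             # interchange rows
        aug[col] /= aug[col, col]                         # make the pivot equal to 1
        for row in range(n):
            if row != col:
                aug[row] -= aug[row, col]*aug[col]        # clear the rest of the column
    return aug[:, n:]

A = [[1, 3, 3],
     [1, 4, 3],
     [1, 3, 4]]                    # the matrix of Example 4.30
print(gauss_jordan_inverse(A))     # [[ 7. -3. -3.], [-1.  1.  0.], [-1.  0.  1.]]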
Matrices, , 4.13, , RANK OF A MATRIX, , Definition 4.68 A matrix is said to be of rank r if it has, at least one non-singular submatrix of order r but, has no non-singular submatrix of order more than r., Rank of a matrix A is denoted by (A)., A matrix is said to be of rank zero if and only if all, its elements are zero., EXAMPLE 4.32, Find the rank of the matrix, 2, 1 3, A¼4 2 4, 1 5, Solution. The matrix A is, (A) 3. We note that, , 1, , , jA1 j ¼ 2, , 1, , 1, , , jA2 j ¼ 2, , 1, , 1, , , jA3 j ¼ 2, , 1, , 3, , , jA4 j ¼ 4, , 5, , 4, 6, 4, , 3, 2, 2 5:, 6, , of order 3 4. Therefore,, 3, 4, 5, 3, 4, 5, 4, 6, 4, 4, 6, 4, , , 4 , , 6 ¼ 0;, , 4, , 2 , , 2 ¼ 0;, , 6, , 2 , , 2 ¼ 0;, , 6, , 2 , , 2 ¼ 0:, , 6, , Therefore, (A) 6¼ 3. But, we have submatrix B ¼, 1 3, ; whose determinant is equal to 2 6¼ 0., 2 4, Hence, by definition, (A) ¼ 2., EXAMPLE 4.33, Find the rank of the matrix, 2, 3, 2 1 1, A ¼ 4 0 3 2 5:, 2 4 3, Solution., , Since jAj ¼ 0; ðAÞ 2. But, we note that, 2 1, , , 0 3 ¼ 6 6¼ 0: Hence, (A) ¼ 2., , n, , 4.25, , Remark 4.4 The rank of a matrix is, of course,, uniquely defined when the elements are all explicitly given numbers, but not necessarily otherwise., For example, consider the matrix, 2, 3, pffiffiffi, 4, pffiffixffi 2 5, pffiffi0ffi, A¼4 2 5 4, 5 5:, pffiffixffi, 0, 5 4x, We have, jAj ¼ ð4 xÞ3 25ð4 xÞ ¼ 0; if x ¼ 9; 4 or 1:, When x ¼ 9, we have the, 2, 5ffiffiffi, p, A ¼ 42 5, 0, , singular matrix, 3, pffiffiffi, 2 5 p0ffiffiffi, 5ffiffiffi, 5 5;, p, 5 5, , which has non-singular submatrix, pffiffiffi, 5ffiffiffi, 5, p, :, 5 5, Thus, for x ¼ 9 the rank of A is 2. Similarly, the rank, is 2 when x ¼ 4 or x ¼ –1. For other values of, x; jAj 6¼ 0 and so the rank of A is 3., Theorem 4.18 Let A be an m n matrix. Then, (A) ¼ (AT)., Proof: Suppose (A) ¼ r. Then, there is at least one, square submatrix R of A of order r whose determinant is non-zero. If RT is transpose of R, then it is, submatrix of AT. Since, the value of a determinant, does not alter by interchanging the rows and columns, |RT| ¼|R| 6¼ 0. Therefore, (AT) r., Now if AT contains a square submatrix S of, order r þ 1, then corresponding to S, ST is a submatrix of A of order r þ 1. But (A) ¼ r. Therefore,, jSj ¼ jS T j ¼ 0: Thus, AT cannot contain an (r þ 1), rowed square submatrix with non-zero determinant., Thus, (AT) r. Hence, (AT) ¼ r., Theorem 4.19 The rank of a matrix does not alter, under elementary row (column) transformations., Proof: Let A ¼ [aij] be an m n matrix of rank r. We, prove the theorem only for elementary row transformation. The proof for column transformation is, similar.
4.26, , n, , Engineering Mathematics-I, , Case I. Interchange of a pair of row does not alter the, rank., Let s be the rank of the matrix B obtained from, the matrix A of rank r by elementary transformation, Rp Rq. Let B0 be any (r þ 1) rowed square submatrix of B. The (r þ 1) rows of B0 are also the rows, of some uniquely determined submatrix A0 of A., The identical rows of A0 and B0 may occur in the, same or in different relative positions. Since, the, interchange of two rows of a determinant changes, only the sign, we have, B0 j ¼ jA0 j or jB0 j ¼ jA0 j:, Since (A) ¼ r, every (r þ 1)-rowed minor of, A vanishes, that is, |A0| ¼0. Hence, | B0 | ¼ 0., Therefore, every (r þ 1) rowed minor of B vanishes., Hence, s ¼ (B) r ¼ (A). But A can also be, obtained from B by interchanging its rows., Therefore, r s. Hence r ¼ s., Case II. Multiplication of the elements of a row by a, non-zero number does not alter the rank., Let s be the rank of the matrix B obtained from the, matrix A of rank r by the elementary transformation, Rp ? kRp (k 6¼ 0) If B0 is any (r þ1)-rowed submatrix, of B, then there exists a uniquely determined submatrix, A0 of A such that | B0 | ¼ | A0 | (when pth row of B is one, of those rows which are deleted to obtain B0 from B) or, | B0 | ¼ k| A0 | (when pth row of B is retained while, obtaining B0 from B). Since (A) ¼ r, every (r þ 1)rowed submatrix has zero determinant, that is | A0 | ¼ 0., Hence, | B0 | ¼ 0. Thus every (r þ1)-rowed submatrix, of B vanishes. Hence (B) r, that is, s r. On, the other hand, A can be obtained from B by elementary transformation Rp ! 1k Rp : Therefore, we, have r s. Hence r ¼ s., Case III. Addition to the elements of a row, the, product by any number k of the corresponding, elements of any other row, does not alter the rank., Let s be the rank of the matrix B obtained from, the matrix A by elementary transformation Rp ?, Rp þ kRq. Let B0 be any (r þ1)-rowed square submatrix of B and A0 be the corresponding placed, submatrix of A. The transformation Rp ? Rp þ kRq, has changed only the pth row of the matrix A. We, know that the value of the determinant does not, change if we add to the elements of any row the, , corresponding elements of any other row multiplied, by some number. Therefore, if no row of the submatrix A0 is a part of the pth row or if two rows of A0, are parts of the pth and qth rows of A, then | B0 | ¼ |, A0 |. Since (A) ¼ r, we have | A0 | ¼ 0 and consequently | B0 |¼ 0., Again, if a row of A0 is a part of the pth row of, A, but no row is part of qth row, then, jB0 j ¼ jA0 j þ kjC0 j;, where C0 is an (r þ 1)-rowed square matrix which, can be obtained from A0 by replacing the elements, of A0 in the row which corresponds to the pth row of, A by the corresponding elements in the qth row of, A. All the (rþ1) rows of the matrix C0 are exactly, the same as the rows of some (rþ1)-rowed square, submatrix of A, though arranged in some different, order. Therefore, | C0 | ¼ ± times some (rþ1)-rowed, minor of A. Since the rank of A is r, every (rþ1)rowed minor of A is also zero, so that | A0 | ¼ 0, | C0, | ¼ 0, and so in turn | B0 | ¼ 0. Thus, every (rþ1)rowed square matrix of B has zero determinant., Hence, s r. Also, since, A can be obtained from B, by an elementary transformation, Rp ?Rp þ kRp, Therefore, as stated, r s. Hence r ¼ s., EXAMPLE 4.34, Find the rank of the matrix, 2, , 3, 2 1, 2, 6 5:, 4, 5, , 3, A ¼ 44, 7, Solution. 
We have
\[
A = \begin{bmatrix} 3 & 2 & -1 \\ 4 & 2 & 6 \\ 7 & 4 & 5 \end{bmatrix}
\sim \begin{bmatrix} -1 & 0 & -7 \\ 4 & 2 & 6 \\ 7 & 4 & 5 \end{bmatrix},
\quad R_1 \to R_1 - R_2
\]
Matrices, , 2, , 1, 44, 7, 2, 1, 4, 0, 0, 2, 1, 40, 0, 2, 1, 4, 0, 0, , 0, 2, 4, 0, 2, 4, 0, 1, 4, 0, 1, 0, , 3, 7, 6 5R1 ! R1, 5, 3, 7, R2 ! R2 4R1, 22 5, R3 ! R3 7R1, 44, 3, 7, 1, 11 5R2 ! R2, 2, 44, 3, 7, 11 5R3 ! R3 4R2 :, 0, , n, , 4.27, , matrix obtained from A by interchanging ith and jth, column. Ei (k) denotes the elementary matrix, obtained by multiplying the ith row or ith column of, a unit matrix by k., Similarly, Eij(m) denotes the elementary matrix, obtained by adding to the elements of the ith row, (column) of a unit matrix the m multiple of the, corresponding elements of the jth row (column)., We note that jEij j ¼ 1; jEi ðk Þj ¼ k 6¼ 0, j Eij ðmÞ j ¼ 1. It follows, therefore, that all the, elementary matrices are non-singular and, hence,, possess inverse., , Thus, |A| ¼ 0. Therefore (A) 6¼ 3. But, since, , , 1 0, , , 0 1 ¼ 1 6¼ 0;, , Theorem 4.20 Every elementary row (column), transformation of a matrix can be obtained by premultiplication (post-multiplication) with corresponding elementary matrix., , it follows that (A) ¼ 2., , Proof: Let B be the matrix obtained from an m n, matrix A by row transformation. If E is elementary, matrix obtained from Im by the same row transformation, it is sufficient to show that B ¼ EA., Let, 2, 3, R1, 6 R2 7, 6, 7, 7, M ¼6, 6 . . . 7; N ¼ ½C1 C2 . . . Cn :, 4...5, Rm, , 4.14, , ELEMENTARY MATRICES, , Definition 4.69 A matrix obtained from a unit matrix, by a single elementary transformation is called an, elementary matrix., For example, 2, 3, 0 0 1, 40 1 05, 1 0 0, is the elementary matrix obtained from I3 by subjecting it to C1 $ C3 : The matrix, 2, 3, 4 0 0, 40 1 05, 0 0 1, is the elementary matrix obtained from I3 by subjecting it to R1? 4R1, whereas the matrix, 2, 3, 1 2 0, 40 1 05, 0 0 1, is the elementary matrix obtained from I3 by subjecting it to R1 ? R1 þ 2 R1. The elementary matrix, obtained by interchanging the ith and jth row of a, unit matrix I is denoted by Eij. Since, we obtain, same matrix by interchanging ith and jth row or ith, and jth column, Eij will also denote the elementary, , Then, , 2, , R1 C1 R1 C2, 6 R2 C1 R2 C2, 6, MN ¼ 6, ..., 6 ..., 4 ..., ..., Rm C1 Rm C2, , ..., ..., ..., ..., ..., , 3, . . . R1 Cn, . . . R2 Cn 7, 7, ..., ...7, 7:, ..., ...5, . . . Rm Cn, , Clearly, a row transformation applied to M will be, the row transformation applied to MN. Hence, elementary row transformation of a product MN of two, matrices M and N can be obtained by subjecting, the prefactor M to the same elementary row, transformation., Similarly, every elementary column transformation of a product MN can be obtained by subjecting the post-factor N to the same elementary, column transformation.
4.28, , n, , Engineering Mathematics-I, , Now, A is an m n matrix and Im is an identity, matrix of order m. Therefore, A ¼ ImA Hence, by the, preceding arguments, if we apply a row transformation to A to get a matrix B, then this can be done, by applying the same row transformation to Im., Thus, if B is obtained from A by applying a row, transformation and E is obtained from Im by using, the same row transformation, then B ¼ EA., Similarly, if B is obtained from A by subjecting, it to a column transformation and E is obtained from, I by subjecting it to the same column transformation, then B ¼ AE., EXAMPLE 4.35, Let, , 2, , 1, A ¼ 42, 5, 2, 1, B ¼ 44, 5, , and, , 3, 1, 3, 3, 2, 3, , 4.15, , 5 3 2, , is in the row reduced echelon form and its rank is, 2 (the number of non-zero rows)., Theorem 4.21 Every non-zero m n matrix of rank r, can be reduced, by a sequence of elementary, transformation, to the form, Ir, 0, , 3, 4, 35, 2, 3, 4, 65, 2, , 0, 0, , (normal form or first canonical form), where Ir is, the identity matrix of order r., , Thus, B has been obtained from A by the row, transformation R2 ? 2 R2. Now, if, E is the elementary matrix obtained from I3 by R2 ? 2 R2, then, 2, 32, 3 2, 3, 1 0 0, 1 3 4, 1 3 4, 6, 76, 7 6, 7, EA ¼ 4 0 2 0 54 2 1 3 5 ¼ 4 4 2 6 5 ¼ B:, 0 0 1, , The rank of a matrix in row reduced echelon, form is equal to the number of non-zero rows of the, matrix. For example, the matrix,, 2, 3, 0 1 3 4, 40 0 1 25, 0 0 0 0, , 0 0 1, , ROW REDUCED ECHELON FORM AND NORMAL, FORM OF MATRICES, , Definition 4.70 A matrix is said to be in row-reduced, echelon form if, (i) The first non-zero entry in each non-zero, row is 1., (ii) The rows containing only zeros occur, below all the non-zero rows., (iii) The number of zeros before the first nonzero element in a row is less than the number of such zeros in the next row., , Proof: Let A ¼ ½aij mn be a matrix of rank r. Since, A is non-zero, it has at least one element different, from zero. Suppose aij 6¼ 0. Interchanging the first, and ith row and then first and jth column we obtain, a matrix B whose leading element is non-zero,, say k., Multiplying the elements of the first row of the, matrix B by 1k ; we obtain a matrix, 2, 3, 1, c12 c13 . . . . . . c1n, 6 c21 c22 c23 . . . . . . c2n 7, 6, 7, 7, C¼6, 6 . . . . . . . . . . . . . . . . . . 7;, 4 ... ... ... ... ... ... 5, cm1 cm2 cm3 . . . . . . cmn, whose leading element is equal to 1. Subtracting suitable multiples of the first column of C from the, remaining columns, and suitable multiples of first, row from the remaining rows, we obtain a matrix, 3, 2, 1, 0, 0 ... ... 0, 7, 6, 6 0 d22 d23 . . . . . . d2n 7, 7, 6, 7, D¼6, 6 . . . . . . . . . . . . . . . . . . 7;, 7, 6, 4... ... ... ... ... ... 5, 0 dm2 dm3 . . . . . . dmn
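Theorem 4.20 and Example 4.35 are easy to check numerically: pre-multiplying A by the elementary matrix obtained from I3 by R2 → 2R2 performs that same row operation on A. A minimal sketch, assuming NumPy (an illustration, not part of the text):

```python
import numpy as np

# Matrix A of Example 4.35
A = np.array([[1, 3, 4],
              [2, 1, 3],
              [5, 3, 2]])

E = np.eye(3)
E[1] *= 2          # elementary matrix for R2 -> 2*R2

B = E @ A          # pre-multiplication applies the row operation to A
print(B)
# [[1. 3. 4.]
#  [4. 2. 6.]
#  [5. 3. 2.]]   -> the matrix B of Example 4.35
```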
Matrices, , in which all elements of the first row and first, column except the leading element are equal to, zero. If, 2, , d22, 6 d32, 6, 6 ..., 6, 4 ..., dm2, , d23, d34, ..., ..., dm3, , ..., ..., ..., ..., ..., , ..., ..., ..., ..., ..., , we repeat the above process for this matrix and get a, matrix, 2, , 1, 6 0, 6, E¼6, 6 0, 4..., 0, , ..., ..., ..., ..., ..., , 0, 0, 1, 0, 0 e33, ... ..., 0 3m3, , 3, ... 0, ... 0 7, 7, . . . e3n 7, 7:, ... ... 5, . . . emn, , Continuing this process, we obtain a matrix, , The rank of N is k. Since, the matrix N has been, obtained from A by elementary transformations,, (N) ¼ (A), that is, k ¼ r. Hence, every non-zero, matrix can be reduced to the form, Ik, 0, , 0, 0, , by a finite chain of elementary transformations., Corollary 4.2 The rank of an m n matrix A is r if and, only if it can be reduced to the normal form by a, sequence of elementary transformations., Proof: If (A) ¼ r, then by the above theorem it can, be reduced to normal form by a sequence of elementary transformations., Conversely, let the matrix A has been reduced, Ir, 0, , PAQ ¼, , 0, 0, , Ir, 0, , 0, :, 0, , Proof: Since A is an m n matrix of rank r, it can be, I 0, reduced to normal form r, using a sequence, 0 0, of elementary transformations. Further, since the, elementary row (column) transformations are, equivalent to pre-(post) multiplication by the corresponding elementary matrices, we have, Ps Ps1 . . . P1 AQ1 Q2 . . . Qt ¼, , Ir, 0, , 0, :, 0, , Now, since, each elementary matrix is non-singular and, the product of non-singular matrices is again non-singular, it follows that Ps Ps–1…P1 and Q1 Q2… Qt are, non-singular matrices, say P and Q. Hence, , Ik 0, :, 0 0, , N¼, , 4.29, , Corollary 4.3 If A is an m n matrix of rank r, there, exist non-singular matrices P and Q such that, , 3, , . . . d2n, . . . d3n 7, 7, ... ... 7, 7 6¼ 0;, ... ... 5, . . . dmn, , n, , PAQ ¼, , Ir, 0, , 0, ;, 0, , where P and Q are non-singular matrices., , 4.16, , EQUIVALENCE OF MATRICES, , Definition 4.71 Two matrices whose elements are real, or complex numbers are said to be equivalent if and, only if each can be transformed into the other by, means of elementary transformations., If the matrix A is equivalent to the matrix B,, then we write A B, The relation of equivalence ‘’ in the set of all, m n matrices is an equivalence relation, that is, , is reflexive, symmetric, and transitive., Theorem 4.22 If A and B are equivalent matrices, then, (A) ¼ (B)., , 0, is r and we know, 0, , Proof: If AB, then B can be obtained from A by a, finite number of elementary transformations. But, elementary transformation do not alter the rank of a, matrix. Hence (A) ¼ (B)., , that rank of a matrix is not altered by elementary, transformation. Therefore, rank of A is also r., , Theorem 4.23 If two matrices A and B have, the same size and the same rank, they are equivalent., , to normal form, , mations. Now the rank of, , by elementary transforIr, 0
4.30, , n, , Engineering Mathematics-I, , Proof: Let A and B be two m n matrices of the, same rank r. Then they can be reduced to, normal form by elementary transformations., Therefore,, I 0, Ir 0, A r, and B , 0 0, 0 0, or, by symmetry of the relation of equivalence of, matrices, I 0, I 0, A r, and r, B:, 0 0, 0 0, Using transitivity of the relation, ‘’ we have A B., Theorem 4.24 If A and B are equivalent matrices,, there exist non-singular matrices P and Q such that, B ¼ PAQ., Proof: If A B, then B can be obtained from A by a, finite number of elementary transformations of A., But elementary row (column) transformations are, equivalent to pre (post) multiplication by the corresponding elementary matrices. Therefore, there, are elementary matrices P1, P2,…,Ps Q1, Q2,…,Qt, such that, Ps Ps1 . . . P1 A Q1 Q2 . . . Qt ¼ B:, Since, each elementary matrix is non-singular and, the product of non-singular matrices is non-singular,, we have,, PAQ ¼ B;, where P ¼ Ps Ps–1…P1 and Q ¼ Q1 Q2…Qt are nonsingular matrices., Theorem 4.25 Any non-singular matrix of explicitly, given numbers may be factored into the product of, elementary matrices., Proof: Any non-singular matrix A of order n and the, identity matrix In have the same order and same rank., Hence A In. Therefore, by the Theorem 4.41, there, exist elementary matrices Pj and Qj such that, A ¼ Ps Ps1 . . . P1 In Q1 Q2 . . . Qt :, EXAMPLE 4.36, Reduce the matrix, 2, , 3, 1 1 2 3, 64, 1 0, 27, 7, A¼6, 40, 3 0, 45, 0, 1 0, 2, to normal form and, hence, find its rank., , Solution. We observe that, 3, 2, 1 0, 0, 0, 6 4 5 8 14 7 C2 ! C2 þ C1, 7 C3 ! C3 2C1, A6, 40 3, 0, 45, C4 ! C4 þ 3C1, 0 1, 0, 2, 3, 2, 1 0, 0, 0, 6 0 5 8 14 7, 7 R2 ! R2 4R1, 6, 40 3, 0, 45, 0, 1, 60, 6, 40, 0, 2, 1, 60, 6, 40, 0, 2, 1, 60, 6, 40, 0, 2, 1, 60, 6, 40, 0, 2, 1, 60, 6, 40, 0, 2, 1, 60, 6, 40, 0, ¼ I4 :, 2, , 1, 0, 1, 3, 5, 0, 1, 3, 5, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, , 0, 2, 3, 0, 0, 0, 27, 7 R 2 $ R4, 0, 45, 8 14, 3, 0, 0, 0, 07, 7 C4 ! C4 2C2, 0 2 5, 8, 4, 3, 0, 0, R3 ! R3 3R2, 0, 07, 7 R4 ! R4 5R2, 0 2 5, 8, 4, 3, 0, 0, 0, 07, 7 C3 $ C4, 2, 05, 4 8, 3, 0 0, 1, 0 07, 7 C3 ! 2 C3, 5, 1 0 C4 ! 18 C4, 2 1, 3, 0 0, 0 07, 7 R4 ! R4 þ 2R3, 1 05, 0 1, , Hence, (A) ¼ 4., EXAMPLE 4.37, Reduce the matrix, 2, , 3, 44, 7, , 3, 2 1, 2 6 5, 4 5, , to the normal form and, hence, find its rank.
Matrices, , Solution. We note that, 2, 3, 3 2 1, 6, 7, A ¼ 44 2 6 5, 7 4 5, 2, 3, 1 0 7, 6, 7 R1 ! R1 R2, 4 4 2 6 5, 7, , 2, , 4, , 1 0, 6, 44 2, 2, , 7 4, , 2, , 0 4, , 5, , 3, 7, 7 R2 ! R2 4R1, 22 5, R3 ! R3 7R1, 44, 3, 7, 1, 7, 11 5R2 ! R2, 2, 44, 3, 7, 7, 11 5R3 ! R3 4R2, , 1 0, 6, 44 2, 1, 6, 40, 0, 2, 1, 6, 40, 2, , 0, 1, 4, 0, 1, , 0 0, , 0, , 1 0, , 0, , 6, 40 1, 2, , 5, 3, 7, 7, 6 5R1 ! R1, , 3, , 7, 11 5C3 ! C3 7C1, , 0 0, , 0, 3, , 1 0 0, 6, 7, 4 0 1 0 5C3 ! C3 þ 11C2, 0 0 0, I2 0, , :, 0 0, Hence (A) ¼ 2., EXAMPLE 4.38, For the matrix, , 3, 1, 1, 1, A ¼ 4 1 1 1 5;, 3, 1, 1, find the non-singular matrices P and Q such that, PAQ is in the normal form. Hence, find the rank of, the matrix A., Solution. We write, A ¼ I3 AI3 :, , 2, , Thus,, 2, 3 2, 1, 1, 1, 1, 4 1 1 1 5 ¼ 4 0, 3, 1, 1, 0, , 0, 1, 0, , n, , 3 2, 0, 1 0, 0 5 A4 0 1, 1, 0 0, , 4.31, 3, 0, 0 5:, 1, , We shall apply elementary transformations on A, until it is reduced to normal form, keeping in mind, that each row transformation will also be applied to, the pre-factor I3 of the product on the right and each, column transformation will also be applied to the, post-factor I3 of the product on the right., Performing R2 ? R2 – R1, R3 ? R3 – 3R1, we, get,, 2, 3 2, 3 2, 3, 1, 1, 1, 1 0 0, 1 0 0, 4 0 2 2 5 ¼ 4 1 1 0 5A4 0 1 0 5, 0 2 2, 3 0 1, 0 0 1, Performing C2 ? C2 – C1, C3 ? C3 – C1, we get, 2, 3 2, 3 2, 3, 1, 0, 0, 1 0 0, 1 1 1, 4 0 2 2 5 ¼ 4 1 1 0 5A4 0, 1, 0 5:, 0 2 2, 3 0 1, 0, 0, 1, Performing R2 ! 12 R2 ;, 2, 3 2, 1, 0, 0, 1, 0, 40, 1, 1 5 ¼ 4 12 12, 3, 0, 0 2 2, , we get, 3 2, 3, 0, 1 1 1, 0 5 A4 0, 1, 0 5:, 1, 0, 0, 1, , Performing R3 ? R3 þ 2R2, we get, 3 2, 3 2, 1 1, 1 0 0, 1, 0 0, 4 0 1 1 5 ¼ 4 1 1 0 5 A4 0, 1, 2, 2, 0 0 0, 2 1 1, 0, 0, 2, , Last, performing C3 ?, 3 2, 1 0 0, 1, 0, 40 1 05 ¼ 4 1 1, 2, 2, 2 1, 0 0 0, 2, , or, , where, 2, , I2, 0, , 0, 0, , C3 – C2,we have, 3 2, 3, 1 1, 0, 0, 0 5A4 0, 1 1 5, 1, 0, 0, 1, , ¼ PAQ;, , 3, 1 1, 0, P ¼ 4 12 12, Q ¼ 40, 1 1 5:, 2 1, 0, 0, 1, I2 0, ; we have (A) ¼ 2., Since A is equivalent to, 0 0, 1, , 0, , 3, 0, 05 ;, 1, , 3, 1, 0 5:, 1, , 2
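The factorization found in Example 4.38 can be verified by direct multiplication. In the sketch below (assuming NumPy) the signs of the entries are reconstructed from the row and column operations shown above, since the extracted text drops minus signs; treat them as an assumption.

```python
import numpy as np

# Example 4.38 (signs reconstructed from the operations shown -- assumption)
A = np.array([[1,  1,  1],
              [1, -1, -1],
              [3,  1,  1]])
P = np.array([[ 1.0,  0.0, 0.0],
              [ 0.5, -0.5, 0.0],
              [-2.0, -1.0, 1.0]])
Q = np.array([[1, -1,  0],
              [0,  1, -1],
              [0,  0,  1]])

print(P @ A @ Q)
# [[1. 0. 0.]
#  [0. 1. 0.]
#  [0. 0. 0.]]   -> the normal form with I2 in the top-left corner
print(np.linalg.matrix_rank(A))   # 2, in agreement with rho(A) = 2
```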
4.32, , n, , Engineering Mathematics-I, , EXAMPLE 4.39, For the matrix, , 2, , 1, A ¼ 41, 3, , 1, 1, 1, , where, 2, , 3, , 3, 0, 0 5:, 1, , As, in the above example, we shall reduce A to normal, form subjecting it to elementary transformations., Performing R2?R2 – R1, R3 – 3R1, we have, 2, 3 2, 3 2, 3, 1 1, 1, 1 0 0, 1 0 0, 41, 2, 0 5 ¼ 4 1 1 0 5A4 0 1 0 5:, 0, 4 2, 3 0 1, 0 0 1, Performing c2 ? c2 þ c1, c3 ?c3 – c1 we have, 3 2, 3 2, 3, 1 0, 0, 1 0 0, 1 1 1, 40 2, 0 5 ¼ 4 1 1 0 5A4 0 1, 0 5:, 0 4 2, 3 0 1, 0 0, 1, Performing R2 ! 12 R2 ; we have, 3 2, 0, 0, 1, 1, 0 5 ¼ 4 12, 3, 4 2, , so that, , 3 2, 0 0, 1, 1, 5 4, 2 0 A 0, 0 1, 0, , 3 2, 3 2, 1, 0, 1, 0 0, 1, 5 A4 0, 0, 0 5 ¼ 4 12, 2, 0, 1 2 1, 1, , Hence,, , I3 ¼ PAQ;, , 05, 12, , Ir 0 1, AB ¼ C 1 A, D B, 0 0, , , IrA 0 1 , ¼ C 1, D B :, 0 0, , Since C–1 is non-singular, AB has same rank as, , 1, 1, 0, , 3, 1, 0 5:, 1, , I, 0, ðD1 BÞ:But rA, 0, 0, , 0, has zeros in the last, 0, I, 0, ðD1 BÞ also has, m rA rows and, hence, rA, 0 0, only zeros in the last m rA rows. Hence, the rank, IrA 0, of, ðD1 BÞ is at most rA. It follows,, 0 0, therefore, that, ðABÞ rA, Also, ðABÞ ¼, , Performing C3 ! 12 C3 ; we get, 1 0, 40 1, 0 0, , 3, , Proof: Let A be m n and B be n p matrices with, rank rA and rB, respectively. Then, by Corollary 4.9,, there exist non-singular matrices C and D of order, m and n, respectively, such that, I, 0, CAD ¼ rA, ;, 0 0, 0, I, denotes the normal form of A. Thus, where rA, 0 0, I, 0 1, A ¼ C 1 rA, D, 0 0, , IrA, 0, , Performing R3 ? R3 – 4R2, we have, 2, 3 2, 3 2, 3, 1 0, 0, 1, 0 0, 1 1 1, 1, 5 4, 40 1, 0 5 ¼ 4 12, 05, 2 0 A 0 1, 0 0 2, 1 2 1, 0 0, 1, 2, , 1, 2, , Theorem 4.26 The rank of the product of two matrices, cannot exceed the rank of either matrix., , 2, , 1, 40, 0, , 2, 3, 1 1, 0, 0 5 and Q ¼ 4 0 1, 0 0, 1, , Since A I3, (A) ¼ (I3) ¼ 3., , find non-singular matrices P and Q such that PAQ is, in the normal form. Hence find the rank of A., , 2, , 0, , 1, P ¼ 4 12, 2, 1 2, , 1, 1 5;, 1, , Solution. Write A ¼ I3 A I3 that is,, 2, 3 2, 1 0 0, 1 0, A ¼ 4 0 1 0 5A4 0 1, 0 0 1, 0 0, , 1, , , ðABÞT, , ¼ ½BT AT , 3, 1, 1, 2, 1, 0 5:, 0 12, , But, as proved earlier,, ðBT AT Þ ðBT Þ ¼ ðBÞ ¼ rB :, Hence,, , ðABÞ rB :, , This completes the proof of the theorem.
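Theorem 4.26, together with the result of Example 4.40 below, is easy to probe numerically: the rank of AB never exceeds the rank of either factor, and it equals ρ(B) whenever A is non-singular. A minimal sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

A = rng.integers(-3, 4, size=(4, 4)).astype(float)    # generically non-singular
B = rng.integers(-3, 4, size=(4, 2)) @ rng.integers(-3, 4, size=(2, 5))
B = B.astype(float)                                    # rank(B) <= 2 by construction

rA, rB, rAB = (np.linalg.matrix_rank(M) for M in (A, B, A @ B))
print(rA, rB, rAB)
assert rAB <= min(rA, rB)          # Theorem 4.26
if rA == A.shape[0]:               # A non-singular
    assert rAB == rB               # the result of Example 4.40
```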
Matrices, , EXAMPLE 4.40, Let A be any non-singular matrix and B a matrix, such that AB exists. Show that, ðABÞ ¼ ðBÞ:, Solution. Let C¼AB. Since A is non-singular, therefore B ¼ A–1C. Since rank of the product of two, matrices does not exceed the rank of either matrix,, we have,, ðCÞ ¼ ðABÞ ðBÞ, and, ðBÞ ¼ ðA1 CÞ ðCÞ:, Hence, ðCÞ ¼ ðABÞ ðBÞ ðCÞ;, which yields, ðBÞ ¼ ðCÞ ¼ ðABÞ:, , 4.17, , ROW AND COLUMN EQUIVALENCE, OF MATRICES, , Definition 4.72 A matrix A is said to be row (column), equivalent to B if B is obtainable from A by a finite, number of elementary row (column) transformations of A., Row equivalence of the matrices A and B is, R, denoted by A B and column equivalence of A and, C, B is denoted by A B., Theorem 4.27 Let A be an m n matrix of rank r., Then there exists a non-singular matrix P such that, G, PA ¼, ;, 0, where G is an r n matrix of rank r and 0 is (m r), n matrix., Proof: Since A is an m n matrix of rank r, therefore, there exist non-singular matrices P and Q such that, I 0, PAQ ¼ r, :, 0 0, But every non-singular matrix can be expressed as, product of elementary matrices. So,, Q ¼ Q1 Q2 . . . Qt ;, where Q1, Q2…Qt are all elementary matrices., Thus,, Ir 0, :, PAQ1 Q2 . . . Qt ¼, 0 0, , n, , 4.33, , Since, elementary column transformation of a, matrix is equivalent to post-multiplication with the, corresponding elementary matrix, we post-multiply, the left hand side of the above expression by the ele1, 1, 1, mentary matrices Q1, t ; Qt1 ; . . . ; Q2 ; Q1 successively and effect the corresponding column, transformations in the right hand side, we get a, relation of the form, G, PA ¼, :, 0, Since elementary transformations do not alter the, rank, G, ðPAÞ ¼ ðAÞ ¼ r and so, ¼ r;, 0, which implies that (G) ¼ r since G has r rows and, G, last m r rows of, consist of zero elements only., 0, Theorem 4.28 Every non-singular matrix is row, equivalent to a unit matrix., Proof: Suppose that the matrix A is of order 1. Then, A ¼[a11] which is clearly row equivalent to a unit, matrix. We shall prove our result by induction on, the order of the matrix. Let A be of order n. Since, the result is true for non-singular matrix of order 1,, we assume that the result is true for all matrices of, order n 1., Let A ¼ [aij] be an n n non-singular matrix., The first column of the matrix A has at least one, non-zero element, otherwise | A | ¼ 0, which contradicts the fact that A is non-singular. Let a11 ¼ k 6¼ 0., By interchanging (if necessary) the pth row with the, first row, we obtain a matrix B whose leading, coefficient is k 6¼ 0. Multiplying the elements of the, first row by 1k ; we get the matrix., 2, 3, 1 c12 c13 . . . . . . c1n, 6 c21 c22 c23 . . . . . . c2n 7, 6, 7, 7, C¼6, 6 . . . . . . . . . . . . . . . . . . 7:, 4 ... ... ... ... ... ...5, cn1 cn2 cn3 . . . . . . cnn, Using elementary row transformation, we get, 3, 2, 1 d12 d13 . . . d1n, 7, 6 0, 7, 6, 7;, 6, D ¼ 6..., A1, 7, 5, 4..., 0
4.34, , n, , Engineering Mathematics-I, , where A1 is (n 1) (n 1) matrix. The matrix A1, is non-singular otherwise | A1 |¼ 0 and so | D | ¼ 0., Since A D, this will imply | A | ¼ 0 contradicting, the fact that A is non-singular. By induction, hypothesis, A1 can be transformed to In–1 by elementary row transformations. Thus, we get a matrix, M such that, 3, 2, 1 d12 d13 . . . . . . d1n, 6 0, 1, 0 ... ..., 07, 7, 6, ., ., ., ., ., ., ., ., ., ., ., ., ., ., ., ., ., .7, M ¼6, 7:, 6, 4... ... ... ... ... ...5, 0 ... ... ... ..., 1, Further, use of elementary row transformation, reduces M to the matrix., 3, 2, 1, 0, 0 ... ..., 0, 6 0, 1, 0 ... ..., 07, 7, 6, 6, In ¼ 6 . . . . . . . . . . . . . . . . . . 7, 7;, 4... ... ... ... ... ...5, 0 ..., 0 ... ..., 1, which completes the proof of the theorem., Corollary 4.4 Let A be a on-singular matrix of order n., Then there exists elementary matrices E1, E2, …, Et, such that, Et Et1 . . . E2 E1 A ¼ In :, Proof: By the Theorem 4.28, non-singular matrix A, can be reduced to In by finite number of elementary, row transformations. Since elementary row transformation is equivalent to pre-multiplication by the, elementary matrix, therefore, there exists elementary matrices E1, E2, …, Et such that Et, Et – 1 … E2, E1 A ¼ I n., Corollary 4.5 Every non-singular matrix is a product, of elementary matrices., Proof: Let A be a non-singular matrix. Then, by, Corollary 4.4, there exist elementary matrices E1,, E2,…, Et such that, Et Et1 . . . E2 E1 A ¼ In :, Pre-multiplying both sides by ðEt Et1 . . . E2 E1 Þ1 ;, we get, A ¼ E11 E21 . . . Et1:, Since, inverse of an elementary matrix is also an elementary matrix, it follows that non-singular matrix, , can be expressed as a product of elementary, matrices., Corollary 4.6 The rank of a matrix does not alter by, pre-multiplication or post-multiplication with a, non-singular matrix., Proof: Every non-singular matrix can be expressed, as a product of elementary matrices. Also we know, that elementary row (column) transformations are, equivalent to pre-(post) multiplication with the, corresponding elementary matrices. But elementary, transformations do not alter the rank of a matrix., Hence, the rank of a matrix remains unchanged by, pre-multiplication or post-multiplication with a, non-singular matrix., , 4.18, , ROW RANK AND COLUMN RANK, OF A MATRIX, , Definition 4.73 Let A be any m n matrix. Then the, maximum number of linearly independent rows, (columns) of A is called the row rank (column rank), of A., The following theorem (stated without proof), shall be used in the sequel., Theorem 4.29 The row rank, the column rank and the, rank of a matrix are equal., , 4.19, Let, , SOLUTION OF SYSTEM OF LINEAR EQUATIONS, , 9, a11 x1 þ a12 x2 þ . . . þ a1n xn ¼ b1 >, >, >, a21 x1 þ a22 x2 þ . . . þ a2n xn ¼ b2 >, =, ... ... ... ... ..., ð1Þ, >, >, >, ... ... ... ... ..., >, ;, am1 x1 þ am2 x2 þ . . . þ amn xn ¼ bm, be a system of m linear equations in n unknown x1,, x2,…, xn The matrix form of this system is, AX ¼ B;, where, 3, 2, a11 a12 . . . . . . a1n, 6 a21 a22 . . . . . . a2n 7, 7, 6, 6 ... ... ... ... ... 7, 7, 6, A ¼6, 7, 6 ... ... ... ... ... 7, 4 ... ... ... ... ... 5, am1 am2 . . . . . . amn
Matrices, , n, , 4.35, , is called coefficient matrix of the system,, 2 3, x1, 6 x2 7, 6 7, X ¼ 6...7, 4...5, xn, is the column matrix of unknowns, and, 2 3, b1, 6 b2 7, 6 7, 6...7, B¼6 7, 6...7, 4...5, bm, is column matrix of known numbers or the matrix of, constants. We call the system (1) as the system of, non-homogenous equations., Any set of values of x1, x2,…, xn from a scalar, field which simultaneously satisfy (1) is called a, solution, over that field, of the system. When such a, system has one or more solutions, it is said to be, consistent, otherwise it is called inconsistent., , Solution. The matrix form of the system is AX ¼ B,, where2, 3, 2 3, 2, 3, 1, 2 3, x, 4, A ¼ 42, 3, 2 5; X ¼ 4 y 5and B ¼ 4 2 5:, 3 3 4, z, 11, We note that, jAj ¼ 1ð6Þ 2ð14Þ 3ð15Þ ¼ 67 6¼ 0:, Thus A is non-singular. Hence the required solution, is given by, X ¼ A1 B:, ð2Þ, The cofactor matrix of A is, 2, 3, 6 14 15, ½Aij ¼ 4 17, 5, 95, 13 8, 1, and so, 2, 3, 6 17 13, adj A ¼ ½Aij T ¼ 4 14 5 8 5:, 15 9 1, Hence, 2, 3, 6 17 13, 1, 1, adj A ¼ 4 14, 5 8 5:, A1 ¼, jAj, 67, 15, 9 1, , 4.20, , Substituting A–1 in (2), we get, 3, 32, 2, 2 3, 4, 6 17 13, x, 1 6, 7, 76, 6 7, 4 y 5 ¼ 4 14 5 8 54 2 5, 67, 11, 15 9 1, z, 3, 3 2, 2, 3, 201, 1 6, 7, 7 6, ¼ 4 134 5 ¼ 4 2 5:, 67, 1, 67, , SOLUTION OF NON-HOMOGENOUS LINEAR, SYSTEM OF EQUATIONS, , (A) Matrix Inversion Method., Consider the non-homogeneous system of linear, equations AX ¼ B, where A is non-singular n n, matrix. Since A is non-singular, A–1 exists. Premultiplication of AX ¼ B by A–1 yields, A1 ðAX Þ ¼ A1 B, or, , , , 1, , , , Hence x ¼ 3, y ¼ 2, and z ¼ 1., 1, , A A X ¼A B, , or, IX ¼ A1 B, or, X ¼ A1 B:, Thus if A is non-singular, then the given system of, equation can be solved using inverse of A. This, method is called the Matrix Inversion Method., EXAMPLE 4.41, Solve, , x þ 2y 3z ¼ 4, 2x þ 3y þ 2z ¼ 2, 3x 3y 4z ¼ 11, , by Matrix Inversion Method., , B. Cramer’s Rule., If |A| 6¼ 0, then AX ¼ B has exactly one solution xj ¼, jAj j, jAj ; j ¼ 1; 2; . . . ; n; where Aj is the matrix obtained, from A by replacing the jth column of A by the, column of b’s., Consider the matrix form AX ¼ B of the system of, linear equations. Again Suppose that A is non-singular., Then, pre-multiplication of AX ¼ B by A–1 yields, 2, 32 3, b1, A11 A21 . . . . . . An1, 6 A12 A22 . . . . . . An2 76 b2 7, 76 7, 1 6, 6 . . . . . . . . . . . . . . . 76 . . . 7, X ¼ A1 B ¼, 76 7, jAj 6, 4 . . . . . . . . . . . . . . . 54 . . . 5, A1n A2n . . . . . . Ann, bn
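Both methods of this section can be checked on the system of Example 4.41. In the sketch below (assuming NumPy) the signs of the right-hand sides are reconstructed from the adjugate computed above, since the extracted text drops minus signs; treat them as an assumption.

```python
import numpy as np

# System of Example 4.41:  x + 2y - 3z = -4,  2x + 3y + 2z = 2,  3x - 3y - 4z = 11
# (signs of the right-hand sides reconstructed -- assumption)
A = np.array([[1,  2, -3],
              [2,  3,  2],
              [3, -3, -4]], dtype=float)
b = np.array([-4, 2, 11], dtype=float)

# (A) Matrix Inversion Method: X = A^{-1} B
print(np.linalg.inv(A) @ b)          # [ 3. -2.  1.]

# (B) Cramer's rule: x_j = |A_j| / |A|, where A_j is A with column j replaced by b
detA = np.linalg.det(A)              # 67, as found above
x = []
for j in range(3):
    Aj = A.copy()
    Aj[:, j] = b
    x.append(np.linalg.det(Aj) / detA)
print(x)                             # [3.0, -2.0, 1.0] up to rounding
```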
Matrices, , also of rank r. Hence, rank of A and the augmented, matrix [A:B] is the same., Conversely, suppose that the matrices A and [A:B], are of the same rank r. Then the maximum number of, linearly independent columns of the matrix [A:B] is r., But the first r columns C1, C2,…, Cr of the matrix, [A: B] had already formed a linearly independent, set. Therefore, the column B should be expressed as, a linear combination of C1, C2,…, Cr Hence, there, are scalars k1, k2,…, kr, such that, k1 C1 þ k2 C2 þ . . . þ kr Cr ¼ B, or, k1 C1 þ k2 C2 þ . . . þ kr Cr, þ 0Crþ1 0Crþ2 þ . . . þ 0Cn ¼ B, ð5Þ, Comparing (3) and (5), we get, x1 ¼ k; x2 ¼ k2 ; . . . ; xr ¼ kr ; xrþ1 ¼ 0;, xrþ2 ¼ 0; . . . ; xr ¼ 0, as the solution of the equation AX ¼ B. Hence, the, given system of linear equations is consistent. This, completes the proof of the theorem., If the system of linear equations is consistent,, then the following cases arises:, Case I. m n, that is, number of equations is more, than the number of unknowns. In such a case, (i) if (A) ¼ ([A:B]) ¼ n, then the system of, equations has a unique solution, (ii) if (A) ¼ ([A:B]) ¼ r < n then the (n r), unknowns are assigned arbitrary values and, the remaining r unknowns can be determined in terms of these (n –r) unknowns., Case II. m < n, that is, the number of equations is less, than the number of unknowns. In such a case, (i) if (A) ¼ ([A:B]) ¼ m, then n m unknowns can be assigned arbitrary values, and the values of the remaining m, unknowns can be found in terms of these, n m unknowns, which have already been, assigned values, (ii) if (A) ¼ ([A:B]) ¼ r < m, then the (n r), unknowns can be assigned arbitrary values, and the values of remaining r unknowns can, be found in terms of these (n r) unknowns,, which have already been assigned values., , n, , 4.37, , EXAMPLE 4.43, Show that the system, x þ y þ z ¼ 3, 3x þ y 2z ¼ 2, 2x þ 4y þ 7z ¼ 7, of linear equations is not consistent., Solution. The matrix form of the system is, AX ¼ B, and the augmented matrix is, 2, 3, 1 1, 1 3, 6, 7, ½A:B ¼ 4 3 1 2 2 5, 2, , 2, , 1, 1, 6, 4 0 2, 0, 2, 2, 1, 1, 6, 4 0 2, 0, , 0, , 4, , 7, , 1, 5, 5, 1, 5, 0, , 3, , 7, , 3, 7 R2 ! R2 3R1, 75, R3 ! R3 2R1, 13, 3, 3, 7 R3 ! R3 þ R2, 75, 20, , Thus the number of non-zero rows in Echelon form, of the matrix [A: B] is 3. But, 2, 3, 1, 1, 1, A 4 0 2 5 5, 0, 0, 0, and so (A) ¼ 2., Thus,, ðAÞ 6¼ ð½A:BÞ:, Hence, the given system of equation is inconsistent., EXAMPLE 4.44, Show that the equations, x þ 2y z ¼ 3, 3x y þ 2z ¼ 1, 2x 2y þ 3z ¼ 2, x y þ z ¼ 1, are consistent. Also solve them., Solution. In matrix form, we have, 3, 2, 3, 2, 3, 1, 2 1 2 3, x, 7, 6, 6 3 1, 27, 74 y 5 ¼ 6 1 7:, AX ¼ 6, 4, 5, 4 2 2, 25, 3, z, 1, 1 1, 1
4.38, , n, , Engineering Mathematics-I, , The augmented matrix is, 3, 2, 1, 2 1 3, 6 3 1, 2 17, 7, 6, ½A:B ¼ 6, 7, 4 2 2, 3 25, 1 1, 1 1, 3, 2, 3, 1, 2 1, R2 ! R2 3R1, 6 0 7, 5 8 7, 7, 6, 6, 7 R3 ! R3 2R1, 4 0 6, 5 4 5, R4 ! R4 R 1, 0 3, 2 4, 3, 2, 3, 1, 2 1, 6 0 1, 0 4 7, 7, 6, 6, 7 R 2 ! R2 R3, 4 0 6, 5 4 5, , 60, 6, 6, 40, , 3, 2, , 2, 1, , 1, 0, , 0, 5, , 0, 2, 1, 60, 6, 6, 40, , 0, 2, 1, , 2, 1, 0, , 0, , 1, , 0, 1, 60, 6, 6, 40, , 0, 2, 1, , 1, 1, 0, , 2, , 2, , 0, 1, , 4, 3, 3, R3 ! R3 6R2, 4 7, 7, 7 R4 ! R4 3R2, 20 5, 8, 3, 3, R3 ! 15 R3, 4 7, 7, 7 R4 ! 12 R4, 45, 4, 3, 3, 4 7, 7, 7 R4 ! R4 R3, 45, , 0, 1, 0, 0, 0, 0, The number of non-zero rows in the echelon form, is 3. Hence ([A:B]) ¼ 3. Also, 2, 3, 1, 2 1, 6 0 1, 07, 7:, A6, 40, 0, 15, 0, 0, 0, Clearly, (A) ¼ 3. Thus, (A) ¼ ([A, B]) and so the, given system is consistent. Further, r ¼ n ¼ 3., Therefore, the given system of equation has a, unique solution. Rewriting the equation from the, augmented matrix, we have, x þ 2y z ¼ 3, y ¼ 4, z¼4, and so x ¼ 1, y ¼ 4 and z ¼ 4 is the required solution., , EXAMPLE 4.45, For what values of l and m, the system of equations, xþyþz¼6, x þ 2y þ 3z ¼ 10, x þ 2y þ lz ¼ m, has (i) no solution (ii) a unique solution, and (iii) an, infinite number of solutions., Solution. The matrix form of the given system is, 32 3, 2, 1 1 1, x, 6, 76 7, AX ¼ 4 1 2 3 54 y 5, 1 2 l, z, 2 3, 6, 6 7, ¼ 4 10 5, m, ¼ B:, Therefore, the augmented matrix is, 2, 3, 1 1 1, 6, 6, 7, ½A:B ¼ 4 1 2 3 10 5, l, , m, , 1, , 2, , 1, , 1, , 1, , 6, , 6, 40, , 1, , 2, , 4, , 0, , 1, , l1, , 1, , 1, , 1, , 6, , 3, , 6, 40, , 1, , 2, , 4, , 7, 5 R3 ! R 3 R2 :, , 0, , 0, , l3, , 2, , 2, , m6, , 3, 7 R 2 ! R2 R1, 5, R3 ! R3 R1, , m 10, , If l 6¼ 3, then (A) ¼ 3 and ([A:B]) ¼ 3. Hence, the, given system of equations is consistent. Since (A), is equal to the number of unknowns, therefore, the, given system of equations possesses a unique, solution for any value of m., If l ¼ 3 and m 6¼ 10, then (A) ¼ 2 and, ([A:B]) ¼ 3. Therefore, the given system of equations is inconsistent and so has no solution., If l ¼ 3 and m ¼ 10 then (A) ¼ ([A:B]) ¼ 2., Thus, the given system of equation is consistent., Further, (A) is less than the number of unknowns,, therefore, in this case the given system of equations, possesses an infinite number of solutions.
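The rank criterion used in Examples 4.43–4.45 can be applied mechanically. A minimal sketch, assuming NumPy, for the system of Example 4.43 (signs reconstructed from the row reduction shown above; treat them as an assumption):

```python
import numpy as np

# Example 4.43:  x + y + z = -3,  3x + y - 2z = -2,  2x + 4y + 7z = 7
# (signs reconstructed -- assumption)
A = np.array([[1, 1,  1],
              [3, 1, -2],
              [2, 4,  7]], dtype=float)
b = np.array([-3, -2, 7], dtype=float)

aug = np.hstack([A, b.reshape(-1, 1)])      # the augmented matrix [A : B]
print(np.linalg.matrix_rank(A))     # 2
print(np.linalg.matrix_rank(aug))   # 3  -> ranks differ, so the system is inconsistent
```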
Matrices, , EXAMPLE 4.46, Determine the value of l for which the system of, equations, x1 þ x2 þ x3 ¼ 2, x1 þ 2x2 þ x3 ¼ 2, , (i) has no solution, (ii) has a unique solution., , 2, , 1, , given system is, 32 3, x1, 76 7, 54 x2 5, , 1 l5, 3, , x3, , 2, 6, 7, ¼ 4 2 5, l, , Therefore, the augmented matrix is, 2, 3, 1 1, 1, 2, 6, 7, ½A:B ¼ 4 1 2, 1, 2 5, 1, , 1, , l5, , 1, , 1, , 1, , 1, , 0, , 0, , l6, , Solution. The given system of equations is expressed, in the matrix2 form as 32 3 2 3, 1 1 1, x, 1, AX ¼ 4 1 2 4 54 y 5 ¼ 4 l 5 ¼ B:, 1 4 10, z, l2, , l2, , 1, , 4 10, , 1, , 1 1, , 6, 40, , 1 3, , 7, l1 5, , 0, , 3 9, , l2 1, , 1, , 1 1, , 1, , 3, , 6, 40, , 1 3, , l1, , 7, 5:, , 0, , 0 0, , l2 3l þ 2, , 2, , 2, , 3, , 1, , We note that, ðAÞ ¼ ð½A:BÞ if l2 3l þ 2 ¼ 0:, , ¼ B:, , 2, , 4.39, , Therefore, the augmented matrix is, 3, 2, 1 1 1, 1, 6, 7, ½A:B ¼ 4 1 2 4, l5, , x1 þ x2 þ ðl 5Þx3 ¼ l, , Solution. The matrix form of the, 2, 1 1, 1, 6, AX ¼ 4 1 2, 1, , n, , l, 2, , 3, , Thus, the given equation is consistent if l2 3l þ, 2 ¼ 0; that is if (l 2) (l 1) ¼ 0, that is, if l ¼ 2, or l ¼ 1. If l ¼ 2, then we have, 2, 3, 1 1 1 1, ½A:B 4 0 1 3 1 5, 0 0 0 0, , 7 R 2 ! R 2 R1, 4 5, :, R3 ! R 3 R1, l2, , and so the given system of equations is equivalent to, xþyþz¼1, , If l ¼ 6, then (A) ¼ 2 and ([A:B]) ¼ 3. Therefore,, the system is inconsistent and so possesses no solution., If l 6¼ 6, then (A) ¼ ([A:B]) ¼ 3. Hence, the, system is consistent in this case. Since (A) is equal, to the number of unknowns, the system has a unique, solution in this case., , These equations yields y ¼ 1 3z, and x ¼ 2z., Therefore, if z ¼ k, an arbitrary constant, then x, 2k, y ¼ 1 3k, and z ¼ k constitute the general, solution of the given equation., If l ¼ 1 then, we2have, 3, 1 1 1 1, ½A:B 4 0 1 3 0 5, 0 0 0 0, , 6, 40, 0, , EXAMPLE 4.47, Determine the value of l for which the system of, equations, xþyþz¼1, x þ 2y þ 4z ¼ l, x þ 4y þ 10z ¼ l2, possesses a solution and, hence, find its solution., , y þ 3z ¼ 1:, , and so the given system of equations is equivalent to, xþyþz¼1, y þ 3z ¼ 0:, These equations yields y ¼ 3z, x ¼ 1 þ 2z. Thus, if, c is an arbitrary constant, then x ¼ 1 þ 2c, y ¼ 3c,, and z ¼ c, constitute the general solution of the, given system of equations.
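The one-parameter family obtained in Example 4.47 for λ = 2 can be verified by substitution; a short check, assuming NumPy:

```python
import numpy as np

# Example 4.47 with lambda = 2:  x + y + z = 1,  x + 2y + 4z = 2,  x + 4y + 10z = 4
A = np.array([[1, 1, 1],
              [1, 2, 4],
              [1, 4, 10]], dtype=float)
b = np.array([1, 2, 4], dtype=float)

for k in (0.0, 1.0, -2.5):                  # any value of the parameter works
    x = np.array([2 * k, 1 - 3 * k, k])     # the general solution x = 2k, y = 1 - 3k, z = k
    assert np.allclose(A @ x, b)
print("every member of the family satisfies the system")
```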
4.40, , n, , Engineering Mathematics-I, , EXAMPLE 4.48, Find the value of l and m for which the system of, equations, 3x þ 2y þ z ¼ 6, 3x þ 4y þ 3z ¼ m, 6x þ 10y þ lz ¼ m, has (i) unique solution, (ii) no solution, and (iii), infinite number of solutions., Solution. The given system of equations is expressed, by the matrix equation, 2, 32 3 2 3, 3, 2 1, x, 6, AX ¼ 4 3, 4 3 54 y 5 ¼ 4 14 5 ¼ B:, 6 10 l, z, m, Therefore, the augmented matrix is, 2, 3, 3 2 1 6, ½A:B ¼ 4 3 4 3 14 5, 6 10 l m, 2, 3, 3 2, 1, 6, R2 ! R 2 R 1, 40 2, 2, 8 5, R3 ! R3 2R1, 0 6 l 2 m 12, 2, 3, 3 2, 1, 6, 40 2, 2, 8 5 R3 ! R3 3R2 :, 0 0 l 8 m 36, If l 6¼ 8, then (A) ¼ ([A:B]) ¼ 3 and so in this case, the system is consistent. Further, since (A) is equal, to number of unknowns, the given system has a, unique solution., If l ¼ 8, m 6¼ 36, then (A) ¼ 2 and ([A:B]) ¼ 3., Hence, the system is inconsistent and has no solution., If l ¼ 8, m ¼ 36, then (A) ¼ ([A:B]) ¼ 2., Therefore, the given system of equation is consistent., Since rank of A is less than the number of unknowns,, the given system of equation has infinitely many, solutions., EXAMPLE 4.49, Using consistency theorem, solve the equation, xþyþz¼9, 2x þ 5y þ 7z ¼ 52, 2x þ y z ¼ 0:, Solution. The matrix form of the given system of, equations is2, 32 3 2 3, 1 1, 1, x, 9, AX ¼ 4 2 5, 7 54 y 5 ¼ 4 52 5 ¼ B:, 2 1 1, z, 0, , Therefore,, the augmented, 2, 3 matrix is, 1 1, 1, 9, ½A:B ¼ 4 2 5, 7 52 5, 2, 1, 1, 0, 2, 3, 1, 1, 1, 9, R ! R2 2R1, 40, 3, 5, 34 5 2, R3 ! R3 2R1, 2 0 1 3 18 3, 1, 1, 1, 9, R $ R3, 4 0 1 3 18 5 2, 3, 5, 34 3, 20, 1, 1, 1, 9, R ! R3 3R2, 4, 0 1 3 18 5 3, :, 0, 0 4 20, Thus we get echelon form of the matrix [A:B]. The, number of non-zero rows in this form is 3., Therefore ([A:B]) ¼ 3. Further, since, 2, 3, 1, 1, 1, A 4 0 1 3 5:, 0, 0 4, Therefore (A) ¼ 3. Hence, (A) ¼ ([A:B]) ¼ 3., This shows that the given system of equations is, consistent. Also, since (A) is equal to the number, of unknowns, the solution of the given system is, unique. To find the solution, we note that the given, system of equation is equivalent to, 2, 32 3 2, 3, x, 9, 1, 1, 1, 4 0 1 3 54 y 5 ¼ 4 18 5, 20, 0, 0 4, z, and so, x þ y þ z ¼ 9; y 3z ¼ 18; 4z ¼ 20;, which yields z ¼ 5; y ¼ 3; and x ¼ 1 as the required, solution., , 4.22, , HOMOGENEOUS LINEAR EQUATIONS, , Consider the following system of m homogeneous, equations in n unknowns x1 ; x2 ; . . . ; xn :, a11 x1 þ a12 x2 þ . . . þ a1n xn ¼ 0, a21 x1 þ a22 x2 þ . . . þ a2n xn ¼ 0, ..., ..., am1 x1 þ am2 x2 þ . . . þ amn xn ¼ 0:, The matrix form of this system is, AX ¼ 0;
4.42, , n, , Engineering Mathematics-I, , vector, that is,, X þ xrþ1 X1 þ xrþ2 X2 þ . . . þ xn Xnr ¼ 0, or, X ¼ xrþ1 X1 xrþ2 X2 . . . Xn Xnr :, Thus, every solution is a linear combination of the, n r linearly independent solution X1 ;X2 ;...;Xnr :, It follows, therefore, that the set of solution [X1, X2,, …, Xnr] form a basis of vector space of all the, solutions of the system of equations AX ¼ 0., Remark 4.6 Suppose we have a system of m linear, equations in n unknowns. Thus, the coefficient, matrix A is of order m n. Let r be the rank of A., Then, r n (number of column of A)., If r ¼ n, then AX ¼ 0 possesses n n ¼ 0, number of independent solutions. In this case, we, have simply the trivial solution (which forms a, linearly dependent system)., If r < n, then there are n r linearly independent solutions. Further any linear combination of, these solutions will also be a solution of AX ¼ 0., Hence, in this case, the equation AX ¼ 0 has infinite, number of solutions., If m < n, then since r m, we have r < n. Hence, the system has a non-zero solution. The number of, solutions of the equation AX ¼ 0 will be infinite., Theorem 4.32 A necessary and sufficient condition, that a system of n homogeneous linear equations in, n unknowns have non-trivial solutions is that, coefficient matrix be singular., Proof:, The condition is necessary. Suppose that the system of n, homogeneous linear equations in n unknowns have, a non-trivial solution. We want to show that | A, | ¼ 0. Suppose, on the contrary, | A | 6¼ 0. Then rank, of A is n. Therefore, number of linearly independent, solution is n n ¼ 0. Thus, the given system possesses no linearly independent solution. Thus, only, trivial solution exists for the given system. This, contradicts the fact that the given system of equation has non-trivial solution. Hence |A| ¼ 0., The condition is sufficient. Suppose |A| ¼ 0. Therefore,, (A) < n. Let r be the rank of A. Then the given, , equation has (n r) linearly independent solutions., Since a linearly independent solution can never be, zero, therefore, the given system must have a nonzero solution., EXAMPLE 4.50, Solve, , x þ 3y 2z ¼ 0, 2x y þ 4z ¼ 0, x 11y þ 14z ¼ 0:, , Solution. The matrix form of the given system of, homogeneous, equations3is AX ¼, 2 0,3where 2 3, 2, x, 0, 1, 3 2, A ¼ 42, 1, 4 5; X ¼ 4 y 5; 0 ¼ 4 0 5:, z, 0, 1 11 14, We note, , that, 1, 3, 2 , , 4 ¼ 30 72 þ 42 ¼ 0:, jAj ¼ 2 1, 1 11 14 , Therefore A is singular, that is (A) < n. Thus, the, given system has a non-trivial solution and will, have infinite number of solutions., The given system is, 2, 32 3, 1, 3 2, x, 42, 5, 4, 1, 4, y5 ¼ 0, 1 11 14, z, 2, 32 3, 1, 3 2, x, R2 ! R2 2R1, 40, 7, 8 54 y 5 ¼ 0;, ;, R3 ! R3 R 1, z, 0 14 16, 2, 32 3, 1, 3 2, x, 4, 5, 4, 0, 7, 8, y 5 ¼ 0; R3 ! R3 2R2, 0, 0, 0, z, and so we have, x þ 3y 2z ¼ 0, 7y þ 8z ¼ 0:, These equations yield y ¼ 87 z; x ¼ 10, 7 z: Giving, different values to z, we get infinite number of, solutions., EXAMPLE 4.51, Solve, , x1 x2 þ x3 ¼ 0, x1 þ 2x2 x3 ¼ 0, 2x1 þ x2 þ 3x3 ¼ 0:
Matrices, , Solution. In matrix form, we have AX ¼ 0, where, 2 3, 2, 3, 2 3, 0, 1 1, 1, x1, A ¼ 41, 2 1 5; X ¼ 4 x2 5; 0 ¼ 4 0 5:, x3, 2, 1, 3, 0, We note that |A| ¼ 9 6¼ 0. Thus A is non-singular., Hence, the given system of homogeneous equation, has only trivial solution x1 ¼ x2 ¼ x3 ¼ 0., EXAMPLE 4.52, Solve, 2x 2y þ 5z þ 3w ¼ 0, 4x y þ z þ w ¼ 0, 3x 2y þ 3z þ 4w ¼ 0, x 3y þ 7z þ 6w ¼ 0:, Solution. The matrix form of the given system of, homogeneous equation is, 32 3, 2, x, 2 2 5 3, 6 4 1 1 1 76 y 7, 76 7, AX ¼ 6, 4 3 2 3 4 54 z 5 ¼ 0:, w, 1 3 7 6, Performing row elementary transformations to get, echelon form of A, we have, 3, 3 2, 2, 2 2 5 3, 1 3 7 6, 6 4 1 1 1 7 6 4 1 1 1 7, 7, 7 6, A¼6, 4 3 2 3 4 5 4 3 2 3 4 5R1 $ R4, 2 2 5 3, 1 3 7 6, 3, 2, 1 3, 7, 6, R ! R2 4R1, 6 4 11 27 23 7 2, 7, 6, 4, R3 ! R3 3R1, 0, 7 18 14 5, R4 ! R4 2R1, 0, 4 9 9, 2, 3, 1 3, 7, 6, 60, 7, 4, 9, 9, 7 R 2 ! R2 R3, 6, 40, 7 18 14 5, 0, 4 9 9, 3, 1 3, 7, 6, 60, 4 9 9 7, 7, 6, 4 0 28 72 56 5 R3 ! 4R3, 0, 4 9 9, 2, 3, 1 3, 7, 6, R3 ! R3 7R2, 60, 4 9 9 7, 7 R4 ! R4 R2 :, 6, 40, 0 9, 75, 0, 0, 0, 0, The above echelon form of A suggests that rank of A is, equal to the number of non-zero rows. Thus (A) ¼ 3., 2, , n, , 4.43, , The number of unknowns is 4. Thus (A) < n. Hence,, the given system possesses non-trivial solution., The number of independent solution will be, (n r) ¼ 4 3 ¼ 1., Further, the given system is equivalent to, 32 3, 2, x, 1 3, 7, 6, 76 y 7, 60, 4, 9, 9, 76 7 ¼ 0, 6, 40, 0 9, 7 54 z 5, w, 0, 0, 0, 0, and so, we have, x 3y þ 7z þ 6w ¼ 0, 4y 9z 9w ¼ 0, 9z þ 7w ¼ 0, These equations yield z ¼ 79w, y ¼ 4w, x ¼ 59w., Thus taking w ¼ t, we get x ¼ 59t, y ¼ 4t, z ¼79t,, w ¼ t as the general solution of the given, equations., EXAMPLE 4.53, Determine the value of l for which the following, equations have non-zero solutions:, x þ 2y þ 3z ¼ lx, 3x þ y þ 2z ¼ ly, 2x þ 3y þ z ¼ lz:, Solution. The matrix form of the given equation is, 2, 32 3, 1l, 2, 3, x, AX ¼ 4 3, 1l, 2 54 y 5 ¼ 0:, 2, 3, 1l, x, The given system will have non-zero solution only, if | A | ¼ 0, that is, if rank of A is less than 3., Thus for the existence of non-zero solution, we, must have , , 1 l, 2, 3 , , 3, 1l, 2 ¼ 0, , 2, 3, 1 l, or, , , 6 l 6 l 6 l, , , 3 1 l 2 ¼ 0 using R1 ! R1 þ R2 þ R3, , , 2, 3 1 l, or, , , 1, 1, 1 , , ð6 lÞ 3 1 l, 2 ¼ 0, 2, 3, 1 l
4.44, , n, , Engineering Mathematics-I, , , , 1, 0, 0 , , C2 ! C2 C1, ð6 lÞ 3 2 l, 1 ¼ 0;, C3 ! C3 C1, 2, 1, 1 l , or, ð6 lÞ½l2 þ 3l þ 3 ¼ 0;, which yields, pffiffiffiffiffiffiffiffiffiffiffiffiffiffi, 3 9 12, l ¼ 6 and, :, 2, Thus, the only real value of l for which the given, system of equation has a solution is 6., or, , 4.23, , CHARACTERISTIC ROOTS AND, CHARACTERISTIC VECTORS, , Let, 2 A3be a matrix of order n, l a scalar and X ¼, x1, 6 x2 7, 6 7 a column vector., 4...5, xn, Consider the equation, AX ¼ lX, ð10Þ, Clearly X ¼ 0 is a solution of (10) for any value of l., The question arises whether there exist scalar l and, non-zero vector X, which simultaneously satisfy the, equation (10). This problem is known as characteristic value problem. If In is unit matrix of order, n, then (10) may be written in the form, ðA lIn ÞX ¼ 0:, ð11Þ, The equation (11) is the matrix form of a system of, n homogeneous linear equations in n unknowns., This system will have a non-trivial solution if and, only if the determinant of the coefficient matrix, A lIn vanishes, that is, if, , , a11 l a12 ... .. . a1n , , , a21 a22 l ... .. . a2n , , , ... ... .. . .. . ¼ 0:, jA lIn j ¼ ..., ..., ... ... .. . .. . , , an1, an2 ... .. . ann l , The expansion of this determinant yields a polynomial of degree n in l, which is called the characteristic polynomial of the matrix A., The equation |A lIn| ¼ 0 is called the characteristic equation or secular equation of A., The n roots of the characteristic equation of a, matrix A of an order n are called the characteristic, roots, characteristic values, proper values, eigenvalues, or latent roots of the matrix A., , The set of the eigenvalues of a matrix A is, called the spectrum of A., If l is an eigenvalue of a matrix A of order n,, then a non-zero vector X such that AX ¼ lX is called, a characteristic vector, eigen vector, proper vector,, or latent vector of A corresponding to the characteristic root l., Theorem 4.33 The equation AX ¼ lX has a non-trivial, solution if and only if l is a characteristic root of A., Proof: Suppose first that l is a characteristic root of, the matrix A. Then |A lIn| ¼ 0 and consequently, the matrix A lI is singular. Therefore, the matrix, equation (A lI)X ¼ 0 possesses a non-zero solution. Hence, there exists a non-zero vector X such, that (A lI)X ¼ 0 or AX ¼ lX., Conversely, suppose that there exists a non-zero, vector X such that AX ¼ lX or (A lI)X ¼ 0. Thus,, the matrix equation (A lI)X ¼ 0 has a non-zero, solution. Hence A lI is singular and so |, A lI| ¼ 0. Hence, l is a characteristic root of the, matrix A., Theorem 4.34 Corresponding to a characteristics, value l, there correspond more than one characteristic vectors., Proof: Let X be a characteristic vector corresponding, to a characteristic root l. Then, by definition, X 6¼ 0, and AX ¼ lX. If k is any non-zero scalar, then, kX 6¼ 0. Further,, AðkX Þ ¼ kðAX Þ ¼ kðlX Þ ¼ lðkX Þ:, Therefore, kX is also a characteristic vector of A, corresponding to the characteristic root l., Theorem 4.35 If X is a proper vector of a matrix A,, then X cannot correspond to more than one characteristic root of A., Proof: Suppose, on the contrary, X be a characteristic, vector of a matrix A corresponding to two characteristic roots l1 and l2. Then, AX ¼ l1X and, AX ¼ l2 X and so (l1 l2) X ¼ 0. Since X6¼0, this, implies l1 l2 ¼ 0 or l1 ¼ l2. 
Hence the result follows.
Theorem 4.36 Let X1, X2, …, Xn be non-zero characteristic vectors associated with distinct characteristic roots λ1, λ2, …, λn of a matrix A. Then X1, X2, …, Xn are linearly independent.
4.46, , n, , Engineering Mathematics-I, , Solution. The characteristic equation of the given, matrix is, , , 3l, 1, 0 , , 3l, 1 ¼ 0 ;, j A lI j ¼ 0, 0, 0, 3l , which yields (3l)3¼0. Thus 3 is the only distinct, characteristic root of A. The characteristic vectors, are given by non-zero solutions of the equation, (A3I)X¼0, that is,, 3, 2, 3, 2, 3 2, 0, 0, 1 0, x1, 40, 0 1 5 4 x2 5 ¼ 4 0 5 :, 0, x3, 0, 0 0, The coefficient matrix of the equation is of rank 2., Therefore, number of linearly independent solution, is n r ¼ 1. The above equation yields x2 ¼ 0,, x3 ¼ 0. Therefore, x1¼1, x2¼0, x3¼0 is a non-zero, solution of the above equation., 2, 3 Thus,, 1, X ¼4 0 5, 0, is an eigenvector of A corresponding to the eigenvalue 3. Also any non-zero multiple of this vector, shall be an eigenvector of A corresponding to l ¼ 3., EXAMPLE 4.55, Find the eigenvalues and the corresponding eigenvectors of the matrix, 2, 3, 6 2, 2, A ¼ 4 2, 3 1 5:, 2 1, 3, Solution. The characteristic equation of the given, matrix is, , , 6 l 2, 2 , , , j A lI j ¼ 2 3 l 1 ¼ 0, 2, 1 3 l , or, , 6 l 2, , 2 3 l, , 2, 1, or, , , , ð2 lÞ, , or, , 0, 2l, 2l, , , , , ¼ 0;, , , , 6 l 2, 2 3 l, 2, 1, , C3 ! C3 þ C2, , 0 , 1 ¼ 0, 1, , ð2 lÞ ðl 2Þ ðl 8Þ ¼ 0:, , Thus, the characteristic roots of A are l ¼ 2, 2, 8., The eigenvector of A corresponding to the eigenvalue 2 is given by (A 2I) X ¼ 0 or, 2, 32, 3, 2 3, 4 2, 2, x1, 0, 4 2, 1 1 5 4 x2 5 ¼ 4 0 5, 2 1, 1, 0, x3, or, 2, , 2, 4 4, 2, or, 2, 2, 4 0, 0, , 32, 3, 2 3, 0, 1 1, x1, 2, 2 5 4 x2 5 ¼ 4 0 5;, 1, 1, x3, 0, , R 1 $ R2, , 32 3, 2 3, 1 1, x1, 0, R ! R2 þ 2R1, 0 0 5 4 x2 5 ¼ 4 0 5; 2, R3 ! R3 þ R1, 0 0, x3, 0, , The coefficient matrix is of rank 1. Therefore, there, are nr¼3 1 ¼ 2 linearly independent solution., The above equation is, 2x1 þ x2 x3 ¼ 0:, Clearly, 2, 3, 2 3, 1, 1, X1 ¼ 4 0 5 and X2 ¼ 4 2 5, 2, 0, are two linearly independent solutions of this equation. Then X1 and X2 are two linearly independent, eigenvectors of A corresponding to eigenvalue 2. If, k1, k2 are scalars not both equal to zero, then, k1X1 þ k2X2 yields all the eigenvectors of A corresponding to the eigenvalue 2., The characteristic vectors of A corresponding, to the characteristic root 8 are given by (A 8I), X ¼ 0 or by, 2, 32 3 2 3, x1, 0, 6 8 2, 2, 4 2 8 1 5 4 x2 5 ¼ 4 0 5, 2, , 2, 4 0, 0, 2, 2, 4 0, 0, , 0, x, 2 1 3 8, 3 2 33 2 3, x1, 2 2, 0, R 2 ! R2 R 1, 3 3 5 4 x2 5 ¼ 4 0 5;, R 3 ! R3 þ R 1, 3 3, 0, x, 3 2 33 2 3, 2 2, 0, x1, 3 3 5 4 x2 5 ¼ 4 0 5; R3 ! R3 R2, 0 0, x3, 0, , The coefficient matrix is of rank 2. Therefore, number of linearly independent solution is n r ¼ 3, 2¼1. The above equations give, 2x1 2x2 þ 2x3 ¼ 0, 3x2 3x3 ¼ 0:
4.48, , n, , Engineering Mathematics-I, , Multiplying these successively by An, An1,…,In and, adding, we get,, 0 ¼ a0 An þ a1 An1 þ . . . þ an1 A þ an I;, that is,, (A) ¼ 0., This completes the proof of the theorem., Corollary 4.10 If A is non-singular, then, a0 n1 a1 n2, an1, A, A, ... , I, A1 ¼, an, an, an, , 1 , a0 An1 þ a1 An2 þ . . . þ an1 I, ¼, an, Proof: By Cayley-Hamilton theorem, we have, a0 An þ a1 An1 þ . . . þ an1 A þ an I ¼ 0:, Pre-multiplication with A1 yields, a0 An1 þ a1 An2 þ . . . þ an1 þ an A1 ¼ 0:, or, a0, a1, an1, I, A1 ¼ An1 An2 . . . , an, an, an, , 1, a0 An1 þ a1 An2 þ . . . þ an1 I :, ¼, an, Remark 4.7 It follows from above that, , 1, An ¼ , a1 An1 þ a1 An2 þ . . . þ an I :, a0, Thus higher powers of a matrix can be obtained, using lower powers of A., EXAMPLE 4.59, Verify Cayley-Hamilton theorem for the matrix, 2, 3, 2 1, 1, A ¼ 4 1, 2 1 5, 1 1, 2, and hence find A1., Solution. We have , 2 l, , , jA lI j ¼ 1, , 1, , , , , , 2 l 1 , , 1, 2 l, 1, , 1, , ¼ l3 þ 6l2 9l þ 4:, Thus, the characteristic equation of the matrix A is, l3 6l2 þ 9l 4 ¼ 0:, To verify Cayley-Hamilton theorem, we have to, show that, ð19Þ, A3 6A2 þ9A 4I ¼ 0:, , We have, , 3, 6 5, 5, 6 5 5 ;, A2 ¼ 4 5, 5 5, 6, 2, 3, 22 21, 21, A3 ¼ 4 21, 22 21 5 :, 21 21, 22, Then, we note that, ", #, 0, 0 0, 3, 2, A 6A þ9A 4I ¼ 0, 0 0 ¼ 0:, 0, 0 0, Further, pre-multiplying (19) by A1, we get, A2 6A þ 9I 4A1 ¼ 0, and so, , 1, A1 ¼ A2 6A þ 9I, 4 2, 3, 3 1 1, 1 6, 7, 1 5:, ¼, 4 1 3, 4, 1 1, 3, , 4.25, , 2, , ALGEBRAIC AND GEOMETRIC MULTIPLICITY OF, AN EIGENVALUE, , Definition 4.76 If l is an eigenvalue of order m of, matrix A, then m is called the algebraic multiplicity, of l., Definition 4.77 If s is the number of linearly independent eigenvectors corresponding to the eigenvalue l, then s is called the geometric multiplicity of l., If r is the rank of the coefficient matrix of, (A lI) X ¼ 0, then s ¼ n r, where n is the number, of unknowns., The geometric multiplicity of an eigenvalue, cannot exceed its algebraic multiplicity., , 4.26, , MINIMAL POLYNOMIAL OF A MATRIX, , Definition 4.78 A polynomial in x in which the, coefficient of the highest power of x is unity is, called a monic polynomial., For example, x4 x3 þ 2x2 þ x þ 4 is a monic, polynomial of degree 4 over the field of real, numbers., Definition 4.79 The monic polynomial m(x) of lowest, degree such that m(A) ¼ 0 is called the minimal, polynomial of the matrix A.
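Before turning to minimal polynomials, the Cayley–Hamilton verification of Example 4.59, and the inverse obtained from it, can be reproduced mechanically. A minimal sketch, assuming NumPy:

```python
import numpy as np

# Matrix of Example 4.59
A = np.array([[ 2, -1,  1],
              [-1,  2, -1],
              [ 1, -1,  2]], dtype=float)
I = np.eye(3)

# characteristic polynomial: l^3 - 6 l^2 + 9 l - 4
print(np.poly(A))                                   # [ 1. -6.  9. -4.]

# Cayley-Hamilton: A^3 - 6 A^2 + 9 A - 4 I = 0
lhs = A @ A @ A - 6 * (A @ A) + 9 * A - 4 * I
print(np.allclose(lhs, 0))                          # True

# hence A^{-1} = (A^2 - 6 A + 9 I) / 4
A_inv = (A @ A - 6 * A + 9 * I) / 4
print(np.allclose(A_inv, np.linalg.inv(A)))         # True
```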
4.50, , n, , Engineering Mathematics-I, , Theorem 4.42 Every root of the characteristic equation of a matrix is also a root of the minimal, equation of the matrix., Proof: Suppose m(x) is the minimal polynomial of a, matrix A. Then m(A) ¼ 0. Let l be a characteristic, root of A. Then, by Theorem 4.58, m(l) is the, characteristic root of m(A). But m(A) ¼ 0 and, so each of its characteristic root is zero. Hence, m(l) ¼ 0, which implies that l is a root of the, equation m(x) ¼ 0. This proves that every characteristic root of a matrix A is also a root of the, minimal equation m(x) ¼ 0., Corollary 4.12 and Theorem 4.42 combined, together yield:, Theorem 4.43 A scalar l is a characteristic root of a, matrix if and only if it is a root of the minimal, equation of that matrix., Definition 4.80 An n-rowed matrix is said to be, derogatory or non-derogatory according as the, degree of its minimal equation is less than or equal, to n., It follows from the definition that a matrix is, non-derogatory if the degree of its minimal polynomial is equal to the degree of its characteristic, polynomial., Theorem 4.44 If the roots of the characteristic equation of a matrix are all distinct, then the matrix is, non-derogatory., Proof: Let A be a matrix of order n having n distinct, characteristic roots. By Theorem 4.60, each of these, roots is also a root of the minimal polynomial of A., Therefore, the minimal polynomial of A is of degree, n. Hence, by definition, A is non-derogatory., EXAMPLE 4.60, Show that the matrix, 2, , 7, 4, A¼4 4, 7, 4 4, , Therefore, roots of the characteristic equation, |A lI| ¼ 0 are l ¼ 3, 3, 12., Since each characteristic root of a matrix is also, a root of its minimal polynomial, therefore, (x 3), and (x 12) shall be factors of m(x). Let., gð xÞ ¼ ðx 3Þ ðx 12Þ ¼ x2 15x þ 36:, We have, 2, 3, 69, 60 15, 69 15 5, A2 ¼ 4 60, 60 60, 24, Then, we observe that, 2, 3, 0 0 0, gðAÞ ¼A2 15A þ 361 ¼ 4 0 0 0 5:, 0 0 0, Thus g(x) is the monomic polynomial of lowest, degree such that g(A) ¼ 0. Hence g(x) is minimal, polynomial of A. Since its degree is less than the order, of the matrix A, the given matrix A is derogatory., , 4.27, , ORTHOGONAL, NORMAL, AND UNITARY, MATRICES, , Definition 4.81 Let, 2, , 3, 3, 2, y1, x1, 6 x2 7, 6 y2 7, 7, 7, 6, 6, 7 and Y ¼ 6 . . . 7, X ¼6, ., ., ., 7, 7, 6, 6, 4 ... 5, 4 ... 5, xn, yn, be two complex n-vectors. The inner product of X, and Y denoted by (X, Y), is defined as, 3, 2, y1, 7, 6, 6 y2 7, 7, 6, 7, ðX ; Y Þ ¼ X h Y ¼ ½x1 x2 . . .xn 6, 6 ... 7, 7, 6, 4 ... 5, yn, ¼ x1 y1 þ x2 y2 þ . . . þ xn yn :, , 3, 1, 1 5, 4, , is derogatory., Solution. We have, , , 7l 4 1 , , , jAlIj ¼ 4 7l 1 ¼ ðl12Þ ð3lÞ2 :, 4 4 4l , , If X and Y are real, then their product becomes, 3, 2, y1, 7, 6, 6 y2 7, 7, 6, 7, ðX ; Y Þ ¼ X T Y ¼ ½x1 x2 . . . xn 6, 6 ... 7, 7, 6, 4 ... 5, yn, ¼ x1 y1 þ x2 y2 þ . . . þ xn yn :
4.52, , n, , Engineering Mathematics-I, , By (21) and (22), we have, ), ), ), ), ), , llX h X, X h Ah AX ¼ , h, X X ¼, llX h X, , h, 1, ll X X ¼ 0, , , 1, ll ¼ 0 since X h X 6¼ 0, , ll ¼ 1, jlj ¼ 1:, , Theorem 4.47 (i) If U is unitary matrix, then absolute, value of |U| ¼1, (ii) Any two eigenvectors corresponding to, the distinct eigenvalues of a unitary matrix are, orthogonal., Proof: (i) We have , , h, , U ¼ U T ¼ U ¼ jU j, Therefore, , , , j U j2 ¼ jU j : j U j ¼ U h j U j ¼ U h U , ¼ j I j ¼ 1:, Hence, absolute value of determinant of a unitary, matrix is 1., (ii) Let l1 and l2 be two distinct eigenvalues of, a unitary matrix U and let X1, X2 be the corresponding eigenvectors. Then, UX1 ¼ l1 X1, ð23Þ, UX2 ¼ l2 X2, ð24Þ, Taking conjugate transpose of (24), we get, l2 X2h, ð25Þ, X2h U h ¼ , Post-multiplying both sides of (25) by UX1, we get, X2h U h UX1 ¼ , l2 X2h UX1, ) X2h X1 ¼ , l2 X2h l1 X1 since U h U ¼ I, and UX1 ¼ l1 X1, ) X2h X1 ¼ , l2 l1 X2h X1, , , ) 1, l2 l1 X2h X1 ¼ 0, ð26Þ, But eigenvalues of a unitary matrix are of unit, modulus., Therefore , l2 l2 ¼ 1, that is, , l2 ¼ l12 . Thus (26), reduces to, , , l1, 1, X2h X1 ¼ 0, l, 2, , , l2 l1, ), X2h X1 ¼ 0, l2, ) X2h X1 ¼ 0 since l1 6¼ l2 :, Hence, X1 and X2 are orthogonal vectors, , Theorem 4.48 The product of two unitary matrices of, the same order is unitary., Proof: Let A and B be two unitary matrices of order, n. Then, AAh ¼ Ah A ¼ I and BBh ¼ Bh B ¼ I:, We have, , , ðABÞh ðABÞ ¼ Bh Ah ðABÞ, , , ¼ Bh Ah A B, ¼ Bh I B, ¼ Bh B ¼ I:, Hence, AB is an unitary matrix of order n. Similarly,, , , ðBAÞh ðBAÞ ¼ Ah Bh ðBAÞ, , , ¼ Ah Bh B A, ¼ Ah I A, ¼ Ah A ¼ I, and so BA is unitary., Theorem 4.49 The inverse of a unitary matrix of order, n is an unitary matrix., Proof: Let U be an unitary matrix. Then, UU h ¼ I, , 1, ) UU h ¼ I 1 ¼ I, 1 1, ) Uh, U ¼I, 1 h 1, U ¼I, ) U, Hence, U 1 is also a unitary matrix., Remark 4.8 It follows from Theorem 4.66 that the set, of unitary matrices is a group under the binary, operation of multiplication. This group is called, unitary group., Definition 4.88 A square matrix P is said to be, orthogonal if PTP ¼ I., Thus, a real unitary matrix is called an orthogonal matrix., If P is orthogonal, then, PT P ¼ I, T , ) P P ¼ jI j ¼ 1, , , ) PT j P j ¼ 1, ) j Pj2 ¼ 1, ) j P j 6¼ 0:, Thus P is invertible and have inverse as PT. Hence, PTP¼I¼PPT.
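Theorems 4.46 to 4.49 and the definition of an orthogonal matrix lend themselves to a quick numerical illustration. In the sketch below (Python with NumPy; building a unitary matrix from the QR factorization of a random complex matrix is just a convenient device, not part of the theory above), the eigenvalues of a unitary matrix have unit modulus, |det U| = 1, and products and inverses of unitary matrices remain unitary.

    import numpy as np

    rng = np.random.default_rng(0)

    def random_unitary(n):
        # The Q factor of a complex QR factorization is unitary
        Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        Q, _ = np.linalg.qr(Z)
        return Q

    U, V = random_unitary(3), random_unitary(3)
    I = np.eye(3)

    print(np.allclose(U.conj().T @ U, I))                    # U^h U = I
    print(np.isclose(abs(np.linalg.det(U)), 1.0))            # |det U| = 1 (Theorem 4.47)
    print(np.allclose(np.abs(np.linalg.eigvals(U)), 1.0))    # eigenvalues of unit modulus (Theorem 4.46)
    print(np.allclose((U @ V).conj().T @ (U @ V), I))        # UV is unitary (Theorem 4.48)
    Uinv = np.linalg.inv(U)
    print(np.allclose(Uinv.conj().T @ Uinv, I))              # U^{-1} is unitary (Theorem 4.49)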
Matrices, , If P is an orthogonal matrix, then the transformation Y ¼ PX is called orthogonal transformation., Theorem 4.50 The product of two orthogonal matrices, of order n is an orthogonal matrix of order n., Proof: Let A and B be orthogonal matrices of order n., Therefore, A and B are invertible. Further both AB, and BA are matrices of order n. But, jABj ¼ jAj jBj 6¼ 0 and jBAj ¼ jBj jAj 6¼ 0, Therefore, AB and BA are invertible. Now, , , ðABÞT ðABÞ ¼ BT AT ðABÞ, , , ¼ BT AT A B, ¼ BT I B, ¼ BT B ¼ I:, Hence AB is orthogonal. Similarly BA is also, orthogonal., Theorem 4.51 If a matrix P is orthogonal, then P1 is, also orthogonal., Proof: Since P is orthogonal, we have, PPT ¼ I, T 1, ) PP, ¼ I 1 ¼ I, T 1 1, P ¼I, ) P, 1 T 1, ) P, P ¼ I:, Hence, P1 is also orthogonal., Remark 4.9 The above results show that the set of, orthogonal matrices form a multiplication group, called orthogonal group., Theorem 4.52 Eigenvalues of an orthogonal matrix, are of unit modulus., Proof: Since an orthogonal matrix is a real unitary, matrix, the result follows from Theorem 4.46., Remark 4.10 It follows from Theorem 4.46 that ±1, can be the only real characteristic roots of an, orthogonal matrix., Definition 4.89 A matrix A is said to be normal if and, only if AhA¼AAh., For example, unitary, Hermitian, and SkewHermitian matrices are normal. Also, the diagonal, matrices with arbitrary diagonal elements are normal., Theorem 4.53 If U is unitary, then A is normal if and, only if UhAU is normal, , n, , 4.53, , Proof: We have, h h h , , h , U AU ¼ U h Ah U, U AU, U AU, , , h h, h, ¼ U A UU AU, ¼ U h Ah I AU, ð27Þ, ¼ U h Ah AU, and similarly,, h h h, U AU, U AU ¼ U h AAh U, ð28Þ, From (27) and (28), we note that AhA¼AAh if and, only if, h h , , , , U h AU, U AU ¼ U h AU U h AU :, Hence, A is normal if and only if UhAU is normal., , 4.28, , SIMILARITY OF MATRICES, , Definition 4.90 Let A and B be matrices of order n., Then B is said to be similar to A if there exists a, non-singular matrix P such that B¼P1AP., It can be seen easily that the relation of similarity of matrices is an equivalence relation., If B is similar, 1 to A,, then 1 , , j B j ¼ P AP ¼ P j A j j P j, , , ¼ P1 j P j j A j, , , ¼ P1 P j A j, ¼ j I j j A j ¼ j A j:, Therefore it follows that similar matrices have the, same determinant., Theorem 4.54 Similar matrices have the same characteristic polynomial and hence the same characteristic roots., Proof: Suppose A and B are similar matrices. Then, there exists an invertible matrix P such that, B¼P1AP. Since, B xI ¼ P1 AP xI, ¼ P1 AP P1 ðxI ÞP, ¼ P1 ðA xI ÞP;, we have, , , j B xI j ¼ P1 ðA xI Þ P , , , ¼ P1 j P j j A xI j, 1 , ¼ P P j A xI j, ¼ j A xI j:, Thus A and B have the same characteristic polynomial and so they have same characteristic roots., Further if l is characteristic root of A, then,, AX ¼ lX ; X 6¼ 0
4.54, , n, , Engineering Mathematics-I, , , , , B P1 X ¼ P1 AP P1 X, ¼ P1 AX ¼ P1 ðlX Þ, , , ¼ l P1 X, This shows that (P1X) is an eigenvector of B corresponding to its eigenvalue l., Corollary 4.14 If a matrix A is similar to a diagonal, matrix D, the diagonal elements of D are the, eigenvalues of A., Proof: Since A and D are similar, they have same, eigenvalues. But the eigenvalues of the diagonal, matrix D are its diagonal elements. Hence the, eigenvalues of A are the diagonal elements of D., , and so, , 4.29, , DIAGONALIZATION OF A MATRIX, , Definition 4.91 A matrix A is said to be diagonalizable if it is similar to a diagonal matrix., Theorem 4.55 A matrix of order n is diagonalizable if, and only if it possesses n linearly independent, eigenvectors., Proof: Suppose first that A is diagonalizable. Then A, is similar to a diagonal matrix, D¼ diag½l1 l2 . . . ln :, Therefore, there exists an invertible matrix P ¼ [X1, X2 … Xn] such that P1AP ¼ D, that is, AP ¼ PD, and so, A½X1 X2 . . . Xn ¼ ½X1 X2 . . . Xn diag½l1 l2 . . . ln , or, ½AX1 ; AX2 . . . AXn ¼ ½l1 X1 l2 X2 . . . ln Xn :, Hence, AX1 ¼ l1 X1 ; AX2 ¼ l2 X2 ; . . . ; AXn ¼ ln Xn :, Thus, X1, X2,…, Xn are eigenvectors of A corresponding to the eigenvalues l1, l2, …, ln, respectively. Since P is non-singular, its column vectors, X1, X2,…, Xn are linearly independent. Hence A has, n linearly independent eigenvectors., Conversely suppose that A possesses n linearly, independent eigenvectors X1, X2, …, Xn and let l1,, l2,…, ln be the corresponding eigenvalues. Then, AX1 ¼ l1 X1 ; AX2 ¼ l2 X2 ; . . . ; AXn ¼ ln Xn :, Let, P ¼ ½X1 ;X2 ;...;Xn and D ¼ diag ½l1 l2 ... ln :, Then, AP ¼ ½AX1 AX2 . . . AXn , ¼ ½l1 X1 l2 X2 . . . ln Xn , ¼ ½X1; X2 . . . Xn diag ½l1 l2 . . . ln ¼ PD:, , Since the column vectors X1, X2,…, Xn of the matrix, P are linearly independent, so P is invertible and, P1 exists., Therefore,, AP ¼ PD ) P1 AP ¼ P1 PD, ) P1 AP ¼ D, ) A is similar to D: )A is diagonalizable., Theorem 4.56 If the eigenvalues of a matrix of order n, are all distinct, then it is always similar to a diagonal matrix., Proof: Suppose that a square matrix of order n has n, distinct eigenvalues, l1, l2, …, ln. As eigenvectors, of a matrix corresponding to distinct eigenvalues are, linearly independent, A has n linearly independent, eigenvectors and so, by the above theorem, it is, similar to a diagonal matrix., The following result is very useful in diagonalization of a given matrix., Theorem 4.57 The necessary and sufficient condition, for a square matrix to be similar to a diagonal matrix, is that geometric multiplicity of each of its eigenvalues coincide with the algebraic multiplicity., EXAMPLE 4.61, Show that the matrix, 2, 3, 2, 3, 4, A ¼ 40, 2 1 5, 0, 0, 1, is not similar to diagonal matrix., Solution. The characteristic, equation of A is, , , 2 l, 3, 4 , , 2 l 1 ¼ 0, j A lI j ¼ 0, 0, 0, 1 l, and so, ð2 lÞ ð2 lÞ ð1 lÞ ¼ 0:, Hence the eigenvalues of A are 2, 2, and 1. The, eigenvector X of A corresponding to l¼2 is given, by (A2I) X ¼ 0, that is, by, 2, 2, 3, 3, 3 2, 0 3, 4, 0, x1, 6, 7, 6, 7, 7 6, 4 0 0 1 5 4 x2 5 ¼ 4 0 5, 2, , 0, , 0, 6, ~4 0, 0, , 0 1, , 0, x3, 3 2, 3, 2, 3, x1, 3, 4, 0, 7 6, 7, 6, 7, 0 1 5 4 x2 5 ¼ 4 0 5; R3 ! R3 R2 :, 0, 0, 0, x3
Matrices, , The coefficient matrix is of rank 2. Hence number of, linearly independent solution is n r ¼ 1. Thus, geometric multiplicity of 2 is 1. But its algebraic, multiplicity is 2. Therefore, geometric multiplicity, is not equal to algebraic multiplicity. Hence A is not, similar to a diagonal matrix., EXAMPLE 4.62, Give an example to show that not every square, matrix can be diagonalized by a non-singular transformation of coordinates., Solution. Consider the matrix, 1 1, A¼, :, 0 1, The characteristic equation of A is, , , 1l, 1 , , ¼0, jA lI j ¼ , 0, 1l , or, ð1 lÞ2 ¼ 0;, which yields the characteristic roots as l ¼ 1,1., The characteristic vector corresponding to, l ¼ 1, is given by (AI) X¼0, that is, by, 0, 0, , 1, 0, , x1, x2, , ¼, , 0, :, 0, , The rank of the coefficient matrix is 1 and so that, number of linearly independent solution is n r, 2 1 ¼ 1. Thus the geometric multiplicity of, characteristic root is 1, whereas algebraic multiplicity of the characteristic root is 2. Hence, the, given matrix is not diagonalizable., EXAMPLE 4.63, Show that the matrix, 2, 3, 8 8 2, A ¼ 4 4 3 2 5, 3 4, 1, is diagonalizable. Hence, find the transforming, matrix and the diagonal matrix., Solution. The roots of the characteristic equation, , , 8 l, 8, 2 , , j A lI j ¼ , 4 3 l, 2 ¼ 0, , 3, 4 1 l , are 1, 2, 3. Since the eigenvalues are all distinct, A is, similar to a diagonal matrix. Further, algebraic multiplicity of each eigenvalues is 1. So there is only, , n, , 4.55, , one linearly independent eigenvector of A corresponding to each eigenvalues. Now the eigenvector, corresponding to l¼1 is given by (A I) X¼ 0, that, is, by, 2, 3 2 3 2 3, x1, 0, 7 8 2, 6, 7 6 7 6 7, 4 4 4 2 5 4 x2 5 ¼ 4 0 5;, 3 4, , 0, x3, 3 2 3 2, 7 8 2, x1, 0, 6, 7 6 7 6, 4 3 4 0 5 4 x2 5 ¼ 4 0, 3 4 0, 0, x3, 2, 3 2 3 2, 0, 7 8 2, x1, 6, 7 6 7 6, 4 3 4 0 5 4 x2 5 ¼ 4 0, 2, , 0, , 3, 7, 5 ; R2 ! R2 R 1, 3, 7, 5 ; R3 ! R3 þ R2 :, , 0 0 0, x3, 0, We note that rank of the coefficient matrix is 2., Therefore, there is only one linearly independent, solution. Hence geometric multiplicity of the eigenvalues 1 is 1. The equation can be written as, 7x1 8x2 2x3 ¼ 0, 3x1 þ 4x2 ¼ 0:, The last equation yields x1 ¼ 43 x2 . So taking x2¼3,, we get x1¼4. Then the first equation yields x3¼2., Hence, the eigenvector corresponding to l¼1 is, 2 3, 4, X1 ¼ 4 3 5 :, 2, Similarly, eigenvectors corresponding to l¼2 and 3, are found to be, 2, 3, 2, 3, 3, 2, X2 ¼ 4 2 5 and X3 ¼ 4 1 5:, 1, 1, Therefore, the transforming matrix is, 2, 3, 4, 3 2, P¼4 3, 2 1 5;, 2, 1 1, and so the diagonal matrix, is, 2, 3, 1, 0 0, 2 0 5:, P1 AP ¼ 4 0, 0, 0 3, EXAMPLE 4.64, Diagonalize the matrix, 2, 3, 1, 0 1, A¼4 1, 2, 1 5:, 2, 2, 3
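Before taking up Example 4.64 below, the two preceding results are easy to confirm numerically. The following sketch in Python with NumPy checks the multiplicity computation of Example 4.61 and the transforming matrix found in Example 4.63; it assumes only the matrices written in those examples.

    import numpy as np

    # Example 4.61: eigenvalue 2 has algebraic multiplicity 2 but geometric multiplicity 1
    A1 = np.array([[2., 3.,  4.],
                   [0., 2., -1.],
                   [0., 0.,  1.]])
    r = np.linalg.matrix_rank(A1 - 2*np.eye(3))
    print(3 - r)                                     # 1, so A1 is not diagonalizable

    # Example 4.63: distinct eigenvalues 1, 2, 3; eigenvectors as columns of P
    A2 = np.array([[ 8., -8., -2.],
                   [ 4., -3., -2.],
                   [ 3., -4.,  1.]])
    P = np.array([[4., 3., 2.],
                  [3., 2., 1.],
                  [2., 1., 1.]])
    print(np.round(np.linalg.inv(P) @ A2 @ P, 10))   # diag(1, 2, 3)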
4.56, , n, , Engineering Mathematics-I, , Solution. The characteristic equation of the given, matrix is, , , 1 l, 0, 1 , , 2l, 1 ¼ 0, j A lI j ¼ 1, 2, 2, 3l , or, l3 þ 6l2 11l þ 6 ¼ 0:, The characteristic roots are l ¼ 1, 2, 3. Since the, characteristic roots are distinct, the given matrix is, diagonalizable and the diagonal elements shall be, the characteristic roots 1, 2, 3., The characteristic vectors corresponding to, l ¼21 are given3by, that is, by, 3 X¼0,, 2 3, 2 (AI), x1, 0, 0 0 1, 7 6, 7 6 7, 6, 4 1 1 1 5 4 x2 5 ¼ 4 0 5, 0, 2 2 2, x3, 3 2, 2, 3 2 3, x1, 1 1 1, 0, 7 6, 6, 7 6 7, 4 0 0 1 5 4 x2 5 ¼ 4 0 5; R1 $ R2, 2 2 2, 0, x3, 3 2 3, 3 2, 2, 0, x1, 1 1 1, 7 6 7, 7 6, 6, 4 0 0 1 5 4 x2 5 ¼ 4 0 5 R3 ! R3 2R1 :, 0, x3, 0 0 0, The rank of the coefficient matrix is 2. Therefore,, there is only 3 2 ¼ 1 linearly independent solution. The above equation yields,, x1 þ x2 þ x3 ¼ 0, x3 ¼ 0:, Hence, the characteristic vector corresponding to, 2, 3, l¼1 is, 1, 4 1 5:, 0, The characteristic vector corresponding to l¼2 is, given by (A2I) X¼0, that is, by, 2, 3 2, 3 2 3, x1, 1 0 1, 0, 7 6 7, 6, 7 6, 4 1 0 1 5 4 x2 5 ¼ 4 0 5, 0, x3, 2 2 1, 2, 3 2, 3 2 3, 1 0 1, x1, 0, 6, 7 6, 7 6 7 R2 ! R2 þ R1, 4 0 0 0 5 4 x2 5 ¼ 4 0 5 ;, R3 ! R3 R1, 1 2 0, x3, 0, 2, 3 2, 3 2 3, 1 0 1, x1, 0, 6, 7 6, 7 6 7, 4 1 2 0 5 4 x 2 5 ¼ 4 0 5 ; R2 $ R3 :, 0 0 0, , x3, , 0, , The rank of the coefficient matrix is 2. Therefore,, there is only 3 2 ¼ 1linearly independent solution. The equation implies, x1 x3 ¼ 0, x1 2x3 ¼ 0, which yields x1 ¼ 2, x2 ¼ 1, x3¼ 2. Therefore,, the characteristic vector is, 2, 3, 2, 4 1 5:, 2, The characteristic vector corresponding to l¼3 is, given, 2 by, 3 2, 3 2 3, 2 0 1, x1, 0, 6, 7 6, 7 6 7, 4 11 1 5 4 x2 5 ¼ 4 0 5, 2 2 0, x3, 0, 3 2 3, 3 2, 2, 0, x1, 2 0 1, 7 6 7, 7 6, 6, 4 11 0 5 4 x2 5 ¼ 4 0 5; R2 ! R2 þ R1 :, 0, x3, 2 2 0, The rank of coefficient matrix is 2. and so there is, 3 2 ¼ 1 independent solution. The equation, yields,, 2x1 x3 ¼ 0, x1 x2 ¼ 0:, and so the corresponding characteristic vector is, 2, 3, 1, 4 1 5:, 2, Thus, the transforming, matrix, is, 2, 3, 1, 2, 1, P ¼ 4 1 1 1 5:, 0 2 2, We have |P| ¼ 2 and the cofactors of P are, A11 ¼ 0; A12 ¼ 2; A13 ¼ 2;, A21 ¼ 2;, , A22 ¼ 2;, , A23 ¼ 2;, , A31 ¼ 1;, , A32 ¼ 0;, , A33 ¼ 1:, , Therefore,, , 2, , 0, adj P ¼ 4 2, 2, and so, P1, , 2, 1, adj P ¼ 4, ¼, jPj, , 2, 2, 2, , 3, 1, 0 5;, 1, , 0 1, 1, 1, 1 1, , 1, 2, , 3, , 0 5:, , 12
Matrices, , Then we observe, that, 2, 1 3, 0 1, 2, 1, 05, P1 AP ¼ 4 1, 1 1 12, 2, 1, 2, 1, 4, 1 1 1, 0 2 2, 2, 1 3, 0 1, 2, 1, 05, ¼4 1, 1, 1, ¼ 40, 0, 2, , 2, , 5, 2, , 1, 4 1, 0, , 1 12, 3, 0 0, 2 0 5 ¼ diag ½1, 0 3, , I, 0, , Let R ¼, 4, 2, 4, 2, , 4.57, , where B is a square matrix of order n 1., Therefore, by induction hypothesis, there exists a, unitary matrix V such that, V 1 BV ¼ D1 ;, where D1 is a diagonal matrix of order n 1., , 3, 1, 15, 3, , 1 0, 41 2, 2 2, 3, , n, , 3, , 3, 3 5, 6, , 3 :, , Definition 4.92 Let A and B be square matrices of, order n. Then B is said to be unitarily similar to A if, there exists a unitary matrix U such that, B ¼ U 1AU., Theorem 4.58 (Existence Theorem). If A is Hermitian, matrix, then there exists a unitary matrix U such, that UhAU is a diagonal matrix whose diagonal, elements are the characteristic roots of A, that is,, U h AU ¼ diag½l1 l2 . . . ln :, Proof: We shall prove Theorem 4.75 by induction on, the order of A. If n = 1, then the theorem is, obviously true. We assume that the theorem is true, for all Hermitian matrices of order n 1. We shall, establish that the theorem holds for all Hermitian, matrices of order n., Let l1 be an eigenvalue of A. Thus l1 is real. Let, X1 be the eigenvector corresponding to the eigenvalues l1. Therefore AX1 = l1X1. We choose an, orthonormal basis of the complex vector space Vn, having X1 as a member. Therefore, there exists a, unitary matrix S with X1 as its first column. We now, consider the matrix S 1AS. Since X1 is the first, column of S, the first column of S 1AS is S 1AX1 =, S 1l1X1 = l1 S 1X1. But S 1X1 is the first column of, S 1S = I. Therefore, the first column of S 1AS is [l1, 0 … 0 … 0]T. Since S is unitary, S 1 = S h and so, , h, 1 h, S AS ¼ S h Ah S 1 ¼ S h Ah S ¼ S 1 AS:, Hence S-1AS is Hermitian. Therefore, the first row, of S 1AS is [l1 0… 0 … 0]. Thus,, l 0, ;, S 1 AS ¼ 1, 0 B, , 0, V, , be a matrix of order n. Then, , R is invertible and R1 ¼, , I, 0, , 0, : Now since V, V 1, , is unitary, V h = V-1 and so, I 0, I, 0, Rh ¼, ¼, ¼ R1 :, 0 Vh, 0 V 1, Hence, R is uniatary. Since R and S are unitary, matrices of order n, SR is also unitary of order n. Let, SR = U. Then, U 1 AU ¼ ðSRÞ1 AðSRÞ, , , ¼ R1 S 1 AðSRÞ, , , ¼ R1 S 1 AS R, ¼, , I, , 0, , l1, , 0, , I, , 0, , 0 V 1, 0 B 0 V, l1, I 0, l1, 0, 0, ¼, ¼, 0 V 1 B 0 V, 0 V 1 BV, l1 0, ¼, ¼ diag½l1 l2 . . . ln :, 0 D1, As an immediate consequence of this theorem, we, have, Corollary 4.15 If A is a real symmetric matrix, there, exists an orthogonal matrix U such that UTAU is a, diagonal matrix, whose diagonal elements are the, characteristic roots of A., Theorem 4.59 If l is an m-fold eigenvalue of, Hermitian matrix A, then rank of A-l In is n m., Proof: By Theorem 4.58, there exists a unitary, matrix U such that, U AU ¼ diag½ll . . . llmþ1 lmþ2 . . . ln ;, where l occurs m times and lm+1,l m+2, …, ln are all, distinct from l. Since U is unitary, subtracting lIn, from both sides of the above equation, we get, U ½A lIn U ¼ diag ½00 . . . 0ðlmþ1 lÞ, ðlmþ2 lÞ . . . ðln lÞ:, Since U is non-singular, it follows that the rank of, A lIn is same as that of the diagonal matrix on the
Matrices, , Hence the transforming unitary matrix is, ", #, 12i, pffiffiffiffi, p5ffiffiffiffi, 30, 30, U ¼, :, 5, ffiffiffiffi 1þ2i, p, pffiffiffiffi, 30, , 30, , We then note that, U h AU ¼ U 1 AU ¼, , 3, 0, , 0, ¼ diag ½3 3:, 3, , EXAMPLE 4.66, Diagonalize the Hermitian matrix, 3, 2, 5 2, 0, 0, 62 2, 0, 07, 7:, A¼6, 40 0, 5 2 5, 0 0 2, 2, Solution. The characteristic equation of A is, , , 5 l, 2, 0, 0 , , 2, 2l, 0, 0 , jA lI j ¼ , ¼ 0:, 0, 0, 5, , l, 2, , , 0, 0, 2 2 l , The characteristic roots are 1, 1, 6, 6. The characteristic vector corresponding to l = 1 are given by, (A I)X = 0, that is, by, 2, 32 3, 2 3, 4 2, 0, 0, x1, 0, 62 1, 7 6 x2 7, 607, 0, 0, 6, 7 6 7 ¼ 6 7;, 40 0, 405, 4 2 5 4 x3 5, 0 0 2, 1, x4, 0, which yields, , 4x1 þ 2x2 ¼ 0, 2x1 þ x2 ¼ 0, 4x3 2x4 ¼ 0, , 2x3 þ x4 ¼ 0, with the complete solution as, 2, 3, 2 3, 1, 0, 6 2 7, 607, ;X ¼, :, X1 ¼ 4, 05 2 415, 0, 2, These vectors are already orthogonal. The normalized vectors are, 2 3, 2 1 3, 0, pffiffi, 5, 607, 6 p2ffiffi 7, 7, 6, 7, 1 7:, U1 ¼ 6, 4 05 5and U2 ¼ 6, 4 pffiffi5 5, p2ffiffi, 0, 5, , n, , 4.59, , Similarly, the normalized vectors corresponding to, l = 6 are, 3, 2, 2 2 3, 0, pffiffi, 5, 6 0 7, 6 p1ffiffi 7, 7, 6, 5 7and U4 ¼ 6 p2ffiffi 7:, U3 ¼ 6, 405, 4, 55, p1ffiffi, 0, 5, , Hence, the transforming unitary matrix is, 2, 3, p1ffiffi, 0 p2ffiffi5, 0, 5, 6 p2ffiffi, 7, 0 7, 6 5 0 p1ffiffi5, 6, 7 ¼ 0:, U ¼6, p1ffiffi, 0 p2ffiffi5 7, 4 0, 5, 5, p2ffiffi, p1ffiffi, 0, 0, 5, , 5, , h, , and U AU = diag [1 1 6 6]., , 4.30, , TRIANGULARIZATION OF AN ARBITRARY, MATRIX, , Not every matrix can be reduced to diagonal form, by a unitary transformation. But it is always possible, to reduce a square matrix to a triangular form. In this, direction, we have the following result., Theorem 4.62 (Jacobi-Thoerem). Every square matrix, A over the complex field can be reduced by a unitary transformation to upper triangular form with, the characteristic roots on the diagonal., Proof: We shall prove the theorem by induction on, the order n of the matrix A. If n = 1, the theorem is, obviously true. Suppose that the result holds for all, matrices of order n 1. Let l1 be the characteristic, root of A and U1 denote the corresponding unit, characteristic vector. Then AU1 = l1U1. Let {U1,, U2, …, Un} be an orthonormal set, that is, U = [U1,, U2, …, Un]. Then, 2 h3, U1, 6 h7, 6 U2 7, 7, 6, h, 7, U AU ¼ 6, 6 . . . 7½AU1 AU2 . . . AUn , 7, 6, 45, Uh, 2 hn 3, U1, 6 h7, 6 U2 7, 7, 6, 7, ¼6, 6 7½l1 U1 AU2 . . . AUn , 7, 6, 45, Unh
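Both the unitary diagonalization of the Hermitian matrix in Example 4.66 and the triangularization guaranteed by Theorem 4.62 can be reproduced numerically. The sketch below uses Python with NumPy and SciPy; numpy.linalg.eigh handles the real symmetric (hence Hermitian) case, and scipy.linalg.schur produces the unitary triangularization for an arbitrary square matrix. The second matrix B is merely an illustrative choice.

    import numpy as np
    from scipy.linalg import schur

    # Example 4.66: Hermitian matrix with eigenvalues 1, 1, 6, 6
    A = np.array([[ 5., -2.,  0.,  0.],
                  [-2.,  2.,  0.,  0.],
                  [ 0.,  0.,  5., -2.],
                  [ 0.,  0., -2.,  2.]])
    w, U = np.linalg.eigh(A)                     # columns of U are orthonormal eigenvectors
    print(np.round(w, 6))                        # [1. 1. 6. 6.]
    print(np.allclose(U.T @ A @ U, np.diag(w)))  # U^h A U is diagonal

    # Theorem 4.62: every square matrix is unitarily similar to an upper
    # triangular matrix with the characteristic roots on the diagonal
    B = np.array([[2., 3.,  4.],
                  [0., 2., -1.],
                  [1., 0.,  1.]])
    T, Z = schur(B, output='complex')
    print(np.allclose(Z @ T @ Z.conj().T, B))    # B = Z T Z^h
    print(np.allclose(np.tril(T, -1), 0))        # T is upper triangular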
4.62, , n, , Engineering Mathematics-I, , Solution. The required symmetric form is, 2, 32 3, 1 2 3, x1, 6, 76 7, T, X AX ¼ ½x1 x2 x3 4 2 0 3 54 x2 5, x3, 3 3 1, 2, 3, x1 þ 2x2 þ 3x3, 6, 7, ¼ ½x1 x2 x3 4 2x1 þ 3x2, 5, 3x1 þ 3x2 þ x3, ¼ x1 ðx1 þ 2x2 þ 3x3 Þ þ x2 ð2x1 þ 3x3 Þ, þ x3 ð3x1 þ 3x2 þ x3 Þ, ¼ x21 þ x23 þ 4x1 x2 þ 6x1 x3 þ 6x2 x3 :, , 4.32, , DIAGONALIZATION OF QUADRATIC FORMS, , We know that for every real symmetric matrix A, there exists an orthogonal matrix U such that, U T AU ¼ diag ½l1 l2 . . . ln ;, where l1, l2,…, ln are characteristic roots of A., Applying the orthogonal transformation X = UY to, the quadratic form XTAX, we have, X T AX ¼ l1 y21 þ l2 y22 þ . . . þ ln y2n :, If the rank of A is r, then n-r characteristic roots are, zero and so, X T AX ¼ l1 y21 þ l2 y22 þ . . . þ lr y2r ;, where l1, l2,…, ln are non-zero characteristic roots., Definition 4.94 A square matrix B of order n over a, field F is said to be congruent to another square, matrix A of order n over F, if there exists a nonsingular matrix P over F such that B=PTAP., The relation of “congruence of matrices” is an, equivalence relation in the set of all nn matrices, over a field F. Further, let A be symmetric matrix, and let B be congruent to A. Therefore, there exists a, non-singular matrix P such that B=PTAP. Then, BT ¼ ðPT APÞT ¼ PT AT P, ¼ P AP; since A is symmetric, T, , ¼ B:, Hence, every matrix congruent to a symmetric, matrix is a symmetric matrix., Theorem 4.65 (Congruent reduction of a symmetric, matrix). If A is any n rowed non-zero symmetric, matrix of rank r over a field F, then there exists an n, rowed non-singular matrix P over F such that, PT AP ¼, , A1, 0, , 0, ;, 0, , where A1 is a non-zero singular diagonal matrix of, order r over F and each 0 is a null matrix of a, suitable size., Proof: We prove the theorem by induction. When, n = 1, r = 1 also. The quadratic form is simply, a11x12, a11 6¼ 0 and the identity transformation y1 =, x1 is the non-singular transformation. Suppose that, the theorem is true for all symmetric matrices of, order n 1, then we first show that there exists a, matrix B = [bij]n n over F congruent to A such that, b11 6¼ 0. We take up the following cases., Case I. If a11 6¼ 0, then we take B = A., Case II. If a11 = 0, but some diagonal element of A, say, aii 6¼ 0. Then using Ri R1, Ci C1 to A, we obtain a, matrix B congruent to A such that b11 = aii 6¼ 0., Case III. Suppose that each diagonal element of A is, zero. Since A is non-zero, there exists, non-zero, element aij such that aij = aji 6¼ 0. Applying the, congruent operation Ri ? Ri + Rj, Ci ? Ci + Cj to A,, we obtain a matrix D = [dij]n n congruent to A such, that dii = aij + aji = 2aij 6¼ 0. Now, applying the, congruent operation Ri ? R1, Ci C1 to D, we obtain, a matrix B = [bij]n n congruent to D and therefore, also congruent to A such that b11 = dii 6¼ 0. Hence,, there exists a matrix B = [bij] congruent to a symmetric matrix such that the leading element of B is, non-zero. Since B is congruent to a symmetric, matrix, therefore, B itself is symmetric. Since b11 6¼, 0, all elements in the first row and first column, except the leading element can be made zero by, suitable congruent operation. Thus we have a matrix, 2, , a11, 6 0, 6, C¼4, ..., 0, , 0, , ..., B1, , 0, , 3, 7, 7, 5, , congruent to B and, therefore, congruent to A such, that B is a square matrix of order n1. Further C is, congruent to a symmetric matrix and so C is also, symmetric. Consequently B1 is also a symmetric, matrix. 
By the induction hypothesis, B1 can be reduced to a diagonal matrix by congruent operations. So C can be reduced to a diagonal matrix by congruent operations. Thus, A is congruent to a diagonal matrix, say, diag[λ1 λ2 ... λk 0 0 ... 0]. Thus there
Matrices, , exists a non-singular matrix P such that, PT AP ¼ diag½l1 l2 . . . lk . . . 0 0 0 0:, Since (A) = r and we know that rank does not alter, by multiplying by a non-singular matrix, therefore,, rank of PTAP = diag[l1 l2 … lk … 0 0 0 0 ] is also r., So r elements of diag [l1 l2 … lk … 0 0 0 ] are nonzero. Thus, k = r and so, PT AP ¼ diag½l1 l2 . . . lr . . . 0 0 0 0:, Corollary 4.17 Corresponding to every quadratic form, XTAX over a field F, there exists a non-singular, linear transformation X = PY over F such that the, form XTAX transforms to, l1 y21 þ l2 y22 þ . . . þ lr y2r ;, where l1, l2, …, lr are scalars in F and r is the rank, of the matrix A., Definition 4.95 The rank of the symmetric matrix A is, called the rank of the quadratic form XTAX., EXAMPLE 4.69, Find a non-singular matrix P such that PTAP is a, diagonal matrix, where, 2, 3, 6 2, 2, A ¼ 4 2, 3 1 5:, 2 1, 3, Find the quadratic form and its rank., Solution., Write A =3IAI,2that is, 3 2, 2, 3, 6 2, 2, 1 0 0, 1 0 0, 4 2, 3 1 5 ¼ 4 0 1 0 5A4 0 1 0 5, 2 1, 3, 0 0 1, 0 0 1, Using congruent operations, we shall reduce, A to diagonal form. Performing congruent operations R2 ! R2 þ 13 R1 ; C2 ! C2 þ 13 C1 and R3 !, R3 13 R1 ; C3 ! C3 13 C1 ; we have, 3, 3 2, 2, 3 2, 6, 0, 0, 1 0 0, 1 13 13, 7, 15, 40, ¼ 4 13 1 0 5A4 0 1, 0 5:, 3 3, 1, 7, 1, 0 3, 0, 1, , 0 0, 1, 3, 3, Now performing congruent operation R3 !, R3 þ 17 R2 ; C3 ! C3 þ 17 C2 ; we have, 3 2, 3 2, 3, 2, 1 0 0, 6 0 0, 1 13 27, 60 7 07 6 1 1 07 6, 1 7:, 5 A4 0 1, 5¼4 3, 4, 3, 75, 16, 2, 1, 0 0 7, 7 7 1, 0 0, 1, Thus, diag 6, , 7 16, ¼ P1 AP;, 3 7, , where, , 2, , 1, 4, P¼ 0, 0, , 1, 3, , 27, , 1, 0, , 1, , n, , 4.63, , 3, , 1 5:, 7, , The quadratic form corresponding to the matrix A is, X T AX ¼ 6x21 þ 3x22 þ 3x23 4x1 x2 2x2 x3, þ 4x3 x1 :, , ð48Þ, , The non-singular transformation X = PY corresponding to the matrix P is, 32 3, 2 3 2, y1, x1, 1 13 27, 4 x2 5 ¼ 4 0 1 1 54 y2 5;, 7, x3, y3, 0 0 1, which yields, 1, 2, x1 ¼ y1 þ y2 y3, 3, 7, 1, x2 ¼ y2 þ y3, 7, x3 ¼ y3 :, Substituting these values in (48), we get, 7, 16, ðPY ÞT AðPY Þ ¼ 6y21 þ y22 þ y23 :, 3, 7, It contains a sum of three squares. Thus, the rank of, the quadratic form is 3., Theorem 4.66 Let A be any n-rowed real symmetric, matrix of rank r. Then there exists a real non-singular matrix P such that, PT AP ¼ diag½1 1 . . . 1 1 1 1 . . ., 1 0 0 0 . . . 0;, where 1 appears p times and 1 appears r - p times., Proof: Since A is a symmetric matrix of rank r, there, exists a non-singular real matrix Q such that, QT AQ ¼ diag½l1 l2 . . . lr . . . 0 0 0 0:, Suppose p of the non-zero diagonal elements are, positive and r p are negative. Then by using, congruence operations Ri Rj, Ci Cj, we can assume, that first p elements l1, l2, …, lp are positive and lp, +1, lp+2, …, lr are negative. Let, ", #, 1 1, 1, 1, 1, S ¼ diag pffiffiffiffiffi pffiffiffiffiffi . . . pffiffiffiffiffi pffiffiffiffiffiffiffiffiffiffiffiffiffi . . . pffiffiffiffiffiffiffiffi 111 :, l1 l2, lr, lp lpþ1
4.64, , n, , Engineering Mathematics-I, , Then S is non-singular and ST = S. Let P = QS. Then, P is also real non-singular matrix and we have, PT AP ¼ ðQS ÞT AðQS Þ ¼ S T QT AQS, ¼ S T ðdiag½l1 l2 . . . lr 0 . . . 0ÞS, ¼ Sðdiag½l1 l2 . . . lr 0 . . . 0ÞS, ¼ diag½1 1 . . . 1 1 1 . . . 1 0 . . . 0, so that 1 appears p times and –1 appears r p times., Corollary 4.18 If XTAX is a real quadratic form of rank, r in n variables, then there exists a real non-singular, linear transformation X = PY which transform XTAX, to the form, Y T PT APY ¼ y21 þ y22 þ . . . þ y2p y2pþ1 . . . y2r ;, which is called canonical form or normal form of a, real quadratic form., The number of positive terms in the normal form of, XTAX is called the index of the quadratic form,, whereas p (r p) = 2p r is called the signature, of the quadratic form and is usually denoted by s., A quadratic form XTAX with a non-singular matrix, A of order n is called positive definite if n = r = p,, that is, if n = rank = index. A quadratic form is, called positive semi-definite if r <n and r = p., Similarly a quadratic form is called negative definite if its index is zero and n = r and called negative, semi-definite if r <n and its index is zero., EXAMPLE 4.70, Find the rank, index, and signature of the quadratic, form x2 2y2 + 3z2 4yz + 6zx., Solution. The matrix of the given quadratic form is, 2, 3, 1, 0, 3, A ¼ 4 0 2 2 5:, 3 2, 3, Write A = IAI, that is,, 3 2, 1, 0, 3, 1 0, 4 0 2 2 5 ¼ 4 0 1, 3 2, 3, 0 0, 2, , 3 2, 0, 1 0, 0 5 A4 0 1, 1, 0 0, , 3, 0, 0 5:, 1, , Performing, congruence, operations, R3 !, R3 3R1 ; C3 ! C3 3C1 ; we get, 2, 3 2, 3 2, 3, 1, 0, 0, 1 0 0, 1 0 3, 4 0 2 2 5 ¼ 4 0 1 0 5A4 0 1, 0 5:, 0 2 6, 3 0 1, 0 0, 1, , Performing, congruence, operations, R3 !, R3 R2 ; C3 ! C3 C2 ; we have,, 2, 3 2, 3 2, 3, 1, 0, 0, 1, 0 0, 1 0 3, 4 0 2, 05 ¼ 4 0, 1 0 5A4 0 1 1 5:, 0, 0 4, 3 1 1, 0 0, 1, Performing R2 ! p1ffiffi2 R2 ; C2 ! p1ffiffi2 C2 ; and R3 !, p1ffiffi R3 ; C3 ! p1ffiffi C3 ; we get,, 4, 4, 3 2, 3, 2, 3 2 1, 0 0, 1 0 32, 1, 0, 0, 1, 7 6, 6, 1, 17, 4 0 1, 0 5 ¼ 4 0 pffiffi2 0 5A4 0 pffiffi2 2 5:, 1, 0, 0 1, 32 12 12, 0 0, 2, Hence X = PY transforms the given quadratic, form to y12 y22 y32 ., The rank of the quadratic form is 3 (the number, of non-zero terms in the normal form.), The number of positive terms is 1. Hence, the, index of the quadratic form is 1., We note that 2p r = 2 3 = 1. Therefore,, signature of the quadratic form is –1., , 4.33, , MISCELLANEOUS EXAMPLES, , 2, EXAMPLE 4.71, 2, Compute the inverse of 4 0, 5, mentary transformations., , 3, 1, 1 5 using ele3, , 1, 2, 2, , Solution. Write A ¼ I3 A, that is,, 2, 3 2, 2 1 1, 1 0, 40 2 1 5 ¼ 40 1, 5 2 3, 0 0, , 3, 0, 0 5 A:, 1, , We reduce the matrix on the L.H.S of the equation, to identity matrix by elementary row transformations, keeping in mind that each row transformation, will apply to I3 on the right hand side., Interchanging R1 and R3 , we get, 2, 3 2, 3, 5 2 3, 0 0 1, 4 0 2 1 5 ¼ 4 0 1 0 5A:, 2 1 1, 1 0 0, Performing R1 ! R1 2R3 , we, 2, 3 2, 1 0 1, 2, 40 2 1 5 ¼ 4 0, 2 1 1, 1, , get, 3, 0 1, 1 05 A, 0 0
Matrices, , Performing, 2, 1, 40, 0, , R3 ! R3 2R1 ; we, 3 2, 0 1, 2, 2 1 5¼4 0, 1 1, 5, , have, 0, 1, 0, , 3, , 1, 0 5A, 2, , Now performing R2 ! R2 R3 ; we get, 2, 3 2, 3, 1 0 1, 2 0 1, 4 0 1 0 5 ¼ 4 5 1 2 5 A, 0 1 0, 5 0 2, Now performing R3 ! R3 R2 ; we, 2, 3 2, 1 0 1, 2 0, 4 0 1 0 5 ¼ 4 5 1, 0 0 1, 10 1, Lastly,, 2, 1, 40, 0, , have, 3, 1, 2 5A, 4, , performing R1 ! R1 þ R3 , we have, 3 2, 3, 0 0, 8 1 3, 1 0 5 ¼ 4 5 1, 2 5 A: ¼ A1 A:, 0 1, 10 1 4, , Hence, , 2, , A1, , 8 1, ¼ 4 5 1, 10 1, , 3, 3, 2 5:, 4, , EXAMPLE 4.72, Using Cayley-Hamilton theorem, find A1 , given, the matrix, 2, 3, 13 3, 5, A¼4 0, 4, 0 5:, 15, 9 7, , 4.65, , n, , or, 1 2, ½A 10Aþ8I, 64 82, 3, 94 6 30, >, <, 1 6, 7, ¼, 4 0 16 0 5, 64 >, :, 90 18 26, 2, 3 2, 39, 13 3 5, 10 0 >, =, 6, 7 6, 7, 104 0 4 0 5 þ84 0 1 0 5, >, ;, 15 9 7, 00 1, 3, 2, 94130þ8 6þ30, 3050, 16, 7, ¼ 4, 0, 1640þ8, 0, 5, 64, 90þ150 1890 26þ70þ8, 3, 2, 28 24 20, 16, 7, ¼ 4 0 16 0 5, 64, 60 72 52, 3, 2, 7 6 5, 16, 7, ¼ 4 0 4 0 5:, 16, 15 18 13, , A1 ¼ , , EXAMPLE 4.73, For the matrix:, , 2, , 2, A ¼ 43, 1, , 1 3, 3, 1, 1, 1, , 3, 6, 2 5;, 2, , find non-singular matrices P and Q such that PAQ is, in the normal form. Hence find the rank of A., Solution. Write, , Solution. Proceeding as in Example 4.59, the characteristic equation is, , , 13 l 3, 5 , , 4l, 0 , jA lI j ¼ 0, 15, 9, 7 l , or, ð13 lÞð4 lÞð7 lÞ þ 75ð4 lÞ ¼ 0, or, l 10l þ 8l þ 64 ¼ 0:, 3, , 2, , By Cayley’s Hamilton theorem, we have, A3 10A2 þ 8A þ 64I ¼ 0, , A ¼ I3 AI4 ;, that is,, 2, 2, 3 2, 3 1, 2, 1 3 6, 1 0 0 6, 0, 4 3 3, 1, 2 5 ¼ 4 0 1 0 5 A6, 40, 1, 1, 1, 2, 0 0 1, 0, Performing elementary transformation, we get, 2, 2, 3 2, 3 1, 1 1, 1, 2, 0 0 1 6, 0, 4 3 3 1, 2 5 ¼ 4 0 1 0 5 A6, 40, 2 1 3 6, 1 0 0, 0, , 0, 1, 0, 0, , 0, 0, 1, 0, , 3, 0, 07, 7:, 05, 1, , R1 $ R 3 ,, 0, 1, 0, 0, , 0, 0, 1, 0, , 3, 0, 07, 7, 05, 1
4.66, , n, , Engineering Mathematics-I, , Performing R2 ! R2 3R1 and R3 ! R3 2R1 ,, we get, 2, , 3, 0, 7, 4 5, 0 1 5 10, 2, 2, 3 1, 0 0 1, 6, 6, 7 60, ¼ 4 0 1 3 5A6, 40, 1 0 2, 0, , 1, 6, 40, , 0, 6, , 0, 2, , 0, , 0 0, , 3, , 1, 0, , 0 07, 7, 7:, 1 05, , 0, , 0 1, , Performing C2 ! C2 C1 ; C3 ! C3 C1 ; C4 !, C4 2C1 , we get, 2, , 3, 0, 7, 4 5, 0 1 5 10, 2, 2, 3 1 1, 0 0 1, 6, 6, 7 60 1, ¼ 4 0 1 3 5A6, 40 0, 1 0 2, 0 0, , 1, 6, 40, , 0, 6, , 0, 2, , 3, 1 2, 0, 0 7, 7, 7:, 1, 0 5, 0, , 1, , Now performing R3 $ R2 , we have, 2, , 1, 6, 40, , 0, 1, , 0, 5, , 3, 0, 7, 10 5, , 6 2 4, 2, 2, 3 1 1, 0 0 1, 6, 6, 7 60 1, ¼ 4 1 0 2 5A6, 40 0, 0 1 3, 0 0, , 0, , 3, 1 2, 0, 0 7, 7, 7:, 1, 0 5, 0, , 1, , Performing R2 ! R2 , we get, 2, , 3, 0, 7, 10 5, 0 6 2 4, 2, 2, 3 1, 0 0 1, 6, 6, 7 60, ¼ 4 1 0 2 5A6, 40, 0 1 3, 0, , 1, 6, 40, , 0, 1, , 0, 5, , 1, 1, , 1, 0, , 0, 0, , 1, 0, , 3, 2, 0 7, 7, 7:, 0 5, 1, , Now performing R3 ! R3 þ 6R2 , we get, 2, 3, 1 0 0, 0, 6, 7, 4 0 1 5 10 5, 0 0 28 56, 2, 3 1 1 1, 2, 0 0 1 6, 0, 7 60 1, 6, ¼ 4 1 0 2 5A6, 40 0, 1, 6 1 9, 0 0, 0, , 2, , 3, , 0 7, 7, 7:, 0 5, 1, , Performing C3 ! C3 5C2 and C4 ! C4 10C2 ,, we get, 2, 3, 1 0 0, 0, 6, 7, 0 5, 40 1 0, 0 0 28 56, 2, 3, 2, 3 1 1 4, 8, 0 0 1 6, 7, 6, 7 6 0 1 5 10 7, ¼ 4 1 0 2 5A6, 7:, 40 0, 1, 0 5, 6 1 9, 0 0, 0, 1, 1, Performing R4 ! 28, R4 , we get, 2, 3, 1 0 0 0, 6, 7, 40 1 0 05, 0 0 1 2, 2, 3 1 1, 2, 0, 0 1, 6, 7 60 1, 6, ¼ 4 1 0 2 5A6, 40 0, 3, 1, 9, 14, 28 28, 0 0, , 4, 5, 1, 0, , Performing C4 ! C4 2C3 , we get, 3, 2, 1 0 0 0, 6, 7, 40 1 0 05, 0 0 1 0, 2, 2, 3 1 1 4, 0, 0 1, 6, 6, 7 6 0 1 5, ¼ 4 1 0 2 5A6, 40 0, 1, 3, 1, 9, 14, 28 28, 0 0, 0, or, ½I3 0 ¼ PAQ;, , 8, , 3, , 10 7, 7, 7:, 0 5, 1, , 3, 8, 0 7, 7, 7, 2 5, 1
Matrices, , where, , 0, 0 1, 6 1 0 2 7, P¼4, 5 and, 3, 1, 9, 14 28 28, 2, 3, 1 1 4, 0, 6 0 1 5 0 7, 6, 7, Q¼6, 7, 40 0, 1 2 5, 0, , 0, , 0, , 4.67, , 2, , 3, , 2, , n, , 1, , Also ðAÞ ¼ 3, 2, 3, EXAMPLE 4.74, 3 1 2, (a) Find the rank of the matrix 4 6, 2 4 5 by, 3, 1 2, reducing it to the normal, form, 2, 3, 1 1, 2, (b) For the matrix A = 4 1 2, 3 5,find non0 1 1, singular matrices P and Q such that PAQ is in, the normal form. Hence find the rank of A., (c) Reduce the following matrix to column echelon, and find its rank:, 3, 2, 1, 1 1 1, 6 1 1 3 3 7, 7:, A¼6, 4 1, 0, 1, 2 5, 1 1 3, 3, (d) Find all values of m for which rank of the matrix, 2, 3, m 1 0, 0, 6 0, m 1 0 7, 7, A¼6, 4 0, 0, m 1 5, 6 11 6 1, is equal to 3., Solution. (a) We have, 2, 3 2, 3, 3 1 2, 1 1 2, 6, 7 6, 7, A ¼ 4 6 2 4 5 4 10 2 4 5C1 ! C1 C3, 3 1 2, 5 1 2, 3, 2, 1 1 2, 7 R2 ! R2 þ 10R1, 6, 4 0 8 24 5, R3 ! R3 þ 5R1, 0 4 12, 3, 2, 1 1 2, 1, 7, 6, 4 0 1 3 5R2 ! R2, 8, 0 4 12, , 3, 1 1 2, 4 0 1 3 5R3 ! R3 þ 4R2, 0 0, 0, 2, 3, 1 0 0, C2 ! C2 þ C1, 4 0 1 3 5, C3 ! C3 C1, 0 0 0, 2, 3, 1 0 0, 4 0 1 0 5C3 ! C3 þ 3C2, 0 0 0, I2 0, (normal form), ¼, 0 0, Hence ðAÞ ¼ 2., (b) Expressing the given matrix in the form, A ¼ I3 AI3 , we have, ", # ", #", #", #, 1 1 2, 1 0 0 1 1 2 1 0 0, 1 2 3 ¼ 0 1 0 1 2 3 0 1 0 :, 0 1 1, 0 0 1 0 1 1 0 0 1, Using the elementary transformation R2 ! R2 R1 ,, we get, ", #", # ", #, 0, 1, 2, 1 0 0, 1 0 0, 0, 1, 1 1 1 0 A 0 1 0, 0 1 1, 0 0 1, 0 0 1, Using the elementary column transformations C2 !, C2 C1 and C3 ! C3 2C1 ; we have, ", # ", # ", #, 1, 0 0, 1 0 0, 1 1 2, 0, 1 1 ¼ 1 0 0 A 0, 1, 0, 0 1 1, 0 0 1, 0, 0, 1, Operating R3 ! R3 þ R2 ; we get, ", # ", # ", 1 0 0, 1 0 0, 1 1, 0 1 1 ¼ 1 0 0 A 0, 1, 0 0 0, 1 0 1, 0, 0, , 2, 0, 1, , Now operating C3 ! C3 C2 ; we have, ", # ", # ", 1 0 0, 1 0 0, 1 1, 0 1 0 ¼ 1 0 0 A 0, 1, 0 0 0, 1 0 1, 0, 0, , 1, 1, 1, , #, , #, , or, I2, 0, where, , 2, , 1 0, P ¼ 4 1 0, 1 0, , 0, ¼ PAQ;, 0, , 3, 2, 0, 1, 0 5 and Q ¼ 4 0, 1, 0, , 3, 1 1, 1 1 5:, 0, 1
4.68, , n, , Engineering Mathematics-I, , Since elementary transformations do not alter the, rank of a matrix,, I2, 0, , ðAÞ ¼, , 0, ¼ 2:, 0, , (c) A Matrix is said to be in column echelon form if, (i) The first non-zero entry in each non-zero, column is 1., (ii) The column containing only zeros occurs, next to all non-zero columns., (iii) The number of zeros above the first nonzero entry in each column is less than the, number of such zeros in the next column., The given matrix is, 2, 3, 1, 1 1 1, 6 1 1 3 3 7, 6, 7, A¼6, 7, 4 1, 0, 1, 2 5, 1, 1, 6 1, 6, 6, 4 1, 2, , 2, , 1, 1, , 6 1, 6, 6, 4 1, 2, , 1, 1, , 6 1, 6, 6, 4 1, 2, , 1, 0, 2, 1, 2, 0, 2, 1, 2, 0, 2, 1, , 3, 0, 4, , 3, 3, 0, C2 ! C2 C1, 2 7, 7, 7 C3 ! C3 þ C1, 2, 1 5, C4 ! C4 C1, 4, 2, 3, 0 0, 4 0 7, 7, 7C4 ! C4 þ C2, 2 05, 4 0, 3, 0 0, 0 07, 7, 7C3 ! C3 þ 2C2, 0 05, , 1, 1, , 2, 0, , 0 0, 3, 0 0, 0 07, 1, 7, 7R2 ! R2 ;, 5, 2, 0 0, , 1, , 1, , 0, , 61 1, 6, 6 2, 4 1 12, , 0, , which is column echelon form. The number of nonzero column is two and therefore ðAÞ ¼ 2., (d) Similar to Remark 4.4, We are given that, 3, 2, m 1 0, 0, 6 0, m 1 0 7, 7, A¼6, 4 0, 0, m 1 5, 6 11 6 1, , Therefore, , , m, , , j Aj ¼ m 0, , 6, , 1, m, 11, , , , 0, 0 , , , , 1 þ 1 0, , , 6, 6 , , 1, m, 6, , , 0 , , 1 , , 1 , , ¼ m3 6m2 þ 11m 6, ¼ 0 if m ¼ 1; 2; 3:, For m ¼ 3, we have, 2, 3, 6 0, 6, 4 0, 6, , the singular matrix, 3, 1 0, 0, 3 1 0 7, 7;, 0, 3 1 5, 11 6 1, , which has non-singular sub-matrix, 2, 3, 3 1 0, 4 0 3 1 5:, 0 0, 3, Thus for m ¼ 3, the rank of the matrix A is 3., Similarly, the rank is 3 for m ¼ 2 and m ¼ 1. For, other values of m, we have j Aj 6¼ 0 and so ðAÞ ¼ 4, for other values of m., EXAMPLE 4.75, Solve the system of equations :, xþyþz¼6, x y þ 2z ¼ 5, 3x þ y þ z ¼ 8, 2x 2y þ 3z ¼ 7, Solution. The augmented matrix is, 3, 2, 1, 1 1 6, 7, 6, ½A : B ¼ 4 1 1 2 5 5, 3, 1 1 8, 3, 2, 1, 1, 1, 6, 7 R 2 ! R2 R1, 6, 4 0 2, 1, 1 5, R3 ! R3 3R1, 0 2 2 10, 3, 2, 1, 0, 0, 6, 7 C2 ! C2 C1, 6, 4 0 2, 1, 1 5, C3 ! C3 C1, 0 2 2 10, 3, 2, 1, 0, 0, 6, 7, 6, 40, 1 2, 1 5 C2 $ C3, 0 2 2 10
Matrices, , 2, , 1, , 0, , 0, , 6, , 3, , 6, 7, 4 0 1 2, 1 5 R3 ! R3 þ 2R1, 0 0 6 12, 3, 2, 1 0, 0, 6, 7, 6, 4 0 1 2 1 5 R3 ! R3 þ 3R2, 0 0, 0 9, It follows that (A) = 2 and ½A : B ¼ 3. Hence the, given equation is inconsistent., EXAMPLE 4.76, Discuss the consistency of the system of equations:, 2x 3y þ 6z 5w ¼ 3; y 4z þ w ¼ 1;, 4x 5y þ 8z 9w ¼ l, for various values of l. If consistent, find the, solution., Solution. The matrix equation is AX ¼2B, 3, where, x, 2, 3, 2 3 6 5, 6y 7, 6 7, A ¼ 4 0 1 4 1 5; X ¼ 6 7 and, 4z 5, 4 5 8, 9, w, 2 3, 3, B ¼ 4 1 5:, l, The augmented, matrix is, 2, 3, 2 3 6 5 3, ½A : B ¼ 4 0 1 4 1 1 5, 4 5 8, 9 l, 2, 3, 2 3 6 5, 3, 4 0 1 4 1, 1 5R3 ! R3 2R1, 0 1 4 1 l 6, 2, 3, 2 3 6 5, 3, 4 0 1 4 1, 1 5R3 ! R3 R2, 0 0, 0, 0 l7, We note that ðAÞ ¼ ½A : B if l 7 ¼ 0; that is,, if l ¼ 7: Thus the given equation is consistent if, l ¼ 7: Thus if l ¼27; than we have, 3, 2 3 6 5 3, ½A : B ¼ 4 0 1 4 1 1 5, 0 0, 0, 0 0, and so the given system of equations is equivalent to, 2x 3y þ 6z 5v ¼ 3, y 4z þ v ¼ 1:, , n, , 4.69, , Therefore if w ¼ k1 ; z ¼ k2 ; then y ¼ 1 þ 4k2 k1, and x ¼ 3 þ 3k2 þ k1 : Hence the general solution of, the system is x ¼ 3 þ 3k2 þ k1 ; y ¼ 1 þ 4k2 k1 ;, z ¼ k2 ; w ¼ k1 ., EXAMPLE 4.77, Test for consistency the following set of equations, and solve if it is consistent: 5x þ 3y þ 7z ¼ 4;, 3x þ 26y þ 2z ¼ 9; 7x þ 2y þ 10z ¼ 5:, Solution. The augmented matrix is, 2, 3, 5 3, 7 4, 6, 7, ½A : B ¼ 4 3 26 2 9 5, 2, , 7, 15, , 6, 4 15, , 2, 9, 130, , 7, , 2, , 15, 6, 4 0, , 9, , 2, , 121, , 10 5, 21, , 12, , 3, , 7 R1 ! 3R1, 45 5, R2 ! 5R2, 10 5, 3, 21 12, 7, 11 33 5R2 ! R2 R1, 10, , 7, , 2, , 35, , 21, , 49, , 6, 4 0, , 11, , 1, , 28 R1 ! 73 R1, 7, 3 5 R2 ! 14 R2, , 35, , 10, , 50, , 25, , 2, , 2, , 35, 6, 4 0, 2, , 0, , 35, 6, 4 0, , 10, , 5, 3, , R3 ! 5R3, 3, , 21, , 49, , 28, , 11, , 1, , 7, 3 5 R3 ! R3 R 1, , 11, , 1, , 21, , 49, , 11, , 1, , 3, 3, 28, 7, 3 5R3 ! R3 þ R2 :, , 0, 0, 0, 0, We observe that, ðAÞ ¼ 2; ð½A : BÞ ¼ 2;, and so ðAÞ ¼ ð½A : BÞ: Hence the given system, of equation is consistent. Further, the given system, is equivalent to, 35x þ 21y þ 49z ¼ 28, 11y z ¼ 3;, 7, which yield y ¼ 3þz, and, x ¼ 11, 16, 11, 11 z:, Taking z ¼ 0, we get a particular solution as, x¼, , 7, 3, ; y ¼ ; z ¼ 0:, 11, 11
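The rank test applied in Examples 4.75 to 4.77 can be carried out with numpy.linalg.matrix_rank. The sketch below, in Python with NumPy, re-checks Example 4.77: the coefficient matrix and the augmented matrix both have rank 2, so the system is consistent with one free unknown, and the particular solution found above satisfies it.

    import numpy as np

    # Example 4.77: 5x + 3y + 7z = 4, 3x + 26y + 2z = 9, 7x + 2y + 10z = 5
    A = np.array([[5.,  3.,  7.],
                  [3., 26.,  2.],
                  [7.,  2., 10.]])
    b = np.array([4., 9., 5.])

    rank_A  = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
    print(rank_A, rank_Ab)              # 2 2 -> consistent, with 3 - 2 = 1 free unknown

    # Particular solution obtained in the text by taking z = 0
    x = np.array([7/11, 3/11, 0.])
    print(np.allclose(A @ x, b))        # True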
Matrices, , Form the theory of equations, the sum of the roots, l1 ; l2 ; :::; ln is equal to negative of the coefficient, of ln1 . Hence, l1 þ l2 þ ::: þ ln ¼ a11 þ a22 þ ::: þ ann, ¼ Trace A:, EXAMPLE 4.80, 1, (a) Find, the eigen, 2, 3 values of A if the matrix A is, 2 5 1, 4 0 3 2 5., 0 0 4, (b) Find the eigenvalues and the corresponding vectors of the matrix, 2, 3, 1 0 0, A ¼ 4 0 2 1 5:, 2 0 3, Solution. (a) By example 4.57, the eigenvalues of, triangular matrix are the diagonal elements. Hence, the eigenvalues of A are 2, 3 and 4. Since the, eigenvalues of A1 are multiplicative inverses of, the eigenvalues of the matrix A, the eigenvalues of, A1 are 12 ; 13 and 14., (b) We have, 2, 3, 1 0 0, A ¼ 4 0 2 1 5:, 2 0 3, The characteristic equation of A is, , , 1 l, 0, 0 , , j A lI j ¼ 0, 2l, 1 ¼ 0:, 2, 0, 3l, or, l3 6l2 þ 11l 6 ¼ 0;, which yields l ¼ 1; 2; 3. Hence the characteristic, roots are 1, 2 and 3., The eigenvector corresponding to l ¼ 1 is, given by ðA IÞX ¼ 0, that is, by, 2, 32 3 2 3, 0, 0 0 0, x1, 4 0 1 1 54 x2 5 ¼ 4 0 5:, 2 0 2, 0, x3, Thus, we have, x2 þ x3 ¼ 0;, 2x1 þ 2x3 ¼ 0:, , n, , 4.71, , Hence x1 ¼ x2 ¼ x3 . Taking x3 ¼ 1, we get the, vector, 2, 3, 1, X1 ¼ 4 1 5:, 1, The eigenvector corresponding to the eigenvalue 2, is given by2ðA 2IÞX ¼, 0, that, 3 is,2by 3, 32, 0, 1 0 0, x1, 4 0 0 1 5 4 x2 5 ¼ 4 0 5 :, x3, 0, 2 0 1, This equation 2, yields, 3, 0, X2 ¼ 4 1 5 as one of the vector., 0, Similarly, the eigenvector corresponding to l ¼ 3 is, given by 2, ðA 3IÞX ¼ 03or2by 3 2 3, 0, 2 0 0, x1, 4 0 1 1 5 4 x2 5 ¼ 4 0 5;, 0, 22 3, 0 0, x2, 0, 4 1 5 as one of the solution. Hence, which2yields, 3, 1, 0, X3 ¼ 4 1 5., 1, EXAMPLE 4.81, Find the sum and product of the eigen values of the, matrix:, 2, 3, 1 2 3 4, 62 1 5 67, 6, 7, 47 4 3 25, 4 3 0 5, Solution. The given matrix, is, 3, 2, 1 2 3 4, 62 1 5 67, 7, A¼6, 4 7 4 3 2 5:, 4 3 0 5, The sum of the eigenvalues is the trace (spur) of the, matrix and so the sum is 1 þ 1 þ 3 þ 5 ¼ 10., Product of the eigenvalues is equal to j Aj., Expanding j Aj, we get the product as 262., 2, 3, EXAMPLE 4.82, 7 4 4, One of the eigenvalues of 4 4 8 1 5 is 9:, 4 1 8, Find the other two eigenvalues.
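The relations used in Example 4.81, namely that the sum of the eigenvalues equals the trace and their product equals the determinant, are easy to confirm numerically. The sketch below, in Python with NumPy, assumes only the 4 x 4 matrix of that example.

    import numpy as np

    # Matrix of Example 4.81
    A = np.array([[1., 2., 3., 4.],
                  [2., 1., 5., 6.],
                  [7., 4., 3., 2.],
                  [4., 3., 0., 5.]])

    w = np.linalg.eigvals(A)
    print(np.trace(A), round(np.linalg.det(A)))           # 10.0 262
    print(np.isclose(w.sum().real, np.trace(A)))          # sum of eigenvalues = trace
    print(np.isclose(np.prod(w).real, np.linalg.det(A)))  # product of eigenvalues = |A|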
Matrices, , where, , 2, 2, , 3, , 6, y1, 6, 6, 6 7, Y ¼ 4 y2 5; P ¼ 6, 6, 6, y3, 4, 2, , 3, , 1, 3, , 2, 3, , 2, 3, , 1, 3, , 2, 3, , 23, , Thus, , 3, , 2, 3, , 7, 7, 7, 2 7, 3 7 and, 7, 5, 1, 3, , x1, 6 7, X ¼ 4 x2 5:, x3, The transformation Y ¼ PX will be orthogonal if, PT P ¼ I. To show it, we observe that, 32 1, 3, 21, 2, 2, 2, 2, 6, 6, 6, PT P ¼ 6, 6, 6, 4, 2, , 3, , 3, , 2, 3, , 1, 3, , 2, 3, , 23, , 1, , 0 0, , 0, , 0 1, , 6, ¼ 40, , 3, , 76, 76, 76, 6, 23 7, 76, 76, 54, 3, , 1, 3, , 3, , 3, , 3, , 2, 3, , 1, 3, , 2, 3, , 23, , 7, 7, 7, 23 7, 7, 7, 5, 1, 3, , 7, 1 0 5 ¼ I:, , Hence Y ¼ PX is orthogonal., 0, EXAMPLE 4.86, 2 0, Diagonalise the matrix A ¼ @ 0 3, 1 0, an orthogonal transformation., , n, , 4.73, , 2 3, 1, X1 ¼ 4 0 5:, 1, , 2 3, x, Let X2 ¼ 4 y 5 be another eigenvector of A correz, sponding to the eigenvalue 3 and orthogonal to X1 ., Then, x y ¼ 0; because it satisfies equation (30), and, x þ z ¼ 0; using X2h X1 ¼ 0:, Obviously x ¼ 1; z ¼ 1 is a solution. Therefore, 2, 3, 1, X2 ¼ 4 1 5:, 1, Further eigenvector corresponding to l ¼ 1 is given, by ðA IÞX ¼ 0; that is , by, 2, 32 3, 1 0 1, x1, 4 0 2 0 5 4 x2 5 ¼ 0:, x3, 1 0 1, This equation yields, , 1, 1, 0 A through, 2, , Solution. We shall proceed as in Example 4.66. The, characteristic equation of A is, , , 2 l, 0, 1 , , 3l, 0 ¼ 0, jA lI j ¼ 0, 1, 0, 2l, or, ð2 lÞð3 lÞð2 lÞ ð3 lÞ ¼ 0, ð3 lÞ½ð2 lÞ2 1 ¼ 0, ð3 lÞ½4 þ l 4l 1 ¼ 0, , x1 þ x3 ¼ 0;, , 2x2 ¼ 0;, , x1 þ x3 ¼ 0:, , Thus x1 ¼ 1; x2 ¼ 0; x3 ¼ 1 and so, 2, 3, 1, X3 ¼ 4 0 5:, 1, Length (norm), of, the, vectors, X1 ; X2 ; X3 are, pffiffiffi pffiffiffi pffiffiffi, respectively 2; 3; 2. Hence the orthogonal, matrix is, 3, 2 1, pffiffi, p1ffiffi, p1ffiffi, 2, 3, 2, 6, p1ffiffi, 0 7, P¼4 0, 5, 3, p1ffiffi p1ffiffi p1ffiffi, 2, , 3, , 2, , and, PT AP ¼ dig½ 3, , 3, , 1 :, , 2, , ð3 lÞðl2 4l þ 3 ¼ 0, ) l ¼ 3; 3; 1:, The ch. vector corresponding to l ¼ 3 is given by, x1 þ x3 ¼ 0, ð30Þ, x1 x3 ¼ 0, , 2, 3, EXAMPLE 4.87, 3 1 1, (a) Show that the matrix A ¼ 4 2 1 2 5, is, 0 1 2, diagonalizable. Hence, find P such that P1 AP is, a diagonal matrix. Also obtain the matrix, B ¼ A2 þ 5A þ 3I.
4.74, , n, , Engineering Mathematics-I, , (b) Find a matrix P which diagonalizes the matrix, 4 1, A¼, , verify P1 AP ¼ D, where D is the, 2 3, 2, 3, diagonal matrix., 3 1 1, (c) Show that the matrix A ¼ 4 2 1 2 5 is, 0 1 2, diagonalizable. Hence, find P such that P1 AP is a, diagonal matrix., Solution. (a) The characteristic equation for the given, matrix A is, , , 3 l, 1, 1 , , 2 ¼ 0;, jA lI j ¼ 2 1 l, 0, 1, 2 l, that is,, ð3 lÞðl2 3lÞ þ 4 2l þ 2 ¼ 0, or, , 2x1 x2 þ 2x3 ¼ 0, 0x1 þ x2 ¼ 0, Clearly x1 ¼ 1; x2 ¼ 0; x3 ¼ 1 is a solution. Hence, the eigenvector corresponding to l ¼ 2 is ½1 0 1T ., For l ¼ 3, we have, x2 x3 ¼ 0, 2x1 2x2 þ 2x3 ¼ 0, x2 x3 ¼ 0, Taking x3 ¼ 1, we get x2 ¼ 1 and x1 ¼ 1. Thus the, eigenvector corresponding to l ¼ 3 is ½0 1 1T ., Hence, 2, 3, 1 1 0, P ¼ 4 1 0 1 5, 1 1 1, P1 AP ¼ diag½ 1, , By inspection, l ¼ 1 is a root. The reduced equation, is l2 5l þ 6 ¼ 0, which yields l ¼ 2; 3:, Since all characteristic roots are distnict, the, given matrix A is diagonalizable. To find the nonsingular matrix P satisfying P1 AP ¼ diagð1; 2; 3Þ,, we proceed as follows:, The characteristic vectors are given by, (A lIÞX ¼ 0, that is, by, 2, 32 3 2 3, 3l, 1, 1, x1, 0, 4 2 1 l, 2 5 4 x2 5 ¼ 4 0 5, x3, 0, 1, 2l, 0, 9, ð3 lÞx1 þ x2 x3 ¼ 0, =, 2x1 þ ð1 lÞx2 þ 2x3 ¼ 0, ;, 0x1 þ x2 þ ð2 lÞx3 ¼ 0, , x1 þ x2 x3 ¼ 0, , and, , l3 6l2 þ 11l 6 ¼ 0:, , and so, , For l ¼ 2, we get from (31),, , ð31Þ, , For l ¼ 1, we get, 2x1 þ x2 x3 ¼ 0, 2x1 þ 2x3 ¼ 0, x2 þ x3 ¼ 0, We note that x1 ¼ 1; x2 ¼ 1; x3 ¼ 1 satisfy these, equations. Hence the eigenvector corresponding to, l ¼ 1 is ½ 1 1 1T ., , 2 3 ¼ D; say, , ð32Þ, , Premultiplication by P and postmultiplication by, P1 reduces (32) to, A ¼ P D P1 :, Further,, An ¼ P Dn P1 :, Thus, A2 ¼ P D2 P1 :, But, , 2, , 3, 1 0 0, 6, 7, D ¼ 4 0 2 0 5;, , 2, , 3, 1 0 0, 6, 7, D2 ¼ 4 0 4 0 5, , 0 0 9, 0 0 3, 2, 3, 2, 3, 1 1 0, 1 1 1, 6, 7, 6, 7, A2 ¼ 4 1 0 1 5 and P1 ¼ 4 2, 1 1 5:, 1 1 1, 1 0, 1, Putting these values in B ¼ A2 þ 5A þ 3I, we get, 2, 3, 25 8 8, B ¼ A2 þ 5A þ 3I ¼ 4 18 9 18 5:, 2 8 19, (b) We have, A¼, , 4 1, 2 3
4.76, , n, , Engineering Mathematics-I, , Solution. The given quadratic form can be written as, , Thus, x1 þ x2 x3 ¼ 0, 2x1 x2 þ 2x3 ¼ 0, x2 ¼ 0, For this system x1 ¼ 1; x2 ¼ 0; x3 ¼ 1 is a solution., Therefore, 2 3, 1, X2 ¼ 4 0 5 :, 1, An eigenvector corresponding to l ¼ 3 is given by, ðA 3IÞX ¼ 0, that is, by, 2, 32 3 2 3, 0, x1, 0, 1 1, 4 2 2 2 5 4 x2 5 ¼ 4 0 5, x3, 0, 0, 1 1, Thus, , x2 x3 ¼ 0, 2x1 2x2 þ 2x3 ¼ 0, x2 x3 ¼ 0;, , which yields x1 ¼ 0; x2 ¼ 1; x3 ¼ 1 as one of the, solution. Thus, 2 3, 0, X3 ¼ 4 1 5 :, 1, Therefore the transforming matrix is, 2, 3, 1 1 0, P ¼ 4 1 0 1 5, 1 1 1, and so the diagonal matrix is, 3, 2, 32, 3 1 1, 1 1 1, P1 AP ¼ 4 2, 1 1 5 4 2 1 2 5, 0 1 2, 1 0, 1, 2, 3, 1 1 0, 4 1 0 1 5, 1 1 1, 2, 3, 1 0 0, ¼ 4 0 2 0 5:, 0 0 3, EXAMPLE 4.88, Reduce the quadratic form x2 þ 5y2 þ z2 þ 2xy þ, 6zx þ 2yz to a canonical form through an, Orthogonal transformation., , x2 þ xy þ yx þ 5y2 þ yz þ yz þ z2 þ 3zx þ 3xz:, The matrix of the quadratic form is, 2, 3, 1 1 3, A ¼ 4 1 5 1 5:, 3 1 1, Write A ¼ IAI; that is,, 2, 3 2, 1 1 3, 1, 41 5 15 ¼ 40, 3 1 1, 0, , 0, 1, 0, , 3 2, 0, 1, 0 5 A4 0, 1, 0, , 3, 0 0, 1 0 5:, 0 1, , Using congruent operations R2 ! R2 R1 ; C2 ¼, C2 C1 and R3 ! R3 3R1 ; C3 ! C3 C1 ; we, get, 2, 3 2, 3 2, 3, 1 0, 0, 1 0 0, 1 1 3, 4 0 4 2 5 ¼ 4 1 1 0 5 A 4 0 1, 0 5:, 0 2 8, 3 0 1, 0 0, 1, Now performing congruent operation R3 !, R3 þ 12 R3 ; C3 ! C3 þ 12 C2 ; we get, 3, 3 2, 2, 3 2, 1 0 0, 1 0 0, 1 1 72, 1 5:, 4 0 4 0 5 ¼ 4 1 1 0 5 A 4 0 1, 2, 7 1, 2 2 1, 0 0 9, 0 0, 1, Thus, diag ½1 4 9 ¼ P1 AP;, where, , 2, , 3, 1 1 72, 1 5:, P ¼ 40 1, 2, 0 0, 1, , Hence the required canonical form is, x2 þ 4y2 9z2 :, EXAMPLE 4.89, Reduce the quadratic form x2 þ y2 þ z2 2xy , 2yz 2zx to canonical form through an orthogonal, transformation., Solution. The matrix of the given quadratic form is, 2, 3, 1 1 1, A ¼ 4 1, 1 1 5:, 1 1, 1
Matrices, , The characteristic equation of A is, , , 1 l 1, 1 , , jA lIj ¼ 1 1 l 1 ¼ 0, 1, 1 1 l , or, , , ð1 lÞ l2 2l þ l 2 2 þ l ¼ 0, , Hence, , 2, P¼, , p1ffiffi, 3, 6 p1ffiffi, 4 3, p1ffiffi, 3, , 2ffiffi, p, 6, p1ffiffi, 6, p1ffiffi, 6, , n, , 4.77, , 3, 0, p1ffiffi2 7, 5, p1ffiffi, 2, , and, PT AP ¼ diag½1 2 2:, , or, l3 3l2 þ 4 ¼ 0;, which yields l ¼ 1; 2; 2:, The eigenvectors will be given by, ðA lIÞX ¼ 0, which implies, 2, 3 2 3, 2 3, 1 l 1, 1, x, 0, 4 1 1 l 1 5 4 y 5 ¼ 4 0 5, 1, 1 1 l, z, 0, or, ð1 lÞ x y z ¼ 0;, x þ ð1 lÞy z ¼ 0, , and, , x y þ ð1 lÞz ¼ 0:, For l ¼ 1; we have, 2x y z ¼ 0; x þ 2y z ¼ 0, , and, , x y þ 2z ¼ 0:, Solving these equations, we get x ¼ y ¼ z ¼ 1 and, so the eigenvector is ½1 1 1T . Its normalized form, h, iT, is p1ffiffi p1ffiffi p1ffiffi :, 3, , 3, , 3, , Corresponding to l ¼ 2; we have x y , z ¼ 0; x y z ¼ 0 and x y z ¼ 0: We, note that x ¼ 2; y ¼ 1; z ¼ 1 is a solution. Thus, the eigenvector is ½2 1 1T . and its normalized, h, iT, 2ffiffi, p1ffiffi ; p1ffiffi : To find the second vector,, form is p, ;, 6, 6, 6, , EXERCISES, 1. Show that the subset {x2 1, x + 1, x 1} of, the vector space of polynomials is linearly, independent., 2. Show that the subset {(1, 1, 1, 0), (3, 2, 2, 1), (1,, 1, 3, 2), (1, 2, 6, 5), (1, 1, 2, 1)} of V4 is, linearly dependent., 3. Show that the subset {(0, 0, 1), (1, 0, 1), (1, 1,, 1), (3, 0, 1)} is not a basis for V3., 4. Show that the subset (1, x, (x1)x, x(x 1), (x 2)} form a basis for vector space of polynomials of degree 3., 5. Show that, 2 3, 2 3, 1, 0, e1 ¼ 4 0 5 and e2 ¼ 4 1 5, 0, 0, form a linearly independent set and describe its, linear span geometrically., Solution. We note that, ae1 þ be2 ¼ 0, implies, , we have, x y z ¼ 0, and, 2x þ y þ z ¼ 0, , using X2h X1 ¼ 0:, , We note that x ¼ 0; y ¼ 1, h and z ¼ 1iis a solution., 1ffiffi, p1ffiffi :, The normalized vector is 0; p, 2, , 2, , implies, , 2 3, 2 3 2 3, 1, 0, 0, a4 0 5 þ b 4 1 5 ¼ 4 0 5, 0, 0, 0, , 2 3 2 3 2 3, a, 0, 0, 405 þ 4b5 ¼ 405, 0, 0, 0
4.78, , n, , Engineering Mathematics-I, , implies, , 9. Find the adjoint of the matrix, 2, 3, 1 1, 1, A ¼ 4 1 2 3 5, 2 1 3, , 2 3 2 3, a, 0, 4b5 ¼ 405, 0, 0, , Consequently, a ¼ b ¼ 0. Hence {e1 e2 } form, a linearly independent set., Further, the linear span, 2 of, 3 {e1 e2 } is the set, a, of all vectors of the form 4 b 5 which is nothing, 0, but ðx; yÞ plane and is a subset of the three, dimensional Euclidean space., 6. Show that the vectors, 2 3, 2, 3, 2, 3, 1, 0, 2, v1 ¼ 4 1 5; v2 ¼ 4 1 5; and v3 ¼ 4 1 5, 1, 2, 8, are linearly dependant., Solution. We note that the relation., 2 3, 2, 3, 2, 3 2 3, 1, 0, 2, 0, a1 4 1 5 þ a2 4 1 5 þ a3 4 1 5 ¼ 4 0 5, 1, 2, 8, 0, is satisfied if we choose a1 ¼ 2; a2 ¼ 3 and, a3 ¼ 1. Thus a1 v1 þ a2 v2 þ a3 v3 ¼ 0 is satisfied by a1 ; a2 ; a3 , where not all of these scalars, are zero. Hence the set {v1 ; v2 ; v3 } is linearly, dependent., 7. Use the principle of mathematical induction to, 1 n, 1 1, show that if A ¼, ; then An ¼, 0 1, 0 1, for every positive integer n., 8. Express the matrix, 2, , 1, A ¼ 42, 4, , 3, 1, 6, , 3, 5, 35, 5, , as the sum of a symmetric matrix and a Skew, symmetric matrix., 2, , 1, , Ans: 4 52, 9, 2, , 5, 2, , 1, 9, 2, , 9, 2, 9, 2, , 3, , 2, , 1, 5 þ 41, 2, 5, 12, , 1, 2, , 1, 2, , 3, , 0 32 5, 3, 0, 2, , and verify the result A(adj A) = (adj A)A = |A|In., 2, 3, 3 4 5, Ans: adj A ¼ 4 9, 1, 45, 5, 3, 1, 10. Find the2inverse of 3the following matrices:, 0 1 2, (i) A ¼ 4 1 2 3 5, 3 1 1, 2, 3, cos a sin a 0, (ii) B ¼ 4 sin a cos a 0 5, 0, 0, 1, 2, 3, 1 2, 2, (iii) C ¼ 4 2 1 2 5, 2 2 1, 2, 3, 1 3 3, (iv) D ¼ 4 1 4 3 5, 1 3 4, 2, 3, 1, 1, 3, (v) E ¼ 4 1, 3 3 5:, 2 4 4, 2, 3, 2, 3, 1, 1, 1, cosa sina 0, 2 2, 2, 6, 7, Ans: ðiÞ4 4 3 1 5;ðiiÞ4 sina cosa 0 5;, 5, 3 1, 0, 0 1, 2 2 2 32, 2, 3, 1, 2, 2, 7 3 3, 3, 3 3, 62 1 27, ðiiiÞ4 3 3 3 5; ðivÞ4 1 1 0 5;, 2, 2, 1, 1 0 1, 3 3 3, 2, 3, 24 8 12, 1, ðvÞ 4 10 2, 65, 8, 2 2, 2, 11. Using Gauss Jordan method, find the inverse of, the following matrices:, 2, 3, 3 3 4, (i) A ¼ 4 2 3 4 5, 0 1 1
Matrices, , 2, , 3, 2 1 3, (ii) B ¼ 4 1, 1 15, 1 1 1, 2, 3, 3 2 1, (iii) C ¼ 4 4, 1 1 5, 2, 0, 1, 2, 3, 14 3 2, (iv) D ¼ 4 6 8 1 5, 0 2 7, 2, , 1 1, , 6, Ans: ðiÞ4 2, , 0, , 3, , 2, , 1, , 7 6, 3 4 5ðiiÞ4 0, , 1, 1, 2, 12, , 2, , 3, , 12 7, 5, , 1, 32, 3 3, 3, 2, 3, 1 2 3, 54 17 13, 1 6, 7, 6, 7, ðiiiÞ4 2 5 7 5ðivÞ , 4 42 98 2 5, 654, 2 4 5, 12 28 94, 2, , 2, , 12. Find the rank of the following matrices., 3, 2, 3, 2, 2 3 1 1, 0 1 3 1, 6 1 1 2 4 7, 60 0 1 17, 7, 6, 7, (i) 6, 4 3 1 0 2 5 (ii) 4 3 1 3 2 5, 6 3 0 7, 1 1 2 0, 3, 3, 2, 2, 1 2 1 2, 2 1, 3 4, 7, 6, 60, 3, 4 17, 7 (iv) 6 1 3 2 2 7, (iii) 6, 5, 42, 4, 2 4 3 45, 3, 7 5, 2, 5 11 6, 3 7 4 6, 2, 3, 1 3 4 3, (v) 4 3 9 12 9 5, 1 3 4 1, Ans. (i) 3, (ii) 3, (iii) 3, (iv) 3, (v) 2, 13. Show that no Skew-symmetric matrix can be of, rank 1., Hint: Diagonal elements are all zeros. If all, non-diagonal positive elements are zero, then, the corresponding negative elements are also, zero and so rank shall be zero, If at least one, of the elements is non-zero, then at least one, 2-rowed minor is not equal to zero. Hence, rank, is greater than or equal to 2., , n, , 4.79, , 14. Reduce the following matrices to normal form, and,2 hence, find their3ranks., 5 3 14 4, (i) 4 0 1, 2 15, 1 1 2 0, 2, 3, 1 2 1 0, (ii) 4 2 4 3 0 5, 1 0 2 8, 2, 3, 1, 1, 2, (iii) 4 1, 2, 35, 0 1 1, 2, 3, 2 2 0 6, 64, 2 0 27, 7, (iv) 6, 4 1 1 0 3 5, 1 2 1 2, 2, , 2, 3, 3, 1 00 0, 10 00, 6, 7, 6, 7, Ans: ðiÞ4 0 1 0 0 5; Rank3 ðiiÞ4 0 1 0 0 5; Rank3, 0 01 0, 00 10, 2, 3, 1 00, I3 0, 6, 7, Rank3, ðiiiÞ4 0 1 0 5Rank2; ðivÞ, 0 0, 0 00, , 15. Find the inverse of the matrix, 3, 2, 2 1 3, A ¼ 41, 1 15, 1 1 1, using elementary operations, 2, 3, 1, 1, 2, 1, 15, Ans: 4 0, 2 2, 1 12 32, 16. Using elementary transformation, find the, inverse of the matrix, 2, 3, 1 3, 3 1, 6 1, 1 1, 07, 7, A¼6, 4 2 5, 2 3 5, 1, 1, 0, 1, 2, , 0, 6 1, Ans: 6, 4 1, 1, , 2, 1, 2, 1, , 1, 1, 0, 2, , 3, 3, 2 7, 7, 15, 6
Matrices, , 27. Show that the matrices A and B–1AB have the, same characteristic, roots, , , , Hint: B1 AB lI ¼ B1 AB B1 lIB, , , ¼ B1 ðA lI ÞB, , , ¼ B1 jA lI jj Bj, , , ¼ jA lI jB1 B, ¼ jA lIj:, 28. Verify Caley-Hamilton, theorem, 2, 3 for the matrix, 1 0 2, A ¼ 40 2 15, 2 0 3, and, hence, find its inverse., Ans. A satisfies A3 6A22 þ 7A þ 2I ¼30, 3 0, 2, , 1, 15, A1 ¼ A2 6A þ 7I ¼ 4 1 12, 2 :, 2, 2 0 1, 29. Find the minimal polynomial of the matrix., 2, 3, 5 6 6, 4 1, 4, 25, 3 6 4, Ans. x2 3x þ 2., 30. Show that the matrix, a þ ic b þ id, b þ id a ic, is unitary if and only if a2 + b2 + c2 + d2 = 1., 31. Show that the matrix, 2, 3, 1, 3, , 6, A ¼ 4 23, 2, 3, , 2, 3, 1, 3, 23, , 2, 3, 7, 23 5, 1, 3, , is orthogonal. Hint: Show that AT A = I., 32. Show that the matrix, 2, 3, 9 4 4, 4 8 3 4 5, 16 8 7, is diagonalizable. Obtain the diagonalizing, matrix P., , 2, , 1 0, Ans. P = 4 1 1, 1 1, , n, , 4.81, , 3, 1, 1 5diag½1;1;3, 2, , 33. Diagonalize the matrix, 2, 3, 1 6 4, 40, 4, 2 5:, 0 6 3, , 3, 1 0 0, Ans:4 0 1 0 5, 0 0 1, 34. Diagonalize the real-symmetric matrix, 2, 3, 3 1, 1, 6, 7, A ¼ 4 1, 5 1 5, 2, 6, Ans: P ¼ 6, 4, , 1 1, p1ffiffi, , p1ffiffi, 2, , 0, p1ffiffi2, , 3, p1ffiffi, 3, p1ffiffi, 3, , 2, , 3, , 3, , p1ffiffi, 6, 7, p2ffiffi6 7, 5;, 1, pffiffi, 6, , diag ½2 3 6, , 35. Find a non-singular matrix P such that PT AP is, a diagonal matrix, where, 2, 3, 0 1 2, A ¼ 41 0 35, 2 3 0, 2, 3, 0 12 3, 6, 7, 1, Ans: 4 1, 2 2 5, 0, 0, 1, 36. Reduce the quadratic form, x2+4y2 þ 9z2 + t2 12yz + 6zx = 4xy 2xt , 6zt to canonical form and find its rank and, signature, Ans. y21 y22 þ y24 , Rank:3, Signature:1, 37. Reduce the quadratic form, 6x21 þ 3x22 þ 14x23 þ 4x2 x3 þ 18x3 x1 þ 4x1 x2 to, canonical form and find its rank and signature., Ans. y21 þ y22 þ y23 , Rank:3, Signature:3
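Several of the worked examples and exercises above (Examples 4.69, 4.70, 4.88, 4.89 and Exercises 35 to 37) reduce a real quadratic form X^T A X and read off its rank, index and signature. By Sylvester's law of inertia these can also be obtained from the signs of the eigenvalues of the symmetric matrix A, which gives a quick numerical cross-check. The sketch below is Python with NumPy; the helper function and tolerance are illustrative choices, and the congruent-reduction check for Example 4.69 uses the matrix P found there.

    import numpy as np

    def rank_index_signature(A, tol=1e-9):
        # Count signs of the eigenvalues of the real symmetric matrix A
        w = np.linalg.eigvalsh(A)
        p = int(np.sum(w > tol))          # index: number of positive squares
        n = int(np.sum(w < -tol))
        return p + n, p, p - n            # rank, index, signature

    # Example 4.70: x^2 - 2y^2 + 3z^2 - 4yz + 6zx
    A70 = np.array([[1.,  0.,  3.],
                    [0., -2., -2.],
                    [3., -2.,  3.]])
    print(rank_index_signature(A70))      # (3, 1, -1)

    # Exercise 37: 6x1^2 + 3x2^2 + 14x3^2 + 4x2x3 + 18x3x1 + 4x1x2
    A37 = np.array([[6., 2.,  9.],
                    [2., 3.,  2.],
                    [9., 2., 14.]])
    print(rank_index_signature(A37))      # (3, 3, 3) -> positive definite

    # Example 4.69: P^T A P should be diag(6, 7/3, 16/7)
    A69 = np.array([[ 6., -2.,  2.],
                    [-2.,  3., -1.],
                    [ 2., -1.,  3.]])
    P = np.array([[1., 1/3, -2/7],
                  [0., 1.,   1/7],
                  [0., 0.,   1. ]])
    print(np.round(P.T @ A69 @ P, 10))    # diag(6, 2.333..., 2.2857...)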
UNIT III
INTEGRAL CALCULUS
5 Beta and Gamma Functions
6 Multiple Integrals
6.12, , n, , Engineering Mathematics-I, , Further,, 4 a, Z, Z Z a, r , 3, r sin h cos h dr dh ¼ sin h cos h dh, I¼, 4 0, 0 0, 0, , , , Z, , a4, a4 sin2 h, sin h cos h dh ¼ , ¼ 0:, ¼, 2 0, 4, 4, , Therefore,, Z1 Z1x, Z1 Z1, y, uv, exþy dy dx ¼, e u :u du dv, 0, , 0, , 0, , Z1, ¼, , 0, , e, 0, , EXAMPLE 6.24, Use the transformation x þ y ¼ u and y ¼ uv to, R1 1x, R y, show that, exþy dy dx ¼ e1, 2 ., 0, , ¼, , 0, , 1, 2, , Z1, , Z1, , Z1, u du dv ¼, , v, 0, , 0, , u2, dv, 2 0, , 1, 1, ev dv ¼ ½ev 10 ¼ ðe 1Þ:, 2, 2, , 0, , 0, , EXAMPLE 6.25, Ra Ra x dx dy, by changing into polar, Evaluate, 2, 2, 0 y x þy, coordinates., , Solution. We have, x ¼ u y ¼ u uv ¼ uð1 vÞ and, y ¼ uv:, Therefore,, , , @x @x , , , , @ ðx; yÞ @u @v 1 v u , ¼, ¼, @ ðu; vÞ @y @y v, u , , , @u @v, ¼ uð1 vÞ ðuvÞ ¼ u:, , Solution. The region of integration is shown in the, following figure:, y, yx, ya, , The Jacobian vanishes when u ¼ 0, that is, when, x ¼ y ¼ 0, but not otherwise. Also the origin (0, 0), corresponds to the whole line u ¼ 0 of the uv-plane, so that the correspondence ceases to be one-to-one., In order to exclude (0, 0), we note that the given, integral exists as the limit, when h ? 0 of the, integral over the region is bounded by, x þ y ¼ 1; x ¼ h; and y ¼ 0 where h > 0:, The transformed region is then bounded by the lines, u ¼ 1; v ¼ 0; and uð1 vÞ ¼ h:, When h ? 0, the new region of the uv-plane tends,, as its limit, to the square bounded by the lines u ¼ 1,, v ¼ 1, u ¼ 0, and v ¼ 0. Thus, the region of integration in the xy- and uv-planes are as shown in the, following figures:, y, , v, , (0, 1), v 1, , x, , y, , 1, , u 0, , xa, , y, , 0, , 0, , , , (1, 0), , x, , 0, , v0, , u, , x, , The region is bounded by x ¼ y, x ¼ a, y ¼ 0, and, y ¼ a. Changing to polar coordinates, we have x ¼ r, cos h, y ¼ r sin h, and dx dy ¼ r dr dh. Further, in the, , region of integration h varies from 0 to . Also,, 4, a, x ¼ a implies r cos h ¼ a or r ¼, . Therefore,, cos h, r varies from 0 to a . Hence,, cos h a, Za Za, Z 4 Zcos h, x dx dy, r cos h, ¼, r dr dh, 2, 2, x þy, r2, 0, , u 1, , 0, , 0, , Z4, 0, , 1, , ev, , ¼, , , , a, cos h, , Z4, , cos h½r0 dh ¼ a, 0, , dh ¼, 0, , a, :, 4
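The change-of-variable results in Examples 6.24 and 6.25 can be confirmed by numerical quadrature. The sketch below uses scipy.integrate.dblquad, taking a = 1 in Example 6.25; the exact values obtained above are (e - 1)/2 and πa/4 respectively.

    import numpy as np
    from scipy import integrate

    # Example 6.24: integral of e^{y/(x+y)} over 0 <= y <= 1 - x, 0 <= x <= 1
    val1, _ = integrate.dblquad(lambda y, x: np.exp(y / (x + y)), 0, 1,
                                0, lambda x: 1 - x)
    print(val1, (np.e - 1) / 2)           # both about 0.85914

    # Example 6.25 with a = 1: integral of x/(x^2 + y^2) over 0 <= y <= x <= 1
    val2, _ = integrate.dblquad(lambda x, y: x / (x**2 + y**2), 0, 1,
                                lambda y: y, 1)
    print(val2, np.pi / 4)                # both about 0.78540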
Page 214 :
EXAMPLE 6.26
Evaluate ∫∫ xy (x² + y²)^{n/2} dx dy over the positive quadrant of the circle x² + y² = 4, supposing n + 3 > 0.

Solution. The region of integration is bounded by x = 0, x = 2, y = 0, and y = 2. Changing to polar coordinates, we have x = r cos θ and y = r sin θ, and so dx dy = r dr dθ. The limits of integration in the first quadrant of the given circle are r = 0 to r = 2 and θ = 0 to θ = π/2. Hence,

∫∫ xy (x² + y²)^{n/2} dx dy = ∫₀^{π/2} ∫₀² (r cos θ)(r sin θ)(r²)^{n/2} r dr dθ
  = ∫₀^{π/2} sin θ cos θ [∫₀² r^{n+3} dr] dθ
  = ∫₀^{π/2} sin θ cos θ [r^{n+4}/(n + 4)]₀² dθ
  = (2^{n+4}/(n + 4)) ∫₀^{π/2} sin θ cos θ dθ = (2^{n+4}/(n + 4)) · (1/2) = 2^{n+3}/(n + 4).

6.6 CHANGE OF ORDER OF INTEGRATION
We have seen that, in a double integration, if the limits of both the variables are constant, then we can change the order of integration without affecting the result. But if the limits of integration are variable, a change in the order of integration requires a change in the limits of integration. Some integrals are easily evaluated by changing the order of integration in them.

EXAMPLE 6.27
Change the order of integration in the integral
I = ∫_{−a}^{a} ∫₀^{√(a² − y²)} f(x, y) dx dy.

Solution. The region of integration is bounded by y = −a, y = a, x = 0, and x² + y² = a². We have
I = ∫_{−a}^{a} [∫₀^{√(a² − y²)} f(x, y) dx] dy.
Thus, in the given form, we first integrate with respect to x and then with respect to y. On changing the order of integration, we first integrate with respect to y along a vertical strip RS, which extends from y = −√(a² − x²) to y = √(a² − x²). To cover the whole region of integration, we then integrate with respect to x from x = 0 to x = a. Thus,
I = ∫₀^a [∫_{−√(a² − x²)}^{√(a² − x²)} f(x, y) dy] dx.

EXAMPLE 6.28
Change the order of integration in I = ∫₀¹ ∫_{x²}^{2−x} xy dy dx and hence, evaluate the same.

Solution. For the given integral, the region of integration is bounded by x = 0, x = 1, y = x²
Page 215 :
(parabola) and the line y = 2 − x. Thus, the region of integration OABO is bounded by the parabola y = x² from O(0, 0) to A(1, 1), the line y = 2 − x from A(1, 1) to B(0, 2), and the y-axis from B back to O; the point C(0, 1) on the y-axis is at the level of A.

In the given form of the integral, we have to integrate first with respect to y and then with respect to x. Therefore, on changing the order of integration, we first integrate the integrand with respect to x and then with respect to y. The integration with respect to x requires the splitting-up of the region OABO into two parts, OACO and the triangle ABC. For the subregion OACO, the limits of integration are from x = 0 to x = √y and y = 0 to y = 1. Thus, the contribution to the integral I from the region OACO is
I₁ = ∫₀¹ [∫₀^{√y} xy dx] dy.
For the subregion ABC, the limits of integration are from x = 0 to x = 2 − y and y = 1 to y = 2. Thus, the contribution to I from the subregion ABC is
I₂ = ∫₁² [∫₀^{2−y} xy dx] dy.
Hence, on changing the order of integration, we get
I = ∫₀¹ [∫₀^{√y} xy dx] dy + ∫₁² [∫₀^{2−y} xy dx] dy
  = ∫₀¹ y [x²/2]₀^{√y} dy + ∫₁² y [x²/2]₀^{2−y} dy
  = (1/2) ∫₀¹ y² dy + (1/2) ∫₁² y(2 − y)² dy
  = (1/2)[y³/3]₀¹ + (1/2)[2y² − 4y³/3 + y⁴/4]₁²
  = 1/6 + 5/24 = 3/8.

EXAMPLE 6.29
Changing the order of integration, find the value of the integral ∫₀^∞ ∫_x^∞ (e^{−y}/y) dy dx.

Solution. The region of integration is bounded by x = 0 and y = x. The limits of x are from 0 to ∞ and those of y are from x to ∞, so the region is the part of the first quadrant lying above the line y = x.

On changing the order of integration, we first integrate the integrand with respect to x along a horizontal strip RS, which extends from x = 0 to x = y. To cover the region of integration, we then integrate with respect to y from y = 0 to y = ∞. Thus,
I = ∫₀^∞ [∫₀^y (e^{−y}/y) dx] dy = ∫₀^∞ (e^{−y}/y)[x]₀^y dy
  = ∫₀^∞ e^{−y} dy = [−e^{−y}]₀^∞ = −(0 − 1) = 1.
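The value 3/8 obtained in Example 6.28 can be confirmed symbolically. The sketch below is illustrative only (it assumes Python with sympy) and evaluates the integral in both orders of integration:

# Illustrative sympy check of Example 6.28 (assumes sympy is available).
from sympy import symbols, integrate, sqrt, Rational

x, y = symbols('x y', positive=True)

original = integrate(x * y, (y, x**2, 2 - x), (x, 0, 1))          # given order
swapped = (integrate(x * y, (x, 0, sqrt(y)), (y, 0, 1))           # region OACO
           + integrate(x * y, (x, 0, 2 - y), (y, 1, 2)))          # triangle ABC

print(original, swapped)            # both print 3/8
print(original == Rational(3, 8))   # True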
Page 222 :
EXAMPLE 6.42
Find the area bounded by the parabolas y² = 4 − x and y² = 4 − 4x.

Solution. The required area is given by
A = 2 ∫₀² [∫_{(4−y²)/4}^{4−y²} dx] dy = 2 ∫₀² [(4 − y²) − (4 − y²)/4] dy
  = 2 ∫₀² (3 − 3y²/4) dy = 2[3y − y³/4]₀² = 2[6 − 2] = 8.

6.8 VOLUME AND SURFACE AREA AS DOUBLE INTEGRALS
(A) Volume as a Double Integral: Consider a surface z = f(x, y). Let the region S be the orthogonal projection of the portion S′ of z = f(x, y) on the xy-plane. Divide S into elementary rectangles of area δx δy by drawing lines parallel to the x- and y-axes. On each of these rectangles, erect a prism which has a length parallel to Oz. Then, the volume of the prism between S′ and S is z δx δy. Therefore, the volume of the solid cylinder with S as base is composed of these prisms and so,
V = lim_{δx→0, δy→0} Σ z δx δy = ∫∫_S z dx dy = ∫∫_S f(x, y) dx dy.
In the polar coordinates, the region S is divided into elements of area r δr δθ and so, the volume in that case is given by
V = ∫∫_S f(r cos θ, r sin θ) r dr dθ.

(B) Volumes of Solids of Revolution: Let P(x, y) be a point in a plane area R. Suppose that the elementary area δx δy at P(x, y) revolves about the x-axis. This will generate a ring of radius y. The elementary volume of this ring is δV = 2πy δy δx. Hence, the total volume of the solid formed by the revolution of the area R about the x-axis is given by
V = 2π ∫∫_R y dy dx.
Changing to polar coordinates, we get
V = 2π ∫∫_R (r sin θ) r dr dθ = 2π ∫∫_R r² sin θ dr dθ.
Similarly, the volume V of the solid formed when the area R revolves about the y-axis is given by
V = 2π ∫∫_R x dx dy = 2π ∫∫_R r² cos θ dr dθ.
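As an illustrative check of Example 6.42 (assuming Python with sympy), the area can be evaluated directly as a double integral:

# Illustrative sympy check of Example 6.42 (assumes sympy is available).
from sympy import symbols, integrate

x, y = symbols('x y')

# For each y in [0, 2] the strip runs from x = (4 - y^2)/4 to x = 4 - y^2;
# symmetry about the x-axis supplies the factor 2 used in the text.
area = 2 * integrate(1, (x, (4 - y**2) / 4, 4 - y**2), (y, 0, 2))
print(area)   # -> 8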
Page 234 :
Hence,
I = ∫∫∫_R f(x, y, z) dx dy dz
  = ∫∫∫_{R′} f(r sin θ cos φ, r sin θ sin φ, r cos θ) r² sin θ dr dθ dφ.
The polar spherical coordinates are useful when the region of integration is a sphere or a part of it. If the region of integration is a whole sphere, then 0 ≤ r ≤ a, 0 ≤ θ ≤ π, and 0 ≤ φ ≤ 2π. But, if the region of integration is the positive octant of the sphere, then 0 ≤ r ≤ a, 0 ≤ θ ≤ π/2, and 0 ≤ φ ≤ π/2.

Remark 6.2. If the region of integration is a right circular cylinder, then the Cartesian coordinates are changed to cylindrical polar coordinates (r, θ, z), because the position of P(x, y, z) is determined by r, θ, and z (r and θ being the polar coordinates of the projection N of P on the xy-plane, and z the height of P above that plane). Then,
x = r cos θ,  y = r sin θ,  and  z = z,
and
∂(x, y, z)/∂(r, θ, z) = det | cos θ   −r sin θ   0 |
                            | sin θ    r cos θ   0 |
                            |   0         0      1 |  = r cos²θ + r sin²θ = r.
Hence,
I = ∫∫∫_R f(x, y, z) dx dy dz = ∫∫∫ f(r cos θ, r sin θ, z) r dr dθ dz.

EXAMPLE 6.68
Evaluate I = ∫∫∫ z(x² + y²) dx dy dz over x² + y² ≤ 1 and 2 ≤ z ≤ 3.

Solution. The region of integration is
V = {(x, y, z) : x² + y² ≤ 1, 2 ≤ z ≤ 3}.
Using the transformation x = r cos θ, y = r sin θ, and z = z (cylindrical polar coordinates), we have
I = ∫₀¹ ∫₀^{2π} [∫₂³ z r² · r dz] dθ dr = ∫₀¹ ∫₀^{2π} r³ [z²/2]₂³ dθ dr
  = ∫₀¹ ∫₀^{2π} r³ (9/2 − 4/2) dθ dr = (5/2) ∫₀¹ r³ [θ]₀^{2π} dr
  = 5π ∫₀¹ r³ dr = 5π [r⁴/4]₀¹ = 5π/4.

EXAMPLE 6.69
Evaluate I = ∫∫∫ √(1 − x²/a² − y²/b² − z²/c²) dx dy dz over the region
V = {(x, y, z) : x ≥ 0, y ≥ 0, z ≥ 0, x²/a² + y²/b² + z²/c² ≤ 1}.

Solution. Substituting x/a = X, y/b = Y, and z/c = Z, so that dx = a dX, dy = b dY, dz = c dZ and hence dx dy dz = abc dX dY dZ, we have
I = abc ∫∫∫ (1 − X² − Y² − Z²)^{1/2} dX dY dZ
over the region
V′ = {(X, Y, Z) : X ≥ 0, Y ≥ 0, Z ≥ 0, X² + Y² + Z² ≤ 1}.
Using spherical polar coordinates,
X = r sin θ cos φ,  Y = r sin θ sin φ,  and  Z = r cos θ.
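The cylindrical-coordinate evaluation of Example 6.68 can be verified with a brief sympy sketch (illustrative only; it assumes Python with sympy). The Jacobian r is recomputed rather than taken for granted:

# Illustrative sympy check of Example 6.68 (assumes sympy is available).
from sympy import symbols, integrate, cos, sin, pi, Matrix, simplify

r, t, z = symbols('r theta z', positive=True)

x, y = r * cos(t), r * sin(t)
J = simplify(Matrix([x, y, z]).jacobian([r, t, z]).det())
print(J)                                            # -> r

I = integrate(z * r**2 * J, (z, 2, 3), (t, 0, 2 * pi), (r, 0, 1))
print(I, simplify(I - 5 * pi / 4))                  # -> 5*pi/4 and 0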
Page 248 :
UNIT IV: VECTOR CALCULUS
7  Vector Calculus
Page 250 :
7, , Vector Calculus, , We know that scalar is a quantity that is characterized solely by magnitude whereas vector is a quantity which is characterized by both magnitude and, direction. For example, time, mass, and temperature, are scalar quantities whereas displacement, velocity,, and force are vector quantities. We represent a vector, by an arrow over it. Geometrically, we represent a, ~ where~, vector~, a by a directed line segment PQ,, a has, direction from P to Q. The point P is called the initial, point and the, a., point, Q is called the terminal point of ~, ~ , The length PQ of this line segment is the magnitude, of~, a. Two vectors~, a and~b are said to be equal if they, have the same magnitude and direction. The product, of a vector ~, a and a scalar m is a vector m ~, a with, magnitude |m| times the magnitude of ~, a with direction, the same or opposite to that of ~, a, according as, m > 0 or m < 0. In particular, if m ¼ 0, then m~, a is a, null vector ~, 0. A vector with unit magnitude is called, a unit vector. If ~, a is non-zero vector, then j~~aaj ¼ ~aa is a, unit vector having the same direction as that of~, a and, is denoted by ^a., If ~, a;~b and ~, c are vectors and m and n are, scalars (real or complex), then addition and scalar, multiplication of vectors satisfy the following, properties:, (i) ~, a þ~b ¼ ~b þ~, a (Commutative law for, addition)., , , (ii) ~, a þ ~b þ~, c ¼ ~, a þ~b þ~, c (Associative, law, for addition)., (iii) ~, m ~, a þ~b ¼ m~, a þ m~b (Distributive law, , (vi) ~, a þ ð~, aÞ ¼ ~, 0 ¼ ð~, aÞ þ~, a (Existence of, inverse for addition)., aj., (vii) jm~, aj ¼ jmj j~, (viii) mðn~, aÞ ¼ ðmnÞ~, a., (ix) nðm~, aÞ ¼ mðn~, aÞ., The unit vectors in the directions of positive x-,, y-, and z-axes of a three-dimensional, rectangular, coordinate system are called the rectangular unit, ^, vectors and are denoted, respectively, by ^i; ^j, and k., Let a1, a2, and a3 be the rectangular coordinates, of the terminal point of vector ~, a with the initial, point at the origin O of a rectangular coordinate, system in three dimensions. Then, the vectors, a1^i; a2^j; and a3 k^ are called rectangular component, vectors or simply component vectors of ~, a in the x,, y, and z directions, respectively., Z, , →, , a, a2iˆ, , a2kˆ, , O, , Y, , a2 jˆ, Z, , for addition)., (iv) ðm þ nÞ~, a ¼ m~, a þ n~, a (Distributive law, for scalars)., (v) ~, a þ~, 0 ¼~, a ¼~, 0 þ~, a (Existence of identity, for addition)., , The resultant (sum) of a1^i; a2^j; and a3 k^ is the, vector ~, a and so,, ^, ~, a ¼ a1^i þ a2^j þ a3 k:
Page 251 :
7.4, , n, , Engineering Mathematics-I, , Further, the magnitude of ~, a is, qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi, aj ¼ a21 þ a22 þ a23 :, j~, , called the directions cosines of ~, a. Thus, cos a ¼, ^~, ^i:~, ea ; and cos c ¼ k:, ea , where ~, ea is a, ea ; cos b ¼ ^j:~, unit vector in the direction of ~, a., , In particular, the radius vector or position vector ~r, from O to the point (x, y, z) in a three-dimension, space is expressed as, ~r ¼ x ^i þ y ^j þ z k^, and, r ¼ j~rj ¼, , pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi, x2 þ y 2 þ z2 :, , The scalar product or dot product or inner product, of two vectors ~, a and ~b is a scalar defined by, , , ~, a:~b ¼ j~, aj ~b cos h;, where h is the angle between the vectors~, a and~b and, 0 h ., The scalar product satisfies the following, properties:, (i) ~, a:~b ¼ ~b:~, a:, ~, (ii) ~, a: b þ~, c ¼~, a:~b þ~, a:~, c:, , , , , (iii) t ~, a:~b ¼ t~, a:~b ¼ ~, a: t~b ¼ ~, a:~b t,, where t is a scalar., ^ k^ ¼ 1 and ^i:^j ¼ ^j:k^ ¼ k:, ^ ^i ¼ 0:, (iv) ^i:^i ¼ ^j:^j ¼ k:, ^ then ~, (v) If ~, a ¼ a1 ^i þ a2 ^j þ a3 k,, a:~, a ¼ j~, aj 2, ¼a ¼, , þ þ, (vi) If ~, a ¼ a1^i þ a2 ^j þ a3 k^ and ~b ¼ b1 ^i þ, ^ then, b2 ^j þ b3 k,, 2, , a21, , a22, , a23 ., , ~, a:~b ¼ a1 b2 þ a2 b2 þ a3 b3 :, (vii) If ~, a:~b ¼ 0 and ~, a and ~b are nonzero vectors, then cos h ¼ 0 and so, h ¼ 2. Hence,, ~, a and ~b are perpendicular., (viii) The projection of a vector~, a on a vector ~b, is a vector defined by “projection of ~, a, a:~, eb Þ~, eb ”, where h, on ~b ¼ ðacos hÞ~, eb ¼ ð~, is the angle between ~, a and ~b, and ~, eb, is a unit vector in the direction of the, vector ~b., Let a vector ~, a makes angles a, b, and c,, respectively, with positive directions of x, y, and z., Then, the numbers cos a, cos b and cos c are, , a, , θ, , b, , The vector- or cross product of two, a, vectors ~, , and~b is a vector defined by~, a ~b ¼ j~, aj~b sin h^e ¼, ab sin h^e, where h is the angle between the vectors~, a, and ~b such that 0 h and ^e is a unit vector, perpendicular to both~, a and~b. The direction of~, a ~b, is perpendicular to the plane of A and B, such that~, a,, ~b, and ~, a ~b form a right-handed triad of vectors., In particular, if~, a ¼ ~b or ~, a is parallel to ~b, then, ~, a ~b ¼ ~, 0., If ~, a ¼ a1 ^i þ a2 ^j þ a3 ^k and ~b ¼ b1 ^i þ b2 ^j þ b3 ^k,, then, , , , , ~, ~, a b ¼ , , , , ^i, , ^j, , a1, , a2, , b1, , b2, , , k^ , , a3 ;, , b3 , , where ^i ^i ¼ ^j ^j ¼ k^ k^ ¼ 0;, ^ ^j k^ ¼ ^i; and k^ ^i ¼ j; and, ^i ^j ¼ k;, ^ k^ ^j ¼ ^i; and ^i k^ ¼ ^j:, ^j ^i ¼ k;, , , ~, The magnitude ~, a b of ~, a ~b is equal in the area, of the parallelogram with sides ~, a and ~b., The vector product satisfies the following, properties:, (i) ~, a ~b¼ ~b ~, a (Anti-commutative law)., ~, (ii) ~, a b þ~, c ¼~, a ~b þ~, a ~, c, (Distributive, law over addition)., , , , ~, (iii) t ~, a b ¼ ðt~, aÞ~b ¼~, a t~b ¼ ~, a~b t;, t is a scalar.
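As a small illustrative aid (assuming Python with sympy), the dot- and cross-product rules listed above can be exercised on the example vectors a = i + 2j + 3k and b = 4i + 5j + 6k, chosen arbitrarily here:

# Illustrative sympy demonstration of the dot and cross product (assumes sympy).
from sympy import Matrix

a = Matrix([1, 2, 3])
b = Matrix([4, 5, 6])

print(a.dot(b))            # 1*4 + 2*5 + 3*6 = 32
print(a.cross(b).T)        # [-3, 6, -3]
print(a.dot(a.cross(b)), b.dot(a.cross(b)))   # 0 0: a x b is perpendicular to both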
Page 252 :
Vector Calculus, , The dot- and cross multiplication of three vectors, ~, a;~b, and ~, c follow the following laws:, , , (i) ~, a ~b ~, c 6¼ ~, a ~b ~, c :, , , (ii) ~, a ~b ~, c ¼ ~b ð~, c ~, aÞ ¼~, c ~, a ~b ., , ^ ~b ¼ b1 ^i þ b2 ^j þ, If ~, a ¼ a1 ^i þ a2 ^j þ a3 k;, ^, ^ then, ^, b3 k; and ~, c ¼ c1 i þ c2 ^j þ c3 k,, , , a1 a2 a3 , , , , , , ~, a ~b ~, c ¼ b1 b2 b3 :, , , c1 c2 c3 , , , (iii) ~, a ~b ~, c 6¼ ~, a ~b ~, c., , , c,, (iv) ~, a ~b ~, c ¼ ð~, a ~, cÞ~b ~, a ~b ~, , , ~, a:, a ~b ~, c ¼ ð~, a ~, cÞ~b ~b ~, c ~, , The product ~, a: ~b ~, c is called the scalar triple, , 7.5, , r, Since d~, dt is itself a vector depending on t, we can, further consider its derivative with respect to t. If, 2, this derivative exists, it is denoted by ddt~2r. Similarly,, higher derivatives of~r can be defined., r, Geometric Significance of d~, r ¼ ~f ðtÞ be the vector, dt : Let~, equation of a curve C in space. Let P and Q be two, neighboring points on C with position vectors~r and, ~r þ ~r. Then, OP ¼~r; OQ ¼~r þ ~r and so,, , ~ ¼ OQ, ~ OP, ~ ¼~r þ ~r ~r ¼ ~r:, PQ, r, Therefore, ~, t is directed along the chord PQ. As, t ? 0, that is, as Q ?P, the chord PQ tends to the, ~r, is, t)0 t, , r, tangent to the curve C at P. Hence, d~, dt ¼ lim, , a, , vector along the tangent to the curve at P., dr, , Q, r, r, , product or box product, and is denoted by [abc]., The product ~, a ~b ~, c is called the vector, triple product., , 7.1, , n, , O, , r, , r, , P, , DIFFERENTIATION OF A VECTOR, , A vector~r is said to be a vector function of a scalar, variable t if to each value of t there corresponds a, value of~r., A vector function is denoted by ~r ¼~rðtÞ or, ~r ¼ ~f ðtÞ. For example, the position vector ~r of a, particle moving along a curved path is a vector, function of time t. In rectangular coordinate system,, the vector function ~f can be expressed in a component form as, ^, ~r ¼ ~f ðtÞ ¼ f1 ^i þ f2 ^j þ f3 k;, , Unit Tangent Vector to a Curve: Suppose that we take an, arc length s from any point, say A, on the curve C,, up to the point P as the parameter, instead of t., Then, AP ¼ s, AQ ¼ s þ s, and so, PQ ¼ s. In, r, this case, d~, ds will be a vector along the tangent at P., Further,, , , d~r, , ¼ lim ~r ¼ lim chord PQ ¼ 1:, ds s!0 s Q!P Arc PQ, , where f1, f2, and f3 are scalar functions of t and are, called components of ~f ., Let~r ¼ ~f ðtÞ be a vector function of the scalar, variable t. If t denotes a small increment in t and, ~r the corresponding increment in~r, then, ~f ðt þ tÞ ~f ðtÞ, d~r, ~r, ¼ lim, ¼ lim, ;, t!0 t, t!0, dt, t, , Theorem 7.1. If ~, a;~b, and ~, c are differentiable vector, functions of a scalar t and is a differentiable scalar, function of t, then, , a, d~b, (i) dtd ~, a ~b ¼ d~, dt þ dt ., , ~, a ~, a ~b ¼ ~, a ddtb þ d~, (ii) dtd ~, dt b., , ~, a ~, (iii) dtd ~, a ~b ¼ ~, a ddtb þ d~, dt b., , if exists, is called the ordinary derivative of ~r with, respect to the scalar t., , r, ^, Hence, d~, ds is the unit vector t along the tangent at P., , (iv), , d, aÞ, dt ð~, , d, a, ¼ d~, a:, dt þ dt ~
Page 254 :
Vector Calculus, , 2, , , , Hence, ~f ¼ c2 or ~f ¼ c, that is, ~f has a, constant magnitude., Theorem 7.5. The necessary and sufficient condition, for a vector function~f of a scalar variable t to have a, ~, 0., constant direction is that ~f ddtf ¼ ~, Proof: Let ~, F be, a vector function of modulus unity, , for all t. Let ~f ¼ f . Then, ~f ¼ f ~, F., The condition is necessary: Suppose that ~f has a constant direction. Since~f ¼ f ~, F, it follows that~f and ~, F, have the same direction. Thus, ~, F has a constant, magnitude, equal to unity and a constant direction, ~, too and so, is a constant vector. Therefore, ddtF ¼ 0., Differentiating ~f ¼ f ~, F with respect to t, we have, d~f, df, d~, F, ¼ ~, Fþf, :, dt, dt, dt, Now,, ", #, ~f , ~, d, df, d, F, ~f ¼ f ~, ~, F , F þf, dt, dt, dt, df, d~, F, F ~, F þf 2 ~, F, ¼f ~, dt, dt, d~, F, F ~, F ¼~, 0, F ; since~, ¼~, 0þf 2 ~, dt, d~, F, ¼ f 2~, F, dt, d~, F ~, ¼ 0 (as shown earlier):, F ~, 0 ¼~, 0; since, ¼ f 2~, df, ~, The condition is sufficient: Suppose that ~f ddtf ¼ ~, 0., ~, F ddtF ¼ ~, 0 and, Therefore, as shown previously, f 2~, ~, 0. Also, since ~, F is of constant magniso ~, F ddtF ¼ ~, ~, d~, F, ~, ~, tude, F: dt ¼ 0. These two facts imply that ddtF ¼ ~, 0., Therefore, ~, F is a constant vector. But magnitude of, F is constant (unity). Therefore, ~, F has a constant, direction. But ~f ¼ f ~, F. Therefore, direction of ~, f is, also constant., , Corollary 7.1: The derivative of a vector function of a, scalar variable t having a constant direction is collinear with it., ~, Proof: Since ~f has a constant direction, ~f ddtf ¼ 0, ~, and so, ~f and ddtf are collinear. This completes the, proof of the corollary., , n, , 7.7, , From Theorems 7.3–7.5, we conclude that, d~f, dt, , ¼~, 0 if and only if~f is a constant vector, function in both magnitude and direction, ~, (ii) ~f : ddtf ¼ 0 if and only if ~f has a constant, magnitude., ~, ~, (iii) f ddtf ¼ 0 if and only if ~f has a constant, direction., (i), , Theorem 7.6. If ~f ¼ f1 ^i þ f2 ^j þ f3 k^ is a vector, function of the scalar variable t, then, d~f, ^, ¼ f10 ðtÞ^i þ f20 ðtÞ ^j þ f30 ðtÞ k:, dt, Proof: We have, ~f ¼ f1 ^i þ f2 ^j þ f3 k;, ^, where f1, f2, and f3 are scalar functions of t. Therefore,, d~f, d ^ d ^ d ^, ¼, f1 i þ, f2 j þ, f3 k, dt dt, dt, dt, d^i df1 ^, d^j df2, d k^ df3 ^, ¼ f1 þ, i þ f2 þ ^j þ f3 þ, k, dt dt, dt dt, dt, dt, df1, df2 ^ ~ df3 ^, 0þ, jþ0þ, k, ¼~, 0 þ ^i þ ~, dt, dt, dt, df1 ^ df2 ^ df3 ^, iþ, jþ, k:, ¼, dt, dt, dt, Thus, to differentiate a vector, it is sufficient to differentiate its components., Velocity and Acceleration: Let~r be the position vector, of a moving particle P, and let ~r be the displacement of the particle in time t, where t denotes time., r, Then, the vector ~, t denotes the average velocity, of the particle during the interval t of time., Therefore, the velocity vector ~, v of the particle at P, is given by, ~r d~r, ~, ¼, ;, v ¼ lim, t!0 t, dt, and its direction is along the tangent at P. Further, if, ~, v is the change in velocity ~, v during the time, interval t, then the rate of change of velocity, that, is, ~vt is the average acceleration of the particle, during the interval t. Thus, the acceleration of the, particle at P is, , ~, v d~, v d d~r, d 2~r, ~, ¼, ¼, a ¼ lim, ¼ 2:, t!0 t, dt, dt, dt dt
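A brief sympy sketch (illustrative only, assuming Python with sympy) of Theorem 7.6 and of the velocity and acceleration vectors, using a helix r(t) = cos t i + sin t j + t k chosen purely as an example:

# Illustrative componentwise differentiation of a vector function (assumes sympy).
from sympy import symbols, cos, sin, Matrix

t = symbols('t')
r = Matrix([cos(t), sin(t), t])     # example position vector of a moving particle

v = r.diff(t)                       # velocity  = dr/dt
a = v.diff(t)                       # acceleration = d^2 r/dt^2
print(v.T, a.T)
print(v.dot(v))                     # speed^2 = sin(t)**2 + cos(t)**2 + 1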
Page 260 :
7.3 GRADIENT OF A SCALAR FIELD
A variable quantity whose value at any point in a region of space depends upon the position of the point is called a point function. If for each point P(x, y, z) of a region R, there corresponds a scalar φ(x, y, z), then φ is called a scalar-point function for the region R. The region R is then called a scalar field. For example, the temperature at any point within or on the surface of the earth is a scalar-point function. Similarly, atmospheric pressure in the space is a scalar-point function. On the other hand, if for each point P(x, y, z) of a region R, there exists a vector ~f(x, y, z), then ~f is called a vector-point function and the region R is then called a vector field. For example, the gravitational force is a vector-point function.

Let f(x, y, z) be a scalar-point function. Then, the points satisfying an equation of the type f(x, y, z) = c (constant) constitute a family of surfaces in a three-dimensional space. The surfaces of this family are called level surfaces. Since the value of the function f at any point of the surface is the same, these surfaces are also called iso-f-surfaces.

The operator ∇, defined by
∇ = ^i ∂/∂x + ^j ∂/∂y + ^k ∂/∂z,
is called the vector differential operator and is read as del or nabla.

Let φ be a scalar function defined and differentiable at each point (x, y, z) in a certain region of space. Then, the vector defined by
∇φ = (^i ∂/∂x + ^j ∂/∂y + ^k ∂/∂z)φ = ^i ∂φ/∂x + ^j ∂φ/∂y + ^k ∂φ/∂z
is called the gradient of the scalar function φ and is denoted by grad φ or ∇φ. Thus, grad φ is a vector with components ∂φ/∂x, ∂φ/∂y, and ∂φ/∂z. We note that φ is a scalar-point function, whereas ∇φ is a vector-point function.

7.4 GEOMETRICAL INTERPRETATION OF A GRADIENT
Let ~r = x^i + y^j + z^k be the position vector of a point P through which a level surface φ(x, y, z) = c (constant) passes. Then, differentiating φ(x, y, z) = c with respect to t, we get
dφ/dt = 0
or
(∂φ/∂x)(dx/dt) + (∂φ/∂y)(dy/dt) + (∂φ/∂z)(dz/dt) = 0
or
(^i ∂φ/∂x + ^j ∂φ/∂y + ^k ∂φ/∂z) · (^i dx/dt + ^j dy/dt + ^k dz/dt) = 0
or
∇φ · d~r/dt = 0.
Since d~r/dt is the vector tangent to the curve at P and since P is an arbitrary point on φ(x, y, z) = c, it follows that ∇φ is perpendicular to φ(x, y, z) = c at every point. Hence, ∇φ is normal to the surface φ(x, y, z) = c.

7.5 PROPERTIES OF A GRADIENT
The following theorem illustrates the properties satisfied by a gradient.

Theorem 7.7. If φ and ψ are two scalar-point functions and c is a constant, then
(i) ∇(φ ± ψ) = ∇φ ± ∇ψ.
(ii) ∇(φψ) = φ∇ψ + ψ∇φ.
(iii) ∇(φ/ψ) = (ψ∇φ − φ∇ψ)/ψ², provided that ψ ≠ 0.
(iv) ∇(cφ) = c∇φ.
(v) ∇φ = ~0 if and only if φ is a constant.

Proof: (i) By the definition of a gradient, we have
∇(φ ± ψ) = (^i ∂/∂x + ^j ∂/∂y + ^k ∂/∂z)(φ ± ψ)
  = ^i ∂(φ ± ψ)/∂x + ^j ∂(φ ± ψ)/∂y + ^k ∂(φ ± ψ)/∂z
  = (^i ∂φ/∂x + ^j ∂φ/∂y + ^k ∂φ/∂z) ± (^i ∂ψ/∂x + ^j ∂ψ/∂y + ^k ∂ψ/∂z)
  = ∇φ ± ∇ψ.
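The componentwise form of grad φ translates directly into a few lines of sympy. The sketch below is illustrative only (it assumes Python with sympy; the scalar function chosen is arbitrary):

# Illustrative gradient computation (assumes sympy is available).
from sympy import symbols, Matrix

x, y, z = symbols('x y z')
phi = x**2 * y + y * z**2            # an example scalar-point function

grad_phi = Matrix([phi.diff(x), phi.diff(y), phi.diff(z)])
print(grad_phi.T)                    # [2*x*y, x**2 + z**2, 2*y*z]

# Its value at (1, 2, 3) gives the normal direction to the level surface there.
print(grad_phi.subs({x: 1, y: 2, z: 3}).T)   # [4, 10, 12]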
Page 262 :
Vector Calculus, , Similarly, directional derivatives of along, @, y- and z-axis are, respectively, @, @y and @z ., The directional derivatives of a vector-point, function ~f along the coordinate axes are similarly, @~f @~f, @x ; @y ;, , @~f, @z ;, , and, respectively., Further, if l, m, and n, are direction cosines of, AP ¼ r, then the coordinates of P are x þ lr, y þ mr,, and z þ nr and so, the directional derivative of the, scalar-point function along AP becomes, ð PÞð AÞ, AP, ðxþlr ; yþmr ; zþnrÞðx;y;zÞ, ¼lim, r!0, r, , @, @, ðx;y;zÞþ lr @, @x þmr @y þnr @z þ...ðx;y;zÞ, ¼lim, r!0, r, @, @ @, ¼l þm þn ;, @x, @y, @z, lim, , P!A, , by the application of Taylor’s Theorem for function, of several variables under the assumption that has, a continuous first-order partial derivatives., Similarly, the directional derivative of a vectorpoint function ~f along any line with direction, ~, ~, ~, cosines l, m, and n is l @@xf þ m @@yf þ n @@zf ., Theorem 7.8. The directional derivative of a scalarpoint function along the direction of unit vector ^a, is r ^a., Proof: The unit vector ^a along a line whose direction, cosines are l, m, and n is, ^, ^a ¼ l^i þ m^j þ nk:, Therefore,, , , , @ ^@ ^ @ ^, ^, þj, þk, r:^a ¼ i, li þ m^j þ nk^, @x, @y, @z, @, @, @, þm, þn, ;, ¼l, @x, @y, @z, , which is nothing but directional derivative of in, the direction of the unit vector ^a., Theorem 7.9. Grad is a vector in the direction of, which the maximum value of the directional derivative of occurs. Hence, the directional derivative, , n, , 7.15, , is maximum along the normal to the surface and the, maximum value is, jgrad j ¼ jrj:, Proof: Recall that ~, a:~b ¼ jajjbj cos h, where h is the, angle between the vectors~, a and~b. Since (grad ). ^a, gives the directional derivative in the direction of, unit vector ^a, that is, the rate of change of (x, y, z) in, the direction of the unit vector ^a, it follows that the, rate of change of (x, y, z), is zero along directions, , perpendicular to grad since cos 2 ¼ 0 and is, maximum along the direction parallel to grad ., Since grad acts along the normal direction to the, level surface of (x, y, z), the directional derivative is, maximum along the normal to the surface. The, maximum value is | grad | ¼ |r|., EXAMPLE 7.13, If~r ¼ x^i þ y^j þ zk^ and j ~r j ¼ r, show that, 0., (i) rf ðrÞ ¼ f 0 ðrÞrr and (ii) rf ðrÞ ~r ¼ ~, Solution. (i) By the definition of gradient,, @, @, @, rf ðrÞ ¼ ^i f ðrÞ þ ^j f ðrÞ þ k^ f ðrÞ, @x, @y, @z, @r, @r ^ 0 @r, ¼ ^if 0 ðrÞ þ ^jf 0 ðrÞ þ kf, ðr Þ, @x, @y, @z, , , @r, @r, @r, ¼ f 0 ðrÞ ^i þ ^j þ k^, ¼ f 0 ðrÞrr:, @x, @y, @z, (ii) As in part (i), we have, , , @r ^@r ^ @r, ^, :, rf ðrÞ ¼ f ðrÞ i þ j þ k, @x, @y, @z, 0, , Since r ¼ j ~r j ¼, , pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi, x2 þ y2 þ z2 , we have, , @r, 1, x, x, pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ¼ ;, ¼, 1 ð2xÞ ¼, 2, 2, 2, @x 2ðx2 þ y2 þ z2 Þ2, r, x þy þz, @r, z, and similarly, @y, ¼ yr and @r, @z ¼ r. Therefore,, x, ~r, y, z, rf ðrÞ ¼ f 0 ðrÞ ^i þ ^j þ k^ ¼ f 0 ðrÞ :, r, r, r, r
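Theorem 7.8 can be illustrated with a short sympy computation (assuming Python with sympy; the function φ = x²yz, the point (1, 1, 1), and the direction (1, 2, 2) are arbitrary choices made here for the example):

# Illustrative directional derivative grad(phi) . a_hat (assumes sympy).
from sympy import symbols, Matrix

x, y, z = symbols('x y z')
phi = x**2 * y * z                          # example scalar-point function

grad = Matrix([phi.diff(x), phi.diff(y), phi.diff(z)])
a_hat = Matrix([1, 2, 2]) / 3               # unit vector along (1, 2, 2)

D = grad.dot(a_hat)                         # directional derivative of phi
print(D.subs({x: 1, y: 1, z: 1}))           # -> 2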
Page 267 :
7.7 DIVERGENCE OF A VECTOR-POINT FUNCTION
If we want to consider the rate of change of a vector-point function ~f, there are two ways of operating the vector operator ∇ on the vector ~f. Thus, we have two cases to consider, namely, ∇ · ~f and ∇ × ~f. These two cases lead us to the two concepts called divergence of a vector function and curl of a vector function. If we consider a vector field as a fluid flow, then at every point in the flow we need to measure the rate of flow of the fluid from that point and the amount of spin possessed by the particles of the fluid at that point. The above two concepts provide, respectively, these two measures: the divergence of ~f and the curl of ~f.

Let ~f = f1^i + f2^j + f3^k be a vector function, where f1, f2, and f3 are scalar-point functions, which is defined and differentiable at each point of the region of space. Then, the divergence of ~f, denoted by ∇ · ~f or div ~f, is a scalar given by
∇ · ~f = (^i ∂/∂x + ^j ∂/∂y + ^k ∂/∂z) · (f1^i + f2^j + f3^k) = ∂f1/∂x + ∂f2/∂y + ∂f3/∂z.
The vector ~f is called solenoidal if ∇ · ~f = 0.

7.8 PHYSICAL INTERPRETATION OF DIVERGENCE
Consider the steady motion of a fluid having velocity ~v = vx^i + vy^j + vz^k at a point P(x, y, z). Consider a small parallelopiped with edges δx, δy, and δz parallel to the axes, with one of its corners at P(x, y, z). The mass of the fluid entering through the face PQRS per unit time is vy δx δz and the mass of the fluid that flows out through the opposite face ABCD is v_{y+δy} δx δz. Therefore, the change in the mass of fluid flowing across these two faces is equal to
v_{y+δy} δx δz − vy δx δz = (vy + (∂vy/∂y) δy) δx δz − vy δx δz = (∂vy/∂y) δy δx δz.
Similarly, the changes in the mass of the fluid for the other two pairs of faces are
(∂vx/∂x) δx δy δz  and  (∂vz/∂z) δx δy δz.
Therefore, the total change in the mass of the fluid inside the parallelopiped per unit time is equal to
(∂vx/∂x + ∂vy/∂y + ∂vz/∂z) δx δy δz.
Hence, the rate of change of the mass of the fluid per unit time per unit volume is
∂vx/∂x + ∂vy/∂y + ∂vz/∂z = ∇ · ~v,
by the definition of divergence. Hence, div ~v gives the rate at which the fluid (the vector field) is flowing away at a point of the fluid.

EXAMPLE 7.24
Find div ~v, where ~v = 3x²y^i + z^j + x²^k.

Solution. We know that
div ~v = ∂v1/∂x + ∂v2/∂y + ∂v3/∂z.
Here, v1 = 3x²y, v2 = z, and v3 = x². Therefore,
div ~v = 6xy.
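Example 7.24 admits a one-line symbolic check. The sketch below assumes Python with sympy and is purely illustrative:

# Illustrative divergence check for Example 7.24 (assumes sympy is available).
from sympy import symbols

x, y, z = symbols('x y z')
v1, v2, v3 = 3 * x**2 * y, z, x**2

div_v = v1.diff(x) + v2.diff(y) + v3.diff(z)
print(div_v)    # -> 6*x*y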
Page 277 :
EXAMPLE 7.44
If ~r = ~0 when t = 0 and d~r/dt = ~u when t = 0, find the value of ~r satisfying the equation d²~r/dt² = ~a, where ~a is a constant vector.

Solution. Integrating d²~r/dt² = ~a with respect to t, we get
d~r/dt = ~a t + ~c,
where ~c is a constant vector of integration. When t = 0, d~r/dt = ~u. Therefore, ~u = ~a(0) + ~c and so ~c = ~u. Therefore,
d~r/dt = ~a t + ~u.
Integrating again with respect to t, we get
~r = (1/2)~a t² + ~u t + ~p,
where ~p is the constant vector of integration. When t = 0, ~r = ~0. Therefore, ~0 = ~p. Hence,
~r = ~u t + (1/2)~a t².

7.14 LINE INTEGRAL
An integral which is evaluated along a curve is called a line integral. Note, however, that a line integral is not represented by the area under the curve.

Consider any arc of the curve C enclosed between two points A and B. Let a and b be the values of the parameter t for A and B, respectively. Partition the arc between A and B into n parts:
A = P0, P1, ..., Pn = B.
Let ~r0, ~r1, ..., ~rn be the position vectors of the points P0, P1, ..., Pn, respectively. Let ξ_i be any point on the subarc P_{i−1}P_i and let δ~r_i = ~r_i − ~r_{i−1}. Let ~f(~r) be a continuous vector-point function. Then,
lim_{n→∞, |δ~r_i|→0} Σ_{i=1}^{n} ~f(ξ_i) · δ~r_i,   (1)
if it exists, is called a line integral of ~f along C and is denoted by ∫_C ~f · d~r or ∫_C ~f · (d~r/dt) dt. Thus, the line integral is a scalar and is also called the tangential line integral of ~f along the curve C.

If ~f = f1^i + f2^j + f3^k and ~r = x^i + y^j + z^k, then d~r = ^i dx + ^j dy + ^k dz and so,
∫_C ~f · d~r = ∫_C (f1 dx + f2 dy + f3 dz) = ∫_a^b (f1 dx/dt + f2 dy/dt + f3 dz/dt) dt,
where a and b are, respectively, the values of the parameter t at the points A and B.

If we replace the dot product in (1) by a vector product, then the vector line integral is defined as ∫_C ~f × d~r, which is a vector.

If C is a simple closed curve, then the tangent line integral of the vector function ~f around C is called the circulation of ~f around C and is denoted by ∮_C ~f · d~r. The vector function ~f is said to be irrotational in a region R if the circulation of ~f around any closed curve in R is zero.

EXAMPLE 7.45
If ~f = (3x² + 6y)^i − 14yz^j + 20xz²^k, evaluate ∫_C ~f · d~r, where C is given by x = t, y = t², and z = t³, and t varies from 0 to 1.

Solution. The parametric equation of C is
x = t, y = t², and z = t³, where t varies from 0 to 1.
Now, ~r = x^i + y^j + z^k = t^i + t²^j + t³^k. Therefore,
d~r/dt = ^i + 2t^j + 3t²^k.
Further,
~f = (3x² + 6y)^i − 14yz^j + 20xz²^k = (3t² + 6t²)^i − 14t²·t³^j + 20t·t⁶^k = 9t²^i − 14t⁵^j + 20t⁷^k.
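The evaluation begun in Example 7.45 can be completed symbolically; the sketch below is an illustrative aid (assuming Python with sympy) that integrates ~f · d~r/dt from t = 0 to t = 1:

# Illustrative completion of Example 7.45 (assumes sympy is available).
from sympy import symbols, Matrix, integrate

t = symbols('t')
r = Matrix([t, t**2, t**3])                     # the given curve C
f = Matrix([3 * r[0]**2 + 6 * r[1],
            -14 * r[1] * r[2],
            20 * r[0] * r[2]**2])

line_integral = integrate(f.dot(r.diff(t)), (t, 0, 1))
print(line_integral)                            # -> 5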
Page 283 :
7.36, , n, , 7.16, , SURFACE INTEGRAL, , Engineering Mathematics-I, , An integral evaluated over a surface is called a, surface integral. Two types of surface integral, exist:, RR, (i), f ðx; y; zÞdS, S, , and, RR, RR, (ii) ~f ð~rÞ : ^n dS ¼ ~f ð~rÞ:d ~, S:, S, , S, , In case (i), we have a scalar field f, whereas, in case (ii), we have a vector field ~f ð~rÞ,, vector element of area d ~, S ¼ ^n dS, and ^n, the outward-drawn unit normal vector to the, element dS., (i) Let f (x,y,z) be a scalar-point function defined, over a surface S of finite area. Partition the area S, into n subareas S1, S2,…, Sn. In each area Si,, choose an arbitrary point Pi(xi, yi, zi). Define f (Pi), n, P, ¼ f (xi, yi, zi) and form the sum f ðx1 ; yi ; zi ÞSi ., , that any line perpendicular to the coordinate plane, chosen meets the surface S in not more than one, point. However, if S does not satisfy this condition,, then S can be subdivided into surfaces satisfying this, condition., Let S be the surface such that any line perpendicular to the xy-plane meets S in not more than one, point. Then, the equation of the surface S can be, written as z ¼ h(x,y). Let R1 be the projection of S on, the xy-plane. Then, the projection of dS on the, xy-plane is dS cosc, where c is the acute angle which, the normal ^n at P to the surface S makes with z-axis., Therefore,, dS cos c ¼ dx dy:, z, s, , i¼1, , Then, the limit of this sum as n ? 1 in such a, way that the largest of the subarea Si approaches, zero is called the surface, RR integral of f (x,y,z) over, S and is denoted by f ðx; y; zÞdS., , 0, , y, , S, , (ii) Now, let ~f be a vector-point function defined, and continuous over a surface S. Let P be any, point on the surface S and let ^n be the unit vector, at P in the direction of the outward-drawn normal to the surface S at P. Then, ~f . ^n is the, normal component of~f at P. The integral of~f . ^n, ~, over S is called the normal, RR surface integral of f, ~, over S and is denoted by f :^n dS. This integral, S, , is also known as flux of~f over S. If we associate, with the differential of surface area dS, a vector, d~, S, with magnitude dS, and whose direction is, that of ^n, then d~, S ¼ ^ndS and hence,, ZZ, ZZ, ~f :^n dS ¼, ~f :d ~, S:, S, , The surface integrals are easily evaluated by expressing them as double integrals, taken over an orthogonal projection of the surface S on any of the, coordinate planes. But, the condition for this is, , R1, x, dxdy, , , , But cos c ¼ ^n: k^, where~k is, as usual, a unit vector, along the z-axis. Thus,, dxdy, :, dS ¼ , ^n : k^, Hence,, ZZ, ZZ, ~f :^n dS ¼, ~f :^n dxdy :, ^n : k^, S, , R1, , Similarly, if R2 and R3 are projections of S on the, yz, and zx-plane, respectively, then, ZZ, ZZ, ~f : ^n dS ¼, ~f : ^n dxdy;, ^n :^i, R2, S, and, ZZ, ZZ, ~f : ^n dS ¼, ~f : ^n dxdy :, ^n : ^j, S, , R3
Page 296 :
The Green's theorem is useful in changing a line integral around a closed curve C into a double integral over the region R enclosed by C.

Theorem 7.11. (Green's Theorem). Let f, g, ∂f/∂y, and ∂g/∂x be continuous in a region R, which can be split up into a finite number of regions quadratic with respect to either axis. Then,
∮_C [f(x, y) dx + g(x, y) dy] = ∫∫_R (∂g/∂x − ∂f/∂y) dx dy,
where the integral on the left is a line integral around the boundary C of the region, taken in such a way that the interior of the region remains on the left as the boundary is described.

Proof: Consider the region R bounded by the curves x = a, x = b, y = φ(x), and y = ψ(x), such that φ(x) ≤ ψ(x) for all x ∈ [a, b]. Let f be a real-valued continuous function defined in R, and let ∂f/∂y exist and be continuous in R. Then,
∫∫_R (∂f/∂y) dx dy = ∫_a^b [∫_{φ(x)}^{ψ(x)} (∂f/∂y) dy] dx
  = ∫_a^b f(x, ψ(x)) dx − ∫_a^b f(x, φ(x)) dx
  = −[∫_a^b f(x, φ(x)) dx + ∫_b^a f(x, ψ(x)) dx]
  = −∮_C f(x, y) dx.
Therefore,
∮_C f(x, y) dx = −∫∫_R (∂f/∂y) dx dy.   (1)
Similarly, it can be shown that
∮_C g(x, y) dy = ∫∫_R (∂g/∂x) dx dy.   (2)
Adding (1) and (2), we obtain
∮_C [f(x, y) dx + g(x, y) dy] = ∫∫_R (∂g/∂x − ∂f/∂y) dx dy.

Deductions:
(i) If f(x, y) = −y and g(x, y) = x, then by Green's theorem, we have
∮_C (x dy − y dx) = ∫∫_R (1 + 1) dx dy = 2 ∫∫_R dx dy = 2A,
where A denotes the area of the region R. Thus,
A = (1/2) ∮_C (x dy − y dx).
(ii) Putting f(x, y) = −y and g(x, y) = 0, Green's theorem implies
−∮_C y dx = ∫∫_R dx dy = area of the region R.
(iii) Putting g(x, y) = x and f(x, y) = 0, we get
∮_C x dy = ∫∫_R dx dy = area of the region R.
Hence, the area of a closed region R is given by any of the three formulae
∮_C x dy,  −∮_C y dx,  or  (1/2) ∮_C (x dy − y dx),
where C denotes the boundary of the closed region R described in the positive sense.

EXAMPLE 7.70
Verify Green's theorem in the plane for ∮_C [(xy + y²) dx + x² dy], where C is the closed curve of the region bounded by y = x and y = x².
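Example 7.70 can be verified independently with a short symbolic sketch (illustrative only, assuming Python with sympy); both sides of Green's theorem come out to −1/20 for the region between y = x² and y = x:

# Illustrative verification of Green's theorem for Example 7.70 (assumes sympy).
from sympy import symbols, integrate

x, y, t = symbols('x y t')
f = x * y + y**2          # coefficient of dx
g = x**2                  # coefficient of dy

# Double integral of (dg/dx - df/dy) over x^2 <= y <= x, 0 <= x <= 1.
rhs = integrate(g.diff(x) - f.diff(y), (y, x**2, x), (x, 0, 1))

# Line integral along y = x^2 (from x = 0 to 1) and back along y = x (1 to 0).
seg1 = integrate((f + g * 2 * t).subs({x: t, y: t**2}), (t, 0, 1))  # dy = 2t dt
seg2 = integrate((f + g).subs({x: t, y: t}), (t, 1, 0))             # dy = dt
lhs = seg1 + seg2

print(lhs, rhs)           # both print -1/20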
Page 301 :
Adding (2), (3), and (4), we get
∮_C (f1 dx + f2 dy + f3 dz)
  = ∫∫_S [(∂f3/∂y − ∂f2/∂z) cos α + (∂f1/∂z − ∂f3/∂x) cos β + (∂f2/∂x − ∂f1/∂y) cos γ] dS.
This completes the proof of the theorem.

Remark 7.1. The equivalent statement of Stoke's theorem is that the line integral of the tangential component of a vector-point function ~f taken around a simple closed curve C is equal to the surface integral of the normal component of the curl of ~f taken over any surface S having C as its boundary.

EXAMPLE 7.76
Verify Stoke's theorem for the function ~f = x²^i + xy^j, integrated around the square in the plane z = 0, whose sides are along the lines x = 0, x = a, y = 0, and y = a.

Solution. Since ~f = x²^i + xy^j, we have
~f · d~r = (x²^i + xy^j) · (^i dx + ^j dy) = x² dx + xy dy.
Therefore,
∮_C ~f · d~r = ∮_C (x² dx + xy dy),
where C is the boundary of the square with vertices O(0, 0), A(a, 0), B(a, a), and C(0, a), so that
∮_C ~f · d~r = ∫_OA + ∫_AB + ∫_BC + ∫_CO.   (1)
Along OA, we have y = 0 and so, dy = 0. Thus,
∫_OA ~f · d~r = ∫₀^a x² dx = [x³/3]₀^a = a³/3.
Along AB, x = a and so, dx = 0. Thus,
∫_AB ~f · d~r = ∫₀^a a y dy = a³/2.
Along BC, we have y = a and so, dy = 0. Thus,
∫_BC ~f · d~r = ∫_a^0 x² dx = −a³/3.
Along CO, we have x = 0 and so, dx = 0. Thus,
∫_CO ~f · d~r = ∫_a^0 0 dy = 0.
Hence, (1) yields
∮_C ~f · d~r = a³/3 + a³/2 − a³/3 = a³/2.
On the other hand,
curl ~f = det | ^i      ^j      ^k    |
              | ∂/∂x    ∂/∂y    ∂/∂z  |
              | x²      xy      0     |  = y ^k.
Since the square (surface) lies in the xy-plane, ^n = ^k. Therefore,
curl ~f · ^n = y ^k · ^k = y,
and so,
∫∫_S curl ~f · ^n dS = ∫₀^a ∫₀^a y dy dx = ∫₀^a [y²/2]₀^a dx = ∫₀^a (a²/2) dx = a³/2.
Hence,
∮_C ~f · d~r = ∫∫_S curl ~f · ^n dS.
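Both sides of the verification in Example 7.76 can be reproduced with a short sympy sketch (illustrative only, assuming Python with sympy):

# Illustrative check of Example 7.76 (assumes sympy is available).
from sympy import symbols, integrate

x, y, a, t = symbols('x y a t', positive=True)

# Surface side: curl f = y k and n = k, so integrate y over the square.
surface = integrate(y, (x, 0, a), (y, 0, a))

# Line side: the four sides OA, AB, BC, CO of the square.
OA = integrate(t**2, (t, 0, a))          # along y = 0, dy = 0
AB = integrate(a * t, (t, 0, a))         # along x = a, dx = 0
BC = integrate(t**2, (t, a, 0))          # along y = a, dy = 0
CO = 0                                   # along x = 0, dx = 0
line = OA + AB + BC + CO

print(line, surface)                     # both print a**3/2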
Page 315 :
7.68, , n, , Engineering Mathematics-I, , R, R, R, varies from 2 to 3. Therefore, ~f : d~r ¼, þ ,, C, , OA, , AB, , which will come out to be 4 þ 7 ¼ 11., 40. Find the circulation of ~f around the curve C,, where~f ¼ y^i þ z^j þ xk^ and C is the circle x2 þ, y2 ¼ 1 and z ¼ 0., Hint: Parametric equations of C are x ¼ cos t,, y ¼ sin t, and z ¼ 0, where t varies from 0 to 2., R2, H, Then, ~f : d~r ¼ 1cos 2t dt ¼ ., 0, , C, , 2, , 41. Find the work done when a force ~f ¼, ðx2 y2 þ xÞ^i ð2xy þ yÞ^j moves a particle in, xy-plane from (0, 0) to (1, 1) along the parabola, y2 ¼ x., Hint: Proceed as in Example 7.49., Ans. 23., 42. Compute the work done by a force ~f ¼, x^i z^j þ 2yk^ to displace a particle along a, closed path C consisting of the segments C1,, C2, and C3, such that, 0 x 1; y ¼ x; z ¼ 0 on C1 ;, 0 z 1; x ¼ 1; y ¼ 1 on C2 ; and, 0 x 1; y ¼ z ¼ x on C3 :, Ans. 32., 43. Find the work done in moving a particle once, around a circle C in the xy-plane, if the circle, has its center at the origin with a radius 3, and if, the force field is given by ~f ¼ ð2xyþzÞ^iþ, ðxþyz2 Þ^jþ ð3x2yþ4zÞ^k., Hint: Parametric equations of C are x ¼ 3 cos t,, y ¼ 3 sin t, and 0 t 2., Ans. 18., , Surface Integrals, 44. Evaluate, , RR, , ~f : ^n ds, where ~f ¼ 12x2 y^i 3yz^j þ, , S, , 2zk^ and S is the portion of the plane x þ y þ, z ¼ 1, included in the first octant., ^ ^ ^, , Hint: ^n ¼ iþpjþffiffi3 k and ~f :^n ¼ p1ffiffi3 ð12x2 y3yzþ2zÞ, 1, ¼ pffiffiffi ½12x2 y3yð1xyÞþ2ð1xyÞ:, 3, ^n:k^ ¼ p1ffiffi3. Evaluate, , RR, , ^f : ^n ds., Ans., , 49, 120., , RR, ~f : ^n dS, where ~f ¼ ðx þ y2 Þ^i , 45. Evaluate, S, 2x^j þ 2yzk^ and S is the surface of the plane 2x, þ y þ 2z ¼ 6, in the first octant., Hint: Proceed as in Example 7.56. Ans. 81., ^, RR, 46. Evaluate ~f : ^ndS, where f ¼ 4xy^i þ yz^j , S, xy ^k and S is the surface bounded by the planes, x ¼ 0, x ¼ 2, y ¼ 0, y ¼ 2, z ¼ 0, and z ¼ 2., Hint: Proceed, RR as in Example 7.59. Ans. 40., 47. Evaluate ^n dS, where ¼ 38 xyz and S is, S, , the surface of the cylinder x2 þ y2 ¼ 16,, included in the first octant between z ¼ 0 and, z ¼ 5., ^iþ2y^j, ,, Hint: rðx2 þy2 16Þ ¼ 2x^iþ2y^j; ^n ¼ p2xffiffiffiffiffiffiffiffiffiffiffiffi, 4x2 þ4y2, and, , , x^iþy^j, y, ^, ^n : j ¼, :^j ¼ Therefore;, 4, 4, ZZ, ZZ, dxdz, ^n ds ¼, ^n , ^n :^j, S, S, , , ZZ, x^iþy^j dxdz, 3, ¼, xyz, : y, 4, 8, 4, R, ZZ, , , 3, ¼, xz x^iþy^j dxdz, 8, R, , Z5 Z4 , pffiffiffiffiffiffiffiffiffiffiffiffiffiffi, , , 3, x2 z^iþxz 16x2^j dxdz ¼ 100 ^iþ^j :, ¼, 8, 0, , 0, , 48. Evaluate, , RR, , ~f : ^n dS, where ~f ¼ y^i þ 2x ^j zk^, , S, , and S is the surface of the plane 2x þ y ¼ 6, in, the first octant cut off by the plane z ¼ 4., Ans. 108., , Volume Integral, 49. Evaluate, , RR, , dV , where ¼ 45x2 y and V is the, , S, , region bounded by the planes 4x þ 2y þ z ¼ 8,, x ¼ 0, y ¼ 0, and z ¼ 0., Ans. 128., RRR, 50. Evaluate ð2x þ yÞdV , where V is the closed, V, , region bounded by the cylinder z ¼ 4 x2 and, the planes x ¼ 0, y ¼ 0, y ¼ 2, and z ¼ 0., Hint: The limits of integration are x ¼ 0 to x ¼, 2, y ¼ 0 to y ¼ 2, and z ¼ 0 to z ¼ 4 x2., Ans. 80, 3.
Page 318 :
xy-plane. Then,
∫∫_R (^n · ^k / |^n · ^k|) dx dy = ∫∫_R dx dy = area of R = π(1)² = π.

68. Transform the integral ∫∫_S curl ~f · ^n dS into a line integral, if S is the part of the surface of the paraboloid z = 1 − x² − y² for which z ≥ 0 and ~f = y^i + z^j + x^k.
Hint: The boundary of S is the circle x² + y² = 1, z = 0, with parametric equations x = cos θ, y = sin θ, z = 0, 0 ≤ θ < 2π. Use Stoke's theorem to transform the given integral into a line integral. The value of the line integral will come out to be −π.
Page 319 :
EXAMINATION PAPERS WITH SOLUTIONS, B.TECH, (SEM I) ODD SEMESTER THEORY EXAMINATION, 2009–10, MATHEMATICS–I, Time: 3 Hours, Note : Attempt all questions., , Total marks: 100, , 1. Attempt any two parts of the following:, (a) Reduce the matrix:, , 2, , 1, 6 1, A¼6, 4 1, 1, , 1, 1, 0, 1, , 3, 1, 1, 3 3 7, 7, 1, 25, 3, 3, , to column echelon form and find its rank., (b) Verify the Cayley-Hamilton theorem for the matrix:, 2, 3, 1 0 4, A¼4 0 5, 45, 4 4, 3, And hence find A1 ., (c) Find the eigenvalues and the corresponding eigen, 2, 1, A ¼ 40, 2, , vectors of the matrix, 3, 0 0, 2 15, 0 3, , 2. Attempt any two parts of the following:, (a) If y ¼ ðx2 1Þn , prove that, ðx2 1Þynþ2 þ 2xynþ1 nðn þ 1Þyn ¼ 0
Page 321 :
Examination Papers, , n, , Q.3, , SOLUTIONS, 1. (a) Please see Example 4.74(c). Matrix is said to be in column echelon form if, (i) The first non-zero entry in each non-zero column is 1., (ii) The column containing only zeros occurs next to all non-zero columns., (iii) The number of zeros above the first non-zero entry in each column is less than the number of such, zeros in the next column., The given matrix is, 2, , 1, 1, 6 1, 1, A¼6, 4 1, 0, 1 1, , 3 2, 1, 1, 1, 6 1, 3 3 7, 76, 1, 25 4 1, 1, 3, 3, 2, 1, 6 1, 6, 4 1, 1, 2, 1, 6 1, 6, 4 1, 1, 2, 1, 61, 2, 6, 4 1, 1, , 3, 0, 0, C2 ! C2 C1, 4 2 7, 7 C3 ! C3 þ C1, 2, 15, C4 ! C4 C1, 4, 2, 3, 0 0, 4 0 7, 7C ! C4 þ C2, 2 05 4, 4 0, 3, 0 0, 0 07, 7C ! C3 þ 2C2, 0 05 3, 0 0, 3, 0 0, 0 07, 7R ! 1 R2 ;, 0 05 2, 2, 0 0, , 0, 2, 1, 2, 0, 2, 1, 2, 0, 2, 1, 2, 0, 1, 12, 1, , which is column echelon form. The number of non-zero column is two and therefore ðAÞ ¼ 2., (b) Cayley-Hamilton theorem states that ‘‘every matrix satisfies its characteristic equation’’. We are, given that, ", #, 1 0 4, A¼, 0 5, 4 :, 4 4, 3, Therefore, ", A ¼, 2, , 1, 0, 4, , 0, 5, 4, , 4, 4, 3, , #", , 1 0, 0 5, 4 4, , 4, 4, 3, , #, , ", ¼, , 17, 16, 16, , 16, 41, 32, , and, 2, , 81, A3 ¼ A2 :A ¼ 4 144, 180, , 144, 333, 324, , 3, 180, 324 5:, 315, , 16, 32, 41, , #
Page 322 :
Q.4, , n, , Engineering Mathematics-I, , On the other hand, , 1 l, , j A lI j ¼ 0, 4, , 0, 5l, 4, , , 4 , , 4 , 3 l, , ¼ ð1 lÞ½ð5 lÞð3 lÞ 16 4½4ð5 lÞ, ¼ ð1 lÞðl2 8l 1Þ 80 þ 16l, ¼ l3 þ 9l2 þ 9l 81:, Therefore, the characteristic equation of the matrix A is, l3 9l2 9l þ 81 ¼ 0:, , ð1Þ, , To verify Cayley Hamilton Theorem, we have to show that, A3 9A2 9A þ 81I ¼ 0:, We note that, , 2, , 3, 2, 144 180, 17 16, 333, 324 5 94 16, 41, 324, 315, 16, 32, 2, 3 2, 3, 1 0 0, 0 0 0, þ 814 0 1 0 5 ¼ 4 0 0 0 5:, 0 0 1, 0 0 0, , 81, 4 144, 180, , 3, 2, 16, 1, 32 5 94 0, 41, 4, , ð2Þ, 0, 5, 4, , 3, 4, 45, 3, , Hence A satisfies its characteristic equation., (c) We have, , 2, , 1 0, A ¼ 40 2, 2 0, The characteristic equation of A is, , , 1 l, , j A lI j ¼ 0, 2, , 3, 0, 1 5:, 3, , , 0, 0 , 2l, 1 ¼ 0, 0, 3 l, , or, l3 6l2 þ 11l 6 ¼ 0;, which yields l ¼ 1; 2; 3. Hence the characteristic roots are 1, 2 and 3., The eigenvector corresponding to l ¼ 1 is given by ðA IÞX ¼ 0, that is, by, 2, 32 3 2 3, 0 0 0, x1, 0, 4 0 1 1 54 x2 5 ¼ 4 0 5:, 2 0 2, x3, 0, Thus, we have, x2 þ x3 ¼ 0;, 2x1 þ 2x3 ¼ 0:
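The verification in 1(b) above can also be reproduced symbolically. The sketch below is an illustrative aid (assuming Python with sympy), not part of the examination solution itself:

# Illustrative Cayley-Hamilton check for the matrix of question 1(b) (assumes sympy).
from sympy import Matrix, eye, symbols

A = Matrix([[1, 0, 4], [0, 5, 4], [4, 4, 3]])
lam = symbols('lambda')

print(A.charpoly(lam).as_expr())              # lambda**3 - 9*lambda**2 - 9*lambda + 81
print(A**3 - 9 * A**2 - 9 * A + 81 * eye(3))  # the zero matrix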
Page 328 :
U.P. TECHNICAL UNIVERSITY, LUCKNOW, B.TECH. (C.O.), FIRST SEMESTER EXAMINATION, 2008–2009, MATHEMATICS–I, (PAPER ID: 9916), Time : 3 Hours, Note: Attempt all Questions., , Total Marks : 100, , 1. Attempt any two parts of the following:, (a) Find all values of m for which rank of the matrix., 3, 2, m 1, 0, 0, 6 0, m 1, 07, 7 is equal to 3:, A¼6, 4 0, 0, m 1 5, 6 11 6, 1, 2, , 3, 0, 1 5, then show that An ¼ An2 þ A2 I for n 3., 0, 2, 3, 3 1 1, (c) Show that the matrix A ¼ 4 2 1, 2 5 is diagonalizable. Hence, find P such that P1 AP is a, 0 1, 2, diagonal matrix., , 1, (b) If A ¼ 4 1, 0, , 0, 0, 1, , 2. Attempt any two parts of the following:, (a) Find ðyn Þ0 when y ¼ sinða sin1 xÞ., @2u, (b) If u ¼ exyz , show that @x@y@z, ¼ ð1 þ 3xyz þ x2 y2 z2 Þexyz :, 2, (c) Trace the curve y ða þ xÞ ¼ x2 ð3a xÞ:, 3. Attempt any two parts of the following:, (a) Show that the functions u ¼ x2 þ y2 þ z2 ; v ¼ x þ y þ z; w ¼ yz þ zx þ xy are not independent of, one another.
Page 329 :
Examination Papers, , n, , Q.11, , (b) The height h and the semi-vertical angle a of a cone are measured, and from them A, the total, surface area of the cone, including the base, is calculated. If h and a are in error by small quantities, h and a respectively, find corresponding error in the area. Show further that, a ¼ 6, an error of, þ1 percent in h will be approximately compensated by an error of 0 :33 in a., (c) Determine the points where the function x2 þ y3 3axy has a maximum or minimum., (d) Find the point upon the plane ax þ by þ cz ¼ p at which the function f ¼ x2 þ y2 þ z2 has a, mnimum value and find this minimum f., 4. Attempt any two parts of the following:, RR, (a) Evaluate: R xy dx dy, where R is the quadrant of the circle x2 þ y2 ¼ a2 where x 0 and y 0., (b) Find the volume common to the cylinders x2 þ y2 ¼ a2 and x2 þ z2 ¼ a2, RRR, (c) Evaluate: R ðx 2y þ zÞdxdydz where R: 0 x 1; 0 y x2 ; 0 z x þ y., 5. Attempt any two parts of the following:, (a) Find the directional derivative of f ðx; y; zÞ ¼ 2x2 þ 3y2 þ z2 at the point Pð2; 1; 3Þ in the direction, of the vector ^a ¼ ^i 2^k., RR, F ^ndS ¼ 32 ; where ~, F ¼ 4xz^i y2^i þ yz^k and S is the surface of the cube bounded by, (b) Show that S ~, the planes x ¼ 0; x ¼ 1; y ¼ 0; y ¼ 1; z ¼ 0; z ¼ 1., R, (c) Use the Stoke’s theorem to evaluate C ½ðx þ 2yÞ dx þ ðx zÞ dy þ ðy zÞdz where C is the, boundary of the triangle with vertices ð2; 0; 0Þ; ð0; 3; 0Þ and ð0; 0; 6Þ oriented in the anti-clockwise, direction., , SOLUTIONS, 1. (a) Similar to Remark 4.5, We are given that, , 2, , m, 6 0, A¼6, 4 0, 6, Therefore, , , m, , , j Aj ¼ m 0, , 6, , , , 0, 1, 0 , , , , m 1 þ 1 0, , , 6, 11 6 , , 1, m, 6, , For m ¼ 3, we have the singular matrix, 2, 3, 6 0, 6, 4 0, 6, , 1, m, 0, 11, , 0, 1, m, 6, , 3, 0, 07, 7, 1 5, 1, , , 0 , , 1 ¼ m3 6m2 þ 11m 6 ¼ 0 if m ¼ 1; 2; 3:, , 1, , 1, 3, 0, 11, , 0, 1, 3, 6, , 3, 0, 07, 7;, 1 5, 1
Page 330 :
Q.12, , n, , Engineering Mathematics-I, , which has non-singular sub-matrix, , 2, , 3, 1, 0, 3 1 5:, 0, 3, , 3, 40, 0, , Thus for m ¼ 3, the rank of the matrix A is 3. Similarly, the rank is 3 for m ¼ 2 and m ¼ 1. For other, values of m, we have j Aj 6¼ 0 and so ðAÞ ¼ 4 for other values of m., (b) We have, , Then, , 2, , 32, 3 2, 3, 1 0 0, 1 0 0, 1 0 0, A2 ¼ 4 1 0 1 5 4 1 0 1 5 ¼ 4 1 1 0 5 :, 0 1 0, 0 1 0, 1 0 1, 2, 3 2, 3 2, 3 2, 1 0 0, 1 0 0, 1 0 0, 1 0, A þ A2 I ¼ 4 1 0 1 5 þ 4 1 1 0 5 4 0 1 0 5 ¼ 4 2 0, 0 1 0, 1 0 1, 0 0 1, 1 1, , Also, , 2, , 1 0, A 3 ¼ A2 A ¼ 4 1 1, 1 0, , 32, 1 0, 0, 05 41 0, 1, 0 1, , 3 2, 0, 1, 15 ¼ 42, 0, 1, , 0, 0, 1, , 3, 0, 15, 0, , 3, 0, 15, 0, , Hence for n ¼ 3, the relation, An ¼ An2 þ A2 I, holds. We want to show that it holds for n, We have, , ð1Þ, , 3 We prove the result using mathematical induction., , Anþ1 ¼ An :A ¼ ½An2 þ A2 IA, ¼ Aðnþ1Þ2 þ A3 AI, ¼ Aðnþ1Þ2 þ ½A þ A2 I A, ¼ Aðnþ1Þ2 þ A2 I:, Hence, by mathematical induction, the result holds for all n, (c) The characteristic matrix of the given matrix A is, , 3l, 1, , jA lI j ¼ 2 1 l, 0, 1, , 3., , , 1 , 2 ¼ 0, 2 l, , or, ð3 lÞ½ð1 lÞð2 lÞ 2 1½4 þ 2l 1ð2Þ ¼ 0, or, ð3 lÞð1 lÞð2 lÞ 6 þ 2l þ 4 2l þ 2 ¼ 0, or, ð3 lÞð1 lÞð2 lÞ ¼ 0:
Page 337 :
Examination Papers, , Indicate True of False for the following statements:, (f) (i) If jAj ¼ 0, then at least on eigen value is zero., (ii) A, , 1, , n, , Q.19, , (True/False), , exists if 0 is an eigen value of A, , (True/False), , (iii) If jAj 6¼ 0, then A is known as singular matrix, , (True/False), , (iv) Two vectors X and Y is said to be orthogonal Y ; X Y ¼ Y X 6¼ 0:, (g) (i) The curve y2 ¼ 4ax is symmetric about x-axis., T, , T, , (True/False), (True/False), , (ii) The curve x3 þ y3 ¼ 3axy is symmetric about the line y ¼ x, (iii) The curve x2 þ y2 ¼ a2 is symmetric about both the axis x and y., , (True/False), (True/False), , (iv) The curve x3 y3 ¼ 3axy is symmetric about the line y ¼ x., , (True/False), , Pick the correct answer of the choice given bolow:, (h) If r ¼ x^i þ y^j þ z^k is position vector, then value of rðlog rÞ is, (i), , r, r, , (iii) rr3, (i) The Jacobian, (i) 1, , (ii), @ðu;vÞ, @ðx;yÞ, , r, r2, , (iv) None of the above, for the function u ¼ ex sin y; v ¼ ðx þ log sin yÞ is, (ii), , sin x sin y xy cos x cos y, x, , (iii) 0, (iv) ex ., (j) The volume of the solid under the surface az ¼ x2 þ y2 and whose base R is the circle x2 þ y2 ¼ a2, is given as, (i) =2a, (iii) 43 a3, , (ii), (iv), , a3 =2, None of the above., , SECTION B, 2. Attempt any three parts of the following:, , (3 10 = 30), , (a) If y ¼ ðsin1 xÞ2 , prove that yn ð0Þ ¼ 0 for b odd and yn ð0Þ ¼ 2:22 :42 :62 . . . ðn 2Þ2 ; n 6¼ 2 for n is, even., (b) Find the dimension of rectangular box of maximum capacity whose surface area is given when, (a) box is open at the top (b) box is closed., 4 1, (c) Find a matrix P which diagonalizes the matrix A ¼, , verify P1 AP ¼ D where D is the, 2 3, diagonal matrix., a b, (d) Find the area and the mass contained in the first quadrant enclosed by the curve ax þ by ¼ 1, pffiffiffiffiffi, where a > 0; b > 0 given that density at any point ðx; yÞ is k xy., RR, (e) Using the divergence theorem, evaluate the surface integral S ðyz dy dz þ zx dz dx þ xy dy dxÞ, where S: x2 þ y2 þ z2 ¼ 4.
Page 341 :
Examination Papers, ^, , ^, , ^, , But r ¼ x i þy j þz k . Therefore, , ! pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi, , r ¼ r ¼ x2 þ y2 þ z2, , or, r 2 ¼ x 2 þ y 2 þ z2 :, Thus, 2r, , @r, ¼ 2x, @x, , or, @r x, ¼ :, @x r, Similarly,, @r y, ¼, @y r, Therefore, , and, , @r z, ¼ :, @z r, , ^ 1 y, ^ 1 z, 1 x, þj, þk, r r, r r, r r, ^ y, ^ z, ^ x, ¼ i 2 þ j 2 þk 2, r, r, r, , !, ^, ^, ^, r, 1, ¼ 2 x i þy j þ2 k ¼ 2 :, r, r, ^, , rðlog rÞ ¼ i, , Hence choice (ii) is true, (i) We have, v ¼ x þ log sin y:, , u ¼ ex sin y;, Therefore, , , @ðu; vÞ , ¼, @ðx; yÞ , , @u, @x, @v, @x, x, , @u, @y, @v, @y, , , ex sin y, , ¼, , 1, , ¼ e cos y ex cos y ¼ 0, Hence the choice (iii) is correct, (j) (iii) is correct., 2. (a) We have, y ¼ ðsin1 xÞ2 :, , , ex cos y , cot y , , n, , Q.23
Page 344 :
Q.26, , n, , Engineering Mathematics-I, , and, , 2, , 13, , 1, 3, , 2, 3, , 1, 3, , 7 4, 5, 2, , 1, 3, , P1 ¼ 4, , Then, , 2, 6, P1 A P ¼ 4, 2, 6, ¼4, , 13, , 1, 3, , 2, 3, 23, , 1, 3, 2, 3, , 10, 3, , (d) The equation of the curve is, , x, , y, þ, a, b, , 5, , 3, 1, 2, , 1, 1, , ¼, , 2, , 0, , 0, , 5, , 3, 7, 5, , 1, , 1, , 2, , 1, , 5, 3, , a, , 3, , b, , ¼ 1; a; b > 0:, , The parametric form of the curve is, 2, , 2, , x ¼ a cosa t; y ¼ b sinb t:, Therefore, the required area is, Z, , Z0, A¼, , y dx ¼, , 2, , Z0, ¼, , 2, , y, , 2, , dx, dt, dt, , , , 2, 2, 2a, ðb sinb tÞ cosða1Þ t sin t dt, a, , , 2ab, ¼, a, , Z2, , sinðbþ1Þ t cosða1Þ t dt, 2, , 2, , 0, , 2 2, 3, 2, ðbþ2Þ, ða1þ1Þ, 2, 2ab 6 2 , 7, ¼, 2þ1þ21þ2, 4, 5, a, b, a, 2, 2, 2 , 3, 1, 1, 2ab 4 a b 5, , ¼, 2ab 1 þ 1 þ 1, a, b, 1 1, ab a b, , ¼, aþb F 1þ1, a, , b, , :
Page 348 :
Q.30, , n, , Engineering Mathematics-I, , Therefore, , @u, @x, , , , @ðu; v; wÞ @v, 0, ¼, J ¼, @ðx; y; zÞ @x, , , , @w, @x, , @u, @y, , @v, @y, , @w, @y, , , , , 1, y, 2 2, x þy, @v , @z , , , 0, , @w , @u, @z, , 0, x, x2 þy2, , 0, , , 0, , , 0 , , , 1, , @z, , ¼, , x, 1, i, ¼ h, x2 þ y2 x 1 þ y2, x, , ¼, , 1, y, ; since ¼ tan v, uð1 þ tan2 vÞ, u, , ¼, , 1, :, u sec2 v, , Hence, J J 0 ¼ 1; which proves the chain rule:, , (c) We have, , sffiffiffi, l, :, T ¼ 2, g, , Taking logarithm, we get, 1, 1, log T ¼ log 2 þ log l log g:, 2, 2, Differentiating (1), we get, 1, 1 l 1 g, T ¼, , T, 2l 2 g, or, T, 1 l, 1g, 100 ¼, 100 , 100, T, 2 l, 2 g, 1, ¼ ½2 0 ¼ 1:, 2, Hence the approximate error is 1%., 5. (a) The give system of equation is, x þ y þ 3z ¼ 0, 2x þ y þ 2z ¼ 0, 4x þ 3y þ bz ¼ 0:, , ð1Þ
Page 349 :
Examination Papers, , The system in matrix form is, , 2, , 1, 42, 4, , 1, 1, 3, , n, , Q.31, , 32 3, 2 3, 3, x, 0, 25 4y5 ¼ 405, b, z, 0, , This homogenous system will have a non-trivial solution only if j Aj ¼ 0. Thus for non-trivial solution, , , 1 1 3, , , 2 1 2 ¼ 0, , , 4 3 b, or, 1ðb 6Þ 1ð2b 8Þ þ 3ð6 4Þ ¼ 0, or, b þ 8 ¼ 0; which yields b ¼ 8:, Thus for non-trivial solution b ¼ 8. The coefficient matrix for non-trivial solution is, 3 2, 2, 3, 1, 1, 3, 1 1 3, 7 6, 6, 7 R2 ! R2 2R1, 4 2 1 2 5 4 0 1 4 5, R3 ! R3 4R1, 0 1 4, 4 3 8, 3, 2, 1, 1, 3, 7, 6, 4 0 1 4 5R3 ! R3 R2, 0, 0, 0, The last matrix is of rank 2. Thus the given system is equivalent to, x þ y þ 3z ¼ 0, y 4z ¼ 0:, Hence y ¼ 4z and then x ¼ z. Taking z ¼ t the general solution is, x ¼ t; y ¼ 4t; z ¼ t:, , (b) We have, A¼, , The characteristic equation is, , 1, 2, :, 2 1, , , 1 l, jA lI j ¼ , 2, , , 2 , ¼0, 1 l
Page 350 :
Q.32, , n, , Engineering Mathematics-I, , or, l2 5 ¼ 0:, , ð1Þ, , We note that, A2 ¼, , 1, 2, , 2, 1, , 1, 2, 2 1, , 5, 0, , ¼, , 0, :, 5, , Then, A2 5I ¼, , 5, 0, , 0, 5 0, 0 0, , ¼, :, 5, 0 5, 0 0, , Hence A satisfies its characteristic equation. Premultiplication by A1 yields, A 5A1 ¼ 0, or, A1, , (c) The characteristic equation is, , 2, 1, ¼ A¼4, 5, , , 5 l, jA lI j ¼ , 2, , 1, 5, , 2, 5, , 2, 5, , 15, , 3, 5:, , , 2 , ¼0, 2 l , , or, ð5 lÞð2 lÞ 4 ¼ 0, or, l2 þ 7l þ 6 ¼ 0, or, l¼, , 7 , , pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi, 49 24, ¼ 6; 1:, 2, , The characteristic vector corresponding to l ¼ 2 is given by ðA þ IÞX ¼ 0, that is, by, 4, 2, x1, 0, ¼, 2 1, x2, 0, or by, 4x1 þ 2x2 ¼ 0, 2x1 x2 ¼ 0:
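As an illustrative side check of part (b) above (assuming Python with sympy), the relations A² = 5I and A⁻¹ = (1/5)A can be confirmed directly:

# Illustrative check for A = [[1, 2], [2, -1]] (assumes sympy is available).
from sympy import Matrix, eye

A = Matrix([[1, 2], [2, -1]])
print(A**2 == 5 * eye(2))        # True, i.e. A satisfies lambda^2 - 5 = 0
print(A.inv() == A / 5)          # True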
Page 355 :
Index, Acceleration of the particle 7.7, , Radial and transverse 7.8, Tangential and normal 7.8, Adjoint of a matrix 4.18, Algebraic structure 4.3, Asymptote of a curve 2.9, Asymptotes of rational algebraic curve 2.3, Asymptotes parallel to axes 2.3, Augmented matrix 4.36, , Jacobian 3.33, Lagrange’s condition 3.24, Lagrange’s method of undetermined multipliers 3.29, Laplacian operator 7.22, Leibnitz’s theorem 1.8, Level surfaces 7.13, Linear span 4.7, Linearly independent set 4.7, Liouville’s theorem 5.13, , Beta function 5.3, Cancellation law 4.4, Change of order of integration 6.13, Characteristic root 4.44, Consistency theorem 4.36, Curl of a vector-point function 7.21, Diagonalization of quadratic form 4.62, Division ring 4.6, Double integral 6.1, , Change of variable 6.9, Evaluation of 6.2, Eigenvalue of a matrix 4.44, Envelope of the family of curves 3.7, Equivalent matrices 4.29, Evaluation of double integrals 6.2, 6.7, Extreme values 3.23, Field 4.6, Gamma function 5.7, Geometric multiplicity of an eigenvalue 4.48, Group 4.3, , Abelian 4.3, Finite 4.3, Group homomorphism 4.5, Harmonic function 7.22, Higher-order partial derivatives 3.2, Homogeneous function 3.9, Integral domain 4.6, Integration of vector functions 7.29, Intersection of a curve and its asymptotes 2.7, Inverse of a matrix 4.19, , Maclaurin’s theorem 3.20, Matrix 4.9, , Derogatory 4.50, Diagonal 4.10, Hermitian 4.14, Idempotent 4.14, Involutory 4.14, Lower triangular 4.18, Nilpotent 4.13, Normal 4.50, Null 4.10, Orthogonal 4.52, Scalar 4.10, Singular 4.20, Square 4.10, Elementary 4.23, Symmetric 4.14, Unit 4.10, Unitary 4.51, Upper triangular 4.18, Matrix algebra 4.10, Minimal polynomial 4.48, Multiplication of matrices 4.11, Normal form of a matrix 4.28, Normal form of a real quadratic form 4.64, Orthogonal vectors 4.51, Partial derivatives 3.2, Physical interpretation of curl 7.21, Physical interpretation of divergence 7.20, Properties of Beta function 5.3, Properties of divergence and curl 7.24, Properties of gamma function 5.7
Page 356 :
I.2, , n, , Index, , Quadratic forms 4.61, , Index of 4.64, Negative definite 4.64, Positive definite 4.64, Rank of 4.63, Semi-definite 4.64, Signature of 4.64, Rank of a matrix 4.25, Relation between Beta and Gamma functions 5.7, Ring 4.5, , Commutative 4.5, Without zero divisor 4.5, Ring homomorphism 4.6, Ring isomorphism 4.6, Row reduced echelon form 4.28, Saddle point 3.24, Similarity of matrices 4.53, Stoke’s theorem 7.52, , Subgroup 4.4, Surface integral 7.36, Taylor’s theorem for functions of several variables 3.19, Transpose of a matrix 4.14, Transverse acceleration 7.8, Triple integral 6.27, Unit tangent vector to a curve 7.5, Vector differential operator (del) 7.13, Vector function 7.5, , Ordinary derivative of 7.5, Vector line integral 7.30, Vector point function 7.13, Vector space 4.6, Vector triple product 7.5, Velocity vector 7.7, Volume integral 7.41, Work done by a force 7.33