Blog | Similis | http://simil.is/blog/ | 2014-10-06T03:57:16+00:00 | &Mu;&eta;&delta;&epsilon;ί&sigmaf; &alpha;&gamma;&epsilon;&omega;&mu;έ&tau;&rho;&eta;&tau;&omicron;&sigmaf; &epsilon;&iota;&sigma;ί&tau;&omega; &mu;&omicron;&iota; &tau;&eta;&nu; &sigma;&tau;έ&gamma;&eta;&nu; ("Let no one ignorant of geometry enter my roof")Representing Compositions of Linear Transformations as Matrices | 2014-10-06T03:57:16+00:00 | tyr | /blog/author/tyr/ | http://simil.is/blog/representing-compositions-of-linear-transformations-as-matrices/<p>Let S:U-&gt;V and T:V-&gt;W be linear transformations, and let A = [T][&beta;,&gamma;] and B = [S][&alpha;,&beta;], where the bases of U, V, and W are &alpha; = {u_1,...,u_n}, &beta; = {v_1,...,v_m}, and &gamma; = {w_1,...,w_p}, with n, m, p positive integers. We claim that the product of these two matrices satisfies AB = [TS][&alpha;,&gamma;].</p> <p>By definition of the matrix representations, T(v_k) and S(u_j) satisfy:</p> <p>T(v_k) = &sum;(i=1, p)(A_ik * w_i), and S(u_j) = &sum;(k=1, m)(B_kj * v_k).</p> <p>[TS][&alpha;,&gamma;] is then computed as follows:</p> <p>TS(u_j) = T(&sum;(k=1, m)(B_kj * v_k))<br />= &sum;(k=1, m)(B_kj * T(v_k))<br />= &sum;(k=1, m)(B_kj * &sum;(i=1, p)(A_ik * w_i))<br />= &sum;(i=1, p)(&sum;(k=1, m)(A_ik * B_kj))(w_i)<br />= &sum;(i=1, p)(C_ij * w_i),</p> <p>where C_ij = &sum;(k=1, m)(A_ik * B_kj). Thus the matrix C = AB represents TS with respect to &alpha; and &gamma;.</p> <p>This result motivates the more general definition of matrix multiplication: for A in M_(m x n)(<strong>R</strong>) and B in M_(n x p)(<strong>R</strong>), the product of A and B is given as follows:</p> <p id="yui_3_13_0_2_1412568993495_105">(AB)_ij = &sum;(k=1, n)(A_ik * B_kj); 1 &le; i &le; m, 1 &le; j &le; p.</p>Proving Properties of Compositions of Linear Transformations | 2014-10-06T03:15:46+00:00 | tyr | /blog/author/tyr/ | http://simil.is/blog/proving-properties-of-compositions-of-linear-transformations/<p><span style="line-height: 1.428571429;">Theorem: Let <strong>V</strong> be a <strong>vector space</strong>. 
Let <strong>R</strong>, <strong>S</strong>, and <strong>T</strong> &isin; <strong>L</strong>(<strong>V</strong>) be <strong>linear transformations</strong>. Then:<br /></span><span style="line-height: 1.428571429;">(a) R(S + T) = RS + RT and (S + T)R = SR + TR<br /></span><span style="line-height: 1.428571429;">(b) R(ST) = (RS)T<br /></span><span style="line-height: 1.428571429;">(c) RI = IR = R<br /></span><span style="line-height: 1.428571429;">(d) c(ST) = (cS)T = S(cT) &forall; c &isin; <strong>R</strong></span></p> <p>-----------------</p> <p>Let R, S, T, W &isin; L(V) be linear transformations on V = <strong>R</strong>^2, let u = (u_1, u_2) and v = (v_1, v_2) be vectors in V, and let c &isin; <strong>R</strong>. Let I be the identity transformation, so that IL = LI = L &forall; L &isin; L(V). Then:</p> <p>R(S(cu) + T(v)) = IR(S(cu) + T(v))<br />= RI(S(cu_1, cu_2) + T(v_1, v_2)) = R(cS(u_1, u_2)) + RT(v_1, v_2)<br />= cRS(u_1, u_2) + RT(v_1, v_2) = cRS(u) + RT(v).</p> <p>Similarly,</p> <p>(S + cR)(T(u)W(v)) = (S + cR)(T(u_1, u_2)W(v_1, v_2))<br />= S(T(u_1, u_2)W(v_1, v_2)) + cR(T(u_1, u_2)W(v_1, v_2))<br />= (ST(u_1, u_2))(W(v_1, v_2)) + (R(cT(u_1, u_2)))(W(v_1, v_2))<br />= (ST(u_1, u_2)W(v_1, v_2)) + (RT(cu_1, cu_2))W(v_1, v_2)<br />= ST(u_1, u_2)W(v_1, v_2) + R(T(u_1, u_2)cW(v_1, v_2))<br />= ST(u_1, u_2)W(v_1, v_2) + (RT(u_1, u_2)W(cv_1, cv_2)) = ST(u)W(v) + RT(u)W(cv),</p> <p>which completes the verification. <strong>[]</strong></p>Linear Dependence and Independence | 2014-09-22T05:05:34+00:00 | tyr | /blog/author/tyr/ | http://simil.is/blog/linear-dependence-and-independence/<p>Theorem: Let <strong>V</strong> be a <strong>vector space</strong>, and let <strong>S_1</strong> &sube; <strong>S_2</strong> &sube; <strong>V</strong>. 
If <strong>S_1</strong> is <strong>linearly dependent</strong>, then <strong>S_2</strong> is <strong>linearly dependent</strong>.<br />Corollary: Let <strong>V</strong> be a <strong>vector space</strong>, and let <strong>S_1</strong> &sube; <strong>S_2</strong> &sube; <strong>V</strong>. If <strong>S_2</strong> is <strong>linearly independent</strong>, then <strong>S_1</strong> is <strong>linearly independent</strong>.<br />-----------------<br /><span style="line-height: 1.428571429;">Proof of the theorem: Given S_1 &sube; S_2 &sube; V, we begin by assuming S_1 is linearly dependent. Then there exist vectors u_1,...,u_n in S_1 and scalars a_1,...,a_n in&nbsp;</span><strong style="line-height: 1.428571429;">R</strong>, not all zero (a_i &ne; 0 for at least one i,<span style="line-height: 1.428571429;">&nbsp;1 &le; i &le; n), such that:</span></p> <p>0 = a_1*u_1 + ... + a_n*u_n.</p> <p>Since S_1 &sube; S_2, the vectors u_1,...,u_n also lie in S_2, and the same nontrivial linear combination equals 0; hence S_2 is linearly dependent.<br />------<br />Corollary:&nbsp;<span style="line-height: 1.428571429;">Assume S_2 is linearly independent. The corollary is the contrapositive of the theorem: if S_1 were linearly dependent, then, since S_1 &sube; S_2, the theorem would force S_2 to be linearly dependent, contradicting the assumption. Hence S_1 is linearly independent, which proves the corollary. 
<strong>[]</strong></span></p>Span(S) is the set of all 2 x 2 Symmetric Matrices | 2014-09-22T01:52:51+00:00 | tyr | /blog/author/tyr/ | http://simil.is/blog/spans-is-the-set-of-all-2-x-2-symmetric-matrices/<p>Let W_2 be the set of all 2 x 2 real symmetric matrices, i.e., all A in M_(2 x 2)(<strong>R</strong>) with A_ij = A_ji for all 1 &le; i,j &le; 2.</p> <p>W_2 &le; M_(2 x 2)(<strong>R</strong>)</p> <p>A generic A in W_2 has the form A = [[a, c], [c, b]] for some a,b,c in <strong>R</strong>.<br />Note: A = A^t, det(A) = ab - c^2.</p> <p>Let S = {M_1, M_2, M_3} = {[[1, 0],[0, 0]] , [[0, 0],[0, 1]] , [[0, 1],[1, 0]]}</p> <p>To show that Span(S) = W_2 it is necessary to show that Span(S) &supe; W_2 and Span(S) &sube; W_2.</p> <p>i) Let A = [[a, c],[c, b]] be any matrix in W_2. Choosing x = a, y = b, z = c gives:</p> <p>[[a, c],[c, b]] = x[M_1] + y[M_2] + z[M_3] = x[[1, 0],[0, 0]] + y[[0, 0],[0, 1]] + z[[0, 1],[1, 0]],</p> <p>so every matrix in W_2 is a linear combination of S. Hence Span(S) &supe; W_2.</p> <p>ii) Conversely, for all a,b,c in <strong>R</strong>:</p> <p>a[M_1] + b[M_2] + c[M_3] = [[a, 0],[0, 0]] + [[0, 0],[0, b]] + [[0, c],[c, 0]] = [[a, c], [c, b]],</p> <p>which is symmetric and hence lies in W_2. Since every element of Span(S) has this form, Span(S) &sube; W_2.</p> <p>Therefore Span(S) = W_2. <strong>[]</strong></p>Symmetric Matrices as a Subspace of all Square Matrices | 2014-09-21T06:12:32+00:00 | tyr | /blog/author/tyr/ | http://simil.is/blog/symmetric-matrices-as-a-subspace-of-all-square-matrices/<p>Theorem:&nbsp;<i>A&nbsp;<b>subspace</b></i>&nbsp;<b>W</b>&nbsp;<i>of a&nbsp;<b>vector space</b></i>&nbsp;<b>V</b>&nbsp;<i>over&nbsp;<b>R</b>&nbsp;is a&nbsp;<b>subset</b>&nbsp;of</i>&nbsp;<b>V</b>&nbsp;<i>which also has the property that</i>&nbsp;<b>W</b>&nbsp;<i>is closed under addition and scalar multiplication. 
That is, for all</i>&nbsp;x<i>,</i>&nbsp;y&nbsp;<i>in</i>&nbsp;<b>W</b>&nbsp;<i>and any</i>&nbsp;c&nbsp;<i>in <b>R</b>,&nbsp;the combination cx + y is again in</i>&nbsp;<b>W</b><i>.</i><br />-----------------<br />Let W_n be the subset of M_(n x n)(<strong>R</strong>) consisting of all A with A_ij = A_ji (i.e., the set of all real symmetric square matrices).</p> <p>To show that W_n is a subspace of M_(n x n)(<strong>R</strong>) it is sufficient to verify the three subspace conditions: W_n contains the zero matrix, and is closed under addition and under scalar multiplication.</p> <p>S1) In M_(n x n)(<strong>R</strong>) there exists the matrix Z with z_ij = 0 for all 1 &le; i,j &le; n (i.e., the zero matrix, every row and column of which is the zero vector).</p> <p>Since z_ij = z_ji, Z is a symmetric matrix and so lies in W_n.</p> <p>To establish S2) and S3) it is useful to first record how the transpose interacts with addition and scalar multiplication of matrices.</p> <p>Let A, B be matrices in M_(n x n)(<strong>R</strong>) and let a,b be scalars in <strong>R</strong>. Let O = aA + bB and let P = a(A^t) + b(B^t). Then:</p> <p>O_ij = a(A_ij) + b(B_ij) = P_ji =&gt; P_ij = O_ji, i.e., (aA + bB)^t = a(A^t) + b(B^t).</p> <p>S2) If A,B are in W_n, with A = A^t and B = B^t, then:</p> <p>(A + B)^t = A^t + B^t = A + B =&gt; A + B is in W_n.</p> <p>Hence W_n is closed under addition.</p> <p>S3) If A is in W_n, then for all a in <strong>R</strong>:</p> <p>(aA)^t = a(A^t) = aA =&gt; aA is in W_n.</p> <p>Hence W_n is closed under scalar multiplication.</p> <p>Thus, by the theorem, W_n &le; M_(n x n)(<strong>R</strong>).<strong> []</strong></p>
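The subspace and span arguments in the last two posts can be checked numerically. Below is a minimal sketch in plain Python (nested lists as matrices, no external libraries); the helper names (transpose, mat_add, mat_scale, is_symmetric, combo) are illustrative choices of mine, not notation from the posts.

```python
# Numerical spot-check of the symmetric-matrix arguments above.
# All helper names are illustrative, not from the original posts.

def transpose(m):
    """Return the transpose M^t of a square matrix."""
    n = len(m)
    return [[m[j][i] for j in range(n)] for i in range(n)]

def mat_add(a, b):
    """Entrywise sum A + B."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def mat_scale(c, a):
    """Scalar multiple cA."""
    return [[c * x for x in row] for row in a]

def is_symmetric(m):
    """A is symmetric iff A = A^t, i.e. A_ij = A_ji."""
    return m == transpose(m)

# The spanning set S = {M_1, M_2, M_3} from the Span(S) post:
M1 = [[1, 0], [0, 0]]
M2 = [[0, 0], [0, 1]]
M3 = [[0, 1], [1, 0]]

def combo(a, b, c):
    """a*M_1 + b*M_2 + c*M_3, which should equal [[a, c], [c, b]]."""
    return mat_add(mat_add(mat_scale(a, M1), mat_scale(b, M2)),
                   mat_scale(c, M3))

# Span(S) <= W_2: every linear combination of S is symmetric.
assert combo(1, 2, 3) == [[1, 3], [3, 2]]
assert is_symmetric(combo(1, 2, 3))

# The three subspace conditions S1-S3 for W_2:
A = [[1, 2], [2, 3]]
B = [[0, 5], [5, -1]]
assert is_symmetric([[0, 0], [0, 0]])      # S1: contains the zero matrix
assert is_symmetric(mat_add(A, B))         # S2: closed under addition
assert is_symmetric(mat_scale(4, A))       # S3: closed under scalar mult.
```

A check like this is no substitute for the proofs, but it catches index slips (such as writing A_ik where A_ki was meant) cheaply for small concrete cases.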