Blog | Similis · http://simil.is/blog/ · 2014-10-06<br />"Let no one ignorant of geometry enter under my roof."

<h2>Representing Compositions of Linear Transformations as Matrices</h2>

<p>2014-10-06 · tyr · http://simil.is/blog/representing-compositions-of-linear-transformations-as-matrices/</p>

<p>Let S: U → V and T: V → W be linear transformations, with ordered bases α = {u_1, ..., u_n} of U, β = {v_1, ..., v_m} of V, and γ = {w_1, ..., w_p} of W, where n, m, p are positive integers. Let A = [T][β,γ] and B = [S][α,β] be the matrix representations of T and S with respect to these bases. We claim that the product of these two matrices satisfies AB = [TS][α,γ].</p>

<p>By definition of the matrix representations, T and S act on the basis vectors as follows:</p>

<p>T(v_k) = Σ(i=1, p)(A_ik · w_i), and S(u_j) = Σ(k=1, m)(B_kj · v_k).</p>

<p>The j-th column of [TS][α,γ] is obtained by expanding (TS)(u_j) = T(S(u_j)):</p>

<p>(TS)(u_j) = T(Σ(k=1, m)(B_kj · v_k))<br />= Σ(k=1, m)(B_kj · T(v_k)) (by linearity of T)<br />= Σ(k=1, m)(B_kj · Σ(i=1, p)(A_ik · w_i))<br />= Σ(i=1, p)(Σ(k=1, m)(A_ik · B_kj))(w_i)<br />= Σ(i=1, p)(C_ij · w_i),</p>

<p>where C_ij = Σ(k=1, m)(A_ik · B_kj).</p>

<p>This result motivates the general definition of matrix multiplication: for an m × n matrix A and an n × p matrix B, the product AB is given by</p>

<p id="yui_3_13_0_2_1412568993495_105">(AB)_ij = Σ(k=1, n)(A_ik · B_kj); 1 ≤ i ≤ m, 1 ≤ j ≤ p.</p>

<h2>Span(S) is the Set of All 2 × 2 Symmetric Matrices</h2>

<p>2014-09-22 · tyr · http://simil.is/blog/spans-is-the-set-of-all-2-x-2-symmetric-matrices/</p>

<p>Let W_2 be the set of all 2 × 2 real symmetric matrices, i.e. all A in M_(2 × 2)(<strong>R</strong>) such that A_ij = A_ji for all 1 ≤ i, j ≤ 2.</p>

<p>W_2 ≤ M_(2 × 2)(<strong>R</strong>) (this is the n = 2 case of the subspace result proved in the next post).</p>

<p>We
define A in W_2 by A = [[a, c], [c, b]] for a, b, c in <strong>R</strong>.<br />Note: A = A^t and det(A) = ab − c^2.</p>

<p>Let S = {M_1, M_2, M_3} = {[[1, 0], [0, 0]], [[0, 0], [0, 1]], [[0, 1], [1, 0]]}.</p>

<p>To show that Span(S) = W_2 it is necessary to show both Span(S) ⊇ W_2 and Span(S) ⊆ W_2.</p>

<p>i) Let A = [[a, c], [c, b]] be an arbitrary matrix in W_2, and seek x, y, z in <strong>R</strong> such that</p>

<p>[[a, c], [c, b]] = x·M_1 + y·M_2 + z·M_3 = x[[1, 0], [0, 0]] + y[[0, 0], [0, 1]] + z[[0, 1], [1, 0]].</p>

<p>Comparing entries gives x = a, y = b, z = c, so every matrix in W_2 is a linear combination of elements of S. Hence Span(S) ⊇ W_2.</p>

<p>ii) Conversely, for any a, b, c in <strong>R</strong>:</p>

<p>a·M_1 + b·M_2 + c·M_3 = [[a, 0], [0, 0]] + [[0, 0], [0, b]] + [[0, c], [c, 0]] = [[a, c], [c, b]],</p>

<p>which is symmetric and therefore lies in W_2. Hence Span(S) ⊆ W_2.</p>

<p>Therefore Span(S) = W_2. <strong>∎</strong></p>

<h2>Symmetric Matrices as a Subspace of All Square Matrices</h2>

<p>2014-09-21 · tyr · http://simil.is/blog/symmetric-matrices-as-a-subspace-of-all-square-matrices/</p>

<p>Theorem: <i>A subset</i> <b>W</b> <i>of a vector space</i> <b>V</b> <i>over</i> <b>R</b> <i>is a</i> <b>subspace</b> <i>of</i> <b>V</b> <i>if it contains the zero vector and is closed under addition and scalar multiplication; that is, for all</i> x, y <i>in</i> <b>W</b> <i>and any</i> c <i>in</i> <b>R</b><i>, the vector</i> cx + y <i>is in</i> <b>W</b><i>.</i><br />-----------------<br />Let W_n be the subset of M_(n × n)(<strong>R</strong>) consisting of all A such that A_ij = A_ji for all 1 ≤ i, j ≤ n (i.e.
the set of all real symmetric square matrices).</p>

<p>To show that W_n is a subspace of M_(n × n)(<strong>R</strong>) it is sufficient to verify the three conditions of the theorem.</p>

<p>S1) M_(n × n)(<strong>R</strong>) contains the zero matrix Z, with z_ij = 0 for all 1 ≤ i, j ≤ n (i.e. every entry of Z is zero).</p>

<p>Since z_ij = z_ji, Z is symmetric and therefore lies in W_n.</p>

<p>For S2) and S3) it is useful to first establish how the transpose interacts with addition and scalar multiplication of matrices.</p>

<p>Let A, B be matrices in M_(n × n)(<strong>R</strong>) and let a, b be scalars in <strong>R</strong>. Set O = aA + bB and P = a(A^t) + b(B^t). Then:</p>

<p>P_ji = a(A^t)_ji + b(B^t)_ji = a(A_ij) + b(B_ij) = O_ij, so P = O^t; that is, (aA + bB)^t = a(A^t) + b(B^t).</p>

<p>S2) If A, B are in W_n, with A = A^t and B = B^t, then:</p>

<p>(A + B)^t = A^t + B^t = A + B ⇒ A + B is in W_n.</p>

<p>Hence W_n is closed under addition.</p>

<p>S3) If A is in W_n, then for all a in <strong>R</strong>:</p>

<p>(aA)^t = a(A^t) = aA ⇒ aA is in W_n.</p>

<p>Hence W_n is closed under scalar multiplication.</p>

<p>Thus, by the theorem, W_n ≤ M_(n × n)(<strong>R</strong>). <strong>∎</strong></p>
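The composition result in the first post can be sanity-checked numerically. Below is a minimal Python sketch (plain nested lists, no external libraries; the helper names `matmul` and `matvec` and the sample matrices are illustrative, not from the post) that multiplies matrices via C_ij = Σ(k)(A_ik · B_kj) and confirms that applying S then T to a coordinate vector agrees with applying the single matrix AB:

```python
# Matrix product C_ij = sum_k A_ik * B_kj; matrices are lists of rows.
def matmul(A, B):
    p, m, n = len(A), len(B), len(B[0])
    assert len(A[0]) == m, "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(n)]
            for i in range(p)]

# Apply a matrix to a coordinate (column) vector.
def matvec(A, x):
    return [sum(A[i][k] * x[k] for k in range(len(x))) for i in range(len(A))]

# Illustrative example: B = [S] sends alpha-coordinates (n = 3) to
# beta-coordinates (m = 2); A = [T] sends beta- to gamma-coordinates (p = 2).
B = [[1, 0, 2],
     [0, 1, 1]]
A = [[3, 1],
     [4, 2]]

AB = matmul(A, B)   # represents the composition TS
u = [1, 2, 3]       # alpha-coordinates of some u in U

# Applying S then T agrees with applying AB directly.
assert matvec(A, matvec(B, u)) == matvec(AB, u)
```

The assertion is exactly the theorem: the matrix of TS acting on α-coordinates equals the product of the individual representations.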
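The two inclusions in the span proof (and the closure property from the subspace post) can likewise be checked for concrete values. A small Python sketch, again with plain nested lists; the helper names `comb` and `is_symmetric` are hypothetical, chosen for illustration:

```python
# Basis matrices from the span proof, as 2x2 lists of rows.
M1 = [[1, 0], [0, 0]]
M2 = [[0, 0], [0, 1]]
M3 = [[0, 1], [1, 0]]

# The linear combination a*M1 + b*M2 + c*M3, computed entrywise.
def comb(a, b, c):
    return [[a * M1[i][j] + b * M2[i][j] + c * M3[i][j] for j in range(2)]
            for i in range(2)]

# A matrix is symmetric when A_ij == A_ji for all i, j.
def is_symmetric(A):
    n = len(A)
    return all(A[i][j] == A[j][i] for i in range(n) for j in range(n))

# Span(S) >= W_2: an arbitrary symmetric [[a, c], [c, b]] equals
# comb(a, b, c), i.e. x = a, y = b, z = c.
a, b, c = 2, -5, 7
assert comb(a, b, c) == [[a, c], [c, b]]

# Span(S) <= W_2: every linear combination of M1, M2, M3 is symmetric.
assert is_symmetric(comb(3, 1, -4))
```

The two assertions mirror parts i) and ii) of the proof for one choice of coefficients each; the proof itself, of course, handles all real a, b, c at once.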