<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>

<channel>
	<title>NIPALS Archives - CenterSpace</title>
	<atom:link href="https://www.centerspace.net/tag/nipals/feed" rel="self" type="application/rss+xml" />
	<link>https://www.centerspace.net/tag/nipals</link>
	<description>.NET numerical class libraries</description>
	<lastBuildDate>Sat, 17 Jul 2021 20:09:29 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.1.1</generator>
<site xmlns="com-wordpress:feed-additions:1">104092929</site>	<item>
		<title>Principal Components Regression: Part 3 – The NIPALS Algorithm</title>
		<link>https://www.centerspace.net/principal-components-regression</link>
					<comments>https://www.centerspace.net/principal-components-regression#respond</comments>
		
		<dc:creator><![CDATA[Steve Sneller]]></dc:creator>
		<pubDate>Tue, 29 Nov 2016 19:23:13 +0000</pubDate>
				<category><![CDATA[Statistics]]></category>
		<category><![CDATA[Theory]]></category>
		<category><![CDATA[NIPALS]]></category>
		<category><![CDATA[PCR]]></category>
		<category><![CDATA[PCR c#]]></category>
		<category><![CDATA[PCR estimator]]></category>
		<category><![CDATA[principal component analysis C#]]></category>
		<category><![CDATA[Principal Components Regression]]></category>
		<guid isPermaLink="false">http://www.centerspace.net/?p=7075</guid>

					<description><![CDATA[<p>In this final entry of our three-part series on Principal Components Regression (PCR) we describe the NIPALS algorithm used to compute the principal components.  This is followed by a theoretical discussion, accessible to non-experts, of why the NIPALS algorithm works. </p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/principal-components-regression">Principal Components Regression: Part 3 – The NIPALS Algorithm</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2>Principal Components Regression: Recap of Part 2</h2>



<p>Recall that the least squares solution <img src="https://s0.wp.com/latex.php?latex=%5Cbeta&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;beta" class="latex" /> to the multiple linear regression problem <img src="https://s0.wp.com/latex.php?latex=X+%5Cbeta+%3D+y&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X &#92;beta = y" class="latex" /> is given by<br>(1) <img decoding="async" src="https://s0.wp.com/latex.php?latex=%5Chat%7B%5Cbeta%7D+%3D+%28X%5ET+X%29%5E%7B-1%7D+X%5ET+y+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;hat{&#92;beta} = (X^T X)^{-1} X^T y " class="latex" /></p>



<p>Recall also that problems occurred in finding <img src="https://s0.wp.com/latex.php?latex=%5Chat%7B%5Cbeta%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;hat{&#92;beta}" class="latex" /> when the matrix<br>(2) <img src="https://s0.wp.com/latex.php?latex=X%5ET+X&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X" class="latex" /></p>



<p>was close to being singular. The Principal Components Regression approach to addressing the problem is to replace <img src="https://s0.wp.com/latex.php?latex=X%5ET+X&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X" class="latex" /> in equation (1) with a better conditioned approximation. This approximation is formed by computing the eigenvalue decomposition for <img src="https://s0.wp.com/latex.php?latex=X%5ET+X&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X" class="latex" /> and retaining only the r largest eigenvalues. This yields the PCR solution:<br>(3) <img src="https://s0.wp.com/latex.php?latex=%5Chat%7B%5Cbeta%7D_r%3D+V_1+%5CLambda_1%5E%7B-1%7D+V_1%5ET+X%5ET+y&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;hat{&#92;beta}_r= V_1 &#92;Lambda_1^{-1} V_1^T X^T y" class="latex" /></p>



<p>where <img src="https://s0.wp.com/latex.php?latex=%5CLambda_1&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;Lambda_1" class="latex" /> is an r x r diagonal matrix consisting of the r largest eigenvalues of <img src="https://s0.wp.com/latex.php?latex=X%5ET+X%2CV_1%3D%28v_1%2C...%2Cv_r+%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X,V_1=(v_1,...,v_r )" class="latex" /><br>are the corresponding eigenvectors of <img src="https://s0.wp.com/latex.php?latex=X%5ET+X&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X" class="latex" />. In this piece we shall develop code for computing the PCR solution using the NMath libraries.</p>
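<p>The code in this series is developed in C# with the NMath library; as a library-neutral illustration, equation (3) can also be computed directly with NumPy (the helper name <code>pcr_eig</code> is illustrative, not an NMath API):</p>

```python
import numpy as np

def pcr_eig(X, y, r):
    """PCR estimate of equation (3): keep the r largest eigenvalues of X^T X."""
    lam, V = np.linalg.eigh(X.T @ X)   # eigenvalues in ascending order
    idx = np.argsort(lam)[::-1][:r]    # indices of the r largest eigenvalues
    V1, lam1 = V[:, idx], lam[idx]     # V_1 and the diagonal of Lambda_1
    # beta_r = V_1 Lambda_1^{-1} V_1^T X^T y; Lambda_1 is diagonal, so
    # its inverse is just elementwise division.
    return V1 @ ((V1.T @ (X.T @ y)) / lam1)
```

<p>With r equal to the rank of X this reduces to the ordinary least squares solution (1); smaller values of r discard the ill-conditioned directions.</p>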


<p>[eds: This blog article is the final entry of a three-part series on principal components regression. The first article in this series, &#8220;Principal Component Regression: Part 1 – The Magic of the SVD&#8221; is <a href="https://www.centerspace.net/theoretical-motivation-behind-pcr">here</a>. And the second, &#8220;Principal Components Regression: Part 2 – The Problem With Linear Regression&#8221; is <a href="https://www.centerspace.net/priniciple-components-regression-in-csharp">here</a>.]</p>



<h2>Review: Eigenvalues and Singular Values</h2>



<p>In order to develop the algorithm, I want to go back to the Singular Value Decomposition (SVD) of a matrix and its relationship to the eigenvalue decomposition. Recall that the SVD of a matrix X is given by<br>(4) <img src="https://s0.wp.com/latex.php?latex=X%3DU+%5CSigma+V%5ET&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X=U &#92;Sigma V^T" class="latex" /></p>



<p>Where U is the matrix of left singular vectors, V is the matrix of right singular vectors, and Σ is a diagonal matrix with positive entries equal to the singular values. The eigenvalue decomposition of <img src="https://s0.wp.com/latex.php?latex=X%5ET+X&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X" class="latex" /> is given by<br>(5) <img src="https://s0.wp.com/latex.php?latex=X%5ET+X%3DV+%5CLambda+V%5ET&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X=V &#92;Lambda V^T" class="latex" /></p>



<p>Where the eigenvalues of <img src="https://s0.wp.com/latex.php?latex=X%5ET+X&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X" class="latex" /> are the diagonal entries of the diagonal matrix <img src="https://s0.wp.com/latex.php?latex=%5CLambda&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;Lambda" class="latex" /> and the columns of V are the eigenvectors of <img src="https://s0.wp.com/latex.php?latex=X%5ET+X&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X" class="latex" /> (V is also composed of the right singular vectors of X).<br>Recall further that if the matrix X has rank r then X can be written as<br>(6) <img src="https://s0.wp.com/latex.php?latex=X%3D+%5Csum_%7Bj%3D1%7D%5E%7Br%7D+%5Csigma_j+u_j+v_j%5ET&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X= &#92;sum_{j=1}^{r} &#92;sigma_j u_j v_j^T" class="latex" /></p>



<p>Where <img src="https://s0.wp.com/latex.php?latex=%5Csigma_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;sigma_j" class="latex" /> is the jth singular value (jth diagonal element of the diagonal matrix <img src="https://s0.wp.com/latex.php?latex=%5CSigma&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;Sigma" class="latex" />), <img src="https://s0.wp.com/latex.php?latex=u_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="u_j" class="latex" /> is the jth column of U, and <img src="https://s0.wp.com/latex.php?latex=v_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="v_j" class="latex" /> is the jth column of V. An equivalent way of expressing the PCR solution (3) to the least squares problem in terms of the SVD for X is that we’ve replaced X in the solution (1) by its rank r approximation shown in (6).</p>
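<p>Relations (4)–(6) are easy to check numerically. This NumPy sketch verifies that the eigenvalues of X<sup>T</sup>X are the squared singular values of X, and that the full-rank outer-product sum (6) rebuilds X:</p>

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 4))            # a small full-rank example

U, s, Vt = np.linalg.svd(X, full_matrices=False)   # equation (4)
lam = np.linalg.eigvalsh(X.T @ X)                  # eigenvalues in (5)

# Eigenvalues of X^T X are the squared singular values of X.
assert np.allclose(np.sort(s**2), np.sort(lam))

# With r = rank(X), the sum (6) reproduces X exactly.
X_sum = sum(s[j] * np.outer(U[:, j], Vt[j]) for j in range(len(s)))
assert np.allclose(X, X_sum)
```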



<h2>Principal Components</h2>



<p>The subject here is Principal Components Regression (PCR), but we have yet to mention principal components. All we have talked about are eigenvalues, eigenvectors, singular values, and singular vectors. We’ve seen how singular stuff and eigen stuff are related, but what are principal components?<br>Principal component analysis applies when one considers statistical properties of data. In linear regression each column of our matrix X represents a variable and each row is a set of observed values for these variables. The variables being observed are random variables and as such have means and variances. If we center the matrix X by subtracting from each column of X its corresponding mean, then we’ve normalized the random variables being observed so that they have zero mean. Once the matrix X is centered in this way, the matrix <img src="https://s0.wp.com/latex.php?latex=X%5ET+X&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X" class="latex" /> is then proportional to the variance/covariance matrix of the variables. In this context the eigenvectors of <img src="https://s0.wp.com/latex.php?latex=X%5ET+X&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X" class="latex" /> are called the Principal Components of X. For completeness (and because they are used in discussing the PCR algorithm), we define two more terms.<br>In the SVD given by equation (4), define the matrix T by<br>(7) <img src="https://s0.wp.com/latex.php?latex=T%3DU%5CSigma&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="T=U&#92;Sigma" class="latex" /></p>



<p>The matrix T is called the <em>scores</em> for X. Note that the columns of T are orthogonal, but not necessarily orthonormal. Substituting this into the SVD for X yields<br>(8) <img src="https://s0.wp.com/latex.php?latex=X%3DTV%5ET&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X=TV^T" class="latex" /></p>



<p>Using the fact that V is orthogonal we can also write<br>(9) <img src="https://s0.wp.com/latex.php?latex=T%3DXV&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="T=XV" class="latex" /></p>



<p>We call the matrix V the <em>loadings</em>. The goal of our algorithm is to obtain the representation given by equation (8) for X, retaining only the most significant principal components (or eigenvalues, or singular values – depending on where your head’s at at the time).</p>
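<p>A quick NumPy check of the scores/loadings relations (7)–(9) on a centered matrix:</p>

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(10, 3))
X = X - X.mean(axis=0)            # center each column, as described above

U, s, Vt = np.linalg.svd(X, full_matrices=False)
V = Vt.T                          # loadings
T = X @ V                         # scores, equation (9)

assert np.allclose(T, U * s)      # T = U Sigma, equation (7)
assert np.allclose(X, T @ Vt)     # X = T V^T, equation (8)
# Columns of T are orthogonal but generally not unit length:
assert np.allclose(T.T @ T, np.diag(s**2))
```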



<h2>Computing the Solution</h2>



<p>Using equation (3) to compute the solution to our problem involves forming the matrix <img src="https://s0.wp.com/latex.php?latex=X%5ET+X&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X" class="latex" /> and obtaining its eigenvalue decomposition. This solution is fairly straightforward and has reasonable performance for moderately sized matrices X. However, in practice, the matrix X can be quite large, containing hundreds, even thousands of columns. In addition, many procedures for choosing the optimal number r of eigenvalues/singular values to retain involve computing the solution for many different values of r and comparing them. We therefore introduce an algorithm that computes only as many eigenvalues as we need.</p>



<h2>The NIPALS Algorithm</h2>



<p>We will be using an algorithm known as NIPALS (Nonlinear Iterative Partial Least Squares). Given the matrix X from our least squares problem and r, the number of retained principal components, the NIPALS algorithm proceeds as follows:<br>Initialize <img src="https://s0.wp.com/latex.php?latex=j%3D1&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="j=1" class="latex" /> and <img src="https://s0.wp.com/latex.php?latex=X_1%3DX&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X_1=X" class="latex" />. Then iterate through the following steps:</p>



<ol><li>Choose <img src="https://s0.wp.com/latex.php?latex=t_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="t_j" class="latex" /> as any column of <img src="https://s0.wp.com/latex.php?latex=X_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X_j" class="latex" /></li><li>Let <img src="https://s0.wp.com/latex.php?latex=v_j+%3D+%28X_j%5ET+t_j%29+%2F+%5Cleft+%5C%7C+X_j%5ET+t_j+%5Cright+%5C%7C&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="v_j = (X_j^T t_j) / &#92;left &#92;| X_j^T t_j &#92;right &#92;|" class="latex" /></li><li>Let <img src="https://s0.wp.com/latex.php?latex=t_j%3D+X_j+v_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="t_j= X_j v_j" class="latex" /></li><li>If <img src="https://s0.wp.com/latex.php?latex=t_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="t_j" class="latex" /> is unchanged continue to step 5. Otherwise return to step 2.</li><li>Let <img src="https://s0.wp.com/latex.php?latex=X_%7Bj%2B1%7D%3D+X_j-+t_j+v_j%5ET&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X_{j+1}= X_j- t_j v_j^T" class="latex" /></li><li>If <img src="https://s0.wp.com/latex.php?latex=j%3Dr&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="j=r" class="latex" /> stop. Otherwise increment j and return to step 1.</li></ol>
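<p>Steps 1–6 translate almost line for line into NumPy. This is an illustrative sketch, not the NMath implementation; the convergence tolerance, iteration cap, and starting-column choice are assumptions of mine, not prescribed by the algorithm:</p>

```python
import numpy as np

def nipals(X, r, tol=1e-12, max_iter=1000):
    """First r NIPALS score/loading pairs of X (illustrative sketch)."""
    Xj = np.array(X, dtype=float)                # X_1 = X
    T = np.zeros((Xj.shape[0], r))
    V = np.zeros((Xj.shape[1], r))
    for j in range(r):
        # Step 1: choose t_j as a column of X_j (largest-norm column here).
        t = Xj[:, np.argmax(np.linalg.norm(Xj, axis=0))].copy()
        for _ in range(max_iter):
            v = Xj.T @ t
            v /= np.linalg.norm(v)               # step 2: normalized loading
            t_new = Xj @ v                       # step 3: updated score
            if np.linalg.norm(t_new - t) < tol:  # step 4: t_j unchanged?
                t = t_new
                break
            t = t_new
        T[:, j], V[:, j] = t, v
        Xj = Xj - np.outer(t, v)                 # step 5: deflate X_j
    return T, V                                  # step 6 ends the loop
```

<p>The loadings come out orthonormal by construction, and with r equal to the rank of X the deflation leaves nothing behind, so X = TV<sup>T</sup> as in equation (8).</p>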



<h2>Properties of the NIPALS Algorithm</h2>



<p>Let us see how the NIPALS algorithm produces principal components for us.<br>Let <img src="https://s0.wp.com/latex.php?latex=%5Clambda_j+%3D+%5Cleft+%5C%7C+X%5ET+t_j+%5Cright+%5C%7C&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;lambda_j = &#92;left &#92;| X^T t_j &#92;right &#92;|" class="latex" /> and write step (2) as<br>(10) <img src="https://s0.wp.com/latex.php?latex=X%5ET+t_j+%3D+%5Clambda_j+v_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T t_j = &#92;lambda_j v_j" class="latex" /></p>



<p>Setting <img src="https://s0.wp.com/latex.php?latex=t_j+%3D+X+v_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="t_j = X v_j" class="latex" /> in step 3 yields<br>(11) <img src="https://s0.wp.com/latex.php?latex=X%5ET+X+v_j%3D+%5Clambda_j+v_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X v_j= &#92;lambda_j v_j" class="latex" /></p>



<p>This equation is satisfied upon completion of the loop 2-4. This shows that <img src="https://s0.wp.com/latex.php?latex=%5Clambda_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;lambda_j" class="latex" /> and <img src="https://s0.wp.com/latex.php?latex=v_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="v_j" class="latex" /> are an eigenvalue and eigenvector of <img src="https://s0.wp.com/latex.php?latex=X%5ET+X&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X" class="latex" />. The astute reader will note that the loop 2-4 is essentially the power method for computing a dominant eigenvalue and eigenvector for a linear transformation. Note further that using <img src="https://s0.wp.com/latex.php?latex=t_j%3DX+v_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="t_j=X v_j" class="latex" /> and equation (11) we obtain<br>(12)</p>



<ul><li><img src="https://s0.wp.com/latex.php?latex=t_j%5ET+t_j%3D+v_j%5ET+X%5ET+Xv_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="t_j^T t_j= v_j^T X^T Xv_j" class="latex" /></li><li><img src="https://s0.wp.com/latex.php?latex=%3D+v_j%5ET+%28X%5ET+Xv_j+%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="= v_j^T (X^T Xv_j )" class="latex" /></li><li><img src="https://s0.wp.com/latex.php?latex=%3D+%5Clambda_j+v_j%5ET+v_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="= &#92;lambda_j v_j^T v_j" class="latex" /></li><li><img src="https://s0.wp.com/latex.php?latex=%3D+%5Clambda_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="= &#92;lambda_j" class="latex" /></li></ul>



<p>After one iteration of the NIPALS algorithm we end up at step 5 with <img src="https://s0.wp.com/latex.php?latex=j%3D1&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="j=1" class="latex" /> and<br>(13) <img src="https://s0.wp.com/latex.php?latex=X%3D+t_1+v_1%5ET%2B+X_2&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X= t_1 v_1^T+ X_2" class="latex" /></p>



<p>Note that <img src="https://s0.wp.com/latex.php?latex=t_1&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="t_1" class="latex" /> and the columns of <img src="https://s0.wp.com/latex.php?latex=X_2%3DX+-+t_1+v_1%5ET&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X_2=X - t_1 v_1^T" class="latex" /><br>are orthogonal:<br>(14)</p>



<ul><li><img src="https://s0.wp.com/latex.php?latex=%28X-+t_1+v_1%5ET+%29%5ET+t_1+%3D+X%5ET+t_1-+v_1+t_1%5ET+t_1&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="(X- t_1 v_1^T )^T t_1 = X^T t_1- v_1 t_1^T t_1" class="latex" /></li><li><img src="https://s0.wp.com/latex.php?latex=%3D+X%5ET+X+v_1-+v_1+%5Clambda_1&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="= X^T X v_1- v_1 &#92;lambda_1" class="latex" /></li><li><img src="https://s0.wp.com/latex.php?latex=%3D0&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="=0" class="latex" /></li></ul>



<p>Furthermore, since <img src="https://s0.wp.com/latex.php?latex=t_2&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="t_2" class="latex" /> is initially picked as a column of <img src="https://s0.wp.com/latex.php?latex=X_2&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X_2" class="latex" />, it is orthogonal to <img src="https://s0.wp.com/latex.php?latex=t_1&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="t_1" class="latex" />. Upon completion of the algorithm we form the following two matrices:</p>



<ul><li><img src="https://s0.wp.com/latex.php?latex=T_r&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="T_r" class="latex" />, whose columns are the vectors <img src="https://s0.wp.com/latex.php?latex=t_i&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="t_i" class="latex" />; the columns of <img src="https://s0.wp.com/latex.php?latex=T_r&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="T_r" class="latex" /> are orthogonal.</li><li><img src="https://s0.wp.com/latex.php?latex=V_r&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="V_r" class="latex" />, whose columns are the <img src="https://s0.wp.com/latex.php?latex=v_i&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="v_i" class="latex" />; <img src="https://s0.wp.com/latex.php?latex=V_r&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="V_r" class="latex" /> is orthonormal.</li></ul>



<p>These satisfy<br>(15) <img src="https://s0.wp.com/latex.php?latex=X_r%3DT_r+V_r%5ET&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X_r=T_r V_r^T" class="latex" /></p>
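<p>The deflation (13) and orthogonality property (14) can be checked numerically with a single NIPALS component; the iteration cap and tolerance below are assumptions of mine, chosen so this small example converges:</p>

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(10, 3))

# One NIPALS component: iterate steps 2-3 until t stops changing.
t = X[:, 0].copy()
for _ in range(10000):
    v = X.T @ t
    v /= np.linalg.norm(v)
    t_new = X @ v
    if np.linalg.norm(t_new - t) < 1e-13:
        t = t_new
        break
    t = t_new

X2 = X - np.outer(t, v)                  # deflation, equation (13)
# (14): every column of X_2 is orthogonal to t_1.
assert np.allclose(X2.T @ t, 0, atol=1e-8)
```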



<p>If r is equal to the rank of X then, using the information obtained from equations (12) and (14), it follows that (15) yields the matrix decomposition (8). The idea behind Principal Components Regression is that after choosing an appropriate r, the important features of X have been captured in <img src="https://s0.wp.com/latex.php?latex=T_r&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="T_r" class="latex" />. We then perform a linear regression with <img src="https://s0.wp.com/latex.php?latex=T_r&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="T_r" class="latex" /> in place of X,<br>(16) <img src="https://s0.wp.com/latex.php?latex=T_r+c%3Dy&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="T_r c=y" class="latex" />.</p>



<p>The least squares solution then gives<br>(17) <img src="https://s0.wp.com/latex.php?latex=%5Chat%7Bc%7D%3D+%28T_r%5ET+T_r+%29%5E%7B-1%7D+T_r%5ET+y&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;hat{c}= (T_r^T T_r )^{-1} T_r^T y" class="latex" /></p>



<p>Note that since <img src="https://s0.wp.com/latex.php?latex=T_r%5ET+T_r&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="T_r^T T_r" class="latex" /> is diagonal it is easy to invert. Also note that we left out the loadings matrix <img src="https://s0.wp.com/latex.php?latex=V_r&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="V_r" class="latex" />. This is because the scores <img src="https://s0.wp.com/latex.php?latex=t_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="t_j" class="latex" /> are linear combinations of the columns of X, and the PCR method amounts to singling out those combinations that are best for predicting y. Finally, using (9) and (16) we rewrite our linear regression problem <img src="https://s0.wp.com/latex.php?latex=X+%5Cbeta%3Dy&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X &#92;beta=y" class="latex" /> as<br>(18) <img src="https://s0.wp.com/latex.php?latex=XV_r+c%3Dy&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="XV_r c=y" class="latex" /></p>



<p>From (18) we see that the PCR estimate <img src="https://s0.wp.com/latex.php?latex=%5Chat%7B%5Cbeta%7D_r&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;hat{&#92;beta}_r" class="latex" /> is given by<br>(19) <img src="https://s0.wp.com/latex.php?latex=%5Chat%7B%5Cbeta%7D_r%3D+V_r+%5Chat%7Bc%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;hat{&#92;beta}_r= V_r &#92;hat{c}" class="latex" />.</p>
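<p>Putting (16)–(19) together in NumPy; here the SVD stands in for a fully converged NIPALS run, since at convergence both deliver the same scores and loadings (the helper name <code>pcr_fit</code> is illustrative):</p>

```python
import numpy as np

def pcr_fit(X, y, r):
    """PCR estimate beta_r from r scores/loadings (SVD as converged NIPALS)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Tr = U[:, :r] * s[:r]            # scores   T_r = U_r Sigma_r
    Vr = Vt[:r].T                    # loadings V_r
    # (17): T_r^T T_r = diag(sigma_j^2) is diagonal, so inverting it is
    # just elementwise division by the squared singular values.
    c_hat = (Tr.T @ y) / s[:r] ** 2
    return Vr @ c_hat                # (19): beta_r = V_r c_hat
```

<p>At full rank this coincides with the ordinary least squares fit; truncating r drops the directions responsible for the near-singularity of X<sup>T</sup>X.</p>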



<p>Steve</p>



<p></p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/principal-components-regression">Principal Components Regression: Part 3 – The NIPALS Algorithm</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.centerspace.net/principal-components-regression/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">7075</post-id>	</item>
	</channel>
</rss>
