% Lab12.rnw

%
% NOTE -- ONLY EDIT THE .Rnw FILE!!!
% The .tex file is likely to be overwritten.
%
% \VignetteIndexEntry{Lab 12}
%\VignetteDepends{wavethresh,ts,Rwave}
%\VignetteKeywords{Microarray}
\documentclass[12pt]{article}

\usepackage{amsmath,pstricks}
\usepackage[authoryear,round]{natbib}
\usepackage{hyperref}

\textwidth=6.2in
\textheight=8.5in
%\parskip=.3cm
\oddsidemargin=.1in
\evensidemargin=.1in
\headheight=-.3in

\newcommand{\scscst}{\scriptscriptstyle} \newcommand{\scst}{\scriptstyle}

\newcommand{\Rfunction}[1]{{\texttt{#1}}} \newcommand{\Robject}[1]{{\texttt{#1}}} \newcommand{\Rpackage}[1]{{\textit{#1}}}

\bibliographystyle{plainnat}

\title{Lab 12: Dimension reduction in R}

\begin{document}

\maketitle

In this lab we present some dimension reduction methods that are available in R and are useful for the classification of microarray data. The first example relies on the data of Hedenfalk et al. (2001), where one goal is to find genes that are differentially expressed between BRCA1-mutation-positive and BRCA2-mutation-positive tumors, using several microarrays from each tumor type.

We first load the necessary libraries for this lab.

<<loadLibraries, results=hide>>=
## Packages used below; only brca, Milan, and dr are named in the text,
## the PLS-related packages are assumptions.
library(brca)   # Hedenfalk et al. (2001) expression data
library(Milan)  # SVD-based logistic regression (Ghosh, 2001)
library(pls)    # partial least squares
library(gpls)   # partial least squares for generalized linear models
library(dr)     # sliced inverse regression (SIR and SAVE)
@

Expression measures are stored in an \Robject{hedenfalk} object available through the \Rpackage{brca} package.

<<hedenfalkData>>=
data(hedenfalk)
@

We formulate the effect of gene expression on class type using the {\em multinomial logistic regression model}: $$ \log \frac{\Pr (Y_{i}=r)}{\Pr (Y_{i}= 0)} = \mathbf{X}_{i\cdot} \boldsymbol{\beta}_{r0} ,\quad r=1,\dots, G-1,$$ where $\boldsymbol{\beta}_{r0}$ is a $p$-dimensional vector of unknown regression coefficients. Since $p \gg n$, it is not possible to estimate the parameters of this model using standard statistical methods. A principal component analysis is therefore a natural first step to reduce the dimension of $\boldsymbol{\beta}_{r0}$.
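To see why standard maximum likelihood breaks down when $p \gg n$, consider a small simulated example (toy data, not the Hedenfalk set): with more predictors than arrays the logistic design matrix is rank deficient, and \Rfunction{glm} returns \texttt{NA} for the coefficients it cannot estimate.

```r
## Toy illustration of the p >> n problem (simulated data)
set.seed(1)
n <- 10; p <- 25                      # many more predictors than samples
X <- matrix(rnorm(n * p), n, p)
y <- rbinom(n, 1, 0.5)
fit <- suppressWarnings(glm(y ~ X, family = binomial))
sum(is.na(coef(fit)))                 # number of inestimable coefficients
```

The design matrix has at most rank $n$, so $p + 1 - n$ of the $p + 1$ coefficients come back as \texttt{NA}.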

We will first apply a singular value decomposition based logistic regression method to classify the cells. It is much like principal component regression (PCR). To set up the model, we first define the design matrix used for discrimination.
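The supergene idea behind this method can be sketched in a few lines of base R on simulated data (a sketch only, not the procedure used below): replace the $p$ gene columns by the scores on the first few left singular vectors of the centered expression matrix, then fit the logistic model on those low-dimensional scores.

```r
## SVD "supergenes": reduce p columns to k orthogonal score columns
set.seed(2)
n <- 20; p <- 100; k <- 3
X <- matrix(rnorm(n * p), n, p)
y <- rbinom(n, 1, 0.5)
s <- svd(scale(X, center = TRUE, scale = FALSE))
Z <- s$u[, 1:k] %*% diag(s$d[1:k])    # n x k matrix of supergene scores
fit <- suppressWarnings(glm(y ~ Z, family = binomial))
length(coef(fit))                     # only k + 1 parameters to estimate
```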

<<designMatrix, eval=FALSE>>=
## Design matrix for discrimination: arrays in rows, genes in columns,
## plus the BRCA mutation class of each array.  The component names
## below are assumptions about the structure of the hedenfalk object.
X <- t(hedenfalk$expr)
y <- hedenfalk$class
@

We now apply the SVD logistic procedure proposed by Ghosh (2001) and available in the \Rpackage{Milan} package.

<<svdLogistic, eval=FALSE>>=
## SVD-based logistic regression from the Milan package; the function
## name and arguments below are assumptions.
fit.svd <- svd.logistic(X, y)
@

SVD produces orthogonal class descriptors (``supergenes'') that summarize the high-dimensional data. Because these components are constructed without regard to the response, this way of reducing the regressor dimension can be inefficient. An alternative is partial least squares (PLS), where each component is chosen to maximize the sample covariance between the response and a linear combination of the $p$ predictors (genes).
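The first PLS component has a closed form worth noting: with centered predictors and response, the unit weight vector maximizing the sample covariance is proportional to $X^{T}y$ (this is the first NIPALS step). A base-R sketch on simulated data, not the BRCA set:

```r
## First PLS weight vector and component, computed directly
set.seed(3)
n <- 30; p <- 50
X <- scale(matrix(rnorm(n * p), n, p), scale = FALSE)  # centered genes
y <- scale(rbinom(n, 1, 0.5), scale = FALSE)           # centered response
w <- crossprod(X, y)                  # direction proportional to t(X) y
w <- w / sqrt(sum(w^2))               # unit-norm weight vector
t1 <- X %*% w                         # first PLS component (scores)
```

By the Cauchy--Schwarz inequality, no other unit direction gives a larger sample covariance with $y$ than this one.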

<<pls, eval=FALSE>>=
## PLS fit with plsr() from the pls package (an assumption: the lab's
## original PLS code is not shown); the number of components is
## illustrative.
fit.pls <- plsr(as.numeric(y) ~ X, ncomp = 3)
summary(fit.pls)
@

However, PLS is really designed for continuous responses, and in particular for models that do not suffer from conditional heteroscedasticity, which is the case for neither binary nor multinomial data such as ours. An extension of the standard PLS algorithm to generalized linear models, due to Marx, is also available. Here is an example of how to use such a procedure.
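The idea behind the GLM extension can be sketched as one step of iteratively reweighted PLS: form the IRLS working response and weights for the logistic model, then compute a weighted PLS direction instead of a weighted least-squares solve. A base-R sketch on simulated data (a single iteration only, not Marx's full algorithm):

```r
## One iteratively reweighted PLS step for a logistic model
set.seed(4)
n <- 30; p <- 50
X <- scale(matrix(rnorm(n * p), n, p), scale = FALSE)
y <- rbinom(n, 1, 0.5)
eta <- rep(0, n)                      # start from the null linear predictor
mu  <- 1 / (1 + exp(-eta))            # logistic mean
w   <- mu * (1 - mu)                  # IRLS weights
z   <- eta + (y - mu) / w             # working response
## weighted first PLS direction for (X, z)
d <- crossprod(X, w * (z - weighted.mean(z, w)))
d <- d / sqrt(sum(d^2))               # unit-norm weighted PLS direction
```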

<<glmPls, eval=FALSE>>=
## Iteratively reweighted PLS for generalized linear models (Marx), as
## implemented by gpls() in the gpls package (an assumed choice of
## implementation); the number of components is illustrative.
fit.gpls <- gpls(X, y, K.prov = 3)
@

The purpose of the following commands is to apply other types of dimension reduction methods to the BRCA data, based on sliced inverse regression (SIR) as implemented in the \Rpackage{dr} package. The commands below estimate the central dimension reduction subspace and perform a two-class discrimination. Note that since we are dealing with a binary response, the number of slices must be two.
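For a binary response, the SIR computation itself is simple enough to sketch in base R (toy simulated data, not the BRCA set): slice on the two classes, compute the slice means of the standardized predictors, and take the leading eigenvector of the weighted between-slice covariance matrix.

```r
## Toy two-slice SIR in base R
set.seed(5)
n <- 100; p <- 5
y <- rep(0:1, each = n / 2)
X <- matrix(rnorm(n * p), n, p)
X[, 1] <- X[, 1] + 2 * y              # only predictor 1 separates the classes
Z <- scale(X)                         # standardize the predictors
M <- t(sapply(split(as.data.frame(Z), y), colMeans))  # 2 x p slice means
V <- crossprod(M) / 2                 # between-slice matrix (slice weights 1/2)
b <- eigen(V, symmetric = TRUE)$vectors[, 1]          # estimated SIR direction
```

Projecting the data onto the estimated direction recovers the class separation built into predictor 1.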

<<sir, eval=FALSE>>=
## Sliced inverse regression with two slices (one per class)
fit.sir <- dr(y ~ X, method = "sir", nslices = 2)
summary(fit.sir)
@

When the predictors are normally distributed, SIR is equivalent to linear discriminant analysis (LDA), in the sense that both estimate the same discriminant linear combinations of the predictors, and SAVE is equivalent to quadratic discriminant analysis (QDA).
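This equivalence is easy to check numerically on simulated normal predictors: the pooled-covariance LDA direction $\hat\Sigma^{-1}(\hat\mu_1 - \hat\mu_0)$ and the two-slice SIR direction span essentially the same line (predictor whitening is skipped below because the simulated covariance is the identity).

```r
## LDA direction versus two-slice SIR direction on normal predictors
set.seed(6)
n <- 2000; p <- 4
y <- rep(0:1, each = n / 2)
X <- matrix(rnorm(n * p), n, p)
X[, 2] <- X[, 2] + y                  # class mean shift in coordinate 2
## LDA direction from the pooled within-class covariance
mu0 <- colMeans(X[y == 0, ]); mu1 <- colMeans(X[y == 1, ])
S <- (cov(X[y == 0, ]) + cov(X[y == 1, ])) / 2
b.lda <- solve(S, mu1 - mu0)
## two-slice SIR direction on the centered predictors
Xc <- scale(X, scale = FALSE)
M <- t(sapply(split(as.data.frame(Xc), y), colMeans))
b.sir <- eigen(crossprod(M) / 2, symmetric = TRUE)$vectors[, 1]
abs(cor(X %*% b.lda, X %*% b.sir))    # should be close to 1 up to sign
```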

<<saveMethod, eval=FALSE>>=
## SAVE (sliced average variance estimation), again with two slices
fit.save <- dr(y ~ X, method = "save", nslices = 2)
summary(fit.save)
@

We have applied sliced inverse regression methods to binary data, but note that SIR and SAVE can also be applied to problems with multinomial or other multi-valued responses.

\end{document}