Contents

1 Introduction

Welcome to the ORFik package. ORFik is an R package for analysis of transcript and translation features through manipulation of sequence data and NGS data. This vignette will walk you through how to download annotation and align data with ORFik.

2 Download and align: Yeast

Here we will show a full example of aligning RNA-seq from yeast using the SacCer3 genome.

2.1 Specify output folders

First specify where you want to save the different data types:

  library(ORFik)                        # This package
  # Output folders:
  # 1. where do you want the annotation ?
  annotation.dir <- "~/Bio_data/annotations/Yeast_SacCer3/"
  # 2. where do you want the fastq files ?
  fastq.dir <-  "~/Bio_data/raw_data/RNA-seq/Yeast_SRP012047/"
  # 3. where do you want the aligned bam files ?
  bam.dir <- "~/Bio_data/processed_data/RNA-seq/Yeast_SRR453566/"

2.2 Download RNA-seq NGS data

We need some data to align. If you have in-lab data, you can skip this step, since you already have access to the files.

If you want to use published data, on the other hand, you need to download it. Here we show what works for the paired end RNA-seq run SRR453566.

ORFik comes with an SRA run downloader: just specify the SRR numbers, or an SRA experiment information csv file containing a column called ‘Run’. We will now show how to get data from SRA.

  1. Download using the metadata table: the advantage here is that you can specify a project, and it will find all SRR numbers for you. Here we tell it to download only the 2 runs SRR453566 and SRR453571, and we subset to the first 50000 reads of each library, so you can replicate this faster.
info <- download.SRA.metadata("SRP012047", fastq.dir)
# Let's take the first 2 runs in this experiment:
info <- info[1:2,]

download.SRA(info, fastq.dir, subset = 50000)

We now have the RNA-seq run, split into 2 files, since this is paired end data. For convenience we could also have specified the SRR number directly in download.SRA, but then we would get no metadata csv file, which is handy for auto-detection of paired end data, the organism name etc. This is shown below:

organism <- info$ScientificName[1]
is_paired_end <- all(info$LibraryLayout == "PAIRED")
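
If you only want the raw files and no metadata csv, here is a minimal sketch (assuming download.SRA also accepts a plain character vector of run accessions):

# Sketch: download by accession only; no metadata csv is written, so
# paired end status and organism must then be filled in manually.
download.SRA("SRR453566", fastq.dir, subset = 50000)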

2.3 Download genome and gtf files

To download annotation we use the getGenomeAndAnnotation function. We need to decide 3 things:

  • organism: the scientific name of the organism, with either " " or "_" between genus (Saccharomyces) and species (cerevisiae).
  • output.dir: Where to output the annotation
  • assembly_type: if using ensembl as the db argument, you need to decide between primary_assembly and toplevel. The uncompressed toplevel file of the human genome is > 70 GB, so for big genomes you should usually use primary_assembly. Small organisms like yeast do not have a primary assembly, so use “toplevel”.
  annotation <- getGenomeAndAnnotation(
                      organism = organism, 
                      output.dir = annotation.dir,
                      assembly_type = "toplevel"
                      )

The function will also create a gtf.db object to speed up loading of the annotation, and index your genome to a .fai file.

If you run this function again after the data has been downloaded once, it will not re-download, but just output the correct object. This makes it easy to rerun the script when some steps are already finished.

If you want to remove contaminants (phix, non-coding RNAs, ribosomal RNAs, or tRNAs), also specify these in the function. By default phix is downloaded from refseq, while the other contaminants lie within the genome of the species and are extracted from the .gtf file. Note that some species do not have well annotated rRNAs, tRNAs etc.; in that case you can manually download and add the sequences, e.g. rRNAs from the Silva database or tRNAs from tRNAscan or similar databases. If the gtf does not contain non-coding RNAs, they can be fetched by setting ncRNA = “auto”: ORFik will then check if the species exists in the NONCODE database and automatically download them for you if they exist.
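
As a sketch of such a call (the argument names phix and ncRNA are assumptions here; rRNA and tRNA contaminants would be requested the same way, so check ?getGenomeAndAnnotation for the exact interface):

  annotation <- getGenomeAndAnnotation(
                      organism = organism,
                      output.dir = annotation.dir,
                      assembly_type = "toplevel",
                      phix = TRUE,    # assumed argument: deplete phix (refseq)
                      ncRNA = "auto"  # assumed argument: look up NONCODE
                      )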

2.4 RNA-seq alignment

ORFik uses the STAR aligner, which is splice aware and fast. This will only work on unix systems (Linux or Mac) for now. Aligning the data takes two steps: indexing the genome, and aligning the reads to the genome.
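
If STAR and fastp are not yet installed, ORFik ships helper installers for unix systems; the helper names below are assumptions, so verify them against the ORFik reference manual before use:

# Sketch (unix only): install the external aligner and read trimmer.
# Function names are assumptions; see the ORFik reference manual.
STAR.install()
install.fastp()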

2.4.1 Indexing

To index the genome just give it the annotation output from previous step. This will also make an index for each of the depletion steps like phix, if you specified them in the earlier step.

index <- STAR.index(annotation)

If you run this function again after the index already exists in the given file location, it will not re-index, but just output the correct object. Set remake = TRUE if you want to re-index.
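
For example, a minimal sketch of forcing a rebuild:

# Force a rebuild of an existing index
index <- STAR.index(annotation, remake = TRUE)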

2.4.2 Aligning the data

ORFik uses fastp for trimming reads; this also only works on unix (Linux or Mac OS). If you are on Windows, or you want to trim the reads yourself, run the trimming separately and give the folder with the trimmed reads as input in the next step (a sketch of this is shown after the main alignment call below). Also, if you are unsure what the 3’ adapter was, first run FastQC to see which adapters are detected. The great thing with fastp is that it auto-detects and removes adapters; if you check the resulting files you will see fastp has automatically removed the Illumina adapters.

Now let’s see what we need as inputs for the alignment pipeline. We usually need 9 arguments (more are possible if you need them):

  • input.dir.rna: directory with the fastq files (or with your already-trimmed files, if you did the trimming yourself)
  • output.dir.rna: output directory for bam files
  • index: the STAR index from previous step
  • paired.end: whether the libraries are paired end; TRUE in this case (here detected from the metadata above), or FALSE if single end.
  • steps: which depletion and alignment steps to run, written as a string of step codes separated by “-” (default “tr-ge”; write “all” to run everything: “tr-co-ge”). The codes are: tr: trimming (only for unix), co: deplete all included contaminants, ph: phix depletion, rR: rRNA depletion, nc: ncRNA depletion, tR: tRNA depletion, ge: genome alignment. Order does not matter; to just trim and align to the genome write “tr-ge”.
  • adapter.sequence: “auto”, or specify the adapter if you know it; manual specification is usually safer. Presets are “illumina”, “small_RNA” and “nextera”.
  • max.cpus: the maximum number of cpus to use.
  • trim.front: how many bases to trim from the front of the reads. Only use this if you believe there are low quality bases at the front.
  • min.length: the minimum read length required to pass on to the bam file.
alignment <- 
  STAR.align.folder(fastq.dir, bam.dir, index,
                    paired.end = is_paired_end,
                    steps = "tr-ge", # (trim needed: adapters found, then genome)
                    adapter.sequence = "auto",
                    max.cpus = 30, trim.front = 3, min.length = 20)
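
If you trimmed the reads yourself (e.g. on Windows), here is a sketch of the same call that skips the fastp step, assuming a hypothetical folder trimmed.dir holding the trimmed fastq files:

trimmed.dir <- "~/Bio_data/processed_data/RNA-seq/trimmed/"  # hypothetical path
alignment <- 
  STAR.align.folder(trimmed.dir, bam.dir, index,
                    paired.end = is_paired_end,
                    steps = "ge") # genome alignment only, no trimming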

If you used fastp (the tr step), you will get a pre-alignment QC report in html format, similar to FastQC. You will also get a MultiQC report of the STAR runs, made by ORFik for you.

3 Create an ORFik experiment of the Yeast data

To simplify coding and sharing of your work, you should make an ORFik experiment; check out the ORFik experiment vignette if you are unfamiliar with this class. You should first rename the bam files to more meaningful names, like RNA_WT_1 etc., and remember to keep a table of which SRA numbers correspond to which new file name. This step is optional, but it helps the ORFik experiment guess correctly what the data is, whether there are replicates, and so on.
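
A minimal base-R sketch of such a renaming (the run-to-name mapping below is purely hypothetical; replace it with your own table):

# Hypothetical mapping from SRA run accession to a descriptive sample name
new_names <- c(SRR453566 = "RNA_WT_1", SRR453571 = "RNA_WT_2")
bam_files <- list.files(paste0(bam.dir, "/aligned/"),
                        pattern = "\\.bam$", full.names = TRUE)
# Note: rename any matching .bai index files the same way
for (run in names(new_names)) {
  hit <- grep(run, bam_files, value = TRUE)
  if (length(hit) == 1)
    file.rename(hit, file.path(dirname(hit), paste0(new_names[[run]], ".bam")))
}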

We can now easily make an ORFik experiment from the data we have:

txdb_file <- paste0(annotation["gtf"], ".db") # Get txdb file, not raw gtf
fa <- annotation["genome"]
create.experiment(exper = "yeast_exp_RNA",
                  dir = paste0(bam.dir, "/aligned/"),
                  txdb = txdb_file, fa = fa, 
                  organism = organism,
                  viewTemplate = FALSE, 
                  pairedEndBam = is_paired_end # True/False per bam file
                  )

The experiment file is now saved to the default directory: saveDir = “~/Bio_data/ORFik_experiments/”

df <- read.experiment("yeast_exp_RNA")

If you are not happy with the libtype, stage, replicate and so on for a file, you can edit the ORFik experiment in LibreOffice, Excel or another spreadsheet viewer.

3.1 Convert libraries to new formats

Now you have an experiment, but bam files are big and slow to load. Let’s convert to some faster formats.

If you want an optimized format with the same information as the bam file, use .ofst (fastest, but not readable in IGV).

  remove.experiments(df)
  convertLibs(df, type = "ofst")

If you want peaks only, use wig files (fast, and readable in IGV).

  remove.experiments(df)
  convertLibs(df, type = "wig")

The next section shows an example of how to load the data into R in the optimized .ofst format.

3.2 Outputting libraries to R

This will output the libraries to the specified environment, by default .GlobalEnv (the global R environment). The objects are named from the experiment table: RNA_1_WT, RNA_1_treated etc.

remove.experiments(df)
outputLibs(df, type = "ofst")
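
Each loaded library is then a genomic ranges object in the chosen environment; a quick sketch of inspecting one (the object name RNA_1_WT comes from the naming example above and may differ in your experiment table):

# Sketch: basic inspection of one loaded library (hypothetical object name)
summary(width(RNA_1_WT))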

If you instead want the wig format files:

  remove.experiments(df)
  outputLibs(df, type = "wig")

3.3 Post alignment QC report

See ?QCreport for details of what you will get as output.

  QCreport(df)

3.4 FPKM values (normalized counts)

After you have run QCreport, you will have count tables over the mRNAs, 5’ UTRs, CDS and 3’ UTRs.

Let’s do an example: finding the ratio of FPKM values between the CDS and mRNA transcript regions.

  mrna <- countTable(df, region = "mrna", type = "fpkm")
  cds <- countTable(df, region = "cds", type = "fpkm")
  ratio <- cds / mrna

We now have the ratio of FPKM values between CDS and mRNA for each library.
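
A quick way to inspect the result, since ratio has one column per library:

  # Summarise the CDS / mRNA FPKM ratio for each library
  summary(ratio)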