AME requires a series of input sequences to scan for motif enrichment.
runAme() accepts sequence input in the following formats:
`XStringSet` inputs can be easily generated for DNA sequences from a GRanges object using the `get_sequence()` function.
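As a minimal sketch, assuming `peaks` is a GRanges object of genomic regions and `dm.genome` is a BSgenome object (both placeholder names):

```r
library(memes)

# Placeholders: `peaks` (a GRanges of regions) and `dm.genome` (a BSgenome)
# get_sequence() extracts the DNA sequence of each region as an XStringSet
seqs <- get_sequence(peaks, dm.genome)

# The XStringSet can be passed directly to runAme()
ame_results <- runAme(seqs)
```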
AME scans input sequences against a database of known motifs and tests for enrichment of each motif in the database.
runAme() can accept a database in the following formats:
memes can be configured to use a default .meme format file as the query database, which it will use if the user does not provide a value to `database` when calling `runAme()`. The following locations will be searched in order:

1. The `meme_db` option, defined using `options(meme_db = "path/to/database.meme")`
   - The `meme_db` option can also be set to an R object, like a universalmotif list.
2. The `MEME_DB` environment variable defined in `.Renviron`
   - The `MEME_DB` variable will only accept a path to a .meme file.
NOTE: if an invalid location is set at one option, `runAme()` will fall back to the next location if it is valid (e.g. if the `meme_db` option is set to an invalid file, but the `MEME_DB` environment variable points to a valid file, the `MEME_DB` path will be used).
`runAme()` supports running AME using three modes:

| Mode           | Description                  |
|----------------|------------------------------|
| Vs Shuffled    | Input vs Shuffled Sequence   |
| Discriminative | Input vs Control Sequence    |
| Partitioning   | Rank Input by fasta score    |
To run AME using partitioning mode, the fasta header must contain a score value for each entry in the form: ">entry_name score". The `score` argument allows users to set the score value to the value of a column from the input regions.
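A hedged sketch of partitioning mode, assuming `peaks` is a GRanges object with a numeric `signal` metadata column (a placeholder), that the `score` argument described above belongs to `get_sequence()`, and that setting `control = NA` selects partitioning mode (check `?get_sequence` and `?runAme` to confirm):

```r
library(memes)

# Placeholders: `peaks` (GRanges with a numeric "signal" column), `dm.genome` (BSgenome)
# Writing the score into the fasta header produces entries like ">name score",
# which AME partitioning mode requires.
seqs_scored <- get_sequence(peaks, dm.genome, score = "signal")

# Assumption: control = NA runs AME in partitioning mode
ame_partition <- runAme(seqs_scored, control = NA)
```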
If using a list input to `runAme()`, it will dispatch an AME run for each object in the list.
If the input to `runAme()` is a named list of sequences, `control` can be set to one or more values from `names(input)` to use those regions as background. The entries named in `control` are skipped as input, and each remaining entry is compared against the control. For example, with three categories named "Increasing", "Decreasing", and "Static", setting `control = "Static"` results in two comparisons: Increasing vs Static and Decreasing vs Static.
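This can be sketched as follows, assuming `seq_by_behavior` is a named list of XStringSet objects with names "Increasing", "Decreasing", and "Static" (a placeholder name):

```r
library(memes)

# Placeholder: seq_by_behavior is a named list of XStringSet objects
# with names "Increasing", "Decreasing", "Static"
ame_vs_static <- runAme(seq_by_behavior, control = "Static")

# "Static" is skipped as input, so two runs are dispatched:
#   Increasing vs Static
#   Decreasing vs Static
```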
If multiple names are passed to `control`, they are combined into a single control set that is used for all comparisons. Here, we use "Static" and "Decreasing" sites as the control, which results in only 1 comparison: Increasing vs Static+Decreasing.
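A sketch of pooling multiple list entries as the control, again assuming `seq_by_behavior` is a named list of XStringSet objects with names "Increasing", "Decreasing", and "Static" (a placeholder name):

```r
library(memes)

# Placeholder: seq_by_behavior is a named list of XStringSet objects
# with names "Increasing", "Decreasing", "Static"
ame_pooled <- runAme(seq_by_behavior, control = c("Static", "Decreasing"))

# Single comparison dispatched: Increasing vs Static+Decreasing
```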
AME will return different output formats depending on the `method` used. For detailed information about these values, see the AME Output description webpage. As a general rule of thumb, `runAme()` will return the same column names described on that page, except dashes are removed and all column names are lowercase.
When `runAme()` is run with `method = "fisher"`, the sequences output can be added to the results by setting `sequences = TRUE`. This will be added as a list column named `sequences` that can be unnested using `tidyr::unnest()`.
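A short sketch, assuming `input_seqs` is an XStringSet of sequences to scan (a placeholder name):

```r
library(memes)
library(magrittr)

# Placeholder: input_seqs is an XStringSet of input sequences
fisher_res <- runAme(input_seqs, method = "fisher", sequences = TRUE)

# Expand the `sequences` list column into one row per sequence
fisher_res %>%
  tidyr::unnest(sequences)
```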
The `plot_ame_heatmap()` function provides a method to easily generate visualizations of AME results.
To plot results from multiple runs together, they must first be joined into 1 data frame. The `ame_by_behavior_vs_static` object is a list whose names correspond to the E93 response (Increasing or Decreasing). The list can be combined into a data.frame using `dplyr::bind_rows()`. Setting `.id = "behavior"` creates a new column `behavior` that contains the names from the `ame_by_behavior_vs_static` list. In this way, the resulting data.frame contains all AME results for each run, which can be distinguished by the `behavior` column.
```r
ame_by_behavior_vs_static %>%
  # AME results in list format are easily combined using dplyr::bind_rows
  # .id will specify a column to hold the list object names
  dplyr::bind_rows(.id = "behavior") %>%
  # setting group to a column name will split the results on the y-axis
  plot_ame_heatmap(group = behavior)
```
There are several nuances when making heatmap visualizations of these data. The following examples highlight some of these issues and provide alternative approaches and solutions.
We start by using different binding site categories as input.
It is possible to aggregate results from multiple runs into a heatmap by setting the `group` parameter in `plot_ame_heatmap()`. This is too many hits to properly view in this vignette, but you can see that the heatmap plots motifs by their overlap across groups, with unique motifs on the left and shared motifs on the right.
The dynamic range of p-values in these data varies between groups. For this reason, a simple heatmap scaled using all data values will make it more difficult to interpret within groups with a lower dynamic range of values. In other words, because the dynamic range of values is different between experiments, placing them on the default scale for comparison may not always be the optimal visualization.

We can partially overcome this limitation by filling the heatmap with the normalized rank value for each TF, which accounts for differences in the total number of discovered motifs between AME runs. Although it does not completely abrogate differences, the signal values for high-ranked motifs within groups will be more comparable. However, the normalized rank visualization eliminates all real values related to statistical significance! Instead, this visualization represents the relative ranks of hits within an AME run, which already pass a significance threshold set during `runAme()`. This means that even if several motifs have similar or even identical p-values, their heatmap representation will be a different color value based on their ranked order in the results list. This tends to be useful only when there are a large number of hits (>= 100). Both visualizations can be useful and reveal different properties of the data to the user. If in doubt, prefer the `-log10(adj.pvalue)` visualization.
Below is a comparison of the distribution of values when using
-log10(adj.pvalue) (A) vs normalized ranks (B). Because orphan sites tend to have smaller p-values overall, the heatmap scale will be skewed towards the high values in the orphan data, making ectopic and entopic heat values lighter by comparison.
To use the normalized rank value, set `value = "normalize"` in `plot_ame_heatmap()`.
This plot reveals that the motifs shared across all 3 categories tend to rank higher in the output than the motifs unique to each category, which tend to come from lower ranks. This suggests that although there are differences in motif content across the three categories, they may be largely similar in motif makeup. We will investigate this question in more detail in the "Denovo motif similarity" section.
```r
library(ggplot2)

(normalize_heatmap <- ame_res %>%
  dplyr::group_by(binding_type, motif_alt_id) %>%
  dplyr::filter(adj.pvalue == min(adj.pvalue)) %>%
  plot_ame_heatmap(group = binding_type, id = motif_alt_id, value = "normalize") +
  # All ggplot functions can be used to extend or edit the heatmap plots
  ggtitle("value = \"normalize\""))
```
A third option is to rescale the `-log10(adj.pvalue)` heatmap to change the heatmap's maximum color value. This allows the user to maintain values that represent significance while rescaling the data to capture the lower end of the dynamic range. Using the cumulative distribution plot above, a reasonable cutoff is anywhere between 7 and 10, which captures > 90% of the data for ectopic and entopic sites.
A comparison of all three methods can be seen below.
```r
pval_heatmap <- ame_res %>%
  dplyr::group_by(binding_type, motif_alt_id) %>%
  dplyr::filter(adj.pvalue == min(adj.pvalue)) %>%
  plot_ame_heatmap(group = binding_type, id = motif_alt_id) +
  ggtitle("value = -log10(adj.pvalue)")

scale_heatmap <- ame_res %>%
  dplyr::group_by(binding_type, motif_alt_id) %>%
  dplyr::filter(adj.pvalue == min(adj.pvalue)) %>%
  plot_ame_heatmap(group = binding_type, id = motif_alt_id, scale_max = 7.5) +
  ggtitle("value = -log10(adj.pvalue) (scale capped at 7.5)")
```
Below is a comparison of the `-log10(adj.pvalue)` and `normalize` methods for plotting the heatmap. Note how the different plots highlight different data properties. The `-log10(adj.pvalue)` plot shows the overall significance of each hit, while the `normalize` method shows the relative rank of each hit within a `binding_type`. Lowering the maximum scale value in C) does a better job than A) at visualizing differences in significance along the ectopic and entopic rows, at the cost of decreasing the dynamic range of the orphan row. Selecting a visualization for publication will depend heavily on context, but if in doubt, prefer one that includes information about statistical significance, as in A) or C).
`importAme()` can be used to import an `ame.tsv` file from a previous run on the MEME server or on the command line. Details for how to save data from the AME webserver are below.
Optionally, if AME was run on the command line with `--method fisher`, the user can pass a path to the `sequences.tsv` file to the `sequences` argument of `importAme()` to append the sequence information to the AME results.
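A hedged sketch of importing command-line results (the file paths are placeholders; check `?importAme` for the exact argument names):

```r
library(memes)

# Placeholder paths to output files from a command-line AME run
ame_results <- importAme("ame_output/ame.tsv",
                         sequences = "ame_output/sequences.tsv")
```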
To download TSV data from the MEME Server, right-click the AME TSV output link and "Save Target As" or "Save Link As" (see example image below), and save as `<filename>.tsv`. This file can be read using `importAme()`.
memes is a wrapper for a select few tools from the MEME Suite, which were developed by another group. In addition to citing memes, please cite the MEME Suite tools corresponding to the tools you use.
If you use
runAme() in your analysis, please cite:
The MEME Suite is free for non-profit use, but for-profit users should purchase a license. See the MEME Suite Copyright Page for details.