Recently Published
Multi-Metric Differential Abundance Benchmarking
This analysis addresses the methodological question raised in ANCOM-BC GitHub Issue #196: can normalized abundance data be effectively used with differential abundance tools designed for raw counts?
Many differential abundance (DA) tools were originally designed for raw read counts, where library size differences are informative and are corrected internally (e.g. via size-factor estimation, library size, or gene length adjustments). For genome-level metagenomic data, however, raw counts are dominated by genome length; common practice is therefore to normalize reads to TPM/RPKM or coverage. This raises a methodological question: does feeding such normalized metrics into count-based tools change the results they produce?
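For concreteness, here is a minimal sketch of the two length-aware normalizations mentioned above, assuming a genomes-by-samples count matrix and a vector of genome lengths (the array names are illustrative, not the study's actual pipeline):

```python
import numpy as np

def rpkm(counts: np.ndarray, lengths_bp: np.ndarray) -> np.ndarray:
    """Reads Per Kilobase per Million mapped reads.

    counts: genomes x samples matrix of raw read counts
    lengths_bp: genome lengths in base pairs, one per row of counts
    """
    per_million = counts.sum(axis=0) / 1e6   # library size per sample, in millions
    per_kb = lengths_bp[:, None] / 1e3       # genome length in kilobases
    return counts / per_kb / per_million

def tpm(counts: np.ndarray, lengths_bp: np.ndarray) -> np.ndarray:
    """Transcripts Per Million: length-normalize first, then rescale each
    sample so its column sums to one million."""
    rate = counts / lengths_bp[:, None]      # length-normalized read rate
    return rate / rate.sum(axis=0) * 1e6
```

Both remove the genome-length signal, but TPM additionally forces every sample onto the same compositional scale, which is exactly the kind of transformation count-based DA models were not designed to receive.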
This study evaluates 54 metric × method combinations (6 metrics × 9 DA tools) using CAMISIM synthetic communities with known ground truth.
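In outline, each combination can be scored against the known ground truth roughly as follows; the table and tool-wrapper interfaces here are assumptions standing in for the actual pipeline:

```python
from itertools import product
from typing import Callable, Dict, Set, Tuple

def score(called: Set[str], truth: Set[str]) -> Dict[str, float]:
    """Precision, recall, and F1 of one tool's significant-genome calls
    against the ground-truth set of truly differential genomes."""
    tp, fp, fn = len(called & truth), len(called - truth), len(truth - called)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

def benchmark(
    tables: Dict[str, object],                       # metric name -> abundance table
    tools: Dict[str, Callable[[object], Set[str]]],  # tool name -> wrapper returning significant genome ids
    truth: Set[str],                                 # genomes simulated as differentially abundant
) -> Dict[Tuple[str, str], Dict[str, float]]:
    """Score every metric x method combination (6 x 9 = 54 here)."""
    return {
        (metric, name): score(tool(tables[metric]), truth)
        for metric, (name, tool) in product(tables, tools.items())
    }
```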
Research Questions
Cross-metric performance: How do DA tools perform across different abundance metrics?
Normalization impact: Does length/depth normalization improve or reduce DA detection power?
Method preferences: Do some DA tools work better with specific abundance metrics?
Consistency evaluation: Are DA results consistent across metrics for individual tools? (One way to quantify this is sketched after this list.)
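For the fourth question, one plausible consistency measure (an assumption for illustration, not necessarily the statistic used in the study) is the mean pairwise Jaccard overlap of a tool's significant-genome sets across metrics:

```python
from itertools import combinations
from typing import Dict, Set

def jaccard(a: Set[str], b: Set[str]) -> float:
    """Jaccard similarity of two significant-genome sets (1.0 if both empty)."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def cross_metric_consistency(calls: Dict[str, Set[str]]) -> float:
    """Mean pairwise Jaccard overlap of one tool's calls across metrics.

    calls: metric name -> set of genomes the tool flags as significant
    """
    pairs = list(combinations(calls.values(), 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)
```

A tool scoring near 1.0 flags essentially the same genomes regardless of the abundance metric; a low score indicates metric-sensitive results.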