Intro

This script streamlines and standardizes how we document the steps taken to reach each dataset from an original sampleset. It exports standardized csv and tsv tables for easy import into and manipulation with our other workflows.

Files Needed

To start, you should have four categories of csv files (I download the first three from working google spreadsheets; the fourth category is a series of files generated automatically by each MinION sequencing run). Those files and the column headers the script uses are listed below. It is fine to have extra columns, but you must have at least these to run the script as is:

  1. Libraries
    • SequenceID
    • Pipeline
    • LibraryTube
    • LibraryBarcode
    • ExtractID
    • Final_Library_Concentration
    • Volume.Added.to.Pool.(uL)
    • Seq_ID
    • Run_ID
    • LibraryTubeID
  2. Extracts
    • ExtractID
    • ExtractDate
    • ExtractedBy
    • ExtractType
    • ExtractKit
    • SampleID
    • ExtractConcentration
    • ExtractBox
    • ExtractNotes
  3. Samples
    • SampleID
    • SampleSubject
    • SampleDate
    • SampleCollectedBy
    • SampleNotes
  4. Barcode Alignments (1 file per Run_ID)
    • barcode
    • alias
    • type
    • target_unclassified
    • acquisition_run_id
    • protocol_group_id
    • sample_id
    • flow_cell_id
    • started
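Before running anything, it can save time to confirm each file actually has these headers. The helper below is my addition (not part of the script); `check_required_cols()` and the example column vector are illustrative:

```r
library(readr)

# Hypothetical helper: stop early if a table is missing any required column
check_required_cols <- function(df, required) {
  missing <- setdiff(required, colnames(df))
  if (length(missing) > 0) {
    stop("Missing required columns: ", paste(missing, collapse = ", "))
  }
  invisible(TRUE)
}

samples_required <- c("SampleID", "SampleSubject", "SampleDate",
                      "SampleCollectedBy", "SampleNotes")

# e.g., check_required_cols(read_csv("samples_loris.csv"), samples_required)
```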

Other Configuration Settings

Sampleset in Params

You can use the sampleset setting under params in the header of this script to select which sampleset you will be working with. So long as the same name is used consistently, this should automatically filter for that name (e.g., loris or marmoset).

File Paths

Next, you should make sure your config.yml file contains the path to locate each of the files you will be using. Below is an example excerpt from my config file.

default:
  setup: "setup/setup.R"
  conflicts: "setup/conflicts.R"
  functions: "setup/functions.R"
  packages: "setup/packages.R"
  inputs: "setup/inputs.R"
  knit_engines: "setup/knit_engines.R"
  fonts: "setup/fonts/FA6_Free-Solid-900.otf"
  tmp_tsv: "tmp/tmp_table.tsv"
  tmp_downloads: "tmp/downloads/"
  tmp_fetch: "tmp/fetch_references.txt"
  tmp_fasta3: "tmp/tmp3.fasta"
  tmp_fasta4: "tmp/tmp4.fasta"
  
scripts:
  local:
    basecall: "batch_scripts/basecall_local.sh"
    trimWG: "batch_scripts/trimWG_local.sh"
  
isolates:
  samplesets: "salci" 
  minQC: 10

bats:
  inventories:
    all_stages: "samples/inventories/bats/compilation_bats.tsv"
    collection: "samples/inventories/bats/samples_bats.csv"
    extraction: "samples/inventories/bats/extracts_bats.csv"
    libraries: "samples/inventories/bats/libraries_bats.csv"
    seqruns: "samples/inventories/bats/seqruns_bats.csv"

loris:
  day1: "2023-10-26"
  last: "2024-10-25"
  sequencing:
    coverage: "visuals/loris_depth_summary.html"
    depth_plot: "visuals/loris_depth_hist.html"
  metadata: 
    scripts: !expr list("metadata/loris/colors.R", "metadata/loris/metadata_key.R", "metadata/loris/nutrition.R", "metadata/loris/hdz_loris_log.R", "metadata/loris/diet_tables.R")
    bristol: "metadata/loris/bristols.tsv"
    studbook: "metadata/loris/subjects_loris.csv"
    summary: "metadata/loris/samples_metadata.tsv"
    key: "metadata/loris/metadata_key.R"
    factors: "metadata/loris/factors.R"
    foods: "metadata/loris/foods.tsv"
    proteins: "metadata/loris/proteins.tsv"
    fats: "metadata/loris/fats.tsv"
    CHOs: "metadata/loris/CHOs.tsv"
    Ash: "metadata/loris/Ash.tsv"
    vitamins: "metadata/loris/vitamins.tsv"
    reactable: "metadata/loris/loris_metadata_summary.html"
    sample_table: 
      identifiers: "metadata/loris/identifier_key.tsv"
      main: "metadata/loris/sample_table.tsv"
      merged: "metadata/loris/sample_table_merged.tsv"
  inventories:
    all_stages: "samples/inventories/loris/compilation_loris.tsv"
    collection: "samples/inventories/loris/samples_loris.csv"
    extraction: "samples/inventories/loris/extracts_loris.csv"
    libraries: "samples/inventories/loris/libraries_loris.csv"
    seqruns: "samples/inventories/loris/seqruns_loris.csv"
  outputs_wf16s: "data/loris/outputs_wf16s/"
  barcodes_output: "dataframes/barcodes/loris/"
  read_alignments: "data/loris/read_alignments"
  taxa_reps:
    aligned: "data/loris/taxonomy/refseqs_aligned.fasta"
    tree: "data/loris/taxonomy/refseqs_tree.newick"
    table: "data/loris/taxonomy/tax_table.tsv"
  abundance_wf16s: "data/loris/wf16s_abundance/"
  microeco: 
    dataset:
      main:
        keg: "microeco/loris/datasets/main/keg"
        njc: "microeco/loris/datasets/main/njc"
        fpt: "microeco/loris/datasets/main/fpt"
        tax: "microeco/loris/datasets/main"
      culi:
        keg: "microeco/loris/datasets/culi/keg"
        njc: "microeco/loris/datasets/culi/njc"
        fpt: "microeco/loris/datasets/culi/fpt"
        tax: "microeco/loris/datasets/culi"
      warb:
        keg: "microeco/loris/datasets/warble/keg"
        njc: "microeco/loris/datasets/warble/njc"
        fpt: "microeco/loris/datasets/warble/fpt"
        tax: "microeco/loris/datasets/warble"
    abund:
      main:
        keg: "microeco/loris/abundance/main/keg"
        fpt: "microeco/loris/abundance/main/fpt"
        njc: "microeco/loris/abundance/main/njc"
        tax: "microeco/loris/abundance/main"
      culi:
        keg: "microeco/loris/abundance/culi/keg"
        fpt: "microeco/loris/abundance/culi/fpt"
        njc: "microeco/loris/abundance/culi/njc"
        tax: "microeco/loris/abundance/culi"
      warb:
        keg: "microeco/loris/abundance/warble/keg"
        fpt: "microeco/loris/abundance/warble/fpt"
        njc: "microeco/loris/abundance/warble/njc"
        tax: "microeco/loris/abundance/warble"
    alpha:
      main: "microeco/loris/alphadiversity/main"
      culi: "microeco/loris/alphadiversity/culi"
      warb: "microeco/loris/alphadiversity/warble"
    beta:
      main:
        kegg: "microeco/loris/betadiversity/main/keg"
        fpt: "microeco/loris/betadiversity/main/fpt"
        njc: "microeco/loris/betadiversity/main/njc"
        tax: "microeco/loris/betadiversity/main"
      culi:
        kegg: "microeco/loris/betadiversity/culi/keg"
        fpt:  "microeco/loris/betadiversity/culi/fpt"
        njc:  "microeco/loris/betadiversity/culi/njc"
        tax: "microeco/loris/betadiversity/culi"
      warb:
        kegg: "microeco/loris/betadiversity/warble/keg"
        fpt:  "microeco/loris/betadiversity/warble/fpt"
        njc:  "microeco/loris/betadiversity/warble/njc"
        tax: "microeco/loris/betadiversity/warble"
    data:
      main:
        feature: "microeco/loris/datasets/main/feature_table.tsv"
        tree:    "microeco/loris/datasets/main/phylo_tree.tre"
        fasta:   "microeco/loris/datasets/main/rep_fasta.fasta"
        samples: "microeco/loris/datasets/main/sample_table.tsv"
        taxa:    "microeco/loris/datasets/main/tax_table.tsv"
      culi: 
        feature: "microeco/loris/datasets/culi/feature_table.tsv"
        tree:    "microeco/loris/datasets/culi/phylo_tree.tre"
        fasta:   "microeco/loris/datasets/culi/rep_fasta.fasta"
        samples: "microeco/loris/datasets/culi/sample_table.tsv"
        taxa:    "microeco/loris/datasets/culi/tax_table.tsv"
      warb:
        feature: "microeco/loris/datasets/warb/feature_table.tsv"
        tree:    "microeco/loris/datasets/warb/phylo_tree.tre"
        fasta:   "microeco/loris/datasets/warb/rep_fasta.fasta"
        samples: "microeco/loris/datasets/warb/sample_table.tsv"
        taxa:    "microeco/loris/datasets/warb/tax_table.tsv"


sample_sheets:
  compilations:
    bats:    "samples/sample_sheets/bats/nwr_combined_sample_sheet.csv"
    loris:    "samples/sample_sheets/loris/hdz_combined_sample_sheet.csv"
  nwr1: "samples/sample_sheets/bats/nwr1_sample_sheet.csv"
  hdz1:  "samples/sample_sheets/loris/hdz1_sample_sheet.csv"
  hdz2:  "samples/sample_sheets/loris/hdz2_sample_sheet.csv"
  hdz3:  "samples/sample_sheets/loris/hdz3_sample_sheet.csv"
  hdz4:  "samples/sample_sheets/loris/hdz4_sample_sheet.csv"
  hdz5:  "samples/sample_sheets/loris/hdz5_sample_sheet.csv"
  hdz6:  "samples/sample_sheets/loris/hdz6_sample_sheet.csv"
  hdz7:  "samples/sample_sheets/loris/hdz7_sample_sheet.csv"
  hdz8:  "samples/sample_sheets/loris/hdz8_sample_sheet.csv"
  hdz9:  "samples/sample_sheets/loris/hdz9_sample_sheet.csv"
  hdz10: "samples/sample_sheets/loris/hdz10_sample_sheet.csv"
  hdz11: "samples/sample_sheets/loris/hdz11_sample_sheet.csv"
  hdz12: "samples/sample_sheets/loris/hdz12_sample_sheet.csv"
  hdz13: "samples/sample_sheets/loris/hdz13_sample_sheet.csv"
  hdz14: "samples/sample_sheets/loris/hdz14_sample_sheet.csv"
  hdz15: "samples/sample_sheets/loris/hdz15_sample_sheet.csv"
  hdz16: "samples/sample_sheets/loris/hdz16_sample_sheet.csv"
  hdz17: "samples/sample_sheets/loris/hdz17_sample_sheet.csv"
  hdz18: "samples/sample_sheets/loris/hdz18_sample_sheet.csv"

barcode_alignments:
  compilations:
    loris:    "samples/barcode_alignments/loris/hdz_combined_barcode_alignment.tsv"
    bats:    "samples/barcode_alignments/bats/nwr_combined_barcode_alignment.tsv"
  nwr1: "samples/barcode_alignments/bats/nwr1_barcode_alignment.tsv"
  hdz1:  "samples/barcode_alignments/loris/hdz1_barcode_alignment.tsv"
  hdz2:  "samples/barcode_alignments/loris/hdz2_barcode_alignment.tsv"
  hdz3:  "samples/barcode_alignments/loris/hdz3_barcode_alignment.tsv"
  hdz4:  "samples/barcode_alignments/loris/hdz4_barcode_alignment.tsv"
  hdz5:  "samples/barcode_alignments/loris/hdz5_barcode_alignment.tsv"
  hdz6:  "samples/barcode_alignments/loris/hdz6_barcode_alignment.tsv"
  hdz7:  "samples/barcode_alignments/loris/hdz7_barcode_alignment.tsv"
  hdz8:  "samples/barcode_alignments/loris/hdz8_barcode_alignment.tsv"
  hdz9:  "samples/barcode_alignments/loris/hdz9_barcode_alignment.tsv"
  hdz10: "samples/barcode_alignments/loris/hdz10_barcode_alignment.tsv"
  hdz11: "samples/barcode_alignments/loris/hdz11_barcode_alignment.tsv"
  hdz12: "samples/barcode_alignments/loris/hdz12_barcode_alignment.tsv"
  hdz13: "samples/barcode_alignments/loris/hdz13_barcode_alignment.tsv"
  hdz14: "samples/barcode_alignments/loris/hdz14_barcode_alignment.tsv"
  hdz15: "samples/barcode_alignments/loris/hdz15_barcode_alignment.tsv"
  hdz16: "samples/barcode_alignments/loris/hdz16_barcode_alignment.tsv"
  hdz17: "samples/barcode_alignments/loris/hdz17_barcode_alignment.tsv"
  hdz18: "samples/barcode_alignments/loris/hdz18_barcode_alignment.tsv"

abund_wf16s_files:
  hdz1:  "data/loris/wf16s_abundance/hdz1_abundance_table_species.tsv"
  hdz2:  "data/loris/wf16s_abundance/hdz2_abundance_table_species.tsv"
  hdz3:  "data/loris/wf16s_abundance/hdz3_abundance_table_species.tsv"
  hdz4:  "data/loris/wf16s_abundance/hdz4_abundance_table_species.tsv"
  hdz5:  "data/loris/wf16s_abundance/hdz5_abundance_table_species.tsv"
  hdz6:  "data/loris/wf16s_abundance/hdz6_abundance_table_species.tsv"
  hdz7:  "data/loris/wf16s_abundance/hdz7_abundance_table_species.tsv"
  hdz8:  "data/loris/wf16s_abundance/hdz8_abundance_table_species.tsv"
  hdz9:  "data/loris/wf16s_abundance/hdz9_abundance_table_species.tsv"
  hdz10: "data/loris/wf16s_abundance/hdz10_abundance_table_species.tsv"
  hdz11: "data/loris/wf16s_abundance/hdz11_abundance_table_species.tsv"
  hdz12: "data/loris/wf16s_abundance/hdz12_abundance_table_species.tsv"
  hdz13: "data/loris/wf16s_abundance/hdz13_abundance_table_species.tsv"
  hdz14: "data/loris/wf16s_abundance/hdz14_abundance_table_species.tsv"
  hdz15: "data/loris/wf16s_abundance/hdz15_abundance_table_species.tsv"
  hdz16: "data/loris/wf16s_abundance/hdz16_abundance_table_species.tsv"
  hdz17: "data/loris/wf16s_abundance/hdz17_abundance_table_species.tsv"
  hdz18: "data/loris/wf16s_abundance/hdz18_abundance_table_species.tsv"

methods_16s:
  libprep_workflow: "'rapid16s'"
  dorado_model: "'dna_r10.4.1_e8.2_400bps_sup@v5.0.0'"
  min_length: 1000
  max_length: 2000
  min_qual: 7
  min_id: 85
  min_cov: 80
  kit_name: "'SQK-16S114-24'"
  tax_rank: "S"
  n_taxa_barplot: 12
  abund_threshold: 0
  loris:
    rarefy: 4500
    norm: "SRS"
    min_abund: 0.00001
    min_freq: 1
    include_lowest: TRUE
    unifrac: TRUE
    betadiv: "aitchison"
    alpha_pd: TRUE
    tax4fun_db: "Ref99NR"
    loris_rarefy: 4500
    keg_minID: 97
    

Note that I also include paths to files that this script will create. If a file already exists it will be overwritten; if not, it will be created. Run the code below to set up your paths from the config file for the working sampleset you identified in the header:

global            <- config::get(config = "default")
swan              <- config::get(config = "swan")
micro             <- config::get(config = "microbiome")
loris             <- config::get(config = "loris")
marmoset          <- config::get(config = "marmoset")
isolates          <- config::get(config = "isolates")
bats              <- config::get(config = "bats")
methods_16s       <- config::get(config = "methods_16s")
sample_sheets     <- config::get(config = "sample_sheets")
abund_wf16s_files <- config::get(config = "abund_wf16s_files")
barcode_alignments<- config::get(config = "barcode_alignments")

seqruns      <- seqruns %>% keep_at(params$sampleset)       %>% list_flatten(name_spec = "")
subject_list <- keep_at(subjects, paste0(params$sampleset)) %>% list_flatten(name_spec = "{inner}")
path         <- config::get(config = paste0(params$sampleset))

Sequencing Run Lists

The code in the chunk above also generates a list of formatted codes for each sequencing run available to date, separated by taxa/samplesets (currently just loris and marmoset). Make sure the end number matches the highest run number we have for that sampleset to date.
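If you want to verify that quickly, the trailing run numbers are easy to pull out with stringr; the vector below just mirrors what the loris entry expands to:

```r
library(stringr)

seqrun_codes <- paste0("hdz", 1:18)  # what the loris seqruns entry expands to
max(as.integer(str_extract(seqrun_codes, "\\d+$")))
# 18
```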

Other Setup Scripts

The script above also sources several scripts that I run routinely at the start of any work. They bring in functions and other inputs through shorter code chunks, along with the text of the yaml header for this R Markdown file. You can flip through the text of those scripts below.

---
output:
  html_document:
    theme:
      bslib: true
    css: journal.css
    toc: true
    toc_float: true
    df_print: paged
params:
  sampleset: "loris"
  seqrun: "hdz18"
                     
---
samplesets <- list(
  "Omaha Zoo Pygmy Lorises"  = "loris"  , 
  "UNO Marmosets"            = "marmoset", 
  "Wild North American Bats" = "bats",
  "North American eDNA"      = "envir",
  "Bacterial Isolates"       = "isolates"
)


subjects <- list(
  marmoset = list(
    HAM  = "Hamlet",
    HER  = "Hera",
    JAR  = "JarJar BINKS",
    OPH  = "Ophelia",
    KUB  = "Kubo",
    KOR  = "Korra",
    WOL  = "Wolverine",
    IRI  = "Iris",
    GOO  = "Goose",
    LAM  = "Lambchop",
    FRA  = "Franc",
    IVY  = "Ivy",
    CHA  = "Charles",
    PAD  = "Padme",
    BUB  = "Bubblegum",
    GRO  = "Grogu",
    MAR  = "Marshmallow",
    BUD  = "Buddy",
    JOA  = "Joans",
    HEN  = "Henry",
    GIN  = "Ginger"
  ),
  loris = list(
    WARB = "Warble",
    CULI = "Culi"
  ),
  bats = list(
    UNK = "Unknown"
  ),
  envir = list(
    UNK = "Unknown"
  ),
  isolates = list(
    UNK = "Unknown"
  )
)

seqruns <- list(
  loris     = as.list(paste0("hdz", 1:18)),
  marmoset  = as.list(sprintf("cm%03d", 1:10)),
  isolates  = as.list(paste0("salci", 1))
)

colors <- list(
  f = "#D53288FF",
  m = "#3F459BFF",
  u = "#21B14BFF",
  sire = "#3F459B33",
  dam  = "#D5328833",
  emph = "#DC8045FF",
  seq  = "rcartocolor::Sunset",
  div  = "rcartocolor::Temps",
  rand = "khroma::stratigraphy"
)


knitr::knit_engines$set(terminal = function(options) {
  code <- paste(options$code, collapse = "\n")
  
  params <- map(params, ~ if (is.atomic(.)) {list(.)} else {(.)}) %>%
    list_flatten()
  
  patterns <- list(
    params             = list(
      sampleset    = paste0(params$sampleset),
      seqrun       = paste0(params$seqrun),
      samplesheet  = as.character(sample_sheets[paste0(tolower(params$seqrun))])
    )            ,
    
    global            = global            ,
    swan              = swan              ,
    micro             = micro             ,
    loris             = loris             ,
    isolates          = isolates          ,
    bats              = bats              ,
    methods_16s       = methods_16s       ,
    sample_sheets     = sample_sheets     ,
    abund_wf16s_files = abund_wf16s_files ,
    barcode_alignments= barcode_alignments
  )
  
  
  # Replace placeholders group by group
  for (group in names(patterns)) {
    placeholder_list <- patterns[[group]]
    for (name in names(placeholder_list)) {
      placeholder <- paste0("\\b", group, "\\$", name, "\\b") # Word boundaries match the exact placeholder
      value <- placeholder_list[[name]]
      
      # Replace placeholders exactly and avoid breaking suffixes
      code <- gsub(placeholder, value, code, perl = TRUE)
    }
  }
  
  options$warning <- FALSE
  knitr::engine_output(options, code, out = code)
})
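To see what the engine does, here is the core `gsub()` substitution applied to a one-line stand-in chunk (the command string is purely illustrative):

```r
# Stand-in for a terminal chunk's text containing two placeholders
code <- "dorado demux --sample-sheet params$samplesheet reads/params$seqrun"

# The engine walks each placeholder group and substitutes resolved values
code <- gsub("params\\$samplesheet", "hdz18_sample_sheet.csv", code, perl = TRUE)
code <- gsub("params\\$seqrun", "hdz18", code, perl = TRUE)
code
# "dorado demux --sample-sheet hdz18_sample_sheet.csv reads/hdz18"
```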


knitr::opts_chunk$set(message = FALSE,
               warning = FALSE,
               echo    = TRUE,
               include = TRUE,
               eval    = TRUE,
               comment = "")

Script

Barcode Alignments

barcodes <- imap(seqruns, ~ {
  map(.x, ~ read_table(barcode_alignments[[.x]]) %>% mutate(seqrun = .x))
}) %>%
  bind_rows() %>%
  as_tibble() %>%
  filter(barcode != "unclassified") %>%
  mutate(SeqDateTime    = as_datetime(started),
         SeqDate        = floor_date(SeqDateTime, unit = "day"),
         SeqRunID       = str_replace_all(sample_id, "pool1", "PL001"),
         LibraryCode    = str_squish(str_trim(seqrun      , "both")),
         FlowCellSerial = str_squish(str_trim(flow_cell_id, "both")),
         LibraryBarcode = as.numeric(str_remove_all(barcode, "16S|barcode0|barcode"))) %>%
  select(LibraryCode,
         LibraryBarcode,
         reads_unclassified = target_unclassified,
         FlowCellSerial,
         protocol_group_id,
         SeqRunID,
         SeqDate,
         SeqDateTime)

write.table(barcodes, barcode_alignments$compilations[[paste0(params$sampleset)]],
            row.names = F,
            sep = "\t")
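The barcode parsing above is worth sanity-checking, since alternation order in the regex matters (`barcode0` must be tried before `barcode` so the leading zero is stripped too):

```r
library(stringr)

# Illustrative inputs covering the three label styles the regex handles
as.numeric(str_remove_all(c("barcode07", "barcode12", "16S07"),
                          "16S|barcode0|barcode"))
# 7 12 7
```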

Sequencing Runs

seqrun.tbl <- read_csv(path$inventories$seqruns) %>% 
  rename_with(~str_replace_all(., "\\s", "_")) %>%
  mutate(SampleSet       = case_when(
    str_detect(Pooled_Library_Code, "CM")  ~ "marmoset", 
    str_detect(Pooled_Library_Code, "PL")  ~ "loris",
    str_detect(Pooled_Library_Code, "NWR") ~ "bats", .default = "unknown"),
         LibraryCode     = str_to_lower(Pooled_Library_Code),
         LibPrepWorkflow = case_when(
           str_detect(Kit, "LSK") & Pipeline == "16S" ~ "lsk16s",
           Pipeline == "Host mtDNA"                   ~ "lskadaptive",
           str_detect(Kit, "SQK-16S") & Pipeline == "16S" ~ "rapid16s"),
         LibPrepDate     = mdy(Run_Date),
         SeqRunDate      = ymd(str_remove_all(str_trim(Run_ID, "both"), "MIN_16_|MIN_16-|MIN_MT_"))) %>%
  mutate(LibraryCode     = str_replace_all(LibraryCode, "pl00|pl0", "hdz"),
         strands         = 2,
         fragment_type   = if_else(Pipeline == "16S", 3, 1),
         Length          = if_else(Pipeline == "16S", 1500, 10000),
         InputMassStart  = if_else(Pipeline == "16S", 10, 1000),
         TemplateVolPrep = if_else(LibPrepWorkflow == "rapid16s", 15, 47),
         PoolSamples     = if_else(Pipeline == "16S", "yes", "no"),
         InputMassFinal  = 50
         ) %>%
  filter(SampleSet == params$sampleset) %>%
  select(
         SampleSet,
         LibraryCode,
         LibPrepDate,
         LibPrepWorkflow,
         LibPrepKit      = Kit,
         FlowCellSerial  = Flow_Cell_ID,
         FlowCellType    = Flow_Cell_Type,
         FlongleAdapter  = Flongle_Adapter,
         SeqDevice       = Sequencer,
         strands,
         fragment_type,
         Length,
         InputMassStart,
         TemplateVolPrep,
         PoolSamples,
         InputMassFinal)
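The SeqRunDate logic strips the run-type prefix from Run_ID before parsing; a quick illustration with a made-up Run_ID string:

```r
library(stringr)
library(lubridate)

# Illustrative: "MIN_16_2023-10-26" loses its prefix, then parses as a date
ymd(str_remove_all(str_trim("MIN_16_2023-10-26", "both"),
                   "MIN_16_|MIN_16-|MIN_MT_"))
# "2023-10-26"
```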

Sample Records

samples <- read_csv(path$inventories$collection) %>% 
  rename_with(~str_replace_all(., "\\s", "_")) %>%
  filter(str_starts(SampleID, "\\w+")) %>%
  select(-SampleBox) %>%
  mutate(SampleID = str_squish(str_trim(SampleID, "both"))) %>%
  distinct() %>%
  mutate(CollectionDate = mdy(SampleDate),
         Subject        = str_squish(str_trim(SampleSubject)),
         .keep = "unused") %>%
  distinct() %>%
  mutate(Subj_Certainty = if_else(Subject %in% subject_list, "yes", "no")) %>%
  mutate(Subject        = str_remove_all(Subject, "\\?"))
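The certainty flag works because the membership check runs before the question marks are stripped; a quick illustration with the loris subject list:

```r
library(dplyr)
library(stringr)

subject_list <- c(WARB = "Warble", CULI = "Culi")  # mirrors the loris entry
subj <- c("Warble", "Culi?")

if_else(subj %in% subject_list, "yes", "no")
# "yes" "no"
str_remove_all(subj, "\\?")
# "Warble" "Culi"
```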

DNA Extract Records

We will also join the previous sample records to this table at the end of the chunk.

extracts <- read_csv(path$inventories$extraction) %>% 
  rename_with(~str_replace_all(., "\\s", "_")) %>%
  filter(str_starts(SampleID, "\\w+")) %>%
  mutate(SampleID    = if_else(str_detect(SampleID, "#N/A") | is.na(SampleID),
                               "ExtractControl", SampleID)) %>%
  mutate(SampleID    = str_squish(str_trim(SampleID , "both")),
         ExtractID   = str_squish(str_trim(ExtractID, "both")),
         ExtractDate = mdy(ExtractDate)) %>%
  mutate(ExtractConc = str_remove_all(ExtractConcentration, ">"), .keep = "unused") %>%
  mutate(ExtractConc = if_else(str_detect(ExtractConc, "Higher"), "100", ExtractConc),
         ExtractConc = if_else(str_detect(ExtractConc, "HIGHER"), "100", ExtractConc),
         ExtractConc = if_else(ExtractConc == "LOW", "0", ExtractConc),
         ExtractConc = if_else(ExtractConc == "", NA, ExtractConc)) %>%
  mutate(ExtractConc = round(as.numeric(ExtractConc), 1)) %>%
  filter(ExtractType %in% c("DNA", "dna")) %>%
  select(-ExtractType) %>%
  right_join(samples) %>%
  distinct()
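The concentration cleanup handles a few free-text conventions from the spreadsheet; the toy vector below walks one value through each case:

```r
library(dplyr)
library(stringr)

raw  <- c(">100", "Higher than 100", "LOW", "", "23.4")  # illustrative values
conc <- str_remove_all(raw, ">")
conc <- if_else(str_detect(conc, "Higher"), "100", conc)
conc <- if_else(str_detect(conc, "HIGHER"), "100", conc)
conc <- if_else(conc == "LOW", "0", conc)
conc <- if_else(conc == "", NA, conc)
round(as.numeric(conc), 1)
# 100.0 100.0 0.0 NA 23.4
```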

Libraries and Combining all Records

compilation <- read_csv(path$inventories$libraries)  %>% 
  rename_with(~str_replace_all(., "\\s", "_")) %>%
  rename_with(~str_remove_all(., "\\(|\\)")) %>%
  filter(str_starts(SequenceID, "\\w+") & Seq_ID != "#N/A") %>%
  mutate(LibraryCode     = str_to_lower(Seq_ID)) %>%
  mutate(LibraryCode     = str_replace_all(LibraryCode, "pl00|pl0" , "hdz"),
         SampVolPool     = round(as.numeric(Volume_Added_to_Pool_uL), 0),
         LibraryBarcode  = as.numeric(str_remove_all(LibraryBarcode, "16S|barcode0|barcode"))) %>%
  mutate(TotalPoolVol    = sum(SampVolPool), .by = LibraryCode) %>%
  mutate(BeadVol         = TotalPoolVol * 0.6) %>%
  select(SequenceID,
         LibraryCode,
         LibraryTube,
         LibraryBarcode,
         ExtractID,
         SampVolPool,
         TotalPoolVol,
         BeadVol,
         Conc_QC2    = Final_Library_Concentration) %>%
  full_join(extracts, by = join_by(ExtractID)) %>% 
  mutate(SampleID = if_else(is.na(SampleID) & !is.na(ExtractID), "NTC", SampleID)) %>%
  full_join(barcodes, by = join_by(LibraryCode, LibraryBarcode)) %>%
  left_join(seqrun.tbl, by = join_by(LibraryCode, FlowCellSerial)) %>% distinct() %>%
  mutate(steps_remaining = case_when(
    is.na(ExtractID) ~ "sample not extracted",
    is.na(SequenceID) ~ "extract not sequenced",
    !is.na(ExtractID) & !is.na(SequenceID) & !is.na(SampleID) ~ "sample extracted and sequenced"
  )) %>%
  relocate(SampleID, ExtractID, SequenceID, steps_remaining) %>%
  arrange(CollectionDate, Subject)

Exporting a Spreadsheet with Records

We will use this spreadsheet for building the metadata table and also for calling up sample info in our protocol apps.

write.table(compilation,
            path$inventories$all_stages,
            row.names = F,
            sep = "\t")

Counting Replicates

count.extracts <- extracts %>%
  select(ExtractID, SampleID) %>% distinct() %>%
  group_by(SampleID) %>%
  mutate(n_dna_extracts = n_distinct(ExtractID)) %>%
  ungroup() %>%
  select(-ExtractID)

count.libraries <- compilation %>%
  select(SequenceID, ExtractID, SampleID) %>% distinct() %>%
  group_by(ExtractID) %>%
  mutate(n_16s_extract = n_distinct(SequenceID)) %>%
  ungroup() %>%
  group_by(SampleID) %>%
  mutate(n_16s_sample = n_distinct(SequenceID)) %>%
  ungroup() %>%
  select(-SequenceID)
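The two count tables are left as standalone objects here. If you want the replicate counts attached to the compilation for filtering, one option (my addition, not part of the original script, assuming dplyr is loaded) is:

```r
# Sketch only: join the per-sample and per-extract counts back onto compilation
compilation.counts <- compilation %>%
  left_join(distinct(count.extracts),  by = "SampleID") %>%
  left_join(distinct(count.libraries), by = c("SampleID", "ExtractID"))
```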

Exporting SampleSheets formatted for Dorado

samplesheet <- compilation %>%
  filter(steps_remaining == "sample extracted and sequenced") %>%
  mutate(barcode = if_else(LibraryBarcode < 10,
                           str_glue("barcode0", "{LibraryBarcode}"),
                           str_glue("barcode" , "{LibraryBarcode}"))) %>%
  select(flow_cell_id  = FlowCellSerial,
         experiment_id = protocol_group_id,
         kit           = LibPrepKit,
         barcode,
         alias         = SequenceID,
         seqrun        = LibraryCode)

write.table(samplesheet, 
          sample_sheets$compilations[[paste0(params$sampleset)]],
          row.names = F,
          quote     = F,
          sep       = ",")

Splitting Samplesheet to Individual Files for Each Run

samplesheet.nested <- samplesheet %>% nest(.by = seqrun) %>%
  deframe()
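The chunk above stops at nesting; it does not write the per-run files. If you want one csv per run, keyed by the matching sample_sheets entries in the config, a sketch (my addition, assuming purrr is loaded and every seqrun name exists as a key in sample_sheets) would be:

```r
# Sketch only: write each nested run table to its configured sample sheet path
purrr::iwalk(samplesheet.nested, \(sheet, run) {
  write.table(sheet,
              sample_sheets[[run]],
              row.names = FALSE,
              quote     = FALSE,
              sep       = ",")
})
```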

Next Step

Now you should proceed to the Read Processing workflow to begin basecalling the sequencing run.