Commit

added tests

goodekat committed Jan 11, 2025
1 parent 00d467f commit 87f06b6
Showing 33 changed files with 429 additions and 1 deletion.
61 changes: 61 additions & 0 deletions .github/workflows/test-coverage.yaml
@@ -0,0 +1,61 @@
# Workflow derived from https://github.com/r-lib/actions/tree/v2/examples
# Need help debugging build failures? Start at https://github.com/r-lib/actions#where-to-find-help
on:
  push:
    branches: [main, master]
  pull_request:

name: test-coverage.yaml

permissions: read-all

jobs:
  test-coverage:
    runs-on: ubuntu-latest
    env:
      GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}

    steps:
      - uses: actions/checkout@v4

      - uses: r-lib/actions/setup-r@v2
        with:
          use-public-rspm: true

      - uses: r-lib/actions/setup-r-dependencies@v2
        with:
          extra-packages: any::covr, any::xml2
          needs: coverage

      - name: Test coverage
        run: |
          cov <- covr::package_coverage(
            quiet = FALSE,
            clean = FALSE,
            install_path = file.path(normalizePath(Sys.getenv("RUNNER_TEMP"), winslash = "/"), "package")
          )
          covr::to_cobertura(cov)
        shell: Rscript {0}

      - uses: codecov/codecov-action@v4
        with:
          # Fail on error, except on pull requests where no Codecov token is available
          fail_ci_if_error: ${{ github.event_name != 'pull_request' || secrets.CODECOV_TOKEN }}
          file: ./cobertura.xml
          plugin: noop
          disable_search: true
          token: ${{ secrets.CODECOV_TOKEN }}

      - name: Show testthat output
        if: always()
        run: |
          ## --------------------------------------------------------------------
          find '${{ runner.temp }}/package' -name 'testthat.Rout*' -exec cat '{}' \; || true
        shell: bash

      - name: Upload test results
        if: failure()
        uses: actions/upload-artifact@v4
        with:
          name: coverage-test-failures
          path: ${{ runner.temp }}/package
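The coverage step in this workflow can also be reproduced locally before pushing. A minimal sketch, assuming covr is installed and the working directory is the package root (the `install_path` argument used in CI is only needed there to surface logs):

```r
# Local sketch of the workflow's "Test coverage" step (assumes the covr
# package is installed and the working directory is the package root).
cov <- covr::package_coverage(quiet = FALSE, clean = FALSE)

# Print a per-file coverage summary, then write the Cobertura XML file
# that the codecov-action step uploads in CI.
print(cov)
covr::to_cobertura(cov, filename = "cobertura.xml")
```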
4 changes: 3 additions & 1 deletion DESCRIPTION
@@ -19,4 +19,6 @@ Imports:
stringr,
tidyr
Suggests:
randomForest
randomForest,
testthat (>= 3.0.0)
Config/testthat/edition: 3
1 change: 1 addition & 0 deletions NEWS.md
@@ -3,6 +3,7 @@
- Updated inputs in prep_training_data to match fdasrvf
- Added examples to documentation
- Cleaned up wording in documentation a bit
- Added tests

# veesa 0.1.2

2 changes: 2 additions & 0 deletions README.Rmd
@@ -7,6 +7,8 @@ always_allow_html: yes
<!-- badges: start -->
[![CRAN status](https://www.r-pkg.org/badges/version/veesa)](https://CRAN.R-project.org/package=veesa)
[![R-CMD-check](https://github.com/sandialabs/veesa/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/sandialabs/veesa/actions/workflows/R-CMD-check.yaml)
[![test-coverage](https://github.com/sandialabs/veesa/actions/workflows/test-coverage.yaml/badge.svg)](https://github.com/sandialabs/veesa/actions/workflows/test-coverage.yaml)
[![Codecov test coverage](https://codecov.io/gh/sandialabs/veesa/branch/master/graph/badge.svg)](https://app.codecov.io/gh/sandialabs/veesa?branch=master)
<!-- badges: end -->

```{r setup, include = FALSE}
3 changes: 3 additions & 0 deletions README.md
@@ -6,6 +6,9 @@ VEESA R Package
[![CRAN
status](https://www.r-pkg.org/badges/version/veesa)](https://CRAN.R-project.org/package=veesa)
[![R-CMD-check](https://github.com/sandialabs/veesa/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/sandialabs/veesa/actions/workflows/R-CMD-check.yaml)
[![test-coverage](https://github.com/sandialabs/veesa/actions/workflows/test-coverage.yaml/badge.svg)](https://github.com/sandialabs/veesa/actions/workflows/test-coverage.yaml)
[![Codecov test
coverage](https://codecov.io/gh/sandialabs/veesa/branch/master/graph/badge.svg)](https://app.codecov.io/gh/sandialabs/veesa?branch=master)
<!-- badges: end -->

# Set Up
Binary file modified README_files/figure-gfm/unnamed-chunk-10-1.png
Binary file modified README_files/figure-gfm/unnamed-chunk-10-2.png
Binary file modified README_files/figure-gfm/unnamed-chunk-10-3.png
Binary file modified README_files/figure-gfm/unnamed-chunk-15-1.png
Binary file modified README_files/figure-gfm/unnamed-chunk-15-2.png
Binary file modified README_files/figure-gfm/unnamed-chunk-15-3.png
Binary file modified README_files/figure-gfm/unnamed-chunk-16-1.png
Binary file modified README_files/figure-gfm/unnamed-chunk-16-2.png
Binary file modified README_files/figure-gfm/unnamed-chunk-16-3.png
Binary file modified README_files/figure-gfm/unnamed-chunk-18-1.png
Binary file modified README_files/figure-gfm/unnamed-chunk-18-2.png
Binary file modified README_files/figure-gfm/unnamed-chunk-18-3.png
Binary file modified README_files/figure-gfm/unnamed-chunk-20-1.png
Binary file modified README_files/figure-gfm/unnamed-chunk-20-2.png
Binary file modified README_files/figure-gfm/unnamed-chunk-21-1.png
Binary file modified README_files/figure-gfm/unnamed-chunk-21-2.png
Binary file modified README_files/figure-gfm/unnamed-chunk-23-1.png
Binary file modified README_files/figure-gfm/unnamed-chunk-23-2.png
Binary file modified README_files/figure-gfm/unnamed-chunk-4-1.png
Binary file modified README_files/figure-gfm/unnamed-chunk-9-1.png
Binary file modified README_files/figure-gfm/unnamed-chunk-9-2.png
Binary file modified README_files/figure-gfm/unnamed-chunk-9-3.png
12 changes: 12 additions & 0 deletions tests/testthat.R
@@ -0,0 +1,12 @@
# This file is part of the standard setup for testthat.
# It is recommended that you do not modify it.
#
# Where should you do additional test configuration?
# Learn more about the roles of various files in:
# * https://r-pkgs.org/testing-design.html#sec-tests-files-overview
# * https://testthat.r-lib.org/articles/special-files.html

library(testthat)
library(veesa)

test_check("veesa")
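During development the same suite can be run without reinstalling the package. A sketch using devtools (an assumption; any workflow that loads the source tree works):

```r
# Run the testthat suite against the in-development source tree
# (assumes devtools is installed and the package root is the
# working directory).
devtools::test()

# Or iterate on a single test file:
testthat::test_file("tests/testthat/test-compute_pfi.R")
```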
69 changes: 69 additions & 0 deletions tests/testthat/test-compute_pfi.R
@@ -0,0 +1,69 @@
library(testthat)
library(randomForest)
library(dplyr)

# Mock data for testing
set.seed(123)
mock_x <- data.frame(
  feature1 = rnorm(100),
  feature2 = rnorm(100),
  feature3 = rnorm(100)
)
mock_y <- factor(sample(c("A", "B"), 100, replace = TRUE))
mock_p <- factor(rbeta(n = 100, shape1 = 1, shape2 = 2))

# Train a random forest model
rf_model <- randomForest(
  formula = mock_y ~ .,
  data = mock_x
)

# Test 1: Check output structure
test_that("Output structure is correct", {
  result <- compute_pfi(
    x = mock_x,
    y = mock_y,
    f = rf_model,
    K = 5,
    metric = "accuracy"
  )
  expect_type(result, "list")
  expect_true("pfi" %in% names(result))
  expect_true("pfi_single_reps" %in% names(result))
})

# Test 2: Check dimensions of output
test_that("Output dimensions are correct", {
  result <- compute_pfi(
    x = mock_x,
    y = mock_y,
    f = rf_model,
    K = 5,
    metric = "accuracy"
  )
  expect_equal(length(result$pfi), ncol(mock_x)) # PFI should have length equal to number of features
  expect_equal(dim(result$pfi_single_reps), c(5, ncol(mock_x))) # Single reps should be K x p
})

# Test 3: Check for error with mismatched dimensions
test_that("Function throws error with mismatched dimensions", {
  expect_error(
    compute_pfi(
      x = mock_x,
      y = mock_y[1:50], # Mismatched length
      f = rf_model,
      K = 5,
      metric = "accuracy"
    ),
    "Number of observations in x and y disagree."
  )
})

# Test 4: Check for invalid metric
test_that("Function throws error with invalid metric", {
  expect_error(
    compute_pfi(
      x = mock_x,
      y = mock_y,
      f = rf_model,
      K = 5,
      metric = "invalid_metric"
    ),
    "'metric' specified incorrectly. Select either 'accuracy', 'logloss', or 'nmse'."
  )
})
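Test 2 pins the K x p shape of `pfi_single_reps`. The quantity under test is standard permutation feature importance; a generic sketch (not veesa's implementation — `pfi_sketch` is a hypothetical helper for illustration) shows where that shape comes from: each of the p features is permuted K times, and each repetition records the drop in accuracy.

```r
# Generic sketch of permutation feature importance (hypothetical helper,
# not veesa's compute_pfi): permute each feature's column K times and
# record the resulting drop in prediction accuracy.
pfi_sketch <- function(x, y, model, K = 5) {
  base_acc <- mean(predict(model, x) == y)
  reps <- sapply(seq_len(ncol(x)), function(j) {
    replicate(K, {
      x_perm <- x
      x_perm[[j]] <- sample(x_perm[[j]])   # break feature j's association with y
      base_acc - mean(predict(model, x_perm) == y)
    })
  })
  # reps is K x p; averaging over the K repetitions gives one score per feature
  list(pfi = colMeans(reps), pfi_single_reps = reps)
}
```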
64 changes: 64 additions & 0 deletions tests/testthat/test-plot_pc_directions.R
@@ -0,0 +1,64 @@
# library(testthat)
# library(ggplot2)
# library(dplyr)
# library(tidyr)
# library(purrr)
#
# # Mock data for testing
# set.seed(123)
# mock_f <- matrix(rnorm(1000), nrow = 100, ncol = 10) # 100 time points, 10 functions
# mock_time <- seq(0, 1, length.out = 100)
#
# # Mock training preparation data
# mock_fd <- prep_training_data(mock_f, mock_time, "jfpca")
#
# # Test 1: Check output structure
# test_that("Output is a ggplot object", {
#   result <- plot_pc_directions(
#     fpcs = 1,
#     fdasrvf = mock_fd$fpca_res,
#     fpca_method = "jfpca"
#   )
#   expect_s3_class(result, "ggplot")
# })
#
# # Test 2: Check error for invalid fpca_method
# test_that("Function throws error with invalid fpca_method", {
#   expect_error(plot_pc_directions(
#     fpcs = 1,
#     fdasrvf = mock_fd,
#     fpca_method = "invalid_method"
#   ), "'fpca_method' entered incorrectly. Must be 'jfpca', 'vfpca', or 'hfpca'.")
# })
#
# # Test 3: Check handling of multiple PCs
# test_that("Function handles multiple PCs correctly", {
#   result <- plot_pc_directions(
#     fpcs = 1:3,
#     fdasrvf = mock_fd$fpca_res,
#     fpca_method = "jfpca"
#   )
#   expect_s3_class(result, "ggplot")
# })
#
# # Test 4: Check for correct number of lines in the plot
# test_that("Plot contains correct number of lines", {
#   result <- plot_pc_directions(
#     fpcs = 1,
#     fdasrvf = mock_fd$fpca_res,
#     fpca_method = "jfpca"
#   )
#   # Check the number of lines in the plot
#   expect_equal(length(result$layers), 1) # Should have one layer for the line
# })
#
# # Test 5: Check for correct alpha values
# test_that("Function applies correct alpha values", {
#   result <- plot_pc_directions(
#     fpcs = 1,
#     fdasrvf = mock_fd$fpca_res,
#     fpca_method = "jfpca",
#     alpha = 0.5
#   )
#   expect_equal(result$layers[[1]]$aes_params$alpha, 0.5) # Check if alpha is set correctly
# })
101 changes: 101 additions & 0 deletions tests/testthat/test-prep_testing_data.R
@@ -0,0 +1,101 @@
library(testthat)
library(fdasrvf)

# Mock data for testing (100 time points, 10 functions)
set.seed(123)
mock_f <- matrix(rnorm(1000), nrow = 100, ncol = 10)
mock_time <- seq(0, 1, length.out = 100)

# Mock training preparation data
mock_train_prep <- prep_training_data(mock_f, mock_time, "jfpca")

# Test 1: Check output structure and dimensions
test_that("Output structure is correct", {
  result <- prep_testing_data(
    f = mock_f,
    time = mock_time,
    train_prep = mock_train_prep,
    optim_method = "DP"
  )
  expect_type(result, "list")
  expect_true(all(
    c("time", "f0", "fn", "q0", "qn", "mqn", "gam", "coef") %in%
      names(result)
  ))
  expect_equal(dim(result$f0), dim(mock_f))
  expect_equal(dim(result$fn), dim(mock_f))
  expect_equal(dim(result$q0), dim(mock_f))
  expect_equal(dim(result$qn), dim(mock_f))
  expect_equal(length(result$mqn), nrow(mock_f))
  expect_equal(dim(result$coef), c(nrow(mock_f), ncol(mock_f)))
})

# Test 2: Check handling of different fpca types
test_that("Function handles different fpca types", {
  # hfPCA
  mock_train_prep <- prep_training_data(mock_f, mock_time, "hfpca")
  result_hfpca <- prep_testing_data(
    f = mock_f,
    time = mock_time,
    train_prep = mock_train_prep,
    optim_method = "DP"
  )
  expect_true("psi" %in% names(result_hfpca))
  expect_true("nu" %in% names(result_hfpca))
  # vfPCA
  mock_train_prep <- prep_training_data(mock_f, mock_time, "vfpca")
  result_vfpca <- prep_testing_data(
    f = mock_f,
    time = mock_time,
    train_prep = mock_train_prep,
    optim_method = "DP"
  )
  expect_true("gam" %in% names(result_vfpca))
})

# Test 3: Check for error with incorrect input dimensions
test_that("Function throws error with incorrect input dimensions", {
  expect_error(
    prep_testing_data(
      f = matrix(1:20, nrow = 5),
      time = mock_time,
      train_prep = mock_train_prep,
      optim_method = "DP"
    )
  )
})

# Test 4: Check for error with invalid optim_method
test_that("Function throws error with invalid optim_method", {
  expect_error(
    prep_testing_data(
      f = mock_f,
      time = mock_time,
      train_prep = mock_train_prep,
      optim_method = "invalid_method"
    )
  )
})

# Test 5: Check for correct output when using different optimization methods
test_that("Function works with different optimization methods", {
  result_dpo <- prep_testing_data(
    f = mock_f,
    time = mock_time,
    train_prep = mock_train_prep,
    optim_method = "DPo"
  )
  result_rbfgs <- prep_testing_data(
    f = mock_f,
    time = mock_time,
    train_prep = mock_train_prep,
    optim_method = "RBFGS"
  )
  expect_type(result_dpo, "list")
  expect_type(result_rbfgs, "list")
})