diff --git a/DESCRIPTION b/DESCRIPTION index 111fc3a4..c12810b6 100644 --- a/DESCRIPTION +++ b/DESCRIPTION @@ -20,9 +20,6 @@ Encoding: UTF-8 LazyData: true Depends: R (>= 3.5.0), - lubridate, - PerformanceAnalytics, - quantmod (>= 0.4-13) Imports: dplyr (>= 1.0.0), ggplot2 (>= 3.4.0), @@ -30,7 +27,10 @@ Imports: httr, curl, lazyeval, + lubridate, magrittr, + PerformanceAnalytics, + quantmod (>= 0.4-13), purrr, readr, readxl, @@ -41,11 +41,13 @@ Imports: timeDate, TTR, xts, - rlang + rlang, + zoo, + cli Suggests: alphavantager (>= 0.1.2), Quandl, - riingo, + riingo, tibbletime, broom, knitr, diff --git a/NAMESPACE b/NAMESPACE index bee7cfcc..00b90f3d 100644 --- a/NAMESPACE +++ b/NAMESPACE @@ -1,5 +1,6 @@ # Generated by roxygen2: do not edit by hand +S3method(print,tidyquant_conflicts) S3method(tq_mutate_,data.frame) S3method(tq_mutate_,default) S3method(tq_mutate_,tbl_df) @@ -157,6 +158,7 @@ export(scale_fill_tq) export(theme_tq) export(theme_tq_dark) export(theme_tq_green) +export(tidyquant_conflicts) export(tiingo_api_key) export(tq_exchange) export(tq_exchange_options) @@ -182,10 +184,16 @@ export(tq_transmute_) export(tq_transmute_fun_options) export(tq_transmute_xy) export(tq_transmute_xy_) -import(PerformanceAnalytics) +import(TTR) import(lubridate) import(quantmod) +import(xts) +import(zoo) +importFrom(TTR,BBands) +importFrom(TTR,MACD) importFrom(TTR,SMA) +importFrom(TTR,runCor) +importFrom(TTR,runSD) importFrom(ggplot2,`%+replace%`) importFrom(magrittr,"%$%") importFrom(magrittr,"%>%") @@ -193,4 +201,8 @@ importFrom(rlang,":=") importFrom(rlang,.data) importFrom(utils,download.file) importFrom(utils,read.csv) +importFrom(xts,lag.xts) +importFrom(xts,to.monthly) importFrom(xts,to.period) +importFrom(xts,xts) +importFrom(zoo,rollapply) diff --git a/NEWS.md b/NEWS.md index 9d9c5fb9..0af930fd 100644 --- a/NEWS.md +++ b/NEWS.md @@ -1,12 +1,23 @@ # tidyquant (development version) -- Remove the dependency on tidyverse -- tidyquant no longer loads lubridate as tidyverse 2.0 now loads lubridate +## Breaking changes + +- tidyquant no longer loads lubridate. (#237, @olivroy) + + If you use tidyquant with tidyverse, there is no change for you. + +tidyquant no longer loads as many packages. + +## Fixes + +- tidyquant startup messages mimic the tidyverse messages for clarity. (#163, #116) +- Remove the dependency on tidyverse (#236, @olivroy) +- tidyquant no longer loads lubridate as tidyverse 2.0 now loads lubridate. - Changed the `size` argument to `linewidth` for ggplot2 3.4.0 -- Removed the last tidyr and dplyr deprecated functions -- Add linewidth = to `geom_ma()` -- Move Quandl, riingo, and alphavantager to Suggests. tidyquant will not explictly install those, but you can install them from CRAN. -- Fixed CRAN alias +- Removed the last tidyr and dplyr deprecated functions. +- Add `linewidth` to `geom_ma()` +- Move `Quandl`, `riingo`, and `alphavantager` to Suggests. tidyquant will not explicitly install those, but you can install them from CRAN. +- Fixed CRAN package alias - FB to META change in `FANG` # tidyquant 1.0.7 @@ -96,7 +107,7 @@ Other changes: - Excel Date Math functions: `NET_WORKDAYS()`, `EOMONTH()` - __Financial Math Functions__ - `NPV()`, `IRR()`, `FV()`, `PV()`, `PMT()`, `RATE()` -* __NEW Tidyverse Functionality__ +* __NEW tidyverse Functionality__ - `summarise_by_time()` - This is a new time-based variant of `summarise()` that allows collapsing the time-series by "day", "week", "month", "quarter", "year", and more.
- Note: I will evaluate the need for `summarise_at_by_time()`, `summarise_all_by_time()`, and `summarise_if_by_time()` after the release of `dplyr` v1.0.0. @@ -119,7 +130,7 @@ Other changes: # tidyquant 0.5.10 -* `tq_get()` - Temporarily adjust tests for `tq_get(get = "dividends")` and `tq_get(get = "splits")` until API is stabilizes. Yahoo! Dividends and Splits intermitently returns errors. +* `tq_get()` - Temporarily adjust tests for `tq_get(get = "dividends")` and `tq_get(get = "splits")` until API is stabilizes. Yahoo! Dividends and Splits intermittently returns errors. * Fix documentation warnings during package build checks. Documentation moved from `tq_stocklist` to `?tq_index`. # tidyquant 0.5.9 @@ -144,9 +155,9 @@ _Visualizations & Color Palettes_ * `geom_candlestick` and `geom_barchart` - Issue #112. * Added color names of `theme_tq` palettes (`palette_light`, `palette_dark`, and `palette_green`) for easier identification. -_Compatability with `tidyr` v1.0.0_ +_Compatibility with `tidyr` v1.0.0_ -* Improvements to ensure compatability with `tidyr` v1.0.0 +* Improvements to ensure compatibility with `tidyr` v1.0.0 _[Potential Breaking Change] Move `tidyverse` to suggests_ @@ -289,9 +300,10 @@ _[Potential Breaking Change] Move `tidyverse` to suggests_ * Changed `tq_mutate()`, `tq_transform()`, `tq_mutate_xy()` and `tq_transform_xy()` arguments to be more obvious: * `x_fun` is now `ohlc_fun` for `tq_mutate()` and `tq_transform()` * `.x` is now `x` and `.y` is now `y` for `tq_mutate_xy()` and `tq_transform_xy()` -* Fixed duplication of column names during `tq_mutate`. Names are now sequentually indexed with duplicate names starting at `.1` suffix. +* Fixed duplication of column names during `tq_mutate`. Names are now sequentially indexed with duplicate names starting at `.1` suffix. # tidyquant 0.1.0 + * Initial release of `tidyquant`, for seamless quantitative financial analysis (`xts`, `quantmod`, `TTR`) package integration with the `tidyverse`. diff --git a/R/api-tiingo.R b/R/api-tiingo.R index 35251543..02109388 100644 --- a/R/api-tiingo.R +++ b/R/api-tiingo.R @@ -1,6 +1,6 @@ #' Set Tiingo API Key #' -#' Requires the riingo package to be installled. +#' Requires the riingo package to be installed. #' @param api_key Optionally passed parameter to set Tiingo `api_key`. #' #' @return Returns invisibly the currently set `api_key` diff --git a/R/attach.R b/R/attach.R new file mode 100644 index 00000000..88b06f23 --- /dev/null +++ b/R/attach.R @@ -0,0 +1,210 @@ +# Taken from tidyverse +core <- c("xts", "quantmod", "TTR", "PerformanceAnalytics") + +core_unloaded <- function() { + search <- paste0("package:", core) + core[!search %in% search()] +} + +# Attach the package from the same package library it was +# loaded from before. 
https://github.com/tidyverse/tidyverse/issues/171 +same_library <- function(pkg) { + loc <- if (pkg %in% loadedNamespaces()) dirname(getNamespaceInfo(pkg, "path")) + library(pkg, lib.loc = loc, character.only = TRUE, warn.conflicts = FALSE) +} + +tidyquant_attach <- function() { + to_load <- core_unloaded() + + suppressPackageStartupMessages( + lapply(to_load, same_library) + ) + + invisible(to_load) +} + +tidyquant_attach_message <- function(to_load) { + if (length(to_load) == 0) { + return(NULL) + } + + header <- cli::rule( + left = cli::style_bold("Attaching core tidyquant packages"), + right = paste0("tidyquant ", package_version_h("tidyquant")) + ) + + to_load <- sort(to_load) + versions <- vapply(to_load, package_version_h, character(1)) + + packages <- paste0( + cli::col_green(cli::symbol$tick), " ", cli::col_blue(format(to_load)), " ", + cli::ansi_align(versions, max(cli::ansi_nchar(versions))) + ) + + if (length(packages) %% 2 == 1) { + packages <- append(packages, "") + } + col1 <- seq_len(length(packages) / 2) + info <- paste0(packages[col1], " ", packages[-col1]) + + paste0(header, "\n", paste(info, collapse = "\n")) +} + +package_version_h <- function(pkg) { + highlight_version(utils::packageVersion(pkg)) +} + +highlight_version <- function(x) { + x <- as.character(x) + + is_dev <- function(x) { + x <- suppressWarnings(as.numeric(x)) + !is.na(x) & x >= 9000 + } + + pieces <- strsplit(x, ".", fixed = TRUE) + pieces <- lapply(pieces, function(x) ifelse(is_dev(x), cli::col_red(x), x)) + vapply(pieces, paste, collapse = ".", FUN.VALUE = character(1)) +} + +#' Conflicts between tidyquant and other packages +#' +#' This function lists all the conflicts between packages in tidyquant +#' and other packages that you have loaded. +#' +#' There are four conflicts that are deliberately ignored: \code{intersect}, +#' \code{union}, \code{setequal}, and \code{setdiff} from dplyr. These functions +#' make the base equivalents generic, so shouldn't negatively affect any +#' existing code. +#' +#' @export +#' @param only Set this to a character vector to restrict to conflicts only +#' with these packages.
+#' @examples +#' tidyquant_conflicts() +tidyquant_conflicts <- function(only = NULL) { + envs <- grep("^package:", search(), value = TRUE) + envs <- purrr::set_names(envs) + + if (!is.null(only)) { + only <- union(only, core) + envs <- envs[names(envs) %in% paste0("package:", only)] + } + + objs <- invert(lapply(envs, ls_env)) + + conflicts <- purrr::keep(objs, ~ length(.x) > 1) + + tidy_names <- paste0("package:", tidyquant_packages()) + conflicts <- purrr::keep(conflicts, ~ any(.x %in% tidy_names)) + + conflict_funs <- purrr::imap(conflicts, confirm_conflict) + conflict_funs <- purrr::compact(conflict_funs) + + structure(conflict_funs, class = "tidyquant_conflicts") +} + +tidyquant_conflict_message <- function(x) { + header <- cli::rule( + left = cli::style_bold("Conflicts"), + right = "tidyquant_conflicts()" + ) + + pkgs <- x %>% purrr::map(~ gsub("^package:", "", .)) + others <- pkgs %>% purrr::map(`[`, -1) + other_calls <- purrr::map2_chr( + others, names(others), + ~ paste0(cli::col_blue(.x), "::", .y, "()", collapse = ", ") + ) + + winner <- pkgs %>% purrr::map_chr(1) + funs <- format(paste0(cli::col_blue(winner), "::", cli::col_green(paste0(names(x), "()")))) + bullets <- paste0( + cli::col_red(cli::symbol$cross), " ", funs, " masks ", other_calls, + collapse = "\n" + ) + + conflicted <- paste0( + cli::col_cyan(cli::symbol$info), " ", + cli::format_inline("Use the {.href [conflicted package](http://conflicted.r-lib.org/)} to force all conflicts to become errors" + )) + + paste0( + header, "\n", + bullets, "\n", + conflicted + ) +} + +#' @export +print.tidyquant_conflicts <- function(x, ..., startup = FALSE) { + cli::cat_line(tidyquant_conflict_message(x)) + invisible(x) +} + +#' @importFrom magrittr %>% +confirm_conflict <- function(packages, name) { + # Only look at functions + objs <- packages %>% + purrr::map(~ get(name, pos = .)) %>% + purrr::keep(is.function) + + if (length(objs) <= 1) + return() + + # Remove identical functions + objs <- objs[!duplicated(objs)] + packages <- packages[!duplicated(packages)] + if (length(objs) == 1) + return() + + packages +} + +ls_env <- function(env) { + x <- ls(pos = env) + + # intersect, setdiff, setequal, union come from generics + if (env %in% c("package:dplyr", "package:lubridate")) { + x <- setdiff(x, c("intersect", "setdiff", "setequal", "union")) + } + + if (env == "package:lubridate") { + x <- setdiff(x, c( + "as.difftime", # lubridate makes into an S4 generic + "date" # matches base behavior + )) + } + + x +} + +inform_startup <- function(msg, ...) { + if (is.null(msg)) { + return() + } + if (isTRUE(getOption("tidyquant.quiet"))) { + return() + } + + rlang::inform(msg, ..., class = "packageStartupMessage") +} +tidyquant_packages <- function(include_self = TRUE) { + raw <- utils::packageDescription("tidyquant")$Imports + imports <- strsplit(raw, ",")[[1]] + parsed <- gsub("^\\s+|\\s+$", "", imports) + names <- vapply(strsplit(parsed, "\\s+"), "[[", 1, FUN.VALUE = character(1)) + + if (include_self) { + names <- c(names, "tidyquant") + } + + names +} + +invert <- function(x) { + if (length(x) == 0) return() + stacked <- utils::stack(x) + tapply(as.character(stacked$ind), stacked$values, list) +} + diff --git a/R/excel-date-functions.R b/R/excel-date-functions.R index 84fee9e6..a0ad1776 100644 --- a/R/excel-date-functions.R +++ b/R/excel-date-functions.R @@ -46,7 +46,7 @@ #' (as ordered factors) or numeric values. #' @param abbr A logical used for [MONTH()] and [WEEKDAY()]. If `label = TRUE`, used to determine if #' full names (e.g. 
Wednesday) or abbreviated names (e.g. Wed) should be returned. -#' @param include_year A logicial value used in [QUARTER()]. Determines whether or not to return 2020 Q3 as `3` or `2020.3`. +#' @param include_year A logical value used in [QUARTER()]. Determines whether or not to return 2020 Q3 as `3` or `2020.3`. #' @param fiscal_start A numeric value used in [QUARTER()]. Determines the fiscal-year starting quarter. #' @param by Used to determine the gap in Date Sequence calculations and value to round to in Date Collapsing operations. #' Acceptable values are: A character string, containing one of `"day"`, `"week"`, `"month"`, `"quarter"` or `"year"`. diff --git a/R/excel-financial-math-functions.R b/R/excel-financial-math-functions.R index 3db452ea..a4261f25 100644 --- a/R/excel-financial-math-functions.R +++ b/R/excel-financial-math-functions.R @@ -14,7 +14,7 @@ #' @param cashflow Cash flow values. When one value is provided, it's assumed constant cash flow. #' @param rate One or more rate. When one rate is provided it's assumed constant rate. #' @param nper Number of periods. When `nper`` is provided, the cashflow values and rate are assumed constant. -#' @param pv Present value. Initial investments (cash inflows) are typcially a negative value. +#' @param pv Present value. Initial investments (cash inflows) are typically a negative value. #' @param fv Future value. Cash outflows are typically a positive value. #' @param pmt Number of payments per period. #' @param type Should payments (`pmt`) occur at the beginning (`type = 0`) or diff --git a/R/ggplot-geom_bbands.R b/R/ggplot-geom_bbands.R index 2fd4f431..f746fd56 100644 --- a/R/ggplot-geom_bbands.R +++ b/R/ggplot-geom_bbands.R @@ -67,7 +67,7 @@ #' @examples #' library(dplyr) #' library(ggplot2) -#' +#' library(lubridate) #' #' AAPL <- tq_get("AAPL", from = "2013-01-01", to = "2016-12-31") #' diff --git a/R/ggplot-geom_chart.R b/R/ggplot-geom_chart.R index 22110513..05f4c755 100644 --- a/R/ggplot-geom_chart.R +++ b/R/ggplot-geom_chart.R @@ -11,12 +11,12 @@ #' @inheritParams geom_ma #' @inheritParams ggplot2::geom_linerange #' @param colour_up,colour_down Select colors to be applied based on price movement -#' from open to close. If close >= open, `colour_up` is used. Otherwise, -#' `colour_down` is used. The default is "darkblue" and "red", respectively. +#' from open to close. If `close >= open`, `colour_up` is used. Otherwise, +#' `colour_down` is used. The default is `"darkblue"` and `"red"`, respectively. #' @param fill_up,fill_down Select fills to be applied based on price movement #' from open to close. If close >= open, `fill_up` is used. Otherwise, -#' `fill_down` is used. The default is "darkblue" and "red", respectively. -#' Only affects `geom_candlestick`. +#' `fill_down` is used. The default is `"darkblue"` and "red", respectively. +#' Only affects `geom_candlestick()`. #' #' @section Aesthetics: #' The following aesthetics are understood (required are in bold): @@ -46,6 +46,7 @@ #' @examples #' library(dplyr) #' library(ggplot2) +#' library(lubridate) #' #' AAPL <- tq_get("AAPL", from = "2013-01-01", to = "2016-12-31") #' diff --git a/R/ggplot-geom_ma.R b/R/ggplot-geom_ma.R index efb66307..d47c3ae9 100644 --- a/R/ggplot-geom_ma.R +++ b/R/ggplot-geom_ma.R @@ -49,7 +49,7 @@ #' #' @param inherit.aes If `FALSE`, overrides the default aesthetics, #' rather than combining with them. 
This is most useful for helper functions -#' that define both data and aesthetics and shouldn't inherit behaviour from +#' that define both data and aesthetics and shouldn't inherit behavior from #' the default plot specification, e.g. [ggplot2::borders()]. #' #' @param ma_fun The function used to calculate the moving average. Seven options are @@ -107,7 +107,7 @@ #' geom_ma(ma_fun = SMA, n = 50) + # Plot 50-day SMA #' geom_ma(ma_fun = SMA, n = 200, color = "red") + # Plot 200-day SMA #' coord_x_date(xlim = c("2016-01-01", "2016-12-31"), -#' ylim = c(75, 125)) # Zoom in +#' ylim = c(20, 30)) # Zoom in #' #' # EVWMA #' AAPL %>% @@ -115,7 +115,7 @@ #' geom_line() + # Plot stock price #' geom_ma(aes(volume = volume), ma_fun = EVWMA, n = 50) + # Plot 50-day EVWMA #' coord_x_date(xlim = c("2016-01-01", "2016-12-31"), -#' ylim = c(75, 125)) # Zoom in +#' ylim = c(20, 30)) # Zoom in #' @rdname geom_ma @@ -182,7 +182,7 @@ StatMA <- ggplot2::ggproto("StatSMA", ggplot2::Stat, StatMA_vol <- ggplot2::ggproto("StatSMA_vol", ggplot2::Stat, required_aes = c("x", "y", "volume"), - + dropped_aes = "volume", compute_group = function(data, scales, params, ma_fun, n = 10, wilder = FALSE, ratio = NULL, diff --git a/R/sysdata.rda b/R/sysdata.rda index 95fb891f..19abbdb3 100644 Binary files a/R/sysdata.rda and b/R/sysdata.rda differ diff --git a/R/tidyquant.R b/R/tidyquant-package.R similarity index 83% rename from R/tidyquant.R rename to R/tidyquant-package.R index f3755714..84ab764d 100644 --- a/R/tidyquant.R +++ b/R/tidyquant-package.R @@ -24,13 +24,19 @@ #' To learn more about tidyquant, start with the vignettes: #' `browseVignettes(package = "tidyquant")` #' -#' +"_PACKAGE" + +## usethis namespace: start #' @import quantmod #' @import lubridate -#' @import PerformanceAnalytics -#' @importFrom rlang ":=" .data +#' @import TTR +#' @import zoo +#' @import xts +#' @importFrom rlang := .data #' @importFrom magrittr %$% #' @importFrom utils download.file read.csv -#' @importFrom TTR SMA -#' @importFrom xts to.period -"_PACKAGE" +#' @importFrom TTR SMA runSD MACD runCor BBands +#' @importFrom xts to.period xts to.monthly lag.xts +#' @importFrom zoo rollapply +## usethis namespace: end +NULL diff --git a/R/tq_get.R b/R/tq_get.R index 84e330cb..9eecc685 100644 --- a/R/tq_get.R +++ b/R/tq_get.R @@ -48,15 +48,13 @@ #' @param ... Additional parameters passed to the "wrapped" #' function. Investigate underlying functions to see full list of arguments. #' Common optional parameters include: -#' \itemize{ -#' \item `from`: Standardized for time series functions in `quantmod`, `quandl`, `tiingo`, `alphavantager` packages. +#' +#' * `from`: Standardized for time series functions in `quantmod`, `quandl`, `tiingo`, `alphavantager` packages. #' A character string representing a start date in #' YYYY-MM-DD format. -#' \item `to`: Standardized for time series functions in `quantmod`, `quandl`, `tiingo`, `alphavantager` packages. +#' * `to`: Standardized for time series functions in `quantmod`, `quandl`, `tiingo`, `alphavantager` packages. #' A character string representing a end date in #' YYYY-MM-DD format. -#' } -#' #' #' @return Returns data in the form of a `tibble` object. #' @@ -67,7 +65,7 @@ #' in other packages. #' The results are always returned as a `tibble`. The advantages #' are (1) only one function is needed for all data sources and (2) the function -#' can be seemlessly used with the tidyverse: `purrr`, `tidyr`, and +#' can be seamlessly used with the tidyverse: `purrr`, `tidyr`, and #' `dplyr` verbs. 
#' #' `tq_get_options()` returns a list of valid `get` options you can diff --git a/R/tq_performance.R b/R/tq_performance.R index 16669805..8281db5a 100644 --- a/R/tq_performance.R +++ b/R/tq_performance.R @@ -243,65 +243,7 @@ tq_performance_.grouped_df <- function(data, Ra, Rb = NULL, performance_fun, ... #' @rdname tq_performance #' @export tq_performance_fun_options <- function() { - - # Performance Analytics functions - pkg_regex_table <- "^table" - funs_table <- stringr::str_detect(ls("package:PerformanceAnalytics"), pkg_regex_table) - funs_table <- ls("package:PerformanceAnalytics")[funs_table] - funs_table <- funs_table[!stringr::str_detect(funs_table, "(Drawdowns$|CalendarReturns$|ProbOutPerformance$)")] # remove table.Drawdowns - - pkg_regex_capm <- "^CAPM" - funs_capm <- stringr::str_detect(ls("package:PerformanceAnalytics"), pkg_regex_capm) - funs_capm <- c(ls("package:PerformanceAnalytics")[funs_capm], "TimingRatio", "MarketTiming") - - pkg_regex_sfm <- "^SFM" - funs_sfm <- stringr::str_detect(ls("package:PerformanceAnalytics"), pkg_regex_sfm) - funs_sfm <- ls("package:PerformanceAnalytics")[funs_sfm] - - funs_VaR <- c("VaR", "ES", "ETL", "CDD", "CVaR") - - funs_descriptive <- c("mean", "sd", "min", "max", "cor", "mean.geometric", "mean.stderr", "mean.LCL", "mean.UCL") - - funs_annualized <- c("Return.annualized", "Return.annualized.excess", "sd.annualized", "SharpeRatio.annualized") - - funs_moments <- c("var", "cov", "skewness", "kurtosis", "CoVariance", "CoSkewness", "CoSkewnessMatrix", - "CoKurtosis", "CoKurtosisMatrix", "M3.MM", "M4.MM", "BetaCoVariance", "BetaCoSkewness", "BetaCoKurtosis") - - funs_drawdown <- c("AverageDrawdown", "AverageLength", "AverageRecovery", "DrawdownDeviation", "DrawdownPeak", "maxDrawdown") - - funs_risk <- c("MeanAbsoluteDeviation", "Frequency", "SharpeRatio", "MSquared", "MSquaredExcess", "HurstIndex") - - funs_regression <- c("CAPM.alpha", "CAPM.beta", "CAPM.epsilon", "CAPM.jensenAlpha", "SystematicRisk", - "SpecificRisk", "TotalRisk", "TreynorRatio", "AppraisalRatio", "FamaBeta", - "Selectivity", "NetSelectivity") - - funs_rel_risk <- c("ActivePremium", "ActiveReturn", "TrackingError", "InformationRatio") - - funs_drw_dn <- c("PainIndex", "PainRatio", "CalmarRatio", "SterlingRatio", "BurkeRatio", "MartinRatio", "UlcerIndex") - - funs_dside_risk <- c("DownsideDeviation", "DownsidePotential", "DownsideFrequency", "SemiDeviation", "SemiVariance", - "UpsideRisk", "UpsidePotentialRatio", "UpsideFrequency", - "BernardoLedoitRatio", "DRatio", "Omega", "OmegaSharpeRatio", "OmegaExcessReturn", "SortinoRatio", "M2Sortino", "Kappa", - "VolatilitySkewness", "AdjustedSharpeRatio", "SkewnessKurtosisRatio", "ProspectRatio") - - funs_misc <- c("KellyRatio", "Modigliani", "UpDownRatios") - - fun_options <- list(table.funs = funs_table, - CAPM.funs = funs_capm, - SFM.funs = funs_sfm, - descriptive.funs = funs_descriptive, - annualized.funs = funs_annualized, - VaR.funs = funs_VaR, - moment.funs = funs_moments, - drawdown.funs = funs_drawdown, - Bacon.risk.funs = funs_risk, - Bacon.regression.funs = funs_regression, - Bacon.relative.risk.funs = funs_rel_risk, - Bacon.drawdown.funs = funs_drw_dn, - Bacon.downside.risk.funs = funs_dside_risk, - misc.funs = funs_misc) - - fun_options + tq_performance_options } # Utility --------------------------------------------------------------------------------------------------- diff --git a/R/tq_transmute.R b/R/tq_transmute.R index 1c0c42fa..7f18cde1 100644 --- a/R/tq_transmute.R +++ b/R/tq_transmute.R @@ -278,37 +278,10 
@@ tq_transmute_xy_.grouped_df <- function(data, x, y = NULL, mutate_fun, col_renam #' @rdname tq_mutate #' @export tq_transmute_fun_options <- function() { - - # zoo rollapply functions - pkg_regex_zoo <- "roll" - funs_zoo <- ls("package:zoo")[stringr::str_detect(ls("package:zoo"), pkg_regex_zoo)] - - # xts apply.period, to.period, lag and diff functions - pkg_regex_xts <- "apply|to\\.|period|lag|diff" - funs_xts <- ls("package:xts")[stringr::str_detect(ls("package:xts"), pkg_regex_xts)] - - # quantmod periodReturns, Delt, series functions - pkg_regex_quantmod <- "Return|Delt|Lag|Next|^Op..|^Cl..|^Hi..|^Lo..|^series" - funs_quantmod <- ls("package:quantmod")[stringr::str_detect(ls("package:quantmod"), pkg_regex_quantmod)] - - # TTR functions - pkg_regex_ttr <- "^get*|^stock|^naCh" # NOT these - funs_ttr <- ls("package:TTR")[!stringr::str_detect(ls("package:TTR"), pkg_regex_ttr)] - - # PerformanceAnalytics apply.rolling, Return... - pkg_PA <- "package:PerformanceAnalytics" - pkg_regex_PA <- "Return.annualized|Return.excess|Return.Geltner|Return.cumulative|Return.clean|zerofill" - funs_PA <- ls(pkg_PA)[stringr::str_detect(ls(pkg_PA), pkg_regex_PA)] - - - - fun_options <- list(zoo = funs_zoo, - xts = funs_xts, - quantmod = funs_quantmod, - TTR = funs_ttr, - PerformanceAnalytics = funs_PA) - - fun_options + # Moved to an internal dataset to avoid requiring to load (and modify a user namespace) + # This needs to be updated if new functions are added / removed. + # Run data-raw/fun-options.R script to regenerate this. + tq_transmute_options } # Checks ---------------------------------------------------------------------------------------------------- diff --git a/R/zzz.R b/R/zzz.R index e8cfae08..d30c3c28 100644 --- a/R/zzz.R +++ b/R/zzz.R @@ -32,3 +32,25 @@ # }) # # } + +.onAttach <- function(...) 
{ + attached <- tidyquant_attach() + if (!is_loading_for_tests()) { + inform_startup(tidyquant_attach_message(attached)) + } + + if (!is_attached("conflicted") && !is_loading_for_tests()) { + conflicts <- tidyquant_conflicts() + inform_startup(tidyquant_conflict_message(conflicts)) + } +} + +is_attached <- function(x) { + paste0("package:", x) %in% search() +} + +is_loading_for_tests <- function() { + !interactive() && identical(Sys.getenv("DEVTOOLS_LOAD"), "tidyquant") +} + + diff --git a/README.Rmd b/README.Rmd index 62d27f30..933aa8fc 100644 --- a/README.Rmd +++ b/README.Rmd @@ -20,7 +20,7 @@ knitr::opts_chunk$set( [![R-CMD-check](https://github.com/business-science/tidyquant/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/business-science/tidyquant/actions/workflows/R-CMD-check.yaml) -[![codecov](https://codecov.io/gh/business-science/tidyquant/branch/master/graph/badge.svg)](https://app.codecov.io/gh/business-science/tidyquant) +[![Codecov](https://codecov.io/gh/business-science/tidyquant/branch/master/graph/badge.svg)](https://app.codecov.io/gh/business-science/tidyquant) [![CRAN_Status_Badge](http://www.r-pkg.org/badges/version/tidyquant)](https://cran.r-project.org/package=tidyquant) ![](http://cranlogs.r-pkg.org/badges/tidyquant?color=brightgreen) ![](http://cranlogs.r-pkg.org/badges/grand-total/tidyquant?color=brightgreen) @@ -46,7 +46,7 @@ Our short introduction to `tidyquant` on * __A few core functions with a lot of power__ * __Integrates the quantitative analysis functionality of `zoo`, `xts`, `quantmod`, `TTR`, and _now_ `PerformanceAnalytics`__ -* __Designed for modeling and scaling analyses using the the `tidyverse` tools in [_R for Data Science_](https://r4ds.hadley.nz/)__ +* __Designed for modeling and scaling analyses using the `tidyverse` tools in [_R for Data Science_](https://r4ds.hadley.nz/)__ * __Implements `ggplot2` functionality for beautiful and meaningful financial visualizations__ * __User-friendly documentation to get you up to speed quickly!__ diff --git a/README.md b/README.md index a7312040..2a2268b0 100644 --- a/README.md +++ b/README.md @@ -6,7 +6,7 @@ [![R-CMD-check](https://github.com/business-science/tidyquant/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/business-science/tidyquant/actions/workflows/R-CMD-check.yaml) -[![codecov](https://codecov.io/gh/business-science/tidyquant/branch/master/graph/badge.svg)](https://app.codecov.io/gh/business-science/tidyquant) +[![Codecov](https://codecov.io/gh/business-science/tidyquant/branch/master/graph/badge.svg)](https://app.codecov.io/gh/business-science/tidyquant) [![CRAN_Status_Badge](http://www.r-pkg.org/badges/version/tidyquant)](https://cran.r-project.org/package=tidyquant) ![](http://cranlogs.r-pkg.org/badges/tidyquant?color=brightgreen) ![](http://cranlogs.r-pkg.org/badges/grand-total/tidyquant?color=brightgreen) @@ -32,7 +32,7 @@ perform complete financial analyses in the `tidyverse`. 
- **A few core functions with a lot of power** - **Integrates the quantitative analysis functionality of `zoo`, `xts`, `quantmod`, `TTR`, and *now* `PerformanceAnalytics`** -- **Designed for modeling and scaling analyses using the the `tidyverse` +- **Designed for modeling and scaling analyses using the `tidyverse` tools in [*R for Data Science*](https://r4ds.hadley.nz/)** - **Implements `ggplot2` functionality for beautiful and meaningful financial visualizations** diff --git a/data-raw/data_raw_scripts.R b/data-raw/data_raw_scripts.R index 5149dabb..2fd49520 100644 --- a/data-raw/data_raw_scripts.R +++ b/data-raw/data_raw_scripts.R @@ -62,9 +62,124 @@ run_yahoo_finance_tags <- function() { yahoo_tags } +# TQ Performance ------- + # Script ---- stock_indexes <- re_run_fallback() yahoo_tags <- run_yahoo_finance_tags() -usethis::use_data(stock_indexes, yahoo_tags, internal = TRUE, overwrite = TRUE) +if (FALSE) { + # If yahoo tags and stock indexes are updated + readr::write_rds(stock_indexes, "data-raw/stock_index.RDS") + readr::write_rds(yahoo_tags, "data-raw/yahoo_tags.RDS") +} + +# To improve reproducibility, I saved stock_indexes and yahoo_tags as RDS files. +stock_indexes <- readr::read_rds("data-raw/stock_index.RDS") +yahoo_tags <- readr::read_rds("data-raw/yahoo_tags.RDS") + +library(PerformanceAnalytics) +pkg_regex_table <- "^table" +funs_table <- stringr::str_detect(ls("package:PerformanceAnalytics"), pkg_regex_table) +funs_table <- ls("package:PerformanceAnalytics")[funs_table] +funs_table <- funs_table[!stringr::str_detect(funs_table, "(Drawdowns$|CalendarReturns$|ProbOutPerformance$)")] # remove table.Drawdowns + +pkg_regex_capm <- "^CAPM" +funs_capm <- stringr::str_detect(ls("package:PerformanceAnalytics"), pkg_regex_capm) +funs_capm <- c(ls("package:PerformanceAnalytics")[funs_capm], "TimingRatio", "MarketTiming") + +pkg_regex_sfm <- "^SFM" +funs_sfm <- stringr::str_detect(ls("package:PerformanceAnalytics"), pkg_regex_sfm) +funs_sfm <- ls("package:PerformanceAnalytics")[funs_sfm] + +funs_VaR <- c("VaR", "ES", "ETL", "CDD", "CVaR") + +funs_descriptive <- c("mean", "sd", "min", "max", "cor", "mean.geometric", "mean.stderr", "mean.LCL", "mean.UCL") + +funs_annualized <- c("Return.annualized", "Return.annualized.excess", "sd.annualized", "SharpeRatio.annualized") + +funs_moments <- c( + "var", "cov", "skewness", "kurtosis", "CoVariance", "CoSkewness", "CoSkewnessMatrix", + "CoKurtosis", "CoKurtosisMatrix", "M3.MM", "M4.MM", "BetaCoVariance", "BetaCoSkewness", "BetaCoKurtosis" +) + +funs_drawdown <- c("AverageDrawdown", "AverageLength", "AverageRecovery", "DrawdownDeviation", "DrawdownPeak", "maxDrawdown") + +funs_risk <- c("MeanAbsoluteDeviation", "Frequency", "SharpeRatio", "MSquared", "MSquaredExcess", "HurstIndex") + +funs_regression <- c( + "CAPM.alpha", "CAPM.beta", "CAPM.epsilon", "CAPM.jensenAlpha", "SystematicRisk", + "SpecificRisk", "TotalRisk", "TreynorRatio", "AppraisalRatio", "FamaBeta", + "Selectivity", "NetSelectivity" +) + +funs_rel_risk <- c("ActivePremium", "ActiveReturn", "TrackingError", "InformationRatio") + +funs_drw_dn <- c("PainIndex", "PainRatio", "CalmarRatio", "SterlingRatio", "BurkeRatio", "MartinRatio", "UlcerIndex") + +funs_dside_risk <- c( + "DownsideDeviation", "DownsidePotential", "DownsideFrequency", "SemiDeviation", "SemiVariance", + "UpsideRisk", "UpsidePotentialRatio", "UpsideFrequency", + "BernardoLedoitRatio", "DRatio", "Omega", "OmegaSharpeRatio", "OmegaExcessReturn", "SortinoRatio", "M2Sortino", "Kappa", + "VolatilitySkewness", 
"AdjustedSharpeRatio", "SkewnessKurtosisRatio", "ProspectRatio" +) + +funs_misc <- c("KellyRatio", "Modigliani", "UpDownRatios") + +tq_performance_options <- list( + table.funs = funs_table, + CAPM.funs = funs_capm, + SFM.funs = funs_sfm, + descriptive.funs = funs_descriptive, + annualized.funs = funs_annualized, + VaR.funs = funs_VaR, + moment.funs = funs_moments, + drawdown.funs = funs_drawdown, + Bacon.risk.funs = funs_risk, + Bacon.regression.funs = funs_regression, + Bacon.relative.risk.funs = funs_rel_risk, + Bacon.drawdown.funs = funs_drw_dn, + Bacon.downside.risk.funs = funs_dside_risk, + misc.funs = funs_misc +) + +library(zoo) +library(quantmod) +library(xts) +library(PerformanceAnalytics) +library(TTR) +# zoo rollapply functions +pkg_regex_zoo <- "roll" +funs_zoo <- ls("package:zoo")[stringr::str_detect(ls("package:zoo"), pkg_regex_zoo)] + +# xts apply.period, to.period, lag and diff functions +pkg_regex_xts <- "apply|to\\.|period|lag|diff" +funs_xts <- ls("package:xts")[stringr::str_detect(ls("package:xts"), pkg_regex_xts)] + +# quantmod periodReturns, Delt, series functions +pkg_regex_quantmod <- "Return|Delt|Lag|Next|^Op..|^Cl..|^Hi..|^Lo..|^series" +funs_quantmod <- ls("package:quantmod")[stringr::str_detect(ls("package:quantmod"), pkg_regex_quantmod)] + +# TTR functions +pkg_regex_ttr <- "^get*|^stock|^naCh" # NOT these +funs_ttr <- ls("package:TTR")[!stringr::str_detect(ls("package:TTR"), pkg_regex_ttr)] + +# PerformanceAnalytics apply.rolling, Return... +pkg_PA <- "package:PerformanceAnalytics" +pkg_regex_PA <- "Return.annualized|Return.excess|Return.Geltner|Return.cumulative|Return.clean|zerofill" +funs_PA <- ls(pkg_PA)[stringr::str_detect(ls(pkg_PA), pkg_regex_PA)] + + + +tq_transmute_options <- list( + zoo = funs_zoo, + xts = funs_xts, + quantmod = funs_quantmod, + TTR = funs_ttr, + PerformanceAnalytics = funs_PA +) + + + +usethis::use_data(stock_indexes, yahoo_tags,tq_performance_options, tq_transmute_options, internal = TRUE, overwrite = TRUE) diff --git a/data-raw/stock_index.RDS b/data-raw/stock_index.RDS new file mode 100644 index 00000000..4490a69f Binary files /dev/null and b/data-raw/stock_index.RDS differ diff --git a/data-raw/yahoo_tags.RDS b/data-raw/yahoo_tags.RDS new file mode 100644 index 00000000..bac70188 Binary files /dev/null and b/data-raw/yahoo_tags.RDS differ diff --git a/man/excel_date_functions.Rd b/man/excel_date_functions.Rd index 4f5fc3b1..2700501b 100644 --- a/man/excel_date_functions.Rd +++ b/man/excel_date_functions.Rd @@ -223,7 +223,7 @@ ROUND_YEAR(x, ...) \item{abbr}{A logical used for \code{\link[=MONTH]{MONTH()}} and \code{\link[=WEEKDAY]{WEEKDAY()}}. If \code{label = TRUE}, used to determine if full names (e.g. Wednesday) or abbreviated names (e.g. Wed) should be returned.} -\item{include_year}{A logicial value used in \code{\link[=QUARTER]{QUARTER()}}. Determines whether or not to return 2020 Q3 as \code{3} or \code{2020.3}.} +\item{include_year}{A logical value used in \code{\link[=QUARTER]{QUARTER()}}. Determines whether or not to return 2020 Q3 as \code{3} or \code{2020.3}.} \item{fiscal_start}{A numeric value used in \code{\link[=QUARTER]{QUARTER()}}. Determines the fiscal-year starting quarter.} diff --git a/man/excel_financial_math_functions.Rd b/man/excel_financial_math_functions.Rd index 157c5914..85c4e4fc 100644 --- a/man/excel_financial_math_functions.Rd +++ b/man/excel_financial_math_functions.Rd @@ -29,7 +29,7 @@ RATE(nper, pmt, pv, fv = 0, type = 0) \item{nper}{Number of periods. 
When `nper`` is provided, the cashflow values and rate are assumed constant.} -\item{pv}{Present value. Initial investments (cash inflows) are typcially a negative value.} +\item{pv}{Present value. Initial investments (cash inflows) are typically a negative value.} \item{pmt}{Number of payments per period.} diff --git a/man/geom_bbands.Rd b/man/geom_bbands.Rd index 9cd443e0..0342ca3b 100644 --- a/man/geom_bbands.Rd +++ b/man/geom_bbands.Rd @@ -82,7 +82,7 @@ display.} \item{inherit.aes}{If \code{FALSE}, overrides the default aesthetics, rather than combining with them. This is most useful for helper functions -that define both data and aesthetics and shouldn't inherit behaviour from +that define both data and aesthetics and shouldn't inherit behavior from the default plot specification, e.g. \code{\link[ggplot2:borders]{ggplot2::borders()}}.} \item{ma_fun}{The function used to calculate the moving average. Seven options are @@ -162,7 +162,7 @@ The following aesthetics are understood (required are in bold): \examples{ library(dplyr) library(ggplot2) - +library(lubridate) AAPL <- tq_get("AAPL", from = "2013-01-01", to = "2016-12-31") diff --git a/man/geom_chart.Rd b/man/geom_chart.Rd index 525f16aa..24785acc 100644 --- a/man/geom_chart.Rd +++ b/man/geom_chart.Rd @@ -76,17 +76,17 @@ display.} \item{inherit.aes}{If \code{FALSE}, overrides the default aesthetics, rather than combining with them. This is most useful for helper functions -that define both data and aesthetics and shouldn't inherit behaviour from +that define both data and aesthetics and shouldn't inherit behavior from the default plot specification, e.g. \code{\link[ggplot2:borders]{ggplot2::borders()}}.} \item{colour_up, colour_down}{Select colors to be applied based on price movement -from open to close. If close >= open, \code{colour_up} is used. Otherwise, -\code{colour_down} is used. The default is "darkblue" and "red", respectively.} +from open to close. If \code{close >= open}, \code{colour_up} is used. Otherwise, +\code{colour_down} is used. The default is \code{"darkblue"} and \code{"red"}, respectively.} \item{fill_up, fill_down}{Select fills to be applied based on price movement from open to close. If close >= open, \code{fill_up} is used. Otherwise, -\code{fill_down} is used. The default is "darkblue" and "red", respectively. -Only affects \code{geom_candlestick}.} +\code{fill_down} is used. The default is \code{"darkblue"} and "red", respectively. +Only affects \code{geom_candlestick()}.} \item{...}{Other arguments passed on to \code{\link[ggplot2:layer]{ggplot2::layer()}}. These are often aesthetics, used to set an aesthetic to a fixed value, like @@ -121,6 +121,7 @@ The following aesthetics are understood (required are in bold): \examples{ library(dplyr) library(ggplot2) +library(lubridate) AAPL <- tq_get("AAPL", from = "2013-01-01", to = "2016-12-31") diff --git a/man/geom_ma.Rd b/man/geom_ma.Rd index 8d56f561..80dfd446 100644 --- a/man/geom_ma.Rd +++ b/man/geom_ma.Rd @@ -72,7 +72,7 @@ display.} \item{inherit.aes}{If \code{FALSE}, overrides the default aesthetics, rather than combining with them. This is most useful for helper functions -that define both data and aesthetics and shouldn't inherit behaviour from +that define both data and aesthetics and shouldn't inherit behavior from the default plot specification, e.g. \code{\link[ggplot2:borders]{ggplot2::borders()}}.} \item{ma_fun}{The function used to calculate the moving average. 
Seven options are @@ -150,7 +150,7 @@ AAPL \%>\% geom_ma(ma_fun = SMA, n = 50) + # Plot 50-day SMA geom_ma(ma_fun = SMA, n = 200, color = "red") + # Plot 200-day SMA coord_x_date(xlim = c("2016-01-01", "2016-12-31"), - ylim = c(75, 125)) # Zoom in + ylim = c(20, 30)) # Zoom in # EVWMA AAPL \%>\% @@ -158,7 +158,7 @@ AAPL \%>\% geom_line() + # Plot stock price geom_ma(aes(volume = volume), ma_fun = EVWMA, n = 50) + # Plot 50-day EVWMA coord_x_date(xlim = c("2016-01-01", "2016-12-31"), - ylim = c(75, 125)) # Zoom in + ylim = c(20, 30)) # Zoom in } \seealso{ See individual modeling functions for underlying parameters: diff --git a/man/tidyquant-package.Rd b/man/tidyquant-package.Rd index 0d92bd40..8982866f 100644 --- a/man/tidyquant-package.Rd +++ b/man/tidyquant-package.Rd @@ -1,5 +1,5 @@ % Generated by roxygen2: do not edit by hand -% Please edit documentation in R/tidyquant.R +% Please edit documentation in R/tidyquant-package.R \docType{package} \name{tidyquant-package} \alias{tidyquant} diff --git a/man/tidyquant_conflicts.Rd b/man/tidyquant_conflicts.Rd new file mode 100644 index 00000000..9bb262f4 --- /dev/null +++ b/man/tidyquant_conflicts.Rd @@ -0,0 +1,25 @@ +% Generated by roxygen2: do not edit by hand +% Please edit documentation in R/attach.R +\name{tidyquant_conflicts} +\alias{tidyquant_conflicts} +\title{Conflicts between tidyquant and other packages} +\usage{ +tidyquant_conflicts(only = NULL) +} +\arguments{ +\item{only}{Set this to a character vector to restrict to conflicts only +with these packages.} +} +\description{ +This function lists all the conflicts between packages in tidyquant +and other packages that you have loaded. +} +\details{ +There are four conflicts that are deliberately ignored: \code{intersect}, +\code{union}, \code{setequal}, and \code{setdiff} from dplyr. These functions +make the base equivalents generic, so shouldn't negatively affect any +existing code. +} +\examples{ +tidyquant_conflicts() +} diff --git a/man/tiingo_api_key.Rd b/man/tiingo_api_key.Rd index 615ca60e..be4cd4c1 100644 --- a/man/tiingo_api_key.Rd +++ b/man/tiingo_api_key.Rd @@ -13,7 +13,7 @@ tiingo_api_key(api_key) Returns invisibly the currently set \code{api_key} } \description{ -Requires the riingo package to be installled. +Requires the riingo package to be installed. } \details{ A wrapper for \code{riingo::ringo_set_token()} diff --git a/man/tq_get.Rd b/man/tq_get.Rd index 5bbca932..d13b0791 100644 --- a/man/tq_get.Rd +++ b/man/tq_get.Rd @@ -83,7 +83,7 @@ functions, \code{Quandl} functions, and also gets data from websources unavailab in other packages. The results are always returned as a \code{tibble}. The advantages are (1) only one function is needed for all data sources and (2) the function -can be seemlessly used with the tidyverse: \code{purrr}, \code{tidyr}, and +can be seamlessly used with the tidyverse: \code{purrr}, \code{tidyr}, and \code{dplyr} verbs.
\code{tq_get_options()} returns a list of valid \code{get} options you can diff --git a/tests/testthat/test-tq_mutate.R b/tests/testthat/test-tq_mutate.R index 4d03a98e..a2495ba9 100644 --- a/tests/testthat/test-tq_mutate.R +++ b/tests/testthat/test-tq_mutate.R @@ -3,6 +3,9 @@ context("Testing tq_mutate()") #### Setup ---- AAPL <- tq_get("AAPL", get = "stock.prices", from = "2010-01-01", to = "2015-01-01") +if (nrow(AAPL) == 0) { + skip("Could not load AAPL") +} # Test 1: tq_mutate piping test test1 <- AAPL %>% tq_mutate(close, MACD) %>% diff --git a/tests/testthat/test-tq_portfolio.R b/tests/testthat/test-tq_portfolio.R index 2165e1ec..3098de39 100644 --- a/tests/testthat/test-tq_portfolio.R +++ b/tests/testthat/test-tq_portfolio.R @@ -1,7 +1,7 @@ #### Setup context(paste0("Testing tq_portfolio")) - +skip_if_offline() # Get stock prices stock_prices <- c("AAPL", "GOOG", "NFLX") %>% tq_get(get = "stock.prices", diff --git a/tests/testthat/test-tq_transmute.R b/tests/testthat/test-tq_transmute.R index a701c210..dcff35d8 100644 --- a/tests/testthat/test-tq_transmute.R +++ b/tests/testthat/test-tq_transmute.R @@ -1,5 +1,5 @@ context("Testing tq_transmute") - +skip_if_offline() #### Setup ---- AAPL <- tq_get("AAPL", get = "stock.prices", from = "2010-01-01", to = "2015-01-01") diff --git a/tidyquant.Rproj b/tidyquant.Rproj index b76d37eb..78f6367f 100644 --- a/tidyquant.Rproj +++ b/tidyquant.Rproj @@ -19,3 +19,5 @@ BuildType: Package PackageUseDevtools: Yes PackageInstallArgs: --no-multiarch --with-keep.source PackageRoxygenize: rd,collate,namespace + +SpellingDictionary: en_US diff --git a/vignettes/TQ00-introduction-to-tidyquant.Rmd b/vignettes/TQ00-introduction-to-tidyquant.Rmd index a176d19e..dd0946be 100644 --- a/vignettes/TQ00-introduction-to-tidyquant.Rmd +++ b/vignettes/TQ00-introduction-to-tidyquant.Rmd @@ -22,6 +22,7 @@ knitr::opts_chunk$set(message = FALSE, dpi = 200) library(tidyquant) +library(lubridate) library(dplyr) library(ggplot2) # devtools::load_all() # Travis CI fails on load_all() @@ -43,7 +44,7 @@ Check out our entire [Software Intro Series](https://www.youtube.com/watch?v=Gk_ * A few core functions with a lot of power * Integrates the quantitative analysis functionality of `zoo`, `xts`, `quantmod`, `TTR`, and `PerformanceAnalytics` -* Designed for modeling and scaling analyses using the the `tidyverse` tools in [_R for Data Science_](https://r4ds.had.co.nz/) +* Designed for modeling and scaling analyses using the `tidyverse` tools in [_R for Data Science_](https://r4ds.hadley.nz/) * Implements `ggplot2` functionality for beautiful and meaningful financial visualizations * User-friendly documentation to get you up to speed quickly! @@ -87,7 +88,7 @@ For more information, refer to the third topic-specific vignette, [Scaling and M The `tidyquant` package includes charting tools to assist users in developing quick visualizations in `ggplot2` using the grammar of graphics format and workflow. ```{r, echo = F} -end <- as_date("2017-01-01") +end <- lubridate::as_date("2017-01-01") start <- end - weeks(24) FANG %>% filter(date >= start - days(2 * 20)) %>% @@ -110,10 +111,10 @@ For more information, refer to the fourth topic-specific vignette, [Charting wit ## Performance Analysis of Asset and Portfolio Returns -Asset and portfolio performance analysis is a deep field with a wide range of theories and methods for analyzing risk versus reward. 
The `PerformanceAnalytics` package consolidates many of the most widely used performance metrics as functions that can be applied to stock or portfolio returns. `tidquant` implements the functionality with two primary functions: +Asset and portfolio performance analysis is a deep field with a wide range of theories and methods for analyzing risk versus reward. The `PerformanceAnalytics` package consolidates many of the most widely used performance metrics as functions that can be applied to stock or portfolio returns. `tidyquant` implements the functionality with two primary functions: -* `tq_performance` implements the performance analysis functions in a tidy way, enabling scaling analysis using the split, apply, combine framework. -* `tq_portfolio` provides a useful toolset for aggregating a group of individual asset returns into one or many portfolios. +* `tq_performance()` implements the performance analysis functions in a tidy way, enabling scaling analysis using the split, apply, combine framework. +* `tq_portfolio()` provides a useful toolset for aggregating a group of individual asset returns into one or many portfolios. Performance is based on the statistical properties of returns, and as a result both functions use __returns as opposed to stock prices__. diff --git a/vignettes/TQ01-core-functions-in-tidyquant.Rmd b/vignettes/TQ01-core-functions-in-tidyquant.Rmd index 9fec72c1..3b0227e1 100644 --- a/vignettes/TQ01-core-functions-in-tidyquant.Rmd +++ b/vignettes/TQ01-core-functions-in-tidyquant.Rmd @@ -20,14 +20,13 @@ knitr::opts_chunk$set(message = FALSE, fig.align = 'center', out.width='95%', dpi = 200) -# devtools::load_all() # Travis CI fails on load_all() ``` > A few core functions with a lot of power # Overview -The `tidyquant` package has a __core functions with a lot of power__. Few functions means less of a learning curve for the user, which is why there are only a handful of functions the user needs to learn to perform the vast majority of financial analysis tasks. The main functions are: +The `tidyquant` package has a __few core functions with a lot of power__. Few functions means less of a learning curve for the user, which is why there are only a handful of functions the user needs to learn to perform the vast majority of financial analysis tasks. The main functions are: * __Get a Stock Index, `tq_index()`, or a Stock Exchange, `tq_exchange()`__: Returns the stock symbols and various attributes for every stock in an index or exchange. Eighteen indexes and three exchanges are available. @@ -42,12 +41,18 @@ The `tidyquant` package has a __core functions with a lot of power__. Few functi Load the `tidyquant` package to get started. -```{r} +```r # Loads tidyquant, lubridate, xts, quantmod, TTR -library(dplyr) +library(tidyverse) library(tidyquant) ``` +```{r, include=FALSE} +# Load for R CMD CHECK +library(dplyr) +library(lubridate) +library(tidyquant) +``` # 1.0 Retrieve Consolidated Symbol Data @@ -270,7 +275,7 @@ c("META", "MSFT") %>% ## 2.6 Bloomberg -[Bloomberg](https://www.bloomberg.com/professional/solution/bloomberg-terminal/) provides access to arguably the most comprehensive financial data and is actively used by most major financial instutions that work with financial data. The `Rblpapi` package, an R interface to Bloomberg, has been integrated into `tidyquant` as follows. The benefit of the integration is the __scalability since we can now get multiple symbols returned in a tidy format__. 
+[Bloomberg](https://www.bloomberg.com/professional/solution/bloomberg-terminal/) provides access to arguably the most comprehensive financial data and is actively used by most major financial institutions that work with financial data. The `Rblpapi` package, an R interface to Bloomberg, has been integrated into `tidyquant` as follows. The benefit of the integration is the __scalability since we can now get multiple symbols returned in a tidy format__. ### Authentication @@ -362,7 +367,7 @@ A very powerful example is applying __custom functions__ across a rolling window 1. Get returns 2. Create a custom function -3. Apply the custom function accross a rolling window using `tq_mutate(mutate_fun = rollapply)` +3. Apply the custom function across a rolling window using `tq_mutate(mutate_fun = rollapply)` _Step 1: Get Returns_ diff --git a/vignettes/TQ02-quant-integrations-in-tidyquant.Rmd b/vignettes/TQ02-quant-integrations-in-tidyquant.Rmd index 2e67ce90..b5919de5 100644 --- a/vignettes/TQ02-quant-integrations-in-tidyquant.Rmd +++ b/vignettes/TQ02-quant-integrations-in-tidyquant.Rmd @@ -26,7 +26,7 @@ knitr::opts_chunk$set(message = FALSE, # Overview -There's a wide range of useful quantitative analysis functions that work with time-series objects. The problem is that many of these _wonderful_ functions don't work with data frames or the `tidyverse` workflow. That is until now! The `tidyquant` package integrates the most useful functions from the `xts`, `zoo`, `quantmod`, `TTR`, and `PerformanceAnalytics` packages. This vignette focuses on the following _core functions_ to demonstrate how the integratation works with the quantitative finance packages: +There's a wide range of useful quantitative analysis functions that work with time-series objects. The problem is that many of these _wonderful_ functions don't work with data frames or the `tidyverse` workflow. That is until now! The `tidyquant` package integrates the most useful functions from the `xts`, `zoo`, `quantmod`, `TTR`, and `PerformanceAnalytics` packages. This vignette focuses on the following _core functions_ to demonstrate how the integration works with the quantitative finance packages: * Transmute, `tq_transmute()`: Returns a new tidy data frame typically in a different periodicity than the input. * Mutate, `tq_mutate()`: Adds columns to the existing tidy data frame. @@ -37,8 +37,14 @@ Refer to [Performance Analysis with tidyquant](TQ05-performance-analysis-with-ti Load the `tidyquant` package to get started. -```{r} -# Loads tidyquant, lubridate, xts, quantmod, TTR +```r +# Loads tidyquant, xts, quantmod, TTR +library(tidyquant) +library(tidyverse) +``` + +```{r, include=FALSE} +# Loads packages individually for R CMD CHECK library(tidyquant) library(lubridate) library(dplyr) @@ -156,7 +162,7 @@ Here' a brief description of the most popular functions from `TTR`: * `runVar(x, y = NULL, n = 10, sample = TRUE, cumulative = FALSE)`: returns variances over a n-period moving window. * `runSD(x, n = 10, sample = TRUE, cumulative = FALSE)`: returns standard deviations over a n-period moving window. * `runMAD(x, n = 10, center = NULL, stat = "median", constant = 1.4826, non.unique = "mean", cumulative = FALSE)`: returns median/mean absolute deviations over a n-period moving window. - * `wilderSum(x, n = 10)`: retuns a Welles Wilder style weighted sum over a n-period moving window. + * `wilderSum(x, n = 10)`: returns a Welles Wilder style weighted sum over a n-period moving window. 
* Stochastic Oscillator / Stochastic Momentum Index: * `stoch(HLC, nFastK = 14, nFastD = 3, nSlowD = 3, maType, bounded = TRUE, smooth = 1, ...)`: Stochastic Oscillator * `SMI(HLC, n = 13, nFast = 2, nSlow = 25, nSig = 9, maType, bounded = TRUE, ...)`: Stochastic Momentum Index @@ -223,7 +229,7 @@ FANG_annual_returns %>% ### Example 1B: Getting Daily Log Returns -Daily log returns follows a similar approach. Normally I go with a transmute function, `tq_transmute`, because the `periodReturn` function accepts different periodicity options, and anything other than daily will blow up a mutation. But, in our situation the period returns periodicity is the same as the stock prices periodicity (both daily), so we can use either. We want to use the adjusted closing prices column (adjusted for stock splits, which can make it appear that a stock is performing poorly if a split is included), so we set `select = adjusted`. We researched the `periodReturn` function, and we found that it accepts `type = "log"` and `period = "daily"`, which returns the daily log returns. +Daily log returns follow a similar approach. Normally I go with a transmute function, `tq_transmute()`, because the `periodReturn` function accepts different periodicity options, and anything other than daily will blow up a mutation. But, in our situation the period returns periodicity is the same as the stock prices periodicity (both daily), so we can use either. We want to use the adjusted closing prices column (adjusted for stock splits, which can make it appear that a stock is performing poorly if a split is included), so we set `select = adjusted`. We researched the `periodReturn` function, and we found that it accepts `type = "log"` and `period = "daily"`, which returns the daily log returns. ```{r} @@ -233,15 +239,15 @@ FANG_daily_log_returns <- FANG %>% mutate_fun = periodReturn, period = "daily", type = "log", - col_rename = "monthly.returns") + col_rename = "daily.returns") ``` ```{r, fig.height = 4.5} FANG_daily_log_returns %>% - ggplot(aes(x = monthly.returns, fill = symbol)) + + ggplot(aes(x = daily.returns, fill = symbol)) + geom_density(alpha = 0.5) + labs(title = "FANG: Charting the Daily Log Returns", - x = "Monthly Returns", y = "Density") + + x = "Daily Returns", y = "Density") + theme_tq() + scale_fill_tq() + facet_wrap(~ symbol, ncol = 2) @@ -250,7 +256,7 @@ FANG_daily_log_returns %>% ## Example 2: Use xts to.period to Change the Periodicity from Daily to Monthly -The `xts::to.period` function is used for periodicity aggregation (converting from a lower level periodicity to a higher level such as minutes to hours or months to years). Because we are seeking a return structure that is on a different time scale than the input (daily versus weekly), we need to use a transmute function. We select `tq_transmute()` and pass the open, high, low, close and volume columns via `select = open:volume`. Looking at the documentation for `to.period`, we see that it accepts a `period` argument that we can set to `"weeks"`. The result is the OHLCV data returned with the dates changed to one day per week. +The `xts::to.period` function is used for periodicity aggregation (converting from a lower level periodicity to a higher level such as minutes to hours or months to years). Because we are seeking a return structure that is on a different time scale than the input (daily versus weekly), we need to use a transmute function. We select `tq_transmute()` and pass the open, high, low, close and volume columns via `select = open:volume`. 
Looking at the documentation for `to.period`, we see that it accepts a `period` argument that we can set to `"months"`. The result is the OHLCV data returned with the dates changed to one day per month. ```{r} FANG %>% @@ -260,7 +266,7 @@ FANG %>% period = "months") ``` -A common usage case is to reduce the number of points to smooth time series plots. Let's check out difference between daily and monthly plots. +A common usage case is to reduce the number of points to smooth time series plots. Let's check out the difference between daily and monthly plots. ### Without Periodicity Aggregation @@ -450,7 +456,7 @@ FANG_by_qtr %>% ## Example 6: Use zoo rollapply to visualize a rolling regression -A good way to analyze relationships over time is using rolling calculations that compare two assets. Pairs trading is a common mechanism for similar assets. While we will not go into a pairs trade analysis, we will analyze the relationship between two similar assets as a precursor to a pairs trade. In this example we will analyze two similar assets, Mastercard (MA) and Visa (V) to show the relationship via regression. +A good way to analyze relationships over time is using rolling calculations that compare two assets. Pairs trading is a common mechanism for similar assets. While we will not go into a pairs trade analysis, we will analyze the relationship between two similar assets as a precursor to a pairs trade. In this example we will analyze two similar assets, MasterCard (MA) and Visa (V) to show the relationship via regression. Before we analyze a rolling regression, it's helpful to view the overall trend in returns. To do this, we use `tq_get()` to get stock prices for the assets and `tq_transmute()` to transform the daily prices to daily returns. We'll collect the data and visualize via a scatter plot. @@ -482,14 +488,14 @@ stock_pairs %>% theme_tq() ``` -We can get statistcs on the relationship from the `lm` function. The model is highly correlated with a p-value of essential zero. The coefficient estimate for V (Coefficient 1) is 0.8134 indicating a positive relationship, meaning as V increases MA also tends to increase. +We can get statistics on the relationship from the `lm` function. The model is highly correlated with a p-value of essential zero. The coefficient estimate for V (Coefficient 1) is 0.8134 indicating a positive relationship, meaning as V increases MA also tends to increase. ```{r} lm(MA ~ V, data = stock_pairs) %>% summary() ``` -While this characterizes the overall relationship, it's missing the time aspect. Fortunately, we can use the `rollapply` function from the `zoo` package to plot a rolling regression, showing how the model coefficent varies on a rolling basis over time. We calculate rolling regressions with `tq_mutate()` in two additional steps: +While this characterizes the overall relationship, it's missing the time aspect. Fortunately, we can use the `zoo::rollapply()` function to plot a rolling regression, showing how the model coefficient varies on a rolling basis over time. We calculate rolling regressions with `tq_mutate()` in two additional steps: 1. Create a custom function 2. Apply the function with `tq_mutate(mutate_fun = rollapply)` @@ -544,7 +550,7 @@ stock_prices %>% ## Example 7: Use Return.clean and Return.excess to clean and calculate excess returns -In this example we use several of the `PerformanceAnalytics` functions to clean and format returns. 
@@ -544,7 +550,7 @@ stock_prices %>% ## Example 7: Use Return.clean and Return.excess to clean and calculate excess returns -In this example we use several of the `PerformanceAnalytics` functions to clean and format returns. The example uses three progressive applications of `tq_transmute` to apply various quant functions to the grouped stock prices from the `FANG` data set. First, we calculate daily returns using `quantmod::periodReturn`. Next, we use `Return.clean` to clean outliers from the return data. The `alpha` parameter is the percentage of oultiers to be cleaned. Finally, the excess returns are calculated using a risk-free rate of 3% (divided by 252 for 252 trade days in one year). +In this example we use several of the `PerformanceAnalytics` functions to clean and format returns. The example uses three progressive applications of `tq_transmute` to apply various quant functions to the grouped stock prices from the `FANG` data set. First, we calculate daily returns using `quantmod::periodReturn`. Next, we use `Return.clean` to clean outliers from the return data. The `alpha` parameter is the percentage of outliers to be cleaned. Finally, the excess returns are calculated using a risk-free rate of 3% (divided by 252 for 252 trading days in one year). ```{r} FANG %>% diff --git a/vignettes/TQ03-scaling-and-modeling-with-tidyquant.Rmd b/vignettes/TQ03-scaling-and-modeling-with-tidyquant.Rmd index 9fd9c4de..86f6b444 100644 --- a/vignettes/TQ03-scaling-and-modeling-with-tidyquant.Rmd +++ b/vignettes/TQ03-scaling-and-modeling-with-tidyquant.Rmd @@ -27,15 +27,15 @@ knitr::opts_chunk$set(message = FALSE, # Overview -The greatest benefit to `tidyquant` is the ability to apply the data science workflow to easily model and scale your financial analysis as described in [_R for Data Science_](https://r4ds.had.co.nz/). Scaling is the process of creating an analysis for one asset and then extending it to multiple groups. This idea of scaling is incredibly useful to financial analysts because typically one wants to compare many assets to make informed decisions. Fortunately, the `tidyquant` package integrates with the `tidyverse` making scaling super simple! +The greatest benefit of `tidyquant` is the ability to apply the data science workflow to easily model and scale your financial analysis as described in [_R for Data Science_](https://r4ds.hadley.nz/). Scaling is the process of creating an analysis for one asset and then extending it to multiple groups. This idea of scaling is incredibly useful to financial analysts because typically one wants to compare many assets to make informed decisions. Fortunately, the `tidyquant` package integrates with the `tidyverse`, making scaling super simple! All `tidyquant` functions return data in the `tibble` (tidy data frame) format, which allows for interaction within the `tidyverse`. This means we can: * Seamlessly scale data retrieval and mutations * Use the pipe (`%>%`) for chaining operations * Use `dplyr` and `tidyr`: `select`, `filter`, `group_by`, `nest`/`unnest`, `spread`/`gather`, etc -* Use `purrr`: mapping functions with `map` -* Model financial analysis using the data science workflow in [_R for Data Science_](https://r4ds.had.co.nz/) +* Use `purrr`: mapping functions with `map()` +* Model financial analysis using the data science workflow in [_R for Data Science_](https://r4ds.hadley.nz/) We'll go through some useful techniques for getting and manipulating groups of data. @@ -43,8 +43,14 @@ We'll go through some useful techniques for getting and manipulating groups of d Load the `tidyquant` package to get started. 
-```{r} -# Loads tidyquant, lubridate, xts, quantmod, TTR, and PerformanceAnalytics +```r +# Loads tidyquant, xts, quantmod, TTR, and PerformanceAnalytics +library(tidyverse) +library(tidyquant) +``` + +```{r, include=FALSE} +# Loads packages for R CMD CHECK library(lubridate) library(dplyr) library(purrr) @@ -53,7 +59,7 @@ library(tidyr) library(tidyquant) ``` - + # 1.0 Scaling the Getting of Financial Data A very basic example is retrieving the stock prices for multiple stocks. There are three primary ways to do this: @@ -66,7 +72,7 @@ c("AAPL", "GOOG", "META") %>% tq_get(get = "stock.prices", from = "2016-01-01", to = "2017-01-01") ``` -The output is a single level tibble with all or the stock prices in one tibble. The auto-generated column name is "symbol", which can be pre-emptively renamed by giving the vector a name (e.g. `stocks <- c("AAPL", "GOOG", "META")`) and then piping to `tq_get`. +The output is a single-level tibble with all of the stock prices in one tibble. The auto-generated column name is "symbol", which can be preemptively renamed by giving the vector a name (e.g. `stocks <- c("AAPL", "GOOG", "META")`) and then piping to `tq_get`. ## Method 2: Map a tibble with stocks in first column @@ -155,7 +161,7 @@ FANG_returns_yearly %>% # 3.0 Modeling Financial Data using purrr -Eventually you will want to begin modeling (or more generally applying functions) at scale! One of the __best__ features of the `tidyverse` is the ability to map functions to nested tibbles using `purrr`. From the Many Models chapter of "[R for Data Science](https://r4ds.had.co.nz/)", we can apply the same modeling workflow to financial analysis. Using a two step workflow: +Eventually you will want to begin modeling (or more generally applying functions) at scale! One of the __best__ features of the `tidyverse` is the ability to map functions to nested tibbles using `purrr`. From the Many Models chapter of "[R for Data Science](https://r4ds.hadley.nz/)", we can apply the same modeling workflow to financial analysis. Using a two step workflow: 1. Model a single stock 2. Scale to many stocks @@ -239,7 +245,7 @@ get_model <- function(stock_data) { } ``` -Testing it out on a single stock. We can see that the "term" that contains the direction of the trend (the slope) is "year(date)". The interpetation is that as year increases one unit, the annual returns decrease by 3%. +Testing it out on a single stock. We can see that the "term" that contains the direction of the trend (the slope) is "year(date)". The interpretation is that as year increases by one unit, the annual returns decrease by 3%. ```{r} get_model(AAPL) @@ -258,7 +264,7 @@ stocks_tbl <- tq_index("SP500") %>% stocks_tbl ``` -We can now apply our analysis function to the stocks using `dplyr::mutate` and `purrr::map`. The `mutate()` function adds a column to our tibble, and the `map()` function maps our custom `get_model` function to our tibble of stocks using the `symbol` column. The `tidyr::unnest` function unrolls the nested data frame so all of the model statistics are accessable in the top data frame level. The `filter`, `arrange` and `select` steps just manipulate the data frame to isolate and arrange the data for our viewing. +We can now apply our analysis function to the stocks using `dplyr::mutate()` and `purrr::map()`. The `mutate()` function adds a column to our tibble, and the `map()` function maps our custom `get_model` function to our tibble of stocks using the `symbol` column. 
The `tidyr::unnest()` function unrolls the nested data frame so all of the model statistics are accessible in the top data frame level. The `filter`, `arrange` and `select` steps just manipulate the data frame to isolate and arrange the data for our viewing. ```{r} stocks_model_stats <- stocks_tbl %>% @@ -304,7 +310,7 @@ There are pros and cons to this approach that you may not agree with, but I beli * __Pros__: Long running scripts are not interrupted because of one error -* __Cons__: Errors can be inadvertently handled or flow downstream if the users does not read the warnings +* __Cons__: Errors can be inadvertently handled or flow downstream if the user does not read the warnings ## Bad Apples Fail Gracefully, tq_get @@ -323,4 +329,4 @@ c("AAPL", "GOOG", "BAD APPLE") %>% tq_get(get = "stock.prices", complete_cases = FALSE) ``` -In both cases, the prudent user will review the warnings to determine what happened and whether or not this is acceptable. In the `complete_cases = FALSE` example, if the user attempts to perform downstream computations at scale, the computations will likely fail grinding the analysis to a hault. But, the advantage is that the user will more easily be able to filter to the problem childs to determine what happened and decide whether this is acceptable or not. +In both cases, the prudent user will review the warnings to determine what happened and whether or not this is acceptable. In the `complete_cases = FALSE` example, if the user attempts to perform downstream computations at scale, the computations will likely fail, grinding the analysis to a halt. But the advantage is that the user will more easily be able to filter down to the problem cases to determine what happened and decide whether this is acceptable or not. diff --git a/vignettes/TQ04-charting-with-tidyquant.Rmd b/vignettes/TQ04-charting-with-tidyquant.Rmd index 8c19e8ff..e6fc58d0 100644 --- a/vignettes/TQ04-charting-with-tidyquant.Rmd +++ b/vignettes/TQ04-charting-with-tidyquant.Rmd @@ -38,8 +38,14 @@ The `tidyquant` package includes charting tools to assist users in developing qu Load the `tidyquant` package to get started. -```{r} -# Loads tidyquant, lubridate, xts, quantmod, TTR, and PerformanceAnalytics +```r +# Loads tidyquant, xts, quantmod, TTR, and PerformanceAnalytics +library(tidyverse) +library(tidyquant) +``` + +```{r,include=FALSE} +# Loads packages for R CMD CHECK library(lubridate) library(dplyr) library(ggplot2) @@ -58,7 +64,7 @@ AMZN <- tq_get("AMZN", get = "stock.prices", from = "2000-01-01", to = "2016-12- The `end` date parameter will be used when setting date limits throughout the examples. 
```{r} -end <- as_date("2016-12-31") +end <- lubridate::as_date("2016-12-31") end ``` @@ -198,7 +204,7 @@ FANG %>% ggplot(aes(x = date, y = close, group = symbol)) + geom_candlestick(aes(open = open, high = high, low = low, close = close)) + labs(title = "FANG Candlestick Chart", - subtitle = "Experimenting with Mulitple Stocks", + subtitle = "Experimenting with Multiple Stocks", y = "Closing Price", x = "") + coord_x_date(xlim = c(start, end)) + facet_wrap(~ symbol, ncol = 2, scale = "free_y") + @@ -217,7 +223,7 @@ FANG %>% geom_candlestick(aes(open = open, high = high, low = low, close = close)) + geom_ma(ma_fun = SMA, n = 15, color = "darkblue", size = 1) + labs(title = "FANG Candlestick Chart", - subtitle = "Experimenting with Mulitple Stocks", + subtitle = "Experimenting with Multiple Stocks", y = "Closing Price", x = "") + coord_x_date(xlim = c(start, end)) + facet_wrap(~ symbol, ncol = 2, scale = "free_y") + diff --git a/vignettes/TQ05-performance-analysis-with-tidyquant.Rmd b/vignettes/TQ05-performance-analysis-with-tidyquant.Rmd index cf7d1b5b..260f4741 100644 --- a/vignettes/TQ05-performance-analysis-with-tidyquant.Rmd +++ b/vignettes/TQ05-performance-analysis-with-tidyquant.Rmd @@ -20,7 +20,6 @@ knitr::opts_chunk$set(message = FALSE, fig.align = 'center', out.width='95%', dpi = 200) -library(tidyquant) # devtools::load_all() # Travis CI fails on load_all() ``` @@ -28,7 +27,7 @@ library(tidyquant) # Overview -Financial asset (individual stocks, securities, etc) and portfolio (groups of stocks, securities, etc) performance analysis is a deep field with a wide range of theories and methods for analyzing risk versus reward. The `PerformanceAnalytics` package consolidates functions to compute many of the most widely used performance metrics. `tidquant` integrates this functionality so it can be used at scale using the split, apply, combine framework within the `tidyverse`. Two primary functions integrate the performance analysis functionality: +Financial asset (individual stocks, securities, etc) and portfolio (groups of stocks, securities, etc) performance analysis is a deep field with a wide range of theories and methods for analyzing risk versus reward. The `PerformanceAnalytics` package consolidates functions to compute many of the most widely used performance metrics. `tidyquant` integrates this functionality so it can be used at scale using the split, apply, combine framework within the `tidyverse`. Two primary functions integrate the performance analysis functionality: * `tq_performance` implements the performance analysis functions in a tidy way, enabling scaling analysis using the split, apply, combine framework. * `tq_portfolio` provides a useful tool set for aggregating a group of individual asset returns into one or many portfolios. @@ -59,9 +58,15 @@ We'll use the `PerformanceAnalytics` function, `table.CAPM`, to evaluate the ret First, load the `tidyquant` package. -```{r} +```r +library(tidyverse) +library(tidyquant) +``` + +```{r, include=FALSE} library(dplyr) library(ggplot2) +library(lubridate) library(tidyquant) ``` @@ -283,7 +288,7 @@ We now have an aggregated portfolio that is a 50/50 blend of AAPL and NFLX. You may be asking why didn't we use GOOG? __The important thing to understand is that all of the assets from the asset returns don't need to be used when creating the portfolio!__ This enables us to scale individual stock returns and then vary weights to optimize the portfolio (this will be a further subject that we address in the future!) 
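Varying the weights is then just a matter of passing a different `weights` vector (or, as shown next, a mapped weights table) to `tq_portfolio()`. Below is a minimal sketch of re-weighting a three-asset blend; the object name `stock_returns_monthly`, the `Ra` column, and the date range are illustrative assumptions for this sketch rather than values taken from the patch.

```r
library(dplyr)
library(tidyquant)

# Monthly asset returns in "long" format (names and dates assumed for this sketch)
stock_returns_monthly <- c("AAPL", "GOOG", "NFLX") %>%
    tq_get(get = "stock.prices", from = "2010-01-01", to = "2015-12-31") %>%
    group_by(symbol) %>%
    tq_transmute(select     = adjusted,
                 mutate_fun = periodReturn,
                 period     = "monthly",
                 col_rename = "Ra")

# A different blend: 25% AAPL, 25% GOOG, 50% NFLX
stock_returns_monthly %>%
    tq_portfolio(assets_col  = symbol,
                 returns_col = Ra,
                 weights     = c(0.25, 0.25, 0.50),
                 col_rename  = "Ra")
```

Because a bare numeric vector leaves the asset-to-weight mapping implicit, the two-column weights tibble shown in Method 2 below is generally the safer way to express the same idea.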
-##### Method 2: Aggregating a Portfolio using Two Column Tibble with Symbols and Weights +##### Method 2: Aggregating a Portfolio using Two Column tibble with Symbols and Weights A possibly more useful method of aggregating returns is using a tibble of symbols and weights that are mapped to the portfolio. We'll recreate the previous portfolio example using mapped weights. @@ -410,7 +415,7 @@ weights_table <- tibble(stocks) %>% weights_table ``` -Now just pass the the expanded `stock_returns_monthly_multi` and the `weights_table` to `tq_portfolio` for portfolio aggregation. +Now just pass the expanded `stock_returns_monthly_multi` and the `weights_table` to `tq_portfolio` for portfolio aggregation. ```{r} portfolio_returns_monthly_multi <- stock_returns_monthly_multi %>% @@ -421,7 +426,7 @@ portfolio_returns_monthly_multi <- stock_returns_monthly_multi %>% portfolio_returns_monthly_multi ``` -Let's assess the output. We now have a single, "long" format data frame of portfolio returns. It has three groups with the aggregated portfolios blended by mapping the `weight_table`. +Let's assess the output. We now have a single, "long" format data frame of portfolio returns. It has three groups with the aggregated portfolios blended by mapping the `weights_table`. #### Steps 3B and 4: Merging and Assessing Performance
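Roughly, these last two steps boil down to joining the portfolio returns to a baseline return series by date and handing the result to `tq_performance()`. The sketch below assumes the grouped `portfolio_returns_monthly_multi` object from above keeps its returns in a column named `Ra` (set via `col_rename`), and it uses the S&P 500 index as a stand-in baseline; both are illustrative assumptions, not details taken from the patch.

```r
library(dplyr)
library(tidyquant)

# Baseline monthly returns (S&P 500 index used here purely for illustration)
baseline_returns_monthly <- "^GSPC" %>%
    tq_get(get = "stock.prices", from = "2010-01-01", to = "2015-12-31") %>%
    tq_transmute(select     = adjusted,
                 mutate_fun = periodReturn,
                 period     = "monthly",
                 col_rename = "Rb")

# Step 3B: merge the portfolio returns (Ra) with the baseline (Rb) by date
RaRb_multi <- portfolio_returns_monthly_multi %>%
    left_join(baseline_returns_monthly, by = "date")

# Step 4: assess performance, here with the CAPM table from PerformanceAnalytics
RaRb_multi %>%
    tq_performance(Ra = Ra, Rb = Rb, performance_fun = table.CAPM)
```

The same pattern works for a single portfolio; the grouping simply determines how many CAPM tables come back.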