Commit: test

martinstuder committed Oct 28, 2024
1 parent 5d5b18e commit 68e177b
Showing 5 changed files with 8 additions and 6 deletions.
2 changes: 1 addition & 1 deletion R/spark_read_bigquery.R
@@ -28,7 +28,7 @@
#' the service account will be used to interact with BigQuery and Google Cloud Storage (GCS).
#' Defaults to \code{\link{default_service_account_key_file}}.
#' @param additionalParameters
-#' \href{https://github.com/GoogleCloudDataproc/spark-bigquery-connector?tab=readme-ov-file#properties}{Additional Spark BigQuery connector options}.
+#' \href{https://github.com/GoogleCloudDataproc/spark-bigquery-connector?tab=readme-ov-file#properties}{List of additional Spark BigQuery connector options}.
#' @param memory \code{logical} specifying whether data should be loaded eagerly into
#' memory, i.e. whether the table should be cached. Note that eagerly caching prevents
#' predicate pushdown (e.g. in conjunction with \code{\link[dplyr]{filter}}) and therefore
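For context, a minimal sketch of how additionalParameters is used from a sparklyr session. This is illustrative only: the connection setup is assumed, and viewsEnabled is just one option from the connector properties list linked above.

library(sparklyr)
library(sparkbq)

sc <- spark_connect(master = "local")

# Extra spark-bigquery-connector properties are passed as a named list.
# viewsEnabled allows the connector to read from BigQuery views.
shakespeare <- spark_read_bigquery(
  sc,
  name = "shakespeare",
  projectId = "bigquery-public-data",
  datasetId = "samples",
  tableId = "shakespeare",
  additionalParameters = list(viewsEnabled = "true")
)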
2 changes: 1 addition & 1 deletion R/spark_write_bigquery.R
@@ -12,7 +12,7 @@
#' with Google Cloud services. The use of service accounts is highly recommended. Specifically,
#' the service account will be used to interact with BigQuery and Google Cloud Storage (GCS).
#' @param additionalParameters
-#' \href{https://github.com/GoogleCloudDataproc/spark-bigquery-connector?tab=readme-ov-file#properties}{Additional Spark BigQuery connector options}.
+#' \href{https://github.com/GoogleCloudDataproc/spark-bigquery-connector?tab=readme-ov-file#properties}{List of additional Spark BigQuery connector options}.
#' @param mode Specifies the behavior when data or table already exist. One of "overwrite",
#' "append", "ignore" or "error" (default).
#' @param ... Additional arguments passed to \code{\link[sparklyr]{spark_write_source}}.
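And a corresponding write sketch showing the mode and additionalParameters arguments documented above. The project, dataset, and table identifiers are placeholders, sdf is assumed to be an existing Spark DataFrame, and writeMethod is an assumed connector property from the same linked list.

# Write a Spark DataFrame to BigQuery, replacing the table if it exists.
spark_write_bigquery(
  sdf,
  projectId = "my-project",   # placeholder
  datasetId = "my_dataset",   # placeholder
  tableId = "my_table",       # placeholder
  mode = "overwrite",         # one of "overwrite", "append", "ignore", "error"
  additionalParameters = list(writeMethod = "direct")
)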
2 changes: 1 addition & 1 deletion man/spark_read_bigquery.Rd

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion man/spark_write_bigquery.Rd

Some generated files are not rendered by default.

6 changes: 4 additions & 2 deletions tests/testthat/test-read.R
@@ -8,7 +8,8 @@ test_that("reading BigQuery tables works", {
name = "shakespeare",
projectId = "bigquery-public-data",
datasetId = "samples",
-tableId = "shakespeare"
+tableId = "shakespeare",
+additionalParameters = list(parentProject = default_project_id())
)

expect_equal(shakespeare %>% sparklyr::sdf_nrow(), 164656)
@@ -22,7 +23,8 @@ test_that("executing SQL queries works", {
shakespeare <- spark_read_bigquery(
sc,
name = "shakespeare",
-sqlQuery = "SELECT * FROM bigquery-public-data.samples.shakespeare"
+sqlQuery = "SELECT * FROM bigquery-public-data.samples.shakespeare",
+additionalParameters = list(parentProject = default_project_id())
)

expect_equal(shakespeare %>% sparklyr::sdf_nrow(), 164656)
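A note on the test change above: parentProject is the spark-bigquery-connector property naming the Google Cloud project that API usage is billed to. The tests read from the public bigquery-public-data project, so a billing project must be supplied explicitly; default_project_id() is assumed to resolve to the project configured for the test session.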
