Parquet vs the RDS Format
This is part of a series of related posts on Apache Arrow. Other posts in the series are:
- Understanding the Parquet file format
- Reading and Writing Data with {arrow}
- Parquet vs the RDS Format (This post)
The benefit of using the {arrow} package with parquet files is that it enables you to work with ridiculously large data sets from the comfort of an R session. Using the NYC-Taxi data from the previous blog post, we can perform standard data science operations, such as
library("arrow")
nyc_taxi = open_dataset(nyc_data)
nyc_taxi |>
dplyr::filter(year == 2019) |>
dplyr::group_by(month) |>
dplyr::summarise(trip_distance = max(trip_distance)) |>
dplyr::collect()
with a speed that seems almost magical. When your dataset is as large as the NYC-Taxi data, standard file formats, such as CSV files and R binary files, simply aren’t an option.
However, let’s suppose you are in the situation where your data is inconvenient - not big, just a bit annoying. For example, if we take a single year and a single month
taxi_subset = open_dataset(nyc_data) |>
  dplyr::filter(year == 2019 & month == 1) |>
  dplyr::collect()
The data is still large, with around eight million rows
nrow(taxi_subset)
and takes around 1.2GB of RAM when we load it into R. The data isn’t big, just annoying! In this situation, should we use the native binary format or stick with parquet?
In theory, we could use CSV, but that’s really slow!
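If you want to check that memory figure yourself, base R’s object.size() gives a rough answer (the exact number will vary a little between machines and package versions):

# Approximate in-memory size of the collected subset (expect roughly 1.2 GB)
format(object.size(taxi_subset), units = "GB")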
RDS vs Parquet
The RDS format is a binary file format, native to R. It has been part of R for many years, and provides a convenient method for saving R objects, including data sets.
The obvious question is which file format should you use for storing tabular data? RDS or parquet? For this comparison, I’m interested in the following characteristics:
- the time required to save the file;
- the file size;
- the time required to load the file.
I’m also a firm believer in keeping things stable and simple. So if both methods are roughly the same, or even if parquet is a little better, then I would stick with R’s binary format. Consequently, I don’t really care about a few MBs or seconds.
Reading and writing the data
To save the taxi data subset, we use saveRDS() for the RDS format and write_parquet() for the parquet format. The default compression method used by RDS is gzip, whereas parquet uses snappy. As you might guess, the gzip method produces smaller files, but takes longer.
# saveRDS() compresses with gzip by default
saveRDS(taxi_subset, file = "taxi.rds")
# Default parquet compression is "snappy"
tf1 = tempfile(fileext = ".parquet")
write_parquet(taxi_subset, sink = tf1, compression = "snappy")
tf2 = tempfile(fileext = ".gzip.parquet")
write_parquet(taxi_subset, sink = tf2, compression = "gzip")
Reading in either file type is also straightforward
readRDS("taxi.rds")
# Need to use collect() to make comparison far
open_dataset(file_path) |>
dplyr::collect()
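The timing code itself isn’t shown in the post; a rough harness using base R’s system.time() (bench::mark() would give more robust numbers) looks something like this:

# Rough timings; run each call a few times and average, since
# write times in particular are quite variable
system.time(saveRDS(taxi_subset, file = "taxi.rds"))
system.time(write_parquet(taxi_subset, sink = tf1, compression = "snappy"))
system.time(write_parquet(taxi_subset, sink = tf2, compression = "gzip"))

system.time(readRDS("taxi.rds"))
system.time(open_dataset(tf1) |> dplyr::collect())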
Results
Each test was run a couple of times, and the average is given in the table below. The read times and file sizes were fairly consistent, but the write times were highly variable.
| Method  | Compression | Size (MB) | Write Time (s) | Read Time (s) |
|---------|-------------|-----------|----------------|---------------|
| RDS     | gzip        | 115       | 27             | 5.7           |
| Parquet | snappy      | 143       | 4              | 0.3           |
| Parquet | gzip        | 105       | 12             | 0.4           |
For me, the results suggest that for files of this size, I would only consider using the native binary R format if
- the file writing and reading times weren’t an issue;
- and/or the stability implied by a native R format was really important.
However, parquet and {arrow} do look appealing.
When Should We Use Parquet over RDS?
The above timings are for a data set of one particular size (around 110 MB). However, a few quick experiments show that the performance improvement is fairly consistent across different file sizes (a sketch of such an experiment follows the list below):
- Writing (parquet vs rds): around six times faster using snappy, and twice as fast using gzip;
- Reading (parquet vs rds): around 16 times faster using parquet.
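Those scaling experiments aren’t shown in the post; the snippet below is only a rough sketch of how one might repeat them, sampling progressively larger subsets of the taxi data (the row counts are arbitrary choices, not the ones used for the figures above).

# Rough sketch: compare write times at a few different data sizes
for (n in c(1e5, 1e6, 5e6)) {
  sample_df = taxi_subset[sample(nrow(taxi_subset), n), ]
  rds_file = tempfile(fileext = ".rds")
  pq_file = tempfile(fileext = ".parquet")

  cat("rows:", n, "\n")
  print(system.time(saveRDS(sample_df, file = rds_file)))
  print(system.time(write_parquet(sample_df, sink = pq_file)))
}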
So to answer the question, when should we use parquet over RDS? For me, that depends. If it was for a standard analysis, and the files were fairly modest (less than 20 MB), I would probably just go for an RDS file. However, if I had a Shiny application, then that would significantly lower the threshold at which I would use parquet, for the simple reason that one second on a web application feels like a lifetime. Remember that if you are using {pins}, then pin_write() can handle parquet files without any issue.
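As a small illustration of that last point, the storage format with {pins} is just an argument to pin_write(). The board below is a throwaway temporary one purely for the sketch; substitute your own board.

library("pins")

# A temporary board for illustration; in practice this would be a shared
# board, e.g. board_folder() or board_connect()
board = board_temp()

# Store the taxi subset as parquet rather than the default rds
pin_write(board, taxi_subset, name = "taxi_subset", type = "parquet")
taxi_from_pin = pin_read(board, "taxi_subset")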