Read image data into a Spark DataFrame.
R/data_interface.R
spark_read_image
Description
Read image files within a directory and convert each file into a record in the resulting Spark DataFrame. The output is a Spark DataFrame whose records are struct types containing the following attributes:
- origin: StringType
- height: IntegerType
- width: IntegerType
- nChannels: IntegerType
- mode: IntegerType
- data: BinaryType
Usage
spark_read_image(
  sc,
  name = NULL,
  dir = name,
  drop_invalid = TRUE,
  repartition = 0,
  memory = TRUE,
  overwrite = TRUE
)
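For context, a minimal call might look like the following sketch. The local Spark connection and the image directory path are hypothetical; all other arguments keep the defaults shown above.

library(sparklyr)

# Hypothetical local connection and image directory
sc <- spark_connect(master = "local")

# Read every image under /tmp/images into a cached Spark table named "images"
images_tbl <- spark_read_image(
  sc,
  name = "images",
  dir = "/tmp/images"
)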
Arguments
Argument | Description |
---|---|
sc | A spark_connection. |
name | The name to assign to the newly generated table. |
dir | Directory to read image files from. |
drop_invalid | Whether to drop files that are not valid images from the result (default: TRUE). |
repartition | The number of partitions used to distribute the generated table. Use 0 (the default) to avoid partitioning. |
memory | Boolean; should the data be loaded eagerly into memory? (That is, should the table be cached?) |
overwrite | Boolean; overwrite the table with the given name if it already exists? |
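The documented attributes are nested inside a single struct column, so image metadata can be projected with plain Spark SQL once the table is registered under name. The sketch below assumes the struct column is called image (the column name used by Spark's built-in image data source) and reuses the hypothetical sc connection and images table from the sketch above.

# "image" is an assumed struct column name; "images" is the table name
# passed to spark_read_image() in the earlier sketch
DBI::dbGetQuery(
  sc,
  "SELECT image.origin, image.height, image.width, image.nChannels
   FROM images
   LIMIT 5"
)

Because the data field holds the raw pixel bytes as BinaryType, projecting only the metadata fields you need keeps such queries lightweight.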
See Also
Other Spark serialization routines: collect_from_rds(), spark_insert_table(), spark_load_table(), spark_read_avro(), spark_read_binary(), spark_read_csv(), spark_read_delta(), spark_read_jdbc(), spark_read_json(), spark_read_libsvm(), spark_read_orc(), spark_read_parquet(), spark_read_source(), spark_read_table(), spark_read_text(), spark_read(), spark_save_table(), spark_write_avro(), spark_write_csv(), spark_write_delta(), spark_write_jdbc(), spark_write_json(), spark_write_orc(), spark_write_parquet(), spark_write_source(), spark_write_table(), spark_write_text()