library(sparklyr)
library(dplyr)
sc <- spark_connect(master = "local")
iris_tbl <- sdf_copy_to(sc, iris, name = "iris_tbl", overwrite = TRUE)

iris_tbl %>%
  ft_bucketizer(
    input_col = "Sepal_Length",
    output_col = "Sepal_Length_bucket",
    splits = c(0, 4.5, 5, 8)
  ) %>%
  select(Sepal_Length, Sepal_Length_bucket, Species)
#> # Source: spark<?> [?? x 3]
#> Sepal_Length Sepal_Length_bucket Species
#> <dbl> <dbl> <chr>
#> 1 5.1 2 setosa
#> 2 4.9 1 setosa
#> 3 4.7 1 setosa
#> 4 4.6 1 setosa
#> 5 5 2 setosa
#> 6 5.4 2 setosa
#> 7 4.6 1 setosa
#> 8 5 2 setosa
#> 9 4.4 0 setosa
#> 10 4.9 1 setosa
#> # … with more rows
Feature Transformation – Bucketizer (Transformer)
R/ml_feature_bucketizer.R
ft_bucketizer
Description
Similar to R's cut function, this transforms a numeric column into a discretized column, with breaks specified through the splits parameter.
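Since the description compares this transformer to base R's cut(), a quick base-R sketch (no Spark needed) reproduces the bucket assignments shown in the example above for a few Sepal_Length values. One caveat: cut(right = FALSE) excludes the upper endpoint of the last interval, whereas Spark's Bucketizer includes the upper bound of its final bucket.

```r
# Base-R analogue of ft_bucketizer() with splits = c(0, 4.5, 5, 8).
# cut() with right = FALSE gives left-closed intervals [0,4.5), [4.5,5), [5,8);
# subtracting 1 converts cut()'s 1-based codes to Bucketizer's 0-based indices.
x <- c(5.1, 4.9, 4.4)
cut(x, breaks = c(0, 4.5, 5, 8), labels = FALSE, right = FALSE) - 1
#> [1] 2 1 0
```

These indices match rows 1, 2, and 9 of the Spark output above.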
Usage
ft_bucketizer(
  x,
  input_col = NULL,
  output_col = NULL,
  splits = NULL,
  input_cols = NULL,
  output_cols = NULL,
  splits_array = NULL,
  handle_invalid = "error",
  uid = random_string("bucketizer_"),
  ...
)
Arguments
Arguments | Description |
---|---|
x | A spark_connection, ml_pipeline, or a tbl_spark. |
input_col | The name of the input column. |
output_col | The name of the output column. |
splits | A numeric vector of cutpoints, indicating the bucket boundaries. |
input_cols | Names of input columns. |
output_cols | Names of output columns. |
splits_array | Parameter for specifying multiple splits parameters. Each element in this array can be used to map continuous features into buckets. |
handle_invalid | (Spark 2.1.0+) Param for how to handle invalid entries. Options are "skip" (filter out rows with invalid values), "error" (throw an error), or "keep" (keep invalid values in a special additional bucket). Default: "error". |
uid | A character string used to uniquely identify the feature transformer. |
… | Optional arguments; currently unused. |
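To make the handle_invalid options concrete, here is a hedged sketch (it assumes the live Spark connection sc and the iris_tbl from the example above, so it is not runnable standalone). With handle_invalid = "keep", values outside the splits range land in one extra bucket appended after the regular ones.

```r
# Sketch: splits c(4.5, 5, 6) define two regular buckets, [4.5, 5) and [5, 6].
# With handle_invalid = "keep", Sepal_Length values outside [4.5, 6]
# are kept and assigned to the special extra bucket (index 2 here);
# "skip" would drop those rows, and "error" (the default) would fail.
iris_tbl %>%
  ft_bucketizer(
    input_col = "Sepal_Length",
    output_col = "Sepal_Length_bucket",
    splits = c(4.5, 5, 6),       # deliberately narrower than the data
    handle_invalid = "keep"
  )
```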
Value
The object returned depends on the class of x.

- spark_connection: When x is a spark_connection, the function returns a ml_transformer, a ml_estimator, or one of their subclasses. The object contains a pointer to a Spark Transformer or Estimator object and can be used to compose Pipeline objects.
- ml_pipeline: When x is a ml_pipeline, the function returns a ml_pipeline with the transformer or estimator appended to the pipeline.
- tbl_spark: When x is a tbl_spark, a transformer is constructed then immediately applied to the input tbl_spark, returning a tbl_spark.
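The ml_pipeline case above can be sketched as follows (a hedged example assuming the Spark connection sc from the top of the page; it is not runnable without one). Instead of transforming data immediately, the call appends a Bucketizer stage to the pipeline for later fitting.

```r
# Sketch: passing an ml_pipeline returns the pipeline with a
# Bucketizer stage appended, rather than a transformed table.
pipeline <- ml_pipeline(sc) %>%
  ft_bucketizer(
    input_col = "Sepal_Length",
    output_col = "Sepal_Length_bucket",
    splits = c(0, 4.5, 5, 8)
  )

# The stage is applied later, e.g. via ml_fit() and ml_transform()
# on a tbl_spark such as iris_tbl.
```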
Examples
See Also
See https://spark.apache.org/docs/latest/ml-features.html for more information on the set of transformations available for DataFrame columns in Spark.

Other feature transformers: ft_binarizer(), ft_chisq_selector(), ft_count_vectorizer(), ft_dct(), ft_elementwise_product(), ft_feature_hasher(), ft_hashing_tf(), ft_idf(), ft_imputer(), ft_index_to_string(), ft_interaction(), ft_lsh, ft_max_abs_scaler(), ft_min_max_scaler(), ft_ngram(), ft_normalizer(), ft_one_hot_encoder_estimator(), ft_one_hot_encoder(), ft_pca(), ft_polynomial_expansion(), ft_quantile_discretizer(), ft_r_formula(), ft_regex_tokenizer(), ft_robust_scaler(), ft_sql_transformer(), ft_standard_scaler(), ft_stop_words_remover(), ft_string_indexer(), ft_tokenizer(), ft_vector_assembler(), ft_vector_indexer(), ft_vector_slicer(), ft_word2vec()