library(sparklyr)

sc <- spark_connect(master = "local")
iris_tbl <- sdf_copy_to(sc, iris, name = "iris_tbl", overwrite = TRUE)

features <- c("Sepal_Length", "Sepal_Width", "Petal_Length", "Petal_Width")

iris_tbl %>%
  ft_vector_assembler(
    input_cols = features,
    output_col = "features_temp"
  ) %>%
  ft_min_max_scaler(
    input_col = "features_temp",
    output_col = "features"
  )
#> # Source: spark<?> [?? x 7]
#> Sepal_L…¹ Sepal…² Petal…³ Petal…⁴ Species featu…⁵ featu…⁶
#> <dbl> <dbl> <dbl> <dbl> <chr> <list> <list>
#> 1 5.1 3.5 1.4 0.2 setosa <dbl> <dbl>
#> 2 4.9 3 1.4 0.2 setosa <dbl> <dbl>
#> 3 4.7 3.2 1.3 0.2 setosa <dbl> <dbl>
#> 4 4.6 3.1 1.5 0.2 setosa <dbl> <dbl>
#> 5 5 3.6 1.4 0.2 setosa <dbl> <dbl>
#> 6 5.4 3.9 1.7 0.4 setosa <dbl> <dbl>
#> 7 4.6 3.4 1.4 0.3 setosa <dbl> <dbl>
#> 8 5 3.4 1.5 0.2 setosa <dbl> <dbl>
#> 9 4.4 2.9 1.4 0.2 setosa <dbl> <dbl>
#> 10 4.9 3.1 1.5 0.1 setosa <dbl> <dbl>
#> # … with more rows, and abbreviated variable names
#> # ¹Sepal_Length, ²Sepal_Width, ³Petal_Length,
#> # ⁴Petal_Width, ⁵features_temp, ⁶features
Feature Transformation – MinMaxScaler (Estimator)
R/ml_feature_min_max_scaler.R
ft_min_max_scaler
Description
Rescale each feature individually to a common range [min, max] linearly, using column summary statistics. This is also known as min-max normalization or rescaling.
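The rescaling can be sketched in plain R (no Spark required). Here `lower` and `upper` play the role of the `min` and `max` arguments: each value e in a column with statistics E_min and E_max is mapped to (e - E_min) / (E_max - E_min) * (upper - lower) + lower.

```r
# Plain-R sketch of the rescaling applied by MinMaxScaler.
# `lower`/`upper` correspond to the `min`/`max` arguments of
# ft_min_max_scaler(); by default the result lands in [0, 1].
min_max_rescale <- function(e, lower = 0, upper = 1) {
  (e - min(e)) / (max(e) - min(e)) * (upper - lower) + lower
}

min_max_rescale(c(2, 4, 6))         # 0.0 0.5 1.0
min_max_rescale(c(2, 4, 6), -1, 1)  # -1  0   1
```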
Usage
ft_min_max_scaler(
  x,
  input_col = NULL,
  output_col = NULL,
  min = 0,
  max = 1,
  uid = random_string("min_max_scaler_"),
  ...
)
Arguments
Arguments | Description
---|---
x | A spark_connection, ml_pipeline, or a tbl_spark.
input_col | The name of the input column.
output_col | The name of the output column.
min | Lower bound after transformation, shared by all features. Default: 0.0.
max | Upper bound after transformation, shared by all features. Default: 1.0.
uid | A character string used to uniquely identify the feature transformer.
... | Optional arguments; currently unused.
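As a sketch of the min and max arguments, the output range can be moved away from the default [0, 1]. This assumes sc is an open Spark connection and iris_tbl is a tbl_spark carrying an assembled vector column named "features_temp", as in the example at the top of this page:

```r
# Hypothetical usage: rescale the assembled features to [-1, 1]
# instead of the default [0, 1]. Assumes `iris_tbl` is a tbl_spark
# with a vector column "features_temp" built by ft_vector_assembler().
iris_scaled <- iris_tbl %>%
  ft_min_max_scaler(
    input_col  = "features_temp",
    output_col = "features_scaled",
    min = -1,
    max = 1
  )
```

Because min and max are shared by all features, every element of the output vector column lands in the same interval.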
Details
In the case where x is a tbl_spark, the estimator fits against x to obtain a transformer, which is then immediately used to transform x, returning a tbl_spark.
Value
The object returned depends on the class of x.

- spark_connection: When x is a spark_connection, the function returns a ml_transformer, a ml_estimator, or one of their subclasses. The object contains a pointer to a Spark Transformer or Estimator object and can be used to compose Pipeline objects.
- ml_pipeline: When x is a ml_pipeline, the function returns a ml_pipeline with the transformer or estimator appended to the pipeline.
- tbl_spark: When x is a tbl_spark, a transformer is constructed then immediately applied to the input tbl_spark, returning a tbl_spark.
Examples
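A minimal sketch of the estimator workflow described in the Details and Value sections, assuming sc is an open Spark connection and iris_tbl has an assembled "features_temp" column. Here ml_fit() and ml_transform() split the fit and transform steps that the tbl_spark interface performs in one call:

```r
# Calling ft_min_max_scaler() on the connection returns an estimator;
# ml_fit() learns each feature's observed min/max from the data, and
# ml_transform() applies the fitted rescaling.
scaler <- ft_min_max_scaler(
  sc,
  input_col  = "features_temp",
  output_col = "features"
)

scaler_model <- ml_fit(scaler, iris_tbl)
scaled_tbl   <- ml_transform(scaler_model, iris_tbl)
```

The fitted scaler_model can be reused to transform other tables with the same schema, or the unfitted scaler can be appended to a ml_pipeline(sc).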
See Also
See https://spark.apache.org/docs/latest/ml-features.html for more information on the set of transformations available for DataFrame columns in Spark.

Other feature transformers: ft_binarizer(), ft_bucketizer(), ft_chisq_selector(), ft_count_vectorizer(), ft_dct(), ft_elementwise_product(), ft_feature_hasher(), ft_hashing_tf(), ft_idf(), ft_imputer(), ft_index_to_string(), ft_interaction(), ft_lsh, ft_max_abs_scaler(), ft_ngram(), ft_normalizer(), ft_one_hot_encoder_estimator(), ft_one_hot_encoder(), ft_pca(), ft_polynomial_expansion(), ft_quantile_discretizer(), ft_r_formula(), ft_regex_tokenizer(), ft_robust_scaler(), ft_sql_transformer(), ft_standard_scaler(), ft_stop_words_remover(), ft_string_indexer(), ft_tokenizer(), ft_vector_assembler(), ft_vector_indexer(), ft_vector_slicer(), ft_word2vec()