library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")
iris_tbl <- sdf_copy_to(sc, iris, name = "iris_tbl", overwrite = TRUE)

iris_tbl %>%
  select(-Species) %>%
  ml_pca(k = 2)
#> Explained variance:
#>
#>        PC1        PC2
#> 0.92461872 0.05306648
#>
#> Rotation:
#>                      PC1         PC2
#> Sepal_Length -0.36138659 -0.65658877
#> Sepal_Width   0.08452251 -0.73016143
#> Petal_Length -0.85667061  0.17337266
#> Petal_Width  -0.35828920  0.07548102
Feature Transformation – PCA (Estimator)
R/ml_feature_pca.R
ft_pca
Description
PCA trains a model to project vectors to a lower dimensional space of the top k principal components.
Usage
ft_pca(
  x,
  input_col = NULL,
  output_col = NULL,
  k = NULL,
  uid = random_string("pca_"),
  ...
)

ml_pca(x, features = tbl_vars(x), k = length(features), pc_prefix = "PC", ...)
Arguments
Arguments | Description |
---|---|
x | A spark_connection, ml_pipeline, or a tbl_spark. |
input_col | The name of the input column. |
output_col | The name of the output column. |
k | The number of principal components. |
uid | A character string used to uniquely identify the feature transformer. |
... | Optional arguments; currently unused. |
features | The columns to use in the principal components analysis. Defaults to all columns in x. |
pc_prefix | Length-one character vector used to prepend names of components. |
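As a minimal sketch of how input_col, output_col, and k fit together, assuming the sc connection and iris_tbl table from the example at the top of this page (the "features" and "pca_features" column names are only illustrative):

library(sparklyr)
library(dplyr)

# Assemble the four measurement columns into a single vector column,
# then project that vector onto the top two principal components.
iris_pca_tbl <- iris_tbl %>%
  ft_vector_assembler(
    input_cols = c("Sepal_Length", "Sepal_Width", "Petal_Length", "Petal_Width"),
    output_col = "features"
  ) %>%
  ft_pca(input_col = "features", output_col = "pca_features", k = 2)

iris_pca_tbl %>% select(pca_features)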
Details
In the case where x is a tbl_spark, the estimator fits against x to obtain a transformer, which is then immediately used to transform x, returning a tbl_spark. ml_pca() is a wrapper around ft_pca() that returns an ml_model.
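A short sketch of that behaviour, assuming the iris_tbl table from the opening example and the iris_pca_tbl built in the sketch above:

# ft_pca() on a tbl_spark fits the estimator and transforms right away,
# so the result is another Spark table rather than a model object.
class(iris_pca_tbl)   # a tbl_spark

# ml_pca() wraps ft_pca() and returns an ml_model; printing it gives the
# explained variance and rotation shown at the top of this page.
pca_model <- iris_tbl %>%
  dplyr::select(-Species) %>%
  ml_pca(k = 2)
class(pca_model)      # an ml_model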
Value
The object returned depends on the class of x.

- spark_connection: When x is a spark_connection, the function returns an ml_transformer, an ml_estimator, or one of their subclasses. The object contains a pointer to a Spark Transformer or Estimator object and can be used to compose Pipeline objects.
- ml_pipeline: When x is an ml_pipeline, the function returns an ml_pipeline with the transformer or estimator appended to the pipeline (see the sketch under Examples below).
- tbl_spark: When x is a tbl_spark, a transformer is constructed then immediately applied to the input tbl_spark, returning a tbl_spark.
Examples
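A minimal sketch of the spark_connection and ml_pipeline cases described under Value, assuming the sc connection and iris_tbl table created at the top of this page (stage and column names are illustrative):

library(sparklyr)

# x is a spark_connection: ft_pca() returns an (unfitted) estimator that
# can be composed into pipelines later.
pca_estimator <- ft_pca(sc, input_col = "features", output_col = "pca_features", k = 2)

# x is an ml_pipeline: the PCA stage is appended to the pipeline.
pipeline <- ml_pipeline(sc) %>%
  ft_vector_assembler(
    input_cols = c("Sepal_Length", "Sepal_Width", "Petal_Length", "Petal_Width"),
    output_col = "features"
  ) %>%
  ft_pca(input_col = "features", output_col = "pca_features", k = 2)

# The pipeline can then be fit to a tbl_spark and used to transform it.
fitted <- ml_fit(pipeline, iris_tbl)
ml_transform(fitted, iris_tbl)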
See Also
See https://spark.apache.org/docs/latest/ml-features.html for more information on the set of transformations available for DataFrame columns in Spark.

Other feature transformers: ft_binarizer(), ft_bucketizer(), ft_chisq_selector(), ft_count_vectorizer(), ft_dct(), ft_elementwise_product(), ft_feature_hasher(), ft_hashing_tf(), ft_idf(), ft_imputer(), ft_index_to_string(), ft_interaction(), ft_lsh, ft_max_abs_scaler(), ft_min_max_scaler(), ft_ngram(), ft_normalizer(), ft_one_hot_encoder_estimator(), ft_one_hot_encoder(), ft_polynomial_expansion(), ft_quantile_discretizer(), ft_r_formula(), ft_regex_tokenizer(), ft_robust_scaler(), ft_sql_transformer(), ft_standard_scaler(), ft_stop_words_remover(), ft_string_indexer(), ft_tokenizer(), ft_vector_assembler(), ft_vector_indexer(), ft_vector_slicer(), ft_word2vec()