Spark ML – Bisecting K-Means Clustering

Source: R/ml_clustering_bisecting_kmeans.R

ml_bisecting_kmeans
Description
A bisecting k-means algorithm based on the paper “A comparison of document clustering techniques” by Steinbach, Karypis, and Kumar, with modification to fit Spark. The algorithm starts from a single cluster that contains all points. Iteratively it finds divisible clusters on the bottom level and bisects each of them using k-means, until there are k leaf clusters in total or no leaf clusters are divisible. The bisecting steps of clusters on the same level are grouped together to increase parallelism. If bisecting all divisible clusters on the bottom level would result in more than k leaf clusters, larger clusters get higher priority.
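As a rough, local illustration of this top-down strategy, the sketch below re-creates the bisecting loop with base R's stats::kmeans(). It is only a reading aid for the description above (the function name and the simplified "split the largest leaf" rule are invented for the illustration), not Spark's distributed implementation.

# Conceptual illustration only: a local, simplified bisecting loop built on
# stats::kmeans(). The name bisecting_kmeans_sketch is made up for this
# illustration and is not part of sparklyr or Spark.
bisecting_kmeans_sketch <- function(data, k) {
  clusters <- list(seq_len(nrow(data)))  # start with one cluster holding all rows
  while (length(clusters) < k) {
    sizes <- vapply(clusters, length, integer(1))
    idx <- which.max(sizes)              # simplified priority rule: split the largest leaf
    rows <- clusters[[idx]]
    if (length(rows) < 2) break          # no divisible leaf clusters remain
    halves <- stats::kmeans(data[rows, ], centers = 2)
    clusters[[idx]] <- rows[halves$cluster == 1]
    clusters[[length(clusters) + 1]] <- rows[halves$cluster == 2]
  }
  clusters
}

str(bisecting_kmeans_sketch(iris[, 1:4], k = 4))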
Usage
ml_bisecting_kmeans(
  x,
  formula = NULL,
  k = 4,
  max_iter = 20,
  seed = NULL,
  min_divisible_cluster_size = 1,
  features_col = "features",
  prediction_col = "prediction",
  uid = random_string("bisecting_bisecting_kmeans_"),
  ...
)
Arguments
Arguments | Description |
---|---|
x | A spark_connection, ml_pipeline, or a tbl_spark. |
formula | Used when x is a tbl_spark. R formula as a character string or a formula. This is used to transform the input dataframe before fitting; see ft_r_formula for details. |
k | The number of clusters to create. |
max_iter | The maximum number of iterations to use. |
seed | A random seed. Set this value if you need your results to be reproducible across repeated calls. |
min_divisible_cluster_size | The minimum number of points (if greater than or equal to 1.0) or the minimum proportion of points (if less than 1.0) of a divisible cluster (default: 1.0). |
features_col | Features column name, as a length-one character vector. The column should be a single vector column of numeric values. Usually this column is output by ft_r_formula. |
prediction_col | Prediction column name. |
uid | A character string used to uniquely identify the ML estimator. |
… | Optional arguments; see Details. |
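As a sketch of how the main tuning arguments combine in a single call (assuming the sc connection and the iris_tbl table created in the Examples section below already exist):

# Sketch only; assumes `sc` and `iris_tbl` from the Examples section below.
model <- ml_bisecting_kmeans(
  iris_tbl,
  formula = Species ~ .,           # features: every column except Species
  k = 3,                           # target number of leaf clusters
  max_iter = 50,                   # maximum k-means iterations per bisection
  seed = 1234,                     # fix the seed for reproducible splits
  min_divisible_cluster_size = 5   # clusters with fewer than 5 points are not split
)
model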
Value
The object returned depends on the class of x.

spark_connection: When x is a spark_connection, the function returns an instance of a ml_estimator object. The object contains a pointer to a Spark Estimator object and can be used to compose Pipeline objects.

ml_pipeline: When x is a ml_pipeline, the function returns a ml_pipeline with the clustering estimator appended to the pipeline.

tbl_spark: When x is a tbl_spark, an estimator is constructed then immediately fit with the input tbl_spark, returning a clustering model.

tbl_spark, with formula or features specified: When formula is specified, the input tbl_spark is first transformed using a RFormula transformer before being fit by the estimator. The object returned in this case is a ml_model which is a wrapper of a ml_pipeline_model. This signature does not apply to ml_lda().
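As a sketch of the spark_connection and ml_pipeline signatures described above (again assuming the sc connection and iris_tbl from the Examples section below, with dplyr loaded), an explicit pipeline might look like:

# Sketch only; assumes `sc` and `iris_tbl` from the Examples section below.
pipeline <- ml_pipeline(sc) %>%
  ft_r_formula(Species ~ .) %>%            # produces the `features` column
  ml_bisecting_kmeans(k = 4, seed = 1234)  # appends the clustering estimator

fitted <- ml_fit(pipeline, iris_tbl)       # returns an ml_pipeline_model
ml_transform(fitted, iris_tbl) %>%         # adds a `prediction` column
  count(prediction)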
Examples
library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")
iris_tbl <- sdf_copy_to(sc, iris, name = "iris_tbl", overwrite = TRUE)

iris_tbl %>%
  select(-Species) %>%
  ml_bisecting_kmeans(k = 4, Species ~ .)
#> K-means clustering with 4 clusters
#>
#> Cluster centers:
#> Sepal_Length Sepal_Width Petal_Length Petal_Width
#> 1 4.733333 3.158333 1.391667 0.2000000
#> 2 5.231034 3.544828 1.700000 0.3655172
#> 3 5.947458 2.766102 4.454237 1.4542373
#> 4 6.850000 3.073684 5.742105 2.0710526
#>
#> Within Set Sum of Squared Errors = 77.38099
See Also
See https://spark.apache.org/docs/latest/ml-clustering.html for more information on the set of clustering algorithms.

Other ml clustering algorithms: ml_gaussian_mixture(), ml_kmeans(), ml_lda()