library(sparklyr)
sc <- spark_connect(master = "local")
iris_tbl <- sdf_copy_to(sc, iris, name = "iris_tbl", overwrite = TRUE)
ml_kmeans(iris_tbl, Species ~ .)
#> K-means clustering with 2 clusters
#>
#> Cluster centers:
#> Sepal_Length Sepal_Width Petal_Length Petal_Width
#> 1 6.301031 2.886598 4.958763 1.695876
#> 2 5.005660 3.369811 1.560377 0.290566
#>
#> Within Set Sum of Squared Errors = not computed.
Spark ML – K-Means Clustering
Source: R/ml_clustering_kmeans.R, R/ml_model_kmeans.R
Description
K-means clustering with support for k-means|| initialization proposed by Bahmani et al. Using ml_kmeans()
with the formula interface requires Spark 2.0+.
Usage
ml_kmeans(
  x,
  formula = NULL,
  k = 2,
  max_iter = 20,
  tol = 1e-04,
  init_steps = 2,
  init_mode = "k-means||",
  seed = NULL,
  features_col = "features",
  prediction_col = "prediction",
  uid = random_string("kmeans_"),
  ...
)

ml_compute_cost(model, dataset)

ml_compute_silhouette_measure(
  model,
  dataset,
  distance_measure = c("squaredEuclidean", "cosine")
)
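As a sketch of the pipeline signature (the pipeline object and variable names below are illustrative, not part of the API), ml_kmeans() can be appended to an ml_pipeline built from ft_r_formula and then fit explicitly. This assumes the sc connection and iris_tbl table created in the example at the top of this page:

# Compose an estimator pipeline: RFormula transformer followed by k-means
kmeans_pipeline <- ml_pipeline(sc) %>%
  ft_r_formula(~ Petal_Length + Petal_Width) %>%
  ml_kmeans(k = 3, init_mode = "k-means||", max_iter = 20, seed = 123)

# Fitting against the Spark DataFrame returns a fitted pipeline model
kmeans_fitted <- ml_fit(kmeans_pipeline, iris_tbl)

# The fitted pipeline can then be used to assign clusters
ml_transform(kmeans_fitted, iris_tbl)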
Arguments
Argument | Description
---|---
x | A spark_connection, ml_pipeline, or a tbl_spark.
formula | Used when x is a tbl_spark. R formula as a character string or a formula. This is used to transform the input dataframe before fitting, see ft_r_formula for details.
k | The number of clusters to create.
max_iter | The maximum number of iterations to use.
tol | Param for the convergence tolerance for iterative algorithms.
init_steps | Number of steps for the k-means|| initialization mode. This is an advanced setting; the default of 2 is almost always enough. Must be > 0.
init_mode | Initialization algorithm. This can be either "random" to choose random points as initial cluster centers, or "k-means||" to use a parallel variant of k-means++ (Bahmani et al.). Default: "k-means||".
seed | A random seed. Set this value if you need your results to be reproducible across repeated calls.
features_col | Features column name, as a length-one character vector. The column should be a single vector column of numeric values. Usually this column is output by ft_r_formula; see the sketch after this table for one way to assemble it manually.
prediction_col | Prediction column name.
uid | A character string used to uniquely identify the ML estimator.
… | Optional arguments; see Details.
model | A fitted K-means model returned by ml_kmeans().
dataset | Dataset on which to calculate the K-means cost or Silhouette measure.
distance_measure | Distance measure to apply when computing the Silhouette measure.
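For illustration only (an assumed workflow, not the only way to supply features), a features column can also be assembled manually with ft_vector_assembler and passed via features_col, again using the iris_tbl from the example at the top of this page:

# Assemble the four measurement columns into a single vector column
iris_assembled <- ft_vector_assembler(
  iris_tbl,
  input_cols = c("Sepal_Length", "Sepal_Width", "Petal_Length", "Petal_Width"),
  output_col = "assembled_features"
)

# Fit directly on the pre-assembled column instead of using a formula
ml_kmeans(iris_assembled, k = 3, features_col = "assembled_features", seed = 123)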
Value

The object returned depends on the class of x.

spark_connection: When x is a spark_connection, the function returns an instance of a ml_estimator object. The object contains a pointer to a Spark Estimator object and can be used to compose Pipeline objects.

ml_pipeline: When x is a ml_pipeline, the function returns a ml_pipeline with the clustering estimator appended to the pipeline.

tbl_spark: When x is a tbl_spark, an estimator is constructed then immediately fit with the input tbl_spark, returning a clustering model.

tbl_spark, with formula or features specified: When formula is specified, the input tbl_spark is first transformed using a RFormula transformer before being fit by the estimator. The object returned in this case is a ml_model, which is a wrapper of a ml_pipeline_model. This signature does not apply to ml_lda().

ml_compute_cost() returns the K-means cost (sum of squared distances of points to their nearest center) for the model on the given data.

ml_compute_silhouette_measure() returns the Silhouette measure of the clustering on the given data.
Examples
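The following sketch fits a model similar to the one shown at the top of the page and evaluates it with the helpers documented above. It assumes ml_predict() from sparklyr is available for scoring the fitted model; exact cost and silhouette values depend on your Spark version, data, and seed.

library(sparklyr)
sc <- spark_connect(master = "local")
iris_tbl <- sdf_copy_to(sc, iris, name = "iris_tbl", overwrite = TRUE)

# Fit k-means on the petal measurements
model <- ml_kmeans(iris_tbl, ~ Petal_Length + Petal_Width, k = 3, seed = 123)

# Assign each row to its nearest cluster
ml_predict(model, iris_tbl)

# Evaluate the clustering: total within-cluster cost and Silhouette measure
ml_compute_cost(model, iris_tbl)
ml_compute_silhouette_measure(model, iris_tbl, distance_measure = "squaredEuclidean")

spark_disconnect(sc)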
See Also
See https://spark.apache.org/docs/latest/ml-clustering.html for more information on the set of clustering algorithms.

Other ml clustering algorithms: ml_bisecting_kmeans(), ml_gaussian_mixture(), ml_lda()