Feature Transformation - HashingTF (Transformer)
ft_hashing_tf
Description
Maps a sequence of terms to their term frequencies using the hashing trick.
Usage
ft_hashing_tf(
  x,
  input_col = NULL,
  output_col = NULL,
  binary = FALSE,
  num_features = 2^18,
  uid = random_string("hashing_tf_"),
  ...
)
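A minimal usage sketch, assuming a local Spark connection and an illustrative table with a single text column (the table name, column names, and example sentences below are not part of this page):

library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")

# Illustrative data: one character column of raw text (assumed)
sentences_tbl <- copy_to(
  sc,
  data.frame(text = c("spark is fast", "the hashing trick maps terms to indices")),
  name = "sentences_example"
)

# Tokenize, then map each term sequence to a hashed term-frequency vector
sentences_tbl %>%
  ft_tokenizer(input_col = "text", output_col = "words") %>%
  ft_hashing_tf(input_col = "words", output_col = "features",
                num_features = 2^10)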
Arguments
Argument | Description
---|---
x | A spark_connection, ml_pipeline, or a tbl_spark.
input_col | The name of the input column.
output_col | The name of the output column.
binary | Binary toggle to control term frequency counts. If TRUE, all non-zero counts are set to 1. This is useful for discrete probabilistic models that model binary events rather than integer counts; see the sketch after this table. (default = FALSE)
num_features | Number of features. Should be greater than 0. (default = 2^18)
uid | A character string used to uniquely identify the feature transformer.
... | Optional arguments; currently unused.
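To illustrate the binary argument, the same hypothetical pipeline with binary = TRUE caps every non-zero term count at 1 (a sketch reusing the assumed sentences_tbl from the previous example):

sentences_tbl %>%
  ft_tokenizer(input_col = "text", output_col = "words") %>%
  ft_hashing_tf(input_col = "words", output_col = "features",
                binary = TRUE, num_features = 2^10)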
Value
The object returned depends on the class of x
. If it is a spark_connection
, the function returns a ml_estimator
or a ml_estimator
object. If it is a ml_pipeline
, it will return a pipeline with the transformer or estimator appended to it. If a tbl_spark
, it will return a tbl_spark
with the transformation applied to it.
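A short sketch of how the return type follows the class of x (object names, and the assumed sc and sentences_tbl from the earlier examples, are illustrative):

# spark_connection: returns a reusable transformer object
hashing_tf <- ft_hashing_tf(sc, input_col = "words", output_col = "features")

# ml_pipeline: returns the pipeline with the stage appended
pipeline <- ml_pipeline(sc) %>%
  ft_tokenizer(input_col = "text", output_col = "words") %>%
  ft_hashing_tf(input_col = "words", output_col = "features")

# tbl_spark: returns a tbl_spark with the transformation applied
tokenized_tbl <- ft_tokenizer(sentences_tbl, input_col = "text", output_col = "words")
transformed_tbl <- ft_hashing_tf(tokenized_tbl, input_col = "words", output_col = "features")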
See Also
Other feature transformers: ft_binarizer(), ft_bucketizer(), ft_chisq_selector(), ft_count_vectorizer(), ft_dct(), ft_elementwise_product(), ft_feature_hasher(), ft_idf(), ft_imputer(), ft_index_to_string(), ft_interaction(), ft_lsh, ft_max_abs_scaler(), ft_min_max_scaler(), ft_ngram(), ft_normalizer(), ft_one_hot_encoder(), ft_one_hot_encoder_estimator(), ft_pca(), ft_polynomial_expansion(), ft_quantile_discretizer(), ft_r_formula(), ft_regex_tokenizer(), ft_robust_scaler(), ft_sql_transformer(), ft_standard_scaler(), ft_stop_words_remover(), ft_string_indexer(), ft_tokenizer(), ft_vector_assembler(), ft_vector_indexer(), ft_vector_slicer(), ft_word2vec()