Calculates word2vec dimension estimates

## Usage

word_dims_newtext(lda_model, text, n_iter = 20)

word_dims(text, n = 10, n_iter = 20)

## Arguments

- `lda_model` A pretrained LDA model from text2vec.
- `text` Input data. Should be a character vector.
- `n` Integer, determines the number of latent topics.
- `n_iter` Integer, number of sampling iterations.
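Before LDA can be fit, the character vector passed as `text` has to be tokenized and counted into a document-term matrix. A minimal base-R sketch of that step (`make_dtm` is a hypothetical helper for illustration, not the package's internals, which rely on text2vec's tokenization):

```r
# Build a simple document-term count matrix from a character vector.
# Illustrative stand-in only: the real tokenizer may handle case,
# punctuation, and hashtags differently.
make_dtm <- function(text) {
  tokens <- strsplit(tolower(text), "[^a-z0-9']+")
  vocab <- sort(unique(unlist(tokens)))
  vocab <- vocab[nzchar(vocab)]
  # One row per document, one column per vocabulary term.
  dtm <- t(vapply(
    tokens,
    function(tok) tabulate(match(tok, vocab), nbins = length(vocab)),
    integer(length(vocab))
  ))
  colnames(dtm) <- vocab
  dtm
}

dtm <- make_dtm(c("make america great", "great again"))
dtm
```

An LDA model would then be fit on a matrix of this shape, with `n` topics and `n_iter` sampling iterations.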

## Value

A tibble data frame of word dimension estimates, with one row per input document and one column per latent dimension.

## Examples

trump_tweets <- c(
  "#FraudNewsCNN #FNN https://t.co/WYUnHjjUjg",
  "TODAY WE MAKE AMERICA GREAT AGAIN!",
  paste("Why would Kim Jong-un insult me by calling me \"old,\" when I would",
    "NEVER call him \"short and fat?\" Oh well, I try so hard to be his",
    "friend - and maybe someday that will happen!"),
  paste("Such a beautiful and important evening! The forgotten man and woman",
    "will never be forgotten again. We will all come together as never before"),
  paste("North Korean Leader Kim Jong Un just stated that the \"Nuclear",
    "Button is on his desk at all times.\" Will someone from his depleted and",
    "food starved regime please inform him that I too have a Nuclear Button,",
    "but it is a much bigger &amp; more powerful one than his, and my Button",
    "works!")
)
word_dims(trump_tweets)
#> INFO [2019-09-03 11:36:58] iter 10 loglikelihood = -73.730
#> INFO [2019-09-03 11:36:58] iter 20 loglikelihood = -73.776
#> INFO [2019-09-03 11:36:58] early stopping at 20 iteration
#>           w1         w2         w3        w4        w5         w6        w7
#> 1 0.00000000 0.00000000 0.00000000 0.0000000 0.0000000 0.00000000 0.0000000
#> 2 0.50000000 0.00000000 0.00000000 0.5000000 0.0000000 0.00000000 0.0000000
#> 3 0.05882353 0.05882353 0.11764706 0.1176471 0.1176471 0.11764706 0.1176471
#> 4 0.28571429 0.14285714 0.07142857 0.0000000 0.0000000 0.07142857 0.1428571
#> 5 0.08333333 0.04166667 0.16666667 0.1250000 0.0000000 0.08333333 0.1250000
#>           w8         w9        w10
#> 1 0.00000000 0.00000000 0.00000000
#> 2 0.00000000 0.00000000 0.00000000
#> 3 0.23529412 0.00000000 0.05882353
#> 4 0.21428571 0.07142857 0.00000000
#> 5 0.08333333 0.20833333 0.08333333