diff --git a/apple-touch-icon-120x120.png b/apple-touch-icon-120x120.png index d05c9ea..aecb85b 100644 Binary files a/apple-touch-icon-120x120.png and b/apple-touch-icon-120x120.png differ diff --git a/apple-touch-icon-152x152.png b/apple-touch-icon-152x152.png index ebe48a1..c9d603e 100644 Binary files a/apple-touch-icon-152x152.png and b/apple-touch-icon-152x152.png differ diff --git a/apple-touch-icon-180x180.png b/apple-touch-icon-180x180.png index b555bf0..e2b65fe 100644 Binary files a/apple-touch-icon-180x180.png and b/apple-touch-icon-180x180.png differ diff --git a/apple-touch-icon-60x60.png b/apple-touch-icon-60x60.png index 9b03d37..d9c2bfc 100644 Binary files a/apple-touch-icon-60x60.png and b/apple-touch-icon-60x60.png differ diff --git a/apple-touch-icon-76x76.png b/apple-touch-icon-76x76.png index 2e732b2..e17a595 100644 Binary files a/apple-touch-icon-76x76.png and b/apple-touch-icon-76x76.png differ diff --git a/apple-touch-icon.png b/apple-touch-icon.png index 96cd4ca..86415ae 100644 Binary files a/apple-touch-icon.png and b/apple-touch-icon.png differ diff --git a/favicon-16x16.png b/favicon-16x16.png index 20202f2..574aa43 100644 Binary files a/favicon-16x16.png and b/favicon-16x16.png differ diff --git a/favicon-32x32.png b/favicon-32x32.png index 439807a..aed31ab 100644 Binary files a/favicon-32x32.png and b/favicon-32x32.png differ diff --git a/index.html b/index.html index 67cee8a..40e7459 100644 --- a/index.html +++ b/index.html @@ -133,13 +133,6 @@
pip install transformers torch
See Guidance for GPU Acceleration for installation instructions if you have an NVIDIA GPU device on your PC and want to use it to accelerate the pipeline.
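For example, for CUDA 12.1, the PyTorch installation command (as given in the GPU guidance later on this page) is:
pip install torch --index-url https://download.pytorch.org/whl/cu121
(The exact index URL depends on your CUDA version; see https://pytorch.org/get-started/locally/ for the command matching your system.)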
-Alternative approach (NOT suggested): Besides the pip/conda installation in the Conda Environment, you might instead create and use a Virtual Environment (see R code below with the reticulate
package), but then you need to specify the Python interpreter as “~/.virtualenvs/r-reticulate/Scripts/python.exe” in RStudio.
-## DON'T RUN THIS UNLESS YOU PREFER VIRTUAL ENVIRONMENT
-library(reticulate)
-# install_python()
-virtualenv_create()
-virtualenv_install(packages=c("transformers", "torch"))
-Use BERT_download() to load BERT models. Model files are permanently saved to your local folder “%USERPROFILE%/.cache/huggingface”. A full list of BERT-family models are available at Hugging Face.
+Use BERT_download() to download BERT models. Model files are saved to your local folder “%USERPROFILE%/.cache/huggingface”. A full list of BERT models is available at Hugging Face.
Use BERT_info() and BERT_vocab() to find detailed information about BERT models.
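A minimal sketch of this workflow, using the model names from the package's own examples:
## Download models, then inspect their information and vocabulary
library(FMAT)
models = c("bert-base-uncased", "bert-base-cased")
BERT_download(models)  # saved to "%USERPROFILE%/.cache/huggingface"
BERT_info(models)      # model file size, vocab size, dims, mask token
BERT_vocab(models, c("bruce", "Bruce"))  # check if words are in the vocabulary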
+library(FMAT) models = c( "bert-base-uncased",
@@ -425,7 +419,7 @@
BERT Models
+BERT_info(models)
model size vocab dims mask
<fctr> <char> <int> <int> <char>
diff --git a/news/index.html index 52b8d60..1506923 100644 --- a/news/index.html +++ b/news/index.html @@ -45,7 +45,7 @@
-FMAT 2024.5
+FMAT 2024.5
CRAN release: 2024-05-19
- Added BERT_info().
- Added add.tokens and add.method parameters for BERT_vocab() and FMAT_run(): an experimental functionality to add new tokens (e.g., out-of-vocabulary words, compound words, or even phrases) as [MASK] options. Validation is still needed for this novel practice (one of my ongoing projects), so please use it at your own risk until the publication of my validation work (a usage sketch follows below).
- All functions except BERT_download() now import local model files only, without automatically downloading models. Users must first use BERT_download() to download models.
- Deprecating FMAT_load(): Better to use FMAT_run() directly.
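A usage sketch of the new parameters, taken from the BERT_vocab() examples in this package's documentation:
## Experimental: add out-of-vocabulary tokens as [MASK] options
models = c("bert-base-uncased", "bert-base-cased")
BERT_vocab(models, 2020:2025)                   # some are out-of-vocabulary
BERT_vocab(models, 2020:2025, add.tokens=TRUE)  # temporarily add them
BERT_vocab(models, c("individualism", "artificial intelligence"), add.tokens=TRUE)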
diff --git a/pkgdown.yml b/pkgdown.yml index a17caf9..452f736 100644 --- a/pkgdown.yml +++ b/pkgdown.yml @@ -2,7 +2,7 @@ pandoc: 3.1.11 pkgdown: 2.0.9 pkgdown_sha: ~ articles: {} -last_built: 2024-05-19T05:15Z +last_built: 2024-05-20T14:47Z urls: reference: https://psychbruce.github.io/FMAT/reference article: https://psychbruce.github.io/FMAT/articles diff --git a/search.json b/search.json index 5c1d711..079b90e 100644 --- a/search.json +++ b/search.json @@ -1 +1 @@ -[{"path":"https://psychbruce.github.io/FMAT/authors.html","id":null,"dir":"","previous_headings":"","what":"Authors","title":"Authors and Citation","text":"Han-Wu-Shuang Bao. Author, maintainer.","code":""},{"path":"https://psychbruce.github.io/FMAT/authors.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"Authors and Citation","text":"Bao H (2024). FMAT: Fill-Mask Association Test. R package version 2024.5, https://psychbruce.github.io/FMAT/.","code":"@Manual{, title = {FMAT: The Fill-Mask Association Test}, author = {Han-Wu-Shuang Bao}, year = {2024}, note = {R package version 2024.5}, url = {https://psychbruce.github.io/FMAT/}, }"},{"path":"https://psychbruce.github.io/FMAT/index.html","id":"fmat-","dir":"","previous_headings":"","what":"The Fill-Mask Association Test","title":"The Fill-Mask Association Test","text":"😷 Fill-Mask Association Test (掩码填空联系测验). Fill-Mask Association Test (FMAT) integrative probability-based method using BERT Models measure conceptual associations (e.g., attitudes, biases, stereotypes, social norms, cultural values) propositions natural language (Bao, 2024, JPSP). ⚠️ Please update package version ≥ 2024.5 faster robust functionality.","code":""},{"path":"https://psychbruce.github.io/FMAT/index.html","id":"author","dir":"","previous_headings":"","what":"Author","title":"The Fill-Mask Association Test","text":"Han-Wu-Shuang (Bruce) Bao 包寒吴霜 📬 baohws@foxmail.com 📋 psychbruce.github.io","code":""},{"path":"https://psychbruce.github.io/FMAT/index.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"The Fill-Mask Association Test","text":"Note: original citation. Please refer information library(FMAT) APA-7 format version installed. Bao, H.-W.-S. (2024). Fill-Mask Association Test (FMAT): Measuring propositions natural language. Journal Personality Social Psychology. Advance online publication. DOI: 10.1037/pspa0000396 Bao, H.-W.-S., & Gries, P. (2024). Intersectional race–gender stereotypes natural language. British Journal Social Psychology. Advance online publication. 
https://doi.org/10.1111/bjso.12748","code":""},{"path":"https://psychbruce.github.io/FMAT/index.html","id":"installation","dir":"","previous_headings":"","what":"Installation","title":"The Fill-Mask Association Test","text":"use FMAT, R package FMAT two Python packages (transformers torch) need installed.","code":""},{"path":"https://psychbruce.github.io/FMAT/index.html","id":"id_1-r-package","dir":"","previous_headings":"Installation","what":"(1) R Package","title":"The Fill-Mask Association Test","text":"","code":"## Method 1: Install from CRAN install.packages(\"FMAT\") ## Method 2: Install from GitHub install.packages(\"devtools\") devtools::install_github(\"psychbruce/FMAT\", force=TRUE)"},{"path":"https://psychbruce.github.io/FMAT/index.html","id":"id_2-python-environment-and-packages","dir":"","previous_headings":"Installation","what":"(2) Python Environment and Packages","title":"The Fill-Mask Association Test","text":"Install Anaconda (recommended package manager automatically installs Python, Python IDEs like Spyder, large list necessary Python package dependencies). Specify Python interpreter RStudio. RStudio → Tools → Global/Project Options → Python → Select → Conda Environments → Choose “…/Anaconda3/python.exe” Install “transformers” “torch” Python packages. (Windows Command / Anaconda Prompt / RStudio Terminal) See Guidance GPU Acceleration installation guidance NVIDIA GPU device PC want use GPU accelerate pipeline. Alternative approach (suggested): Besides pip/conda installation Conda Environment, might instead create use Virtual Environment (see R code reticulate package), need specify Python interpreter “~/.virtualenvs/r-reticulate/Scripts/python.exe” RStudio.","code":"pip install transformers torch ## DON'T RUN THIS UNLESS YOU PREFER VIRTUAL ENVIRONMENT library(reticulate) # install_python() virtualenv_create() virtualenv_install(packages=c(\"transformers\", \"torch\"))"},{"path":[]},{"path":"https://psychbruce.github.io/FMAT/index.html","id":"step-1-download-bert-models","dir":"","previous_headings":"Guidance for FMAT","what":"Step 1: Download BERT Models","title":"The Fill-Mask Association Test","text":"Use BERT_download() load BERT models. Model files permanently saved local folder “%USERPROFILE%/.cache/huggingface”. full list BERT-family models available Hugging Face.","code":""},{"path":"https://psychbruce.github.io/FMAT/index.html","id":"step-2-design-fmat-queries","dir":"","previous_headings":"Guidance for FMAT","what":"Step 2: Design FMAT Queries","title":"The Fill-Mask Association Test","text":"Design queries conceptually represent constructs measure (see Bao, 2024, JPSP design queries). Use FMAT_query() /FMAT_query_bind() prepare data.table queries.","code":""},{"path":"https://psychbruce.github.io/FMAT/index.html","id":"step-3-run-fmat","dir":"","previous_headings":"Guidance for FMAT","what":"Step 3: Run FMAT","title":"The Fill-Mask Association Test","text":"Use FMAT_run() get raw data (probability estimates) analysis. Several steps preprocessing included function easier use (see FMAT_run() details). BERT variants usingBERT_download()
now import local model files only, without automatically downloading models. Users must first use BERT_download()
to download models.rather [MASK] mask token, input query automatically modified users can always use [MASK] query design. BERT variants, special prefix characters \\u0120 \\u2581 automatically added match whole words (rather subwords) [MASK].","code":""},{"path":"https://psychbruce.github.io/FMAT/index.html","id":"notes","dir":"","previous_headings":"Guidance for FMAT","what":"Notes","title":"The Fill-Mask Association Test","text":"Improvements ongoing, especially adaptation diverse (less popular) BERT models. find bugs problems using functions, please report GitHub Issues send email.","code":""},{"path":"https://psychbruce.github.io/FMAT/index.html","id":"guidance-for-gpu-acceleration","dir":"","previous_headings":"","what":"Guidance for GPU Acceleration","title":"The Fill-Mask Association Test","text":"default, FMAT package uses CPU enable functionality users. advanced users want accelerate pipeline GPU, FMAT_run() function now supports using GPU device, 3x faster CPU. Test results (developer’s computer, depending BERT model size): CPU (Intel 13th-Gen i7-1355U): 500~1000 queries/min GPU (NVIDIA GeForce RTX 2050): 1500~3000 queries/min Checklist: Ensure NVIDIA GPU device (e.g., GeForce RTX Series) NVIDIA GPU driver installed system. Find guidance installation command https://pytorch.org/get-started/locally/. CUDA available Windows Linux, MacOS. installed version torch without CUDA support, please first uninstall (command: pip uninstall torch) install suggested one. may also install corresponding version CUDA Toolkit (e.g., torch version supporting CUDA 12.1, version CUDA Toolkit 12.1 may also installed). Example code installing PyTorch CUDA support: (Windows Command / Anaconda Prompt / RStudio Terminal)","code":"pip install torch --index-url https://download.pytorch.org/whl/cu121"},{"path":"https://psychbruce.github.io/FMAT/index.html","id":"bert-models","dir":"","previous_headings":"","what":"BERT Models","title":"The Fill-Mask Association Test","text":"reliability validity following 12 representative BERT models established research articles, future work needed examine performance models. (model name Hugging Face - downloaded model file size) bert-base-uncased (420 MB) bert-base-cased (416 MB) bert-large-uncased (1283 MB) bert-large-cased (1277 MB) distilbert-base-uncased (256 MB) distilbert-base-cased (251 MB) albert-base-v1 (45 MB) albert-base-v2 (45 MB) roberta-base (476 MB) distilroberta-base (316 MB) vinai/bertweet-base (517 MB) vinai/bertweet-large (1356 MB) new BERT, references can helpful: Fill-Mask? [HuggingFace] Explorable BERT [HuggingFace] BERT Model Documentation [HuggingFace] BERT Explained Breaking BERT Illustrated BERT Visual Guide BERT (Tested 2024-05-16 developer’s computer: HP Probook 450 G10 Notebook PC)","code":"library(FMAT) models = c( \"bert-base-uncased\", \"bert-base-cased\", \"bert-large-uncased\", \"bert-large-cased\", \"distilbert-base-uncased\", \"distilbert-base-cased\", \"albert-base-v1\", \"albert-base-v2\", \"roberta-base\", \"distilroberta-base\", \"vinai/bertweet-base\", \"vinai/bertweet-large\" ) BERT_download(models) ℹ Device Info: R Packages: FMAT 2024.5 reticulate 1.36.1 Python Packages: transformers 4.40.2 torch 2.2.1+cu121 NVIDIA GPU CUDA Support: CUDA Enabled: TRUE CUDA Version: 12.1 GPU (Device): NVIDIA GeForce RTX 2050 ── Downloading model \"bert-base-uncased\" ────────────────────────────────────────── → (1) Downloading configuration... config.json: 100%|██████████| 570/570 [00:00<00:00, 114kB/s] → (2) Downloading tokenizer... 
tokenizer_config.json: 100%|██████████| 48.0/48.0 [00:00<00:00, 23.9kB/s] vocab.txt: 100%|██████████| 232k/232k [00:00<00:00, 1.50MB/s] tokenizer.json: 100%|██████████| 466k/466k [00:00<00:00, 1.98MB/s] → (3) Downloading model... model.safetensors: 100%|██████████| 440M/440M [00:36<00:00, 12.1MB/s] ✔ Successfully downloaded model \"bert-base-uncased\" ── Downloading model \"bert-base-cased\" ──────────────────────────────────────────── → (1) Downloading configuration... config.json: 100%|██████████| 570/570 [00:00<00:00, 63.3kB/s] → (2) Downloading tokenizer... tokenizer_config.json: 100%|██████████| 49.0/49.0 [00:00<00:00, 8.66kB/s] vocab.txt: 100%|██████████| 213k/213k [00:00<00:00, 1.39MB/s] tokenizer.json: 100%|██████████| 436k/436k [00:00<00:00, 10.1MB/s] → (3) Downloading model... model.safetensors: 100%|██████████| 436M/436M [00:37<00:00, 11.6MB/s] ✔ Successfully downloaded model \"bert-base-cased\" ── Downloading model \"bert-large-uncased\" ───────────────────────────────────────── → (1) Downloading configuration... config.json: 100%|██████████| 571/571 [00:00<00:00, 268kB/s] → (2) Downloading tokenizer... tokenizer_config.json: 100%|██████████| 48.0/48.0 [00:00<00:00, 12.0kB/s] vocab.txt: 100%|██████████| 232k/232k [00:00<00:00, 1.50MB/s] tokenizer.json: 100%|██████████| 466k/466k [00:00<00:00, 1.99MB/s] → (3) Downloading model... model.safetensors: 100%|██████████| 1.34G/1.34G [01:36<00:00, 14.0MB/s] ✔ Successfully downloaded model \"bert-large-uncased\" ── Downloading model \"bert-large-cased\" ─────────────────────────────────────────── → (1) Downloading configuration... config.json: 100%|██████████| 762/762 [00:00<00:00, 125kB/s] → (2) Downloading tokenizer... tokenizer_config.json: 100%|██████████| 49.0/49.0 [00:00<00:00, 12.3kB/s] vocab.txt: 100%|██████████| 213k/213k [00:00<00:00, 1.41MB/s] tokenizer.json: 100%|██████████| 436k/436k [00:00<00:00, 5.39MB/s] → (3) Downloading model... model.safetensors: 100%|██████████| 1.34G/1.34G [01:35<00:00, 14.0MB/s] ✔ Successfully downloaded model \"bert-large-cased\" ── Downloading model \"distilbert-base-uncased\" ──────────────────────────────────── → (1) Downloading configuration... config.json: 100%|██████████| 483/483 [00:00<00:00, 161kB/s] → (2) Downloading tokenizer... tokenizer_config.json: 100%|██████████| 48.0/48.0 [00:00<00:00, 9.46kB/s] vocab.txt: 100%|██████████| 232k/232k [00:00<00:00, 16.5MB/s] tokenizer.json: 100%|██████████| 466k/466k [00:00<00:00, 14.8MB/s] → (3) Downloading model... model.safetensors: 100%|██████████| 268M/268M [00:19<00:00, 13.5MB/s] ✔ Successfully downloaded model \"distilbert-base-uncased\" ── Downloading model \"distilbert-base-cased\" ────────────────────────────────────── → (1) Downloading configuration... config.json: 100%|██████████| 465/465 [00:00<00:00, 233kB/s] → (2) Downloading tokenizer... tokenizer_config.json: 100%|██████████| 49.0/49.0 [00:00<00:00, 9.80kB/s] vocab.txt: 100%|██████████| 213k/213k [00:00<00:00, 1.39MB/s] tokenizer.json: 100%|██████████| 436k/436k [00:00<00:00, 8.70MB/s] → (3) Downloading model... model.safetensors: 100%|██████████| 263M/263M [00:24<00:00, 10.9MB/s] ✔ Successfully downloaded model \"distilbert-base-cased\" ── Downloading model \"albert-base-v1\" ───────────────────────────────────────────── → (1) Downloading configuration... config.json: 100%|██████████| 684/684 [00:00<00:00, 137kB/s] → (2) Downloading tokenizer... 
tokenizer_config.json: 100%|██████████| 25.0/25.0 [00:00<00:00, 3.57kB/s] spiece.model: 100%|██████████| 760k/760k [00:00<00:00, 4.93MB/s] tokenizer.json: 100%|██████████| 1.31M/1.31M [00:00<00:00, 13.4MB/s] → (3) Downloading model... model.safetensors: 100%|██████████| 47.4M/47.4M [00:03<00:00, 13.4MB/s] ✔ Successfully downloaded model \"albert-base-v1\" ── Downloading model \"albert-base-v2\" ───────────────────────────────────────────── → (1) Downloading configuration... config.json: 100%|██████████| 684/684 [00:00<00:00, 137kB/s] → (2) Downloading tokenizer... tokenizer_config.json: 100%|██████████| 25.0/25.0 [00:00<00:00, 4.17kB/s] spiece.model: 100%|██████████| 760k/760k [00:00<00:00, 5.10MB/s] tokenizer.json: 100%|██████████| 1.31M/1.31M [00:00<00:00, 6.93MB/s] → (3) Downloading model... model.safetensors: 100%|██████████| 47.4M/47.4M [00:03<00:00, 13.8MB/s] ✔ Successfully downloaded model \"albert-base-v2\" ── Downloading model \"roberta-base\" ─────────────────────────────────────────────── → (1) Downloading configuration... config.json: 100%|██████████| 481/481 [00:00<00:00, 80.3kB/s] → (2) Downloading tokenizer... tokenizer_config.json: 100%|██████████| 25.0/25.0 [00:00<00:00, 6.25kB/s] vocab.json: 100%|██████████| 899k/899k [00:00<00:00, 2.72MB/s] merges.txt: 100%|██████████| 456k/456k [00:00<00:00, 8.22MB/s] tokenizer.json: 100%|██████████| 1.36M/1.36M [00:00<00:00, 8.56MB/s] → (3) Downloading model... model.safetensors: 100%|██████████| 499M/499M [00:38<00:00, 12.9MB/s] ✔ Successfully downloaded model \"roberta-base\" ── Downloading model \"distilroberta-base\" ───────────────────────────────────────── → (1) Downloading configuration... config.json: 100%|██████████| 480/480 [00:00<00:00, 96.4kB/s] → (2) Downloading tokenizer... tokenizer_config.json: 100%|██████████| 25.0/25.0 [00:00<00:00, 12.0kB/s] vocab.json: 100%|██████████| 899k/899k [00:00<00:00, 6.59MB/s] merges.txt: 100%|██████████| 456k/456k [00:00<00:00, 9.46MB/s] tokenizer.json: 100%|██████████| 1.36M/1.36M [00:00<00:00, 11.5MB/s] → (3) Downloading model... model.safetensors: 100%|██████████| 331M/331M [00:25<00:00, 13.0MB/s] ✔ Successfully downloaded model \"distilroberta-base\" ── Downloading model \"vinai/bertweet-base\" ──────────────────────────────────────── → (1) Downloading configuration... config.json: 100%|██████████| 558/558 [00:00<00:00, 187kB/s] → (2) Downloading tokenizer... vocab.txt: 100%|██████████| 843k/843k [00:00<00:00, 7.44MB/s] bpe.codes: 100%|██████████| 1.08M/1.08M [00:00<00:00, 7.01MB/s] tokenizer.json: 100%|██████████| 2.91M/2.91M [00:00<00:00, 9.10MB/s] → (3) Downloading model... pytorch_model.bin: 100%|██████████| 543M/543M [00:48<00:00, 11.1MB/s] ✔ Successfully downloaded model \"vinai/bertweet-base\" ── Downloading model \"vinai/bertweet-large\" ─────────────────────────────────────── → (1) Downloading configuration... config.json: 100%|██████████| 614/614 [00:00<00:00, 120kB/s] → (2) Downloading tokenizer... vocab.json: 100%|██████████| 899k/899k [00:00<00:00, 5.90MB/s] merges.txt: 100%|██████████| 456k/456k [00:00<00:00, 7.30MB/s] tokenizer.json: 100%|██████████| 1.36M/1.36M [00:00<00:00, 8.31MB/s] → (3) Downloading model... 
pytorch_model.bin: 100%|██████████| 1.42G/1.42G [02:29<00:00, 9.53MB/s] ✔ Successfully downloaded model \"vinai/bertweet-large\" ── Downloaded models: ── size albert-base-v1 45 MB albert-base-v2 45 MB bert-base-cased 416 MB bert-base-uncased 420 MB bert-large-cased 1277 MB bert-large-uncased 1283 MB distilbert-base-cased 251 MB distilbert-base-uncased 256 MB distilroberta-base 316 MB roberta-base 476 MB vinai/bertweet-base 517 MB vinai/bertweet-large 1356 MB ✔ Downloaded models saved at C:/Users/Bruce/.cache/huggingface/hub (6.52 GB) BERT_info(models) model size vocab dims mask 1: bert-base-uncased 420MB 30522 768 [MASK] 2: bert-base-cased 416MB 28996 768 [MASK] 3: bert-large-uncased 1283MB 30522 1024 [MASK] 4: bert-large-cased 1277MB 28996 1024 [MASK] 5: distilbert-base-uncased 256MB 30522 768 [MASK] 6: distilbert-base-cased 251MB 28996 768 [MASK] 7: albert-base-v1 45MB 30000 128 [MASK] 8: albert-base-v2 45MB 30000 128 [MASK] 9: roberta-base 476MB 50265 768 10: distilroberta-base 316MB 50265 768 11: vinai/bertweet-base 517MB 64001 768 12: vinai/bertweet-large 1356MB 50265 1024 "},{"path":"https://psychbruce.github.io/FMAT/index.html","id":"related-packages","dir":"","previous_headings":"","what":"Related Packages","title":"The Fill-Mask Association Test","text":"FMAT innovative method computational intelligent analysis psychology society, may also seek integrative toolbox text-analytic methods. Another R package developed—PsychWordVec—useful user-friendly word embedding analysis (e.g., Word Embedding Association Test, WEAT). Please refer documentation feel free use .","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_download.html","id":null,"dir":"Reference","previous_headings":"","what":"Download and save BERT models to local cache folder. — BERT_download","title":"Download and save BERT models to local cache folder. — BERT_download","text":"Download save BERT models local cache folder \"%USERPROFILE%/.cache/huggingface\".","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_download.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Download and save BERT models to local cache folder. — BERT_download","text":"","code":"BERT_download(models = NULL)"},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_download.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Download and save BERT models to local cache folder. — BERT_download","text":"models Model names HuggingFace.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_download.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Download and save BERT models to local cache folder. — BERT_download","text":"return value.","code":""},{"path":[]},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_download.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Download and save BERT models to local cache folder. — BERT_download","text":"","code":"if (FALSE) { models = c(\"bert-base-uncased\", \"bert-base-cased\") BERT_download(models) BERT_download() # check downloaded models BERT_info() # information of all downloaded models }"},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_info.html","id":null,"dir":"Reference","previous_headings":"","what":"Get basic information of BERT models. — BERT_info","title":"Get basic information of BERT models. 
— BERT_info","text":"Get basic information BERT models.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_info.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get basic information of BERT models. — BERT_info","text":"","code":"BERT_info(models = NULL)"},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_info.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get basic information of BERT models. — BERT_info","text":"models Model names HuggingFace.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_info.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Get basic information of BERT models. — BERT_info","text":"data.table model name, model file size, vocabulary size (word/token embeddings), embedding dimensions (word/token embeddings), [MASK] token.","code":""},{"path":[]},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_info.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Get basic information of BERT models. — BERT_info","text":"","code":"if (FALSE) { models = c(\"bert-base-uncased\", \"bert-base-cased\") BERT_info(models) BERT_info() # information of all downloaded models }"},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_vocab.html","id":null,"dir":"Reference","previous_headings":"","what":"Check if mask words are in the model vocabulary. — BERT_vocab","title":"Check if mask words are in the model vocabulary. — BERT_vocab","text":"Check mask words model vocabulary.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_vocab.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check if mask words are in the model vocabulary. — BERT_vocab","text":"","code":"BERT_vocab( models, mask.words, add.tokens = FALSE, add.method = c(\"sum\", \"mean\") )"},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_vocab.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check if mask words are in the model vocabulary. — BERT_vocab","text":"models Model names HuggingFace. mask.words Option words filling mask. add.tokens Add new tokens (--vocabulary words even phrases) model vocabulary? Defaults FALSE. temporarily adds tokens tasks change raw model file. add.method Method used produce token embeddings new added tokens. Can \"sum\" (default) \"mean\" subword token embeddings.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_vocab.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check if mask words are in the model vocabulary. — BERT_vocab","text":"data.table model name, mask word, real token (replaced vocabulary), token id (0~N).","code":""},{"path":[]},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_vocab.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check if mask words are in the model vocabulary. 
— BERT_vocab","text":"","code":"if (FALSE) { models = c(\"bert-base-uncased\", \"bert-base-cased\") BERT_info(models) BERT_vocab(models, c(\"bruce\", \"Bruce\")) BERT_vocab(models, 2020:2025) # some are out-of-vocabulary BERT_vocab(models, 2020:2025, add.tokens=TRUE) # add vocab BERT_vocab(models, c(\"individualism\", \"artificial intelligence\"), add.tokens=TRUE) }"},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_load.html","id":null,"dir":"Reference","previous_headings":"","what":"[Deprecated] Load BERT models (useless for GPU). — FMAT_load","title":"[Deprecated] Load BERT models (useless for GPU). — FMAT_load","text":"Load BERT models local cache folder \"%USERPROFILE%/.cache/huggingface\". GPU Acceleration, please directly use FMAT_run. general, FMAT_run always preferred FMAT_load.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_load.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"[Deprecated] Load BERT models (useless for GPU). — FMAT_load","text":"","code":"FMAT_load(models)"},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_load.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"[Deprecated] Load BERT models (useless for GPU). — FMAT_load","text":"models Model names HuggingFace.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_load.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"[Deprecated] Load BERT models (useless for GPU). — FMAT_load","text":"named list fill-mask pipelines obtained models. returned object saved RData. need rerun function restart R session.","code":""},{"path":[]},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_load.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"[Deprecated] Load BERT models (useless for GPU). — FMAT_load","text":"","code":"if (FALSE) { models = c(\"bert-base-uncased\", \"bert-base-cased\") models = FMAT_load(models) # load models from cache }"},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_query.html","id":null,"dir":"Reference","previous_headings":"","what":"Prepare a data.table of queries and variables for the FMAT. — FMAT_query","title":"Prepare a data.table of queries and variables for the FMAT. — FMAT_query","text":"Prepare data.table queries variables FMAT.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_query.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Prepare a data.table of queries and variables for the FMAT. — FMAT_query","text":"","code":"FMAT_query( query = \"Text with [MASK], optionally with {TARGET} and/or {ATTRIB}.\", MASK = .(), TARGET = .(), ATTRIB = .() )"},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_query.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Prepare a data.table of queries and variables for the FMAT. — FMAT_query","text":"query Query text (character string/vector least one [MASK] token). Multiple queries share set MASK, TARGET, ATTRIB. multiple queries different MASK, TARGET, /ATTRIB, please use FMAT_query_bind combine . MASK named list [MASK] target words. Must single words vocabulary certain masked language model. model vocabulary, see, e.g., https://huggingface.co/bert-base-uncased/raw/main/vocab.txt Infrequent words may included model's vocabulary, case may insert words context specifying either TARGET ATTRIB. 
TARGET, ATTRIB named list Target/Attribute words phrases. specified, query must contain {TARGET} /{ATTRIB} (uppercase braces) replaced words/phrases.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_query.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Prepare a data.table of queries and variables for the FMAT. — FMAT_query","text":"data.table queries variables.","code":""},{"path":[]},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_query.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Prepare a data.table of queries and variables for the FMAT. — FMAT_query","text":"","code":"FMAT_query(\"[MASK] is a nurse.\", MASK = .(Male=\"He\", Female=\"She\")) #> query MASK M_pair M_word #> #> 1: [MASK] is a nurse. Male 1 He #> 2: [MASK] is a nurse. Female 1 She FMAT_query( c(\"[MASK] is {TARGET}.\", \"[MASK] works as {TARGET}.\"), MASK = .(Male=\"He\", Female=\"She\"), TARGET = .(Occupation=cc(\"a doctor, a nurse, an artist\")) ) #> qid query MASK M_pair M_word TARGET #> #> 1: 1 [MASK] is {TARGET}. Male 1 He Occupation #> 2: 1 [MASK] is {TARGET}. Female 1 She Occupation #> 3: 1 [MASK] is {TARGET}. Male 1 He Occupation #> 4: 1 [MASK] is {TARGET}. Female 1 She Occupation #> 5: 1 [MASK] is {TARGET}. Male 1 He Occupation #> 6: 1 [MASK] is {TARGET}. Female 1 She Occupation #> 7: 2 [MASK] works as {TARGET}. Male 1 He Occupation #> 8: 2 [MASK] works as {TARGET}. Female 1 She Occupation #> 9: 2 [MASK] works as {TARGET}. Male 1 He Occupation #> 10: 2 [MASK] works as {TARGET}. Female 1 She Occupation #> 11: 2 [MASK] works as {TARGET}. Male 1 He Occupation #> 12: 2 [MASK] works as {TARGET}. Female 1 She Occupation #> T_pair T_word #> #> 1: Occupation.1 a doctor #> 2: Occupation.1 a doctor #> 3: Occupation.2 a nurse #> 4: Occupation.2 a nurse #> 5: Occupation.3 an artist #> 6: Occupation.3 an artist #> 7: Occupation.1 a doctor #> 8: Occupation.1 a doctor #> 9: Occupation.2 a nurse #> 10: Occupation.2 a nurse #> 11: Occupation.3 an artist #> 12: Occupation.3 an artist FMAT_query( \"The [MASK] {ATTRIB}.\", MASK = .(Male=cc(\"man, boy\"), Female=cc(\"woman, girl\")), ATTRIB = .(Masc=cc(\"is masculine, has a masculine personality\"), Femi=cc(\"is feminine, has a feminine personality\")) ) #> query MASK M_pair M_word ATTRIB A_pair #> #> 1: The [MASK] {ATTRIB}. Male 1 man Masc Masc-Femi.1 #> 2: The [MASK] {ATTRIB}. Male 2 boy Masc Masc-Femi.1 #> 3: The [MASK] {ATTRIB}. Female 1 woman Masc Masc-Femi.1 #> 4: The [MASK] {ATTRIB}. Female 2 girl Masc Masc-Femi.1 #> 5: The [MASK] {ATTRIB}. Male 1 man Masc Masc-Femi.2 #> 6: The [MASK] {ATTRIB}. Male 2 boy Masc Masc-Femi.2 #> 7: The [MASK] {ATTRIB}. Female 1 woman Masc Masc-Femi.2 #> 8: The [MASK] {ATTRIB}. Female 2 girl Masc Masc-Femi.2 #> 9: The [MASK] {ATTRIB}. Male 1 man Femi Masc-Femi.1 #> 10: The [MASK] {ATTRIB}. Male 2 boy Femi Masc-Femi.1 #> 11: The [MASK] {ATTRIB}. Female 1 woman Femi Masc-Femi.1 #> 12: The [MASK] {ATTRIB}. Female 2 girl Femi Masc-Femi.1 #> 13: The [MASK] {ATTRIB}. Male 1 man Femi Masc-Femi.2 #> 14: The [MASK] {ATTRIB}. Male 2 boy Femi Masc-Femi.2 #> 15: The [MASK] {ATTRIB}. Female 1 woman Femi Masc-Femi.2 #> 16: The [MASK] {ATTRIB}. 
Female 2 girl Femi Masc-Femi.2 #> A_word #> #> 1: is masculine #> 2: is masculine #> 3: is masculine #> 4: is masculine #> 5: has a masculine personality #> 6: has a masculine personality #> 7: has a masculine personality #> 8: has a masculine personality #> 9: is feminine #> 10: is feminine #> 11: is feminine #> 12: is feminine #> 13: has a feminine personality #> 14: has a feminine personality #> 15: has a feminine personality #> 16: has a feminine personality FMAT_query( \"The association between {TARGET} and {ATTRIB} is [MASK].\", MASK = .(H=\"strong\", L=\"weak\"), TARGET = .(Flower=cc(\"rose, iris, lily\"), Insect=cc(\"ant, cockroach, spider\")), ATTRIB = .(Pos=cc(\"health, happiness, love, peace\"), Neg=cc(\"death, sickness, hatred, disaster\")) ) #> query MASK M_pair #> #> 1: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 2: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 3: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 4: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 5: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 6: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 7: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 8: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 9: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 10: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 11: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 12: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 13: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 14: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 15: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 16: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 17: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 18: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 19: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 20: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 21: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 22: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 23: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 24: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 25: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 26: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 27: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 28: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 29: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 30: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 31: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 32: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 33: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 34: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 35: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 36: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 37: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 38: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 39: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 40: The association between {TARGET} and {ATTRIB} is [MASK]. 
L 1 #> 41: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 42: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 43: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 44: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 45: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 46: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 47: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 48: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 49: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 50: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 51: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 52: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 53: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 54: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 55: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 56: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 57: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 58: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 59: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 60: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 61: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 62: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 63: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 64: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 65: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 66: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 67: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 68: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 69: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 70: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 71: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 72: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 73: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 74: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 75: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 76: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 77: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 78: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 79: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 80: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 81: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 82: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 83: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 84: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 85: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 86: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 87: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 88: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 89: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 90: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 91: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 92: The association between {TARGET} and {ATTRIB} is [MASK]. 
L 1 #> 93: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 94: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 95: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 96: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> query MASK M_pair #> M_word TARGET T_word ATTRIB A_word #> #> 1: strong Flower rose Pos health #> 2: weak Flower rose Pos health #> 3: strong Flower iris Pos health #> 4: weak Flower iris Pos health #> 5: strong Flower lily Pos health #> 6: weak Flower lily Pos health #> 7: strong Flower rose Pos happiness #> 8: weak Flower rose Pos happiness #> 9: strong Flower iris Pos happiness #> 10: weak Flower iris Pos happiness #> 11: strong Flower lily Pos happiness #> 12: weak Flower lily Pos happiness #> 13: strong Flower rose Pos love #> 14: weak Flower rose Pos love #> 15: strong Flower iris Pos love #> 16: weak Flower iris Pos love #> 17: strong Flower lily Pos love #> 18: weak Flower lily Pos love #> 19: strong Flower rose Pos peace #> 20: weak Flower rose Pos peace #> 21: strong Flower iris Pos peace #> 22: weak Flower iris Pos peace #> 23: strong Flower lily Pos peace #> 24: weak Flower lily Pos peace #> 25: strong Flower rose Neg death #> 26: weak Flower rose Neg death #> 27: strong Flower iris Neg death #> 28: weak Flower iris Neg death #> 29: strong Flower lily Neg death #> 30: weak Flower lily Neg death #> 31: strong Flower rose Neg sickness #> 32: weak Flower rose Neg sickness #> 33: strong Flower iris Neg sickness #> 34: weak Flower iris Neg sickness #> 35: strong Flower lily Neg sickness #> 36: weak Flower lily Neg sickness #> 37: strong Flower rose Neg hatred #> 38: weak Flower rose Neg hatred #> 39: strong Flower iris Neg hatred #> 40: weak Flower iris Neg hatred #> 41: strong Flower lily Neg hatred #> 42: weak Flower lily Neg hatred #> 43: strong Flower rose Neg disaster #> 44: weak Flower rose Neg disaster #> 45: strong Flower iris Neg disaster #> 46: weak Flower iris Neg disaster #> 47: strong Flower lily Neg disaster #> 48: weak Flower lily Neg disaster #> 49: strong Insect ant Pos health #> 50: weak Insect ant Pos health #> 51: strong Insect cockroach Pos health #> 52: weak Insect cockroach Pos health #> 53: strong Insect spider Pos health #> 54: weak Insect spider Pos health #> 55: strong Insect ant Pos happiness #> 56: weak Insect ant Pos happiness #> 57: strong Insect cockroach Pos happiness #> 58: weak Insect cockroach Pos happiness #> 59: strong Insect spider Pos happiness #> 60: weak Insect spider Pos happiness #> 61: strong Insect ant Pos love #> 62: weak Insect ant Pos love #> 63: strong Insect cockroach Pos love #> 64: weak Insect cockroach Pos love #> 65: strong Insect spider Pos love #> 66: weak Insect spider Pos love #> 67: strong Insect ant Pos peace #> 68: weak Insect ant Pos peace #> 69: strong Insect cockroach Pos peace #> 70: weak Insect cockroach Pos peace #> 71: strong Insect spider Pos peace #> 72: weak Insect spider Pos peace #> 73: strong Insect ant Neg death #> 74: weak Insect ant Neg death #> 75: strong Insect cockroach Neg death #> 76: weak Insect cockroach Neg death #> 77: strong Insect spider Neg death #> 78: weak Insect spider Neg death #> 79: strong Insect ant Neg sickness #> 80: weak Insect ant Neg sickness #> 81: strong Insect cockroach Neg sickness #> 82: weak Insect cockroach Neg sickness #> 83: strong Insect spider Neg sickness #> 84: weak Insect spider Neg sickness #> 85: strong Insect ant Neg hatred #> 86: weak Insect ant Neg hatred #> 87: strong Insect cockroach Neg 
hatred #> 88: weak Insect cockroach Neg hatred #> 89: strong Insect spider Neg hatred #> 90: weak Insect spider Neg hatred #> 91: strong Insect ant Neg disaster #> 92: weak Insect ant Neg disaster #> 93: strong Insect cockroach Neg disaster #> 94: weak Insect cockroach Neg disaster #> 95: strong Insect spider Neg disaster #> 96: weak Insect spider Neg disaster #> M_word TARGET T_word ATTRIB A_word"},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_query_bind.html","id":null,"dir":"Reference","previous_headings":"","what":"Combine multiple query data.tables and renumber query ids. — FMAT_query_bind","title":"Combine multiple query data.tables and renumber query ids. — FMAT_query_bind","text":"Combine multiple query data.tables renumber query ids.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_query_bind.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Combine multiple query data.tables and renumber query ids. — FMAT_query_bind","text":"","code":"FMAT_query_bind(...)"},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_query_bind.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Combine multiple query data.tables and renumber query ids. — FMAT_query_bind","text":"... Query data.tables returned FMAT_query.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_query_bind.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Combine multiple query data.tables and renumber query ids. — FMAT_query_bind","text":"data.table queries variables.","code":""},{"path":[]},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_query_bind.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Combine multiple query data.tables and renumber query ids. — FMAT_query_bind","text":"","code":"FMAT_query_bind( FMAT_query( \"[MASK] is {TARGET}.\", MASK = .(Male=\"He\", Female=\"She\"), TARGET = .(Occupation=cc(\"a doctor, a nurse, an artist\")) ), FMAT_query( \"[MASK] occupation is {TARGET}.\", MASK = .(Male=\"His\", Female=\"Her\"), TARGET = .(Occupation=cc(\"doctor, nurse, artist\")) ) ) #> qid query MASK M_pair M_word TARGET #> #> 1: 1 [MASK] is {TARGET}. Male 1 He Occupation #> 2: 1 [MASK] is {TARGET}. Female 1 She Occupation #> 3: 1 [MASK] is {TARGET}. Male 1 He Occupation #> 4: 1 [MASK] is {TARGET}. Female 1 She Occupation #> 5: 1 [MASK] is {TARGET}. Male 1 He Occupation #> 6: 1 [MASK] is {TARGET}. Female 1 She Occupation #> 7: 2 [MASK] occupation is {TARGET}. Male 1 His Occupation #> 8: 2 [MASK] occupation is {TARGET}. Female 1 Her Occupation #> 9: 2 [MASK] occupation is {TARGET}. Male 1 His Occupation #> 10: 2 [MASK] occupation is {TARGET}. Female 1 Her Occupation #> 11: 2 [MASK] occupation is {TARGET}. Male 1 His Occupation #> 12: 2 [MASK] occupation is {TARGET}. Female 1 Her Occupation #> T_pair T_word #> #> 1: Occupation.1 a doctor #> 2: Occupation.1 a doctor #> 3: Occupation.2 a nurse #> 4: Occupation.2 a nurse #> 5: Occupation.3 an artist #> 6: Occupation.3 an artist #> 7: Occupation.1 doctor #> 8: Occupation.1 doctor #> 9: Occupation.2 nurse #> 10: Occupation.2 nurse #> 11: Occupation.3 artist #> 12: Occupation.3 artist"},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_run.html","id":null,"dir":"Reference","previous_headings":"","what":"Run the fill-mask pipeline on multiple models (CPU / GPU). — FMAT_run","title":"Run the fill-mask pipeline on multiple models (CPU / GPU). 
— FMAT_run","text":"Run fill-mask pipeline multiple models CPU GPU (faster requiring NVIDIA GPU device).","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_run.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Run the fill-mask pipeline on multiple models (CPU / GPU). — FMAT_run","text":"","code":"FMAT_run( models, data, gpu, add.tokens = FALSE, add.method = c(\"sum\", \"mean\"), file = NULL, progress = TRUE, warning = TRUE, na.out = TRUE )"},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_run.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Run the fill-mask pipeline on multiple models (CPU / GPU). — FMAT_run","text":"models Options: character vector model names HuggingFace. Can used CPU GPU. returned object FMAT_load. Can used CPU. restart R session, need rerun FMAT_load. data data.table returned FMAT_query FMAT_query_bind. gpu Use GPU (3x faster CPU) run fill-mask pipeline? Defaults missing value automatically use available GPU (available, use CPU). NVIDIA GPU device (e.g., GeForce RTX Series) required use GPU. See Guidance GPU Acceleration. Options passing device parameter Python: FALSE: CPU (device = -1). TRUE: GPU (device = 0). value: passing transformers.pipeline(device=...) defines device (e.g., \"cpu\", \"cuda:0\", GPU device id like 1) pipeline allocated. add.tokens Add new tokens (--vocabulary words even phrases) model vocabulary? Defaults FALSE. temporarily adds tokens tasks change raw model file. add.method Method used produce token embeddings new added tokens. Can \"sum\" (default) \"mean\" subword token embeddings. file File name .RData save returned data. progress Show progress bar? Defaults TRUE. warning Alert warning --vocabulary word(s)? Defaults TRUE. na.Replace probabilities --vocabulary word(s) NA? Defaults TRUE.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_run.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Run the fill-mask pipeline on multiple models (CPU / GPU). — FMAT_run","text":"data.table (new class fmat) appending data new variables: model: model name. output: complete sentence output unmasked token. token: actual token filled blank mask (note \"--vocabulary\" added original word found model vocabulary). prob: (raw) conditional probability unmasked token given provided context, estimated masked language model. SUGGESTED directly interpret raw probabilities contrast pair probabilities interpretable. See summary.fmat.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_run.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Run the fill-mask pipeline on multiple models (CPU / GPU). — FMAT_run","text":"function automatically adjusts compatibility tokens used certain models: (1) uncased models (e.g., ALBERT), turns tokens lowercase; (2) models use rather [MASK], automatically uses corrected mask token; (3) models require prefix estimate whole words subwords (e.g., ALBERT, RoBERTa), adds certain prefix (usually white space; \\u2581 ALBERT XLM-RoBERTa, \\u0120 RoBERTa DistilRoBERTa). Note changes affect token variable returned data, affect M_word variable. Thus, users may analyze data based unchanged M_word rather token. 
progress bar displayed default progress FMAT_run().","code":""},{"path":"https://psychbruce.github.io/FMAT/news/index.html","id":"fmat-20238","dir":"Changelog","previous_headings":"","what":"FMAT 2023.8","title":"FMAT 2023.8","text":"CRAN release: 2023-08-11 CRAN package publication. Fixed bugs improved functions. Provided examples. Now use “YYYY.M” package version number.","code":""},{"path":"https://psychbruce.github.io/FMAT/news/index.html","id":"fmat-009-may-2023","dir":"Changelog","previous_headings":"","what":"FMAT 0.0.9 (May 2023)","title":"FMAT 0.0.9 (May 2023)","text":"Initial public release GitHub.","code":""},{"path":"https://psychbruce.github.io/FMAT/news/index.html","id":"fmat-001-jan-2023","dir":"Changelog","previous_headings":"","what":"FMAT 0.0.1 (Jan 2023)","title":"FMAT 0.0.1 (Jan 2023)","text":"Designed basic functions.","code":""}] +[{"path":"https://psychbruce.github.io/FMAT/authors.html","id":null,"dir":"","previous_headings":"","what":"Authors","title":"Authors and Citation","text":"Han-Wu-Shuang Bao. Author, maintainer.","code":""},{"path":"https://psychbruce.github.io/FMAT/authors.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"Authors and Citation","text":"Bao H (2024). FMAT: Fill-Mask Association Test. R package version 2024.5, https://psychbruce.github.io/FMAT/.","code":"@Manual{, title = {FMAT: The Fill-Mask Association Test}, author = {Han-Wu-Shuang Bao}, year = {2024}, note = {R package version 2024.5}, url = {https://psychbruce.github.io/FMAT/}, }"},{"path":"https://psychbruce.github.io/FMAT/index.html","id":"fmat-","dir":"","previous_headings":"","what":"The Fill-Mask Association Test","title":"The Fill-Mask Association Test","text":"😷 Fill-Mask Association Test (掩码填空联系测验). Fill-Mask Association Test (FMAT) integrative probability-based method using BERT Models measure conceptual associations (e.g., attitudes, biases, stereotypes, social norms, cultural values) propositions natural language (Bao, 2024, JPSP). ⚠️ Please update package version ≥ 2024.5 faster robust functionality.","code":""},{"path":"https://psychbruce.github.io/FMAT/index.html","id":"author","dir":"","previous_headings":"","what":"Author","title":"The Fill-Mask Association Test","text":"Han-Wu-Shuang (Bruce) Bao 包寒吴霜 📬 baohws@foxmail.com 📋 psychbruce.github.io","code":""},{"path":"https://psychbruce.github.io/FMAT/index.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"The Fill-Mask Association Test","text":"Note: original citation. Please refer information library(FMAT) APA-7 format version installed. Bao, H.-W.-S. (2024). Fill-Mask Association Test (FMAT): Measuring propositions natural language. Journal Personality Social Psychology. Advance online publication. DOI: 10.1037/pspa0000396 Bao, H.-W.-S., & Gries, P. (2024). Intersectional race–gender stereotypes natural language. British Journal Social Psychology. Advance online publication. 
https://doi.org/10.1111/bjso.12748","code":""},{"path":"https://psychbruce.github.io/FMAT/index.html","id":"installation","dir":"","previous_headings":"","what":"Installation","title":"The Fill-Mask Association Test","text":"To use the FMAT, the R package FMAT and two Python packages (transformers and torch) all need to be installed.","code":""},{"path":"https://psychbruce.github.io/FMAT/index.html","id":"id_1-r-package","dir":"","previous_headings":"Installation","what":"(1) R Package","title":"The Fill-Mask Association Test","text":"","code":"## Method 1: Install from CRAN install.packages(\"FMAT\") ## Method 2: Install from GitHub install.packages(\"devtools\") devtools::install_github(\"psychbruce/FMAT\", force=TRUE)"},{"path":"https://psychbruce.github.io/FMAT/index.html","id":"id_2-python-environment-and-packages","dir":"","previous_headings":"Installation","what":"(2) Python Environment and Packages","title":"The Fill-Mask Association Test","text":"Install Anaconda (a recommended package manager that automatically installs Python, Python IDEs like Spyder, and a large list of necessary Python package dependencies). Specify the Python interpreter in RStudio: RStudio → Tools → Global/Project Options → Python → Select → Conda Environments → Choose “…/Anaconda3/python.exe”. Install the “transformers” and “torch” Python packages (in Windows Command, Anaconda Prompt, or RStudio Terminal). See Guidance for GPU Acceleration for installation guidance if you have an NVIDIA GPU device on your PC and want to use GPU to accelerate the pipeline.","code":"pip install transformers torch"},{"path":[]},{"path":"https://psychbruce.github.io/FMAT/index.html","id":"step-1-download-bert-models","dir":"","previous_headings":"Guidance for FMAT","what":"Step 1: Download BERT Models","title":"The Fill-Mask Association Test","text":"Use BERT_download() to download BERT models. Model files are saved to your local folder “%USERPROFILE%/.cache/huggingface”. A full list of BERT models is available at Hugging Face. Use BERT_info() and BERT_vocab() to find detailed information of BERT models.","code":""},{"path":"https://psychbruce.github.io/FMAT/index.html","id":"step-2-design-fmat-queries","dir":"","previous_headings":"Guidance for FMAT","what":"Step 2: Design FMAT Queries","title":"The Fill-Mask Association Test","text":"Design queries that conceptually represent the constructs you would measure (see Bao, 2024, JPSP for how to design queries). Use FMAT_query() and/or FMAT_query_bind() to prepare a data.table of queries.","code":""},{"path":"https://psychbruce.github.io/FMAT/index.html","id":"step-3-run-fmat","dir":"","previous_headings":"Guidance for FMAT","what":"Step 3: Run FMAT","title":"The Fill-Mask Association Test","text":"Use FMAT_run() to get raw data (probability estimates) for further analysis. Several steps of preprocessing have been included in the function for easier use (see FMAT_run() for details). For BERT variants using <mask> rather than [MASK] as the mask token, the input query will be automatically modified so that users can always use [MASK] in query design. For some BERT variants, special prefix characters such as \u0120 and \u2581 will be automatically added to match the whole words (rather than subwords) for [MASK].","code":""},{"path":"https://psychbruce.github.io/FMAT/index.html","id":"notes","dir":"","previous_headings":"Guidance for FMAT","what":"Notes","title":"The Fill-Mask Association Test","text":"Improvements are ongoing, especially for the adaptation to more diverse (less popular) BERT models. If you find bugs or have problems using the functions, please report them at GitHub Issues or send the author an email.","code":""},
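As a minimal end-to-end sketch of the three steps above (using only functions documented in this file; the two models and the query are borrowed from the package's own examples, so treat this as an illustration rather than the canonical workflow):

library(FMAT)
## Step 1: download the BERT models (needed only once; saved to the HuggingFace cache)
models = c("bert-base-uncased", "bert-base-cased")
BERT_download(models)
## Step 2: design queries; .() is the package's shorthand for list(),
## and cc() splits a comma-separated string into a character vector
query = FMAT_query(
  "[MASK] is {TARGET}.",
  MASK = .(Male="He", Female="She"),
  TARGET = .(Occupation=cc("a doctor, a nurse, an artist"))
)
## Step 3: run the fill-mask pipeline (a GPU is auto-detected and used
## if a CUDA-supported torch is installed; otherwise CPU is used)
data = FMAT_run(models, query)
summary(data, target.pair=FALSE)  # summarize as Log Probability Ratios (LPR)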
{"path":"https://psychbruce.github.io/FMAT/index.html","id":"guidance-for-gpu-acceleration","dir":"","previous_headings":"","what":"Guidance for GPU Acceleration","title":"The Fill-Mask Association Test","text":"By default, the FMAT package uses CPU to enable the functionality for all users. But for advanced users who want to accelerate the pipeline with GPU, the FMAT_run() function now supports using a GPU device, which is about 3x faster than CPU. Test results (on the developer’s computer, depending on BERT model size): CPU (Intel 13th-Gen i7-1355U): 500~1000 queries/min; GPU (NVIDIA GeForce RTX 2050): 1500~3000 queries/min. Checklist: Ensure that you have an NVIDIA GPU device (e.g., GeForce RTX Series) and the NVIDIA GPU driver installed on your system. Find the guidance for the installation command at https://pytorch.org/get-started/locally/. CUDA is available only for Windows and Linux, but not for MacOS. If you have installed a version of torch without CUDA support, please first uninstall it (command: pip uninstall torch) and then install the suggested one. You may also need to install the corresponding version of CUDA Toolkit (e.g., for a torch version supporting CUDA 12.1, the same version of CUDA Toolkit 12.1 may also need to be installed). Example code for installing PyTorch with CUDA support (in Windows Command, Anaconda Prompt, or RStudio Terminal):","code":"pip install torch --index-url https://download.pytorch.org/whl/cu121"},{"path":"https://psychbruce.github.io/FMAT/index.html","id":"bert-models","dir":"","previous_headings":"","what":"BERT Models","title":"The Fill-Mask Association Test","text":"The reliability and validity of the following 12 representative BERT models have been established in research articles, while future work is needed to examine the performance of other models. (model name on Hugging Face - downloaded model file size) bert-base-uncased (420 MB) bert-base-cased (416 MB) bert-large-uncased (1283 MB) bert-large-cased (1277 MB) distilbert-base-uncased (256 MB) distilbert-base-cased (251 MB) albert-base-v1 (45 MB) albert-base-v2 (45 MB) roberta-base (476 MB) distilroberta-base (316 MB) vinai/bertweet-base (517 MB) vinai/bertweet-large (1356 MB) If you are new to BERT, these references can be helpful: What is Fill-Mask? [HuggingFace] Explorable BERT [HuggingFace] BERT Model Documentation [HuggingFace] BERT Explained Breaking BERT Down The Illustrated BERT A Visual Guide to Using BERT (Tested 2024-05-16 on the developer’s computer: HP Probook 450 G10 Notebook PC)","code":"library(FMAT) models = c( \"bert-base-uncased\", \"bert-base-cased\", \"bert-large-uncased\", \"bert-large-cased\", \"distilbert-base-uncased\", \"distilbert-base-cased\", \"albert-base-v1\", \"albert-base-v2\", \"roberta-base\", \"distilroberta-base\", \"vinai/bertweet-base\", \"vinai/bertweet-large\" ) BERT_download(models) ℹ Device Info: R Packages: FMAT 2024.5 reticulate 1.36.1 Python Packages: transformers 4.40.2 torch 2.2.1+cu121 NVIDIA GPU CUDA Support: CUDA Enabled: TRUE CUDA Version: 12.1 GPU (Device): NVIDIA GeForce RTX 2050 ── Downloading model \"bert-base-uncased\" ────────────────────────────────────────── → (1) Downloading configuration... config.json: 100%|██████████| 570/570 [00:00<00:00, 114kB/s] → (2) Downloading tokenizer... tokenizer_config.json: 100%|██████████| 48.0/48.0 [00:00<00:00, 23.9kB/s] vocab.txt: 100%|██████████| 232k/232k [00:00<00:00, 1.50MB/s] tokenizer.json: 100%|██████████| 466k/466k [00:00<00:00, 1.98MB/s] → (3) Downloading model... model.safetensors: 100%|██████████| 440M/440M [00:36<00:00, 12.1MB/s] ✔ Successfully downloaded model \"bert-base-uncased\" ── Downloading model \"bert-base-cased\" ──────────────────────────────────────────── → (1) Downloading configuration... 
config.json: 100%|██████████| 570/570 [00:00<00:00, 63.3kB/s] → (2) Downloading tokenizer... tokenizer_config.json: 100%|██████████| 49.0/49.0 [00:00<00:00, 8.66kB/s] vocab.txt: 100%|██████████| 213k/213k [00:00<00:00, 1.39MB/s] tokenizer.json: 100%|██████████| 436k/436k [00:00<00:00, 10.1MB/s] → (3) Downloading model... model.safetensors: 100%|██████████| 436M/436M [00:37<00:00, 11.6MB/s] ✔ Successfully downloaded model \"bert-base-cased\" ── Downloading model \"bert-large-uncased\" ───────────────────────────────────────── → (1) Downloading configuration... config.json: 100%|██████████| 571/571 [00:00<00:00, 268kB/s] → (2) Downloading tokenizer... tokenizer_config.json: 100%|██████████| 48.0/48.0 [00:00<00:00, 12.0kB/s] vocab.txt: 100%|██████████| 232k/232k [00:00<00:00, 1.50MB/s] tokenizer.json: 100%|██████████| 466k/466k [00:00<00:00, 1.99MB/s] → (3) Downloading model... model.safetensors: 100%|██████████| 1.34G/1.34G [01:36<00:00, 14.0MB/s] ✔ Successfully downloaded model \"bert-large-uncased\" ── Downloading model \"bert-large-cased\" ─────────────────────────────────────────── → (1) Downloading configuration... config.json: 100%|██████████| 762/762 [00:00<00:00, 125kB/s] → (2) Downloading tokenizer... tokenizer_config.json: 100%|██████████| 49.0/49.0 [00:00<00:00, 12.3kB/s] vocab.txt: 100%|██████████| 213k/213k [00:00<00:00, 1.41MB/s] tokenizer.json: 100%|██████████| 436k/436k [00:00<00:00, 5.39MB/s] → (3) Downloading model... model.safetensors: 100%|██████████| 1.34G/1.34G [01:35<00:00, 14.0MB/s] ✔ Successfully downloaded model \"bert-large-cased\" ── Downloading model \"distilbert-base-uncased\" ──────────────────────────────────── → (1) Downloading configuration... config.json: 100%|██████████| 483/483 [00:00<00:00, 161kB/s] → (2) Downloading tokenizer... tokenizer_config.json: 100%|██████████| 48.0/48.0 [00:00<00:00, 9.46kB/s] vocab.txt: 100%|██████████| 232k/232k [00:00<00:00, 16.5MB/s] tokenizer.json: 100%|██████████| 466k/466k [00:00<00:00, 14.8MB/s] → (3) Downloading model... model.safetensors: 100%|██████████| 268M/268M [00:19<00:00, 13.5MB/s] ✔ Successfully downloaded model \"distilbert-base-uncased\" ── Downloading model \"distilbert-base-cased\" ────────────────────────────────────── → (1) Downloading configuration... config.json: 100%|██████████| 465/465 [00:00<00:00, 233kB/s] → (2) Downloading tokenizer... tokenizer_config.json: 100%|██████████| 49.0/49.0 [00:00<00:00, 9.80kB/s] vocab.txt: 100%|██████████| 213k/213k [00:00<00:00, 1.39MB/s] tokenizer.json: 100%|██████████| 436k/436k [00:00<00:00, 8.70MB/s] → (3) Downloading model... model.safetensors: 100%|██████████| 263M/263M [00:24<00:00, 10.9MB/s] ✔ Successfully downloaded model \"distilbert-base-cased\" ── Downloading model \"albert-base-v1\" ───────────────────────────────────────────── → (1) Downloading configuration... config.json: 100%|██████████| 684/684 [00:00<00:00, 137kB/s] → (2) Downloading tokenizer... tokenizer_config.json: 100%|██████████| 25.0/25.0 [00:00<00:00, 3.57kB/s] spiece.model: 100%|██████████| 760k/760k [00:00<00:00, 4.93MB/s] tokenizer.json: 100%|██████████| 1.31M/1.31M [00:00<00:00, 13.4MB/s] → (3) Downloading model... model.safetensors: 100%|██████████| 47.4M/47.4M [00:03<00:00, 13.4MB/s] ✔ Successfully downloaded model \"albert-base-v1\" ── Downloading model \"albert-base-v2\" ───────────────────────────────────────────── → (1) Downloading configuration... config.json: 100%|██████████| 684/684 [00:00<00:00, 137kB/s] → (2) Downloading tokenizer... 
tokenizer_config.json: 100%|██████████| 25.0/25.0 [00:00<00:00, 4.17kB/s] spiece.model: 100%|██████████| 760k/760k [00:00<00:00, 5.10MB/s] tokenizer.json: 100%|██████████| 1.31M/1.31M [00:00<00:00, 6.93MB/s] → (3) Downloading model... model.safetensors: 100%|██████████| 47.4M/47.4M [00:03<00:00, 13.8MB/s] ✔ Successfully downloaded model \"albert-base-v2\" ── Downloading model \"roberta-base\" ─────────────────────────────────────────────── → (1) Downloading configuration... config.json: 100%|██████████| 481/481 [00:00<00:00, 80.3kB/s] → (2) Downloading tokenizer... tokenizer_config.json: 100%|██████████| 25.0/25.0 [00:00<00:00, 6.25kB/s] vocab.json: 100%|██████████| 899k/899k [00:00<00:00, 2.72MB/s] merges.txt: 100%|██████████| 456k/456k [00:00<00:00, 8.22MB/s] tokenizer.json: 100%|██████████| 1.36M/1.36M [00:00<00:00, 8.56MB/s] → (3) Downloading model... model.safetensors: 100%|██████████| 499M/499M [00:38<00:00, 12.9MB/s] ✔ Successfully downloaded model \"roberta-base\" ── Downloading model \"distilroberta-base\" ───────────────────────────────────────── → (1) Downloading configuration... config.json: 100%|██████████| 480/480 [00:00<00:00, 96.4kB/s] → (2) Downloading tokenizer... tokenizer_config.json: 100%|██████████| 25.0/25.0 [00:00<00:00, 12.0kB/s] vocab.json: 100%|██████████| 899k/899k [00:00<00:00, 6.59MB/s] merges.txt: 100%|██████████| 456k/456k [00:00<00:00, 9.46MB/s] tokenizer.json: 100%|██████████| 1.36M/1.36M [00:00<00:00, 11.5MB/s] → (3) Downloading model... model.safetensors: 100%|██████████| 331M/331M [00:25<00:00, 13.0MB/s] ✔ Successfully downloaded model \"distilroberta-base\" ── Downloading model \"vinai/bertweet-base\" ──────────────────────────────────────── → (1) Downloading configuration... config.json: 100%|██████████| 558/558 [00:00<00:00, 187kB/s] → (2) Downloading tokenizer... vocab.txt: 100%|██████████| 843k/843k [00:00<00:00, 7.44MB/s] bpe.codes: 100%|██████████| 1.08M/1.08M [00:00<00:00, 7.01MB/s] tokenizer.json: 100%|██████████| 2.91M/2.91M [00:00<00:00, 9.10MB/s] → (3) Downloading model... pytorch_model.bin: 100%|██████████| 543M/543M [00:48<00:00, 11.1MB/s] ✔ Successfully downloaded model \"vinai/bertweet-base\" ── Downloading model \"vinai/bertweet-large\" ─────────────────────────────────────── → (1) Downloading configuration... config.json: 100%|██████████| 614/614 [00:00<00:00, 120kB/s] → (2) Downloading tokenizer... vocab.json: 100%|██████████| 899k/899k [00:00<00:00, 5.90MB/s] merges.txt: 100%|██████████| 456k/456k [00:00<00:00, 7.30MB/s] tokenizer.json: 100%|██████████| 1.36M/1.36M [00:00<00:00, 8.31MB/s] → (3) Downloading model... 
pytorch_model.bin: 100%|██████████| 1.42G/1.42G [02:29<00:00, 9.53MB/s] ✔ Successfully downloaded model \"vinai/bertweet-large\" ── Downloaded models: ── size albert-base-v1 45 MB albert-base-v2 45 MB bert-base-cased 416 MB bert-base-uncased 420 MB bert-large-cased 1277 MB bert-large-uncased 1283 MB distilbert-base-cased 251 MB distilbert-base-uncased 256 MB distilroberta-base 316 MB roberta-base 476 MB vinai/bertweet-base 517 MB vinai/bertweet-large 1356 MB ✔ Downloaded models saved at C:/Users/Bruce/.cache/huggingface/hub (6.52 GB) BERT_info(models) model size vocab dims mask 1: bert-base-uncased 420MB 30522 768 [MASK] 2: bert-base-cased 416MB 28996 768 [MASK] 3: bert-large-uncased 1283MB 30522 1024 [MASK] 4: bert-large-cased 1277MB 28996 1024 [MASK] 5: distilbert-base-uncased 256MB 30522 768 [MASK] 6: distilbert-base-cased 251MB 28996 768 [MASK] 7: albert-base-v1 45MB 30000 128 [MASK] 8: albert-base-v2 45MB 30000 128 [MASK] 9: roberta-base 476MB 50265 768 <mask> 10: distilroberta-base 316MB 50265 768 <mask> 11: vinai/bertweet-base 517MB 64001 768 <mask> 12: vinai/bertweet-large 1356MB 50265 1024 <mask> "},{"path":"https://psychbruce.github.io/FMAT/index.html","id":"related-packages","dir":"","previous_headings":"","what":"Related Packages","title":"The Fill-Mask Association Test","text":"While the FMAT is an innovative method for the computational intelligent analysis of psychology and society, you may also seek an integrative toolbox for other text-analytic methods. Another R package developed by the same author—PsychWordVec—is useful and user-friendly for word embedding analysis (e.g., the Word Embedding Association Test, WEAT). Please refer to its documentation and feel free to use it.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_download.html","id":null,"dir":"Reference","previous_headings":"","what":"Download and save BERT models to local cache folder. — BERT_download","title":"Download and save BERT models to local cache folder. — BERT_download","text":"Download and save BERT models to the local cache folder \"%USERPROFILE%/.cache/huggingface\".","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_download.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Download and save BERT models to local cache folder. — BERT_download","text":"","code":"BERT_download(models = NULL)"},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_download.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Download and save BERT models to local cache folder. — BERT_download","text":"models: Model names at HuggingFace.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_download.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Download and save BERT models to local cache folder. — BERT_download","text":"No return value.","code":""},{"path":[]},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_download.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Download and save BERT models to local cache folder. — BERT_download","text":"","code":"if (FALSE) { models = c(\"bert-base-uncased\", \"bert-base-cased\") BERT_download(models) BERT_download() # check downloaded models BERT_info() # information of all downloaded models }"},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_info.html","id":null,"dir":"Reference","previous_headings":"","what":"Get basic information of BERT models. — BERT_info","title":"Get basic information of BERT models. 
— BERT_info","text":"Get basic information of BERT models.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_info.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Get basic information of BERT models. — BERT_info","text":"","code":"BERT_info(models = NULL)"},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_info.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Get basic information of BERT models. — BERT_info","text":"models: Model names at HuggingFace.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_info.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Get basic information of BERT models. — BERT_info","text":"A data.table of model name, model file size, vocabulary size (of word/token embeddings), embedding dimensions (of word/token embeddings), and the [MASK] token.","code":""},{"path":[]},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_info.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Get basic information of BERT models. — BERT_info","text":"","code":"if (FALSE) { models = c(\"bert-base-uncased\", \"bert-base-cased\") BERT_info(models) BERT_info() # information of all downloaded models }"},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_vocab.html","id":null,"dir":"Reference","previous_headings":"","what":"Check if mask words are in the model vocabulary. — BERT_vocab","title":"Check if mask words are in the model vocabulary. — BERT_vocab","text":"Check if mask words are in the model vocabulary.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_vocab.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Check if mask words are in the model vocabulary. — BERT_vocab","text":"","code":"BERT_vocab( models, mask.words, add.tokens = FALSE, add.method = c(\"sum\", \"mean\") )"},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_vocab.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Check if mask words are in the model vocabulary. — BERT_vocab","text":"models: Model names at HuggingFace. mask.words: Option words for filling in the mask. add.tokens: Add new tokens (out-of-vocabulary words or even phrases) to the model vocabulary? Defaults to FALSE. This temporarily adds tokens for your tasks but does not change the raw model file. add.method: Method used to produce the token embeddings of newly added tokens. Can be \"sum\" (default) or \"mean\" of the subword token embeddings.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_vocab.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Check if mask words are in the model vocabulary. — BERT_vocab","text":"A data.table of model name, mask word, real token (replaced if out of vocabulary), and token id (0~N).","code":""},{"path":[]},{"path":"https://psychbruce.github.io/FMAT/reference/BERT_vocab.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Check if mask words are in the model vocabulary. 
— BERT_vocab","text":"","code":"if (FALSE) { models = c(\"bert-base-uncased\", \"bert-base-cased\") BERT_info(models) BERT_vocab(models, c(\"bruce\", \"Bruce\")) BERT_vocab(models, 2020:2025) # some are out-of-vocabulary BERT_vocab(models, 2020:2025, add.tokens=TRUE) # add vocab BERT_vocab(models, c(\"individualism\", \"artificial intelligence\"), add.tokens=TRUE) }"},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_load.html","id":null,"dir":"Reference","previous_headings":"","what":"[Deprecated] Load BERT models (useless for GPU). — FMAT_load","title":"[Deprecated] Load BERT models (useless for GPU). — FMAT_load","text":"Load BERT models from the local cache folder \"%USERPROFILE%/.cache/huggingface\". For GPU Acceleration, please directly use FMAT_run. In general, FMAT_run is always preferred over FMAT_load.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_load.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"[Deprecated] Load BERT models (useless for GPU). — FMAT_load","text":"","code":"FMAT_load(models)"},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_load.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"[Deprecated] Load BERT models (useless for GPU). — FMAT_load","text":"models: Model names at HuggingFace.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_load.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"[Deprecated] Load BERT models (useless for GPU). — FMAT_load","text":"A named list of fill-mask pipelines obtained from the models. The returned object cannot be saved as RData, so you will need to rerun this function if you restart the R session.","code":""},{"path":[]},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_load.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"[Deprecated] Load BERT models (useless for GPU). — FMAT_load","text":"","code":"if (FALSE) { models = c(\"bert-base-uncased\", \"bert-base-cased\") models = FMAT_load(models) # load models from cache }"},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_query.html","id":null,"dir":"Reference","previous_headings":"","what":"Prepare a data.table of queries and variables for the FMAT. — FMAT_query","title":"Prepare a data.table of queries and variables for the FMAT. — FMAT_query","text":"Prepare a data.table of queries and variables for the FMAT.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_query.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Prepare a data.table of queries and variables for the FMAT. — FMAT_query","text":"","code":"FMAT_query( query = \"Text with [MASK], optionally with {TARGET} and/or {ATTRIB}.\", MASK = .(), TARGET = .(), ATTRIB = .() )"},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_query.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Prepare a data.table of queries and variables for the FMAT. — FMAT_query","text":"query: Query text (a character string/vector with at least one [MASK] token). Multiple queries share the same set of MASK, TARGET, and ATTRIB. For multiple queries with different MASK, TARGET, and/or ATTRIB, please use FMAT_query_bind to combine them. MASK: A named list of [MASK] target words. These must be single words in the vocabulary of a certain masked language model. For a model's vocabulary, see, e.g., https://huggingface.co/bert-base-uncased/raw/main/vocab.txt Infrequent words may not be included in a model's vocabulary, in which case you may insert such words into the context by specifying either TARGET or ATTRIB. 
TARGET, ATTRIB named list Target/Attribute words phrases. specified, query must contain {TARGET} /{ATTRIB} (uppercase braces) replaced words/phrases.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_query.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Prepare a data.table of queries and variables for the FMAT. — FMAT_query","text":"data.table queries variables.","code":""},{"path":[]},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_query.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Prepare a data.table of queries and variables for the FMAT. — FMAT_query","text":"","code":"FMAT_query(\"[MASK] is a nurse.\", MASK = .(Male=\"He\", Female=\"She\")) #> query MASK M_pair M_word #> #> 1: [MASK] is a nurse. Male 1 He #> 2: [MASK] is a nurse. Female 1 She FMAT_query( c(\"[MASK] is {TARGET}.\", \"[MASK] works as {TARGET}.\"), MASK = .(Male=\"He\", Female=\"She\"), TARGET = .(Occupation=cc(\"a doctor, a nurse, an artist\")) ) #> qid query MASK M_pair M_word TARGET #> #> 1: 1 [MASK] is {TARGET}. Male 1 He Occupation #> 2: 1 [MASK] is {TARGET}. Female 1 She Occupation #> 3: 1 [MASK] is {TARGET}. Male 1 He Occupation #> 4: 1 [MASK] is {TARGET}. Female 1 She Occupation #> 5: 1 [MASK] is {TARGET}. Male 1 He Occupation #> 6: 1 [MASK] is {TARGET}. Female 1 She Occupation #> 7: 2 [MASK] works as {TARGET}. Male 1 He Occupation #> 8: 2 [MASK] works as {TARGET}. Female 1 She Occupation #> 9: 2 [MASK] works as {TARGET}. Male 1 He Occupation #> 10: 2 [MASK] works as {TARGET}. Female 1 She Occupation #> 11: 2 [MASK] works as {TARGET}. Male 1 He Occupation #> 12: 2 [MASK] works as {TARGET}. Female 1 She Occupation #> T_pair T_word #> #> 1: Occupation.1 a doctor #> 2: Occupation.1 a doctor #> 3: Occupation.2 a nurse #> 4: Occupation.2 a nurse #> 5: Occupation.3 an artist #> 6: Occupation.3 an artist #> 7: Occupation.1 a doctor #> 8: Occupation.1 a doctor #> 9: Occupation.2 a nurse #> 10: Occupation.2 a nurse #> 11: Occupation.3 an artist #> 12: Occupation.3 an artist FMAT_query( \"The [MASK] {ATTRIB}.\", MASK = .(Male=cc(\"man, boy\"), Female=cc(\"woman, girl\")), ATTRIB = .(Masc=cc(\"is masculine, has a masculine personality\"), Femi=cc(\"is feminine, has a feminine personality\")) ) #> query MASK M_pair M_word ATTRIB A_pair #> #> 1: The [MASK] {ATTRIB}. Male 1 man Masc Masc-Femi.1 #> 2: The [MASK] {ATTRIB}. Male 2 boy Masc Masc-Femi.1 #> 3: The [MASK] {ATTRIB}. Female 1 woman Masc Masc-Femi.1 #> 4: The [MASK] {ATTRIB}. Female 2 girl Masc Masc-Femi.1 #> 5: The [MASK] {ATTRIB}. Male 1 man Masc Masc-Femi.2 #> 6: The [MASK] {ATTRIB}. Male 2 boy Masc Masc-Femi.2 #> 7: The [MASK] {ATTRIB}. Female 1 woman Masc Masc-Femi.2 #> 8: The [MASK] {ATTRIB}. Female 2 girl Masc Masc-Femi.2 #> 9: The [MASK] {ATTRIB}. Male 1 man Femi Masc-Femi.1 #> 10: The [MASK] {ATTRIB}. Male 2 boy Femi Masc-Femi.1 #> 11: The [MASK] {ATTRIB}. Female 1 woman Femi Masc-Femi.1 #> 12: The [MASK] {ATTRIB}. Female 2 girl Femi Masc-Femi.1 #> 13: The [MASK] {ATTRIB}. Male 1 man Femi Masc-Femi.2 #> 14: The [MASK] {ATTRIB}. Male 2 boy Femi Masc-Femi.2 #> 15: The [MASK] {ATTRIB}. Female 1 woman Femi Masc-Femi.2 #> 16: The [MASK] {ATTRIB}. 
Female 2 girl Femi Masc-Femi.2 #> A_word #> #> 1: is masculine #> 2: is masculine #> 3: is masculine #> 4: is masculine #> 5: has a masculine personality #> 6: has a masculine personality #> 7: has a masculine personality #> 8: has a masculine personality #> 9: is feminine #> 10: is feminine #> 11: is feminine #> 12: is feminine #> 13: has a feminine personality #> 14: has a feminine personality #> 15: has a feminine personality #> 16: has a feminine personality FMAT_query( \"The association between {TARGET} and {ATTRIB} is [MASK].\", MASK = .(H=\"strong\", L=\"weak\"), TARGET = .(Flower=cc(\"rose, iris, lily\"), Insect=cc(\"ant, cockroach, spider\")), ATTRIB = .(Pos=cc(\"health, happiness, love, peace\"), Neg=cc(\"death, sickness, hatred, disaster\")) ) #> query MASK M_pair #> #> 1: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 2: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 3: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 4: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 5: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 6: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 7: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 8: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 9: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 10: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 11: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 12: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 13: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 14: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 15: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 16: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 17: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 18: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 19: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 20: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 21: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 22: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 23: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 24: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 25: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 26: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 27: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 28: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 29: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 30: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 31: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 32: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 33: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 34: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 35: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 36: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 37: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 38: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 39: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 40: The association between {TARGET} and {ATTRIB} is [MASK]. 
L 1 #> 41: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 42: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 43: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 44: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 45: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 46: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 47: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 48: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 49: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 50: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 51: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 52: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 53: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 54: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 55: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 56: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 57: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 58: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 59: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 60: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 61: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 62: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 63: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 64: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 65: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 66: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 67: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 68: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 69: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 70: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 71: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 72: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 73: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 74: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 75: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 76: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 77: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 78: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 79: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 80: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 81: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 82: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 83: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 84: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 85: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 86: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 87: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 88: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 89: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 90: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 91: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 92: The association between {TARGET} and {ATTRIB} is [MASK]. 
L 1 #> 93: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 94: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> 95: The association between {TARGET} and {ATTRIB} is [MASK]. H 1 #> 96: The association between {TARGET} and {ATTRIB} is [MASK]. L 1 #> query MASK M_pair #> M_word TARGET T_word ATTRIB A_word #> #> 1: strong Flower rose Pos health #> 2: weak Flower rose Pos health #> 3: strong Flower iris Pos health #> 4: weak Flower iris Pos health #> 5: strong Flower lily Pos health #> 6: weak Flower lily Pos health #> 7: strong Flower rose Pos happiness #> 8: weak Flower rose Pos happiness #> 9: strong Flower iris Pos happiness #> 10: weak Flower iris Pos happiness #> 11: strong Flower lily Pos happiness #> 12: weak Flower lily Pos happiness #> 13: strong Flower rose Pos love #> 14: weak Flower rose Pos love #> 15: strong Flower iris Pos love #> 16: weak Flower iris Pos love #> 17: strong Flower lily Pos love #> 18: weak Flower lily Pos love #> 19: strong Flower rose Pos peace #> 20: weak Flower rose Pos peace #> 21: strong Flower iris Pos peace #> 22: weak Flower iris Pos peace #> 23: strong Flower lily Pos peace #> 24: weak Flower lily Pos peace #> 25: strong Flower rose Neg death #> 26: weak Flower rose Neg death #> 27: strong Flower iris Neg death #> 28: weak Flower iris Neg death #> 29: strong Flower lily Neg death #> 30: weak Flower lily Neg death #> 31: strong Flower rose Neg sickness #> 32: weak Flower rose Neg sickness #> 33: strong Flower iris Neg sickness #> 34: weak Flower iris Neg sickness #> 35: strong Flower lily Neg sickness #> 36: weak Flower lily Neg sickness #> 37: strong Flower rose Neg hatred #> 38: weak Flower rose Neg hatred #> 39: strong Flower iris Neg hatred #> 40: weak Flower iris Neg hatred #> 41: strong Flower lily Neg hatred #> 42: weak Flower lily Neg hatred #> 43: strong Flower rose Neg disaster #> 44: weak Flower rose Neg disaster #> 45: strong Flower iris Neg disaster #> 46: weak Flower iris Neg disaster #> 47: strong Flower lily Neg disaster #> 48: weak Flower lily Neg disaster #> 49: strong Insect ant Pos health #> 50: weak Insect ant Pos health #> 51: strong Insect cockroach Pos health #> 52: weak Insect cockroach Pos health #> 53: strong Insect spider Pos health #> 54: weak Insect spider Pos health #> 55: strong Insect ant Pos happiness #> 56: weak Insect ant Pos happiness #> 57: strong Insect cockroach Pos happiness #> 58: weak Insect cockroach Pos happiness #> 59: strong Insect spider Pos happiness #> 60: weak Insect spider Pos happiness #> 61: strong Insect ant Pos love #> 62: weak Insect ant Pos love #> 63: strong Insect cockroach Pos love #> 64: weak Insect cockroach Pos love #> 65: strong Insect spider Pos love #> 66: weak Insect spider Pos love #> 67: strong Insect ant Pos peace #> 68: weak Insect ant Pos peace #> 69: strong Insect cockroach Pos peace #> 70: weak Insect cockroach Pos peace #> 71: strong Insect spider Pos peace #> 72: weak Insect spider Pos peace #> 73: strong Insect ant Neg death #> 74: weak Insect ant Neg death #> 75: strong Insect cockroach Neg death #> 76: weak Insect cockroach Neg death #> 77: strong Insect spider Neg death #> 78: weak Insect spider Neg death #> 79: strong Insect ant Neg sickness #> 80: weak Insect ant Neg sickness #> 81: strong Insect cockroach Neg sickness #> 82: weak Insect cockroach Neg sickness #> 83: strong Insect spider Neg sickness #> 84: weak Insect spider Neg sickness #> 85: strong Insect ant Neg hatred #> 86: weak Insect ant Neg hatred #> 87: strong Insect cockroach Neg 
hatred #> 88: weak Insect cockroach Neg hatred #> 89: strong Insect spider Neg hatred #> 90: weak Insect spider Neg hatred #> 91: strong Insect ant Neg disaster #> 92: weak Insect ant Neg disaster #> 93: strong Insect cockroach Neg disaster #> 94: weak Insect cockroach Neg disaster #> 95: strong Insect spider Neg disaster #> 96: weak Insect spider Neg disaster #> M_word TARGET T_word ATTRIB A_word"},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_query_bind.html","id":null,"dir":"Reference","previous_headings":"","what":"Combine multiple query data.tables and renumber query ids. — FMAT_query_bind","title":"Combine multiple query data.tables and renumber query ids. — FMAT_query_bind","text":"Combine multiple query data.tables renumber query ids.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_query_bind.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Combine multiple query data.tables and renumber query ids. — FMAT_query_bind","text":"","code":"FMAT_query_bind(...)"},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_query_bind.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Combine multiple query data.tables and renumber query ids. — FMAT_query_bind","text":"... Query data.tables returned FMAT_query.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_query_bind.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Combine multiple query data.tables and renumber query ids. — FMAT_query_bind","text":"data.table queries variables.","code":""},{"path":[]},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_query_bind.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Combine multiple query data.tables and renumber query ids. — FMAT_query_bind","text":"","code":"FMAT_query_bind( FMAT_query( \"[MASK] is {TARGET}.\", MASK = .(Male=\"He\", Female=\"She\"), TARGET = .(Occupation=cc(\"a doctor, a nurse, an artist\")) ), FMAT_query( \"[MASK] occupation is {TARGET}.\", MASK = .(Male=\"His\", Female=\"Her\"), TARGET = .(Occupation=cc(\"doctor, nurse, artist\")) ) ) #> qid query MASK M_pair M_word TARGET #> #> 1: 1 [MASK] is {TARGET}. Male 1 He Occupation #> 2: 1 [MASK] is {TARGET}. Female 1 She Occupation #> 3: 1 [MASK] is {TARGET}. Male 1 He Occupation #> 4: 1 [MASK] is {TARGET}. Female 1 She Occupation #> 5: 1 [MASK] is {TARGET}. Male 1 He Occupation #> 6: 1 [MASK] is {TARGET}. Female 1 She Occupation #> 7: 2 [MASK] occupation is {TARGET}. Male 1 His Occupation #> 8: 2 [MASK] occupation is {TARGET}. Female 1 Her Occupation #> 9: 2 [MASK] occupation is {TARGET}. Male 1 His Occupation #> 10: 2 [MASK] occupation is {TARGET}. Female 1 Her Occupation #> 11: 2 [MASK] occupation is {TARGET}. Male 1 His Occupation #> 12: 2 [MASK] occupation is {TARGET}. Female 1 Her Occupation #> T_pair T_word #> #> 1: Occupation.1 a doctor #> 2: Occupation.1 a doctor #> 3: Occupation.2 a nurse #> 4: Occupation.2 a nurse #> 5: Occupation.3 an artist #> 6: Occupation.3 an artist #> 7: Occupation.1 doctor #> 8: Occupation.1 doctor #> 9: Occupation.2 nurse #> 10: Occupation.2 nurse #> 11: Occupation.3 artist #> 12: Occupation.3 artist"},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_run.html","id":null,"dir":"Reference","previous_headings":"","what":"Run the fill-mask pipeline on multiple models (CPU / GPU). — FMAT_run","title":"Run the fill-mask pipeline on multiple models (CPU / GPU). 
— FMAT_run","text":"Run the fill-mask pipeline on multiple models with CPU or GPU (GPU is faster but requires an NVIDIA GPU device).","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_run.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Run the fill-mask pipeline on multiple models (CPU / GPU). — FMAT_run","text":"","code":"FMAT_run( models, data, gpu, add.tokens = FALSE, add.method = c(\"sum\", \"mean\"), file = NULL, progress = TRUE, warning = TRUE, na.out = TRUE )"},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_run.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Run the fill-mask pipeline on multiple models (CPU / GPU). — FMAT_run","text":"models: Options: (1) a character vector of model names at HuggingFace, which can be used for both CPU and GPU; (2) a returned object from FMAT_load, which can only be used for CPU (if you restart the R session, you will need to rerun FMAT_load). data: A data.table returned from FMAT_query or FMAT_query_bind. gpu: Use GPU (about 3x faster than CPU) to run the fill-mask pipeline? Defaults to a missing value that will automatically use an available GPU (if none is available, CPU is used). An NVIDIA GPU device (e.g., GeForce RTX Series) is required to use GPU. See Guidance for GPU Acceleration. Options passed to the device parameter in Python: FALSE: CPU (device = -1). TRUE: GPU (device = 0). Any other value: passed to transformers.pipeline(device=...), which defines the device (e.g., \"cpu\", \"cuda:0\", or a GPU device id like 1) on which the pipeline will be allocated. add.tokens: Add new tokens (out-of-vocabulary words or even phrases) to the model vocabulary? Defaults to FALSE. This temporarily adds tokens for your tasks but does not change the raw model file. add.method: Method used to produce the token embeddings of newly added tokens. Can be \"sum\" (default) or \"mean\" of the subword token embeddings. file: File name of .RData to save the returned data. progress: Show a progress bar? Defaults to TRUE. warning: Alert a warning message for out-of-vocabulary word(s)? Defaults to TRUE. na.out: Replace the probabilities of out-of-vocabulary word(s) with NA? Defaults to TRUE.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_run.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Run the fill-mask pipeline on multiple models (CPU / GPU). — FMAT_run","text":"A data.table (of new class fmat) appending data with these new variables: model: model name. output: complete sentence output with the unmasked token. token: the actual token to be filled in the blank mask (a note \"out-of-vocabulary\" is added if the original word is not found in the model vocabulary). prob: the (raw) conditional probability of the unmasked token given the provided context, as estimated by the masked language model. It is NOT suggested to directly interpret the raw probabilities, because only the contrast between a pair of probabilities is interpretable. See summary.fmat.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_run.html","id":"details","dir":"Reference","previous_headings":"","what":"Details","title":"Run the fill-mask pipeline on multiple models (CPU / GPU). — FMAT_run","text":"The function automatically adjusts for the compatibility of tokens used in certain models: (1) for uncased models (e.g., ALBERT), it turns tokens to lowercase; (2) for models that use <mask> rather than [MASK], it automatically uses the corrected mask token; (3) for models that require a prefix to estimate whole words rather than subwords (e.g., ALBERT, RoBERTa), it adds a certain prefix (usually a white space; \u2581 for ALBERT and XLM-RoBERTa, \u0120 for RoBERTa and DistilRoBERTa). Note that these changes only affect the token variable in the returned data, but do not affect the M_word variable. Thus, users may analyze the data based on the unchanged M_word rather than the token. Note also that there may be extremely trivial differences (after 5~6 significant digits) in the raw probability estimates between using CPU and GPU, but these differences would have little impact on the main results.","code":""},
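Given the token-matching behavior described in these details, it can help to pre-check the [MASK] words against each model's vocabulary before a long run. A minimal sketch with BERT_vocab() (the word lists here are illustrative only; the function and its add.tokens/add.method parameters are documented above):

library(FMAT)
models = c("bert-base-uncased", "roberta-base")
## Check whether each option word is a single token in each model's vocabulary
BERT_vocab(models, cc("doctor, nurse, artist"))
## Out-of-vocabulary options can be added temporarily
## (experimental; see the add.tokens argument above)
BERT_vocab(models, "artificial intelligence", add.tokens=TRUE, add.method="sum")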
{"path":[]},{"path":"https://psychbruce.github.io/FMAT/reference/FMAT_run.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Run the fill-mask pipeline on multiple models (CPU / GPU). — FMAT_run","text":"","code":"## Running the examples requires the models downloaded if (FALSE) { models = c(\"bert-base-uncased\", \"bert-base-cased\") query1 = FMAT_query( c(\"[MASK] is {TARGET}.\", \"[MASK] works as {TARGET}.\"), MASK = .(Male=\"He\", Female=\"She\"), TARGET = .(Occupation=cc(\"a doctor, a nurse, an artist\")) ) data1 = FMAT_run(models, query1) summary(data1, target.pair=FALSE) query2 = FMAT_query( \"The [MASK] {ATTRIB}.\", MASK = .(Male=cc(\"man, boy\"), Female=cc(\"woman, girl\")), ATTRIB = .(Masc=cc(\"is masculine, has a masculine personality\"), Femi=cc(\"is feminine, has a feminine personality\")) ) data2 = FMAT_run(models, query2) summary(data2, mask.pair=FALSE) summary(data2) }"},{"path":"https://psychbruce.github.io/FMAT/reference/ICC_models.html","id":null,"dir":"Reference","previous_headings":"","what":"Intraclass correlation coefficient (ICC) of BERT models. — ICC_models","title":"Intraclass correlation coefficient (ICC) of BERT models. — ICC_models","text":"Interrater agreement of log probabilities (treated as \"ratings\"/rows) among BERT language models (treated as \"raters\"/columns), with both row and column (\"two-way\") random effects.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/ICC_models.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Intraclass correlation coefficient (ICC) of BERT models. — ICC_models","text":"","code":"ICC_models(data, type = \"agreement\", unit = \"average\")"},{"path":"https://psychbruce.github.io/FMAT/reference/ICC_models.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Intraclass correlation coefficient (ICC) of BERT models. — ICC_models","text":"data: Raw data returned from FMAT_run. type: Interrater \"agreement\" (default) or \"consistency\". unit: Reliability of \"average\" scores (default) or \"single\" scores.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/ICC_models.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Intraclass correlation coefficient (ICC) of BERT models. — ICC_models","text":"A data.table of ICC.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/LPR_reliability.html","id":null,"dir":"Reference","previous_headings":"","what":"Reliability analysis (Cronbach's \\(\\alpha\\)) of LPR. — LPR_reliability","title":"Reliability analysis (Cronbach's \\(\\alpha\\)) of LPR. — LPR_reliability","text":"Reliability analysis (Cronbach's \\(\\alpha\\)) of LPR.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/LPR_reliability.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Reliability analysis (Cronbach's \\(\\alpha\\)) of LPR. — LPR_reliability","text":"","code":"LPR_reliability(fmat, item = c(\"query\", \"T_word\", \"A_word\"), by = NULL)"},{"path":"https://psychbruce.github.io/FMAT/reference/LPR_reliability.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Reliability analysis (Cronbach's \\(\\alpha\\)) of LPR. — LPR_reliability","text":"fmat: A data.table returned from summary.fmat. 
item: Reliability across multiple \"query\" (default), \"T_word\", or \"A_word\". by: Variable(s) by which to split the data. Options can be \"model\", \"TARGET\", \"ATTRIB\", or any combination of them.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/LPR_reliability.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Reliability analysis (Cronbach's \\(\\alpha\\)) of LPR. — LPR_reliability","text":"A data.table of Cronbach's \\(\\alpha\\).","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/dot-.html","id":null,"dir":"Reference","previous_headings":"","what":"A simple function equivalent to list. — .","title":"A simple function equivalent to list. — .","text":"A simple function equivalent to list.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/dot-.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"A simple function equivalent to list. — .","text":"","code":".(...)"},{"path":"https://psychbruce.github.io/FMAT/reference/dot-.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"A simple function equivalent to list. — .","text":"...: Named objects (usually character vectors in this package).","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/dot-.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"A simple function equivalent to list. — .","text":"A list of named objects.","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/dot-.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"A simple function equivalent to list. — .","text":"","code":".(Male=cc(\"he, his\"), Female=cc(\"she, her\")) #> $Male #> [1] \"he\" \"his\" #> #> $Female #> [1] \"she\" \"her\" #> list(Male=cc(\"he, his\"), Female=cc(\"she, her\")) # the same #> $Male #> [1] \"he\" \"his\" #> #> $Female #> [1] \"she\" \"her\" #>"},{"path":"https://psychbruce.github.io/FMAT/reference/reexports.html","id":null,"dir":"Reference","previous_headings":"","what":"Objects exported from other packages — reexports","title":"Objects exported from other packages — reexports","text":"These objects are imported from other packages. Follow the links to see their documentation. PsychWordVec: cc","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/summary.fmat.html","id":null,"dir":"Reference","previous_headings":"","what":"[S3 method] Summarize the results for the FMAT. — summary.fmat","title":"[S3 method] Summarize the results for the FMAT. — summary.fmat","text":"Summarize the results with the Log Probability Ratio (LPR), which indicates the relative (vs. absolute) association between concepts. The LPR of just one contrast (e.g., between a pair of attributes) may not be sufficient for a proper interpretation of the results, which may require a second contrast (e.g., between a pair of targets). Users are suggested to use linear mixed models (with the R packages nlme or lme4/lmerTest) to perform formal analyses and hypothesis tests based on the LPR (see the sketch at the end of this document).","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/summary.fmat.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"[S3 method] Summarize the results for the FMAT. — summary.fmat","text":"","code":"# S3 method for fmat summary( object, mask.pair = TRUE, target.pair = TRUE, attrib.pair = TRUE, warning = TRUE, ... )"},{"path":"https://psychbruce.github.io/FMAT/reference/summary.fmat.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"[S3 method] Summarize the results for the FMAT. — summary.fmat","text":"object: A data.table (of new class fmat) returned from FMAT_run. 
{"path":"https://psychbruce.github.io/FMAT/reference/summary.fmat.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"[S3 method] Summarize the results for the FMAT. — summary.fmat","text":"object A data.table (of new class fmat) returned from FMAT_run. mask.pair, target.pair, attrib.pair Pairwise contrast of [MASK], TARGET, and ATTRIB? Defaults to TRUE. warning Alert a warning of out-of-vocabulary word(s)? Defaults to TRUE. ... Other arguments (currently not used).","code":""},{"path":"https://psychbruce.github.io/FMAT/reference/summary.fmat.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"[S3 method] Summarize the results for the FMAT. — summary.fmat","text":"A data.table of summarized results with the Log Probability Ratio (LPR).","code":""},{"path":[]},{"path":"https://psychbruce.github.io/FMAT/reference/summary.fmat.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"[S3 method] Summarize the results for the FMAT. — summary.fmat","text":"","code":"# see examples in `FMAT_run`"},{"path":"https://psychbruce.github.io/FMAT/news/index.html","id":"fmat-20245","dir":"Changelog","previous_headings":"","what":"FMAT 2024.5","title":"FMAT 2024.5","text":"CRAN release: 2024-05-19 Added BERT_info(). Added add.tokens and add.method parameters for BERT_vocab() and FMAT_run(): an experimental functionality to add new tokens (e.g., out-of-vocabulary words, compound words, or even phrases) as [MASK] options. Validation is still needed for this novel practice (one of my ongoing projects), so currently please only use it at your own risk, waiting until the publication of my validation work. All functions except BERT_download() now import local model files only, without automatically downloading models. Users must first use BERT_download() to download models. Deprecating FMAT_load(): Better to use FMAT_run() directly.","code":""},{"path":"https://psychbruce.github.io/FMAT/news/index.html","id":"fmat-20244","dir":"Changelog","previous_headings":"","what":"FMAT 2024.4","title":"FMAT 2024.4","text":"CRAN release: 2024-04-29 Added BERT_vocab() and ICC_models(). Improved summary.fmat(), FMAT_query(), and FMAT_run() (significantly faster now because all [MASK] options can be simultaneously estimated for each unique query sentence, with running time depending on the number of unique queries and the number of [MASK] options). If you use the reticulate package version ≥ 1.36.1, FMAT should be updated to ≥ 2024.4. Otherwise, out-of-vocabulary [MASK] words may not be identified and marked. Now FMAT_run() directly uses the model vocabulary and token ID to match the [MASK] words. To check whether a [MASK] word is in the model vocabulary, please use BERT_vocab().","code":""},{"path":"https://psychbruce.github.io/FMAT/news/index.html","id":"fmat-20243","dir":"Changelog","previous_headings":"","what":"FMAT 2024.3","title":"FMAT 2024.3","text":"CRAN release: 2024-03-22 The FMAT methodology paper was accepted (March 14, 2024) for publication in the Journal of Personality and Social Psychology: Attitudes and Social Cognition (DOI: 10.1037/pspa0000396)! Added BERT_download() (downloading models to the local cache folder “%USERPROFILE%/.cache/huggingface”) to differentiate it from FMAT_load() (loading saved models from the local cache), though FMAT_load() can also silently download models if they have not been downloaded. Added the gpu parameter (see Guidance for GPU Acceleration) in FMAT_run() to allow specifying the NVIDIA GPU device on which the fill-mask pipeline will be allocated. GPU roughly performs 3x faster than CPU for the fill-mask pipeline. By default, FMAT_run() automatically detects and uses an available GPU if you have installed a CUDA-supported Python torch package (if not, it uses CPU). Added running speed information (queries/min) to FMAT_run(). Added device information to BERT_download(), FMAT_load(), and FMAT_run(). Deprecated parallel in FMAT_run(): FMAT_run(model.names, data, gpu=TRUE) is the fastest. A progress bar is now displayed by default for the progress of FMAT_run().","code":""},
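A hedged sketch tying together the changelog features above: vocabulary checking with BERT_vocab(), GPU allocation via gpu=TRUE (quoted from the 2024.3 entry), and the experimental add.tokens option from 2024.5. The BERT_vocab() argument layout shown here is an assumption, not a confirmed signature:

## NOT RUN: illustrative only; add.tokens is experimental (use at your own risk)
library(FMAT)
BERT_vocab(models, cc("he, she, doctor, nurse"))  # check [MASK] words against each model's vocabulary (second argument assumed)
data1 = FMAT_run(models, query1, gpu = TRUE)      # allocate the fill-mask pipeline to an NVIDIA GPU
# Experimental (parameter named in the 2024.5 changelog):
# data1 = FMAT_run(models, query1, add.tokens = TRUE)  # add out-of-vocabulary words as [MASK] options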
{"path":"https://psychbruce.github.io/FMAT/news/index.html","id":"fmat-20238","dir":"Changelog","previous_headings":"","what":"FMAT 2023.8","title":"FMAT 2023.8","text":"CRAN release: 2023-08-11 CRAN package publication. Fixed bugs and improved functions. Provided examples. Now using the “YYYY.M” format of package version numbers.","code":""},{"path":"https://psychbruce.github.io/FMAT/news/index.html","id":"fmat-009-may-2023","dir":"Changelog","previous_headings":"","what":"FMAT 0.0.9 (May 2023)","title":"FMAT 0.0.9 (May 2023)","text":"Initial public release on GitHub.","code":""},{"path":"https://psychbruce.github.io/FMAT/news/index.html","id":"fmat-001-jan-2023","dir":"Changelog","previous_headings":"","what":"FMAT 0.0.1 (Jan 2023)","title":"FMAT 0.0.1 (Jan 2023)","text":"Designed the basic functions.","code":""}]