oolong: Create Validation Tests for Automated Content Analysis

Intended to create standard human-in-the-loop validity tests for typical automated content analysis methods such as topic modeling and dictionary-based approaches. The package offers a standard workflow with functions to prepare, administer, and evaluate a human-in-the-loop validity test. It provides functions for validating topic models using word intrusion, topic intrusion (Chang et al. 2009, <https://papers.nips.cc/paper/3700-reading-tea-leaves-how-humans-interpret-topic-models>), and word set intrusion (Ying et al. Forthcoming) tests. It also provides functions for generating gold-standard data, which are useful for validating dictionary-based methods. The default settings of all generated tests match those suggested in Chang et al. (2009) and Song et al. (2020).
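As a minimal sketch of the prepare/administer/evaluate workflow for topic models, assuming the create_oolong(), clone_oolong(), and summarize_oolong() interface; the fitted model object my_stm_model and the coder object names are placeholders:

library(oolong)

# Prepare: generate a word intrusion test from a fitted topic model
# ('my_stm_model' is a placeholder for e.g. an stm, topicmodels, or seededlda object)
oolong_coder1 <- create_oolong(input_model = my_stm_model)
# Clone an unanswered copy for a second coder before any test is taken
oolong_coder2 <- clone_oolong(oolong_coder1)

# Administer: each coder answers the test in an interactive gadget,
# then locks their copy so the answers can no longer be changed
oolong_coder1$do_word_intrusion_test()
oolong_coder1$lock()
oolong_coder2$do_word_intrusion_test()
oolong_coder2$lock()

# Evaluate: summarize model precision and inter-coder reliability
summarize_oolong(oolong_coder1, oolong_coder2)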




install.packages("oolong")

0.4.0 by Chung-hong Chan, 5 months ago


https://github.com/chainsawriot/oolong


Report a bug at https://github.com/chainsawriot/oolong/issues


Browse source code at https://github.com/cran/oolong


Authors: Chung-hong Chan [aut, cre], Marius Sältzer [aut]


Documentation: PDF Manual


LGPL (>= 2.1) license


Imports keyATM, purrr, tibble, shiny, miniUI, text2vec, digest, R6, quanteda, irr, ggplot2, cowplot, dplyr, cli, stats, utils

Suggests testthat, BTM, topicmodels, stm, seededlda, covr, stringr, knitr, rmarkdown, fs, quanteda.textmodels, shinytest

