A brief guide to automated text analysis for research & evaluation

In a recent blog post, I described how I used automated text analysis (aka natural language processing) in an evaluation project to help classify activities based on human-written descriptive text. Topic detection is one automated text analysis method that can be quite useful in research and evaluation projects. Two other potentially useful methods are sentiment analysis and summarisation: sentiment analysis detects positive, neutral, or negative sentiment in a piece of text, while summarisation generates short automated summaries of longer text. A rough illustration of these two methods is sketched below.
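To make this a little more concrete, here is a minimal sketch of sentiment analysis and summarisation in Python. It assumes the Hugging Face transformers library, which is just one of many possible tools (the guide itself is tool-agnostic), and the example texts are invented placeholders standing in for survey or interview responses.

```python
# A minimal sketch, assuming the Hugging Face "transformers" library is installed.
from transformers import pipeline

# Hypothetical example texts standing in for survey or interview responses.
responses = [
    "The workshop was engaging and the facilitator was excellent.",
    "The venue was hard to reach and the sessions ran over time.",
]

# Sentiment analysis: label each text as positive or negative, with a confidence score.
sentiment = pipeline("sentiment-analysis")
for text, result in zip(responses, sentiment(responses)):
    print(result["label"], round(result["score"], 2), "-", text)

# Summarisation: condense a longer passage into a short summary.
summariser = pipeline("summarization")
long_text = (
    "Participants described the workshop series as engaging overall, praising the "
    "facilitators and the practical exercises, although several noted that the venue "
    "was difficult to reach and that some sessions ran well over the scheduled time, "
    "which made it harder for people with caring responsibilities to attend in full."
)
print(summariser(long_text, max_length=40, min_length=10)[0]["summary_text"])
```

In practice you would feed in your own text data (and check the outputs against a manual reading of a sample), but the sketch shows how little code is needed to get a first pass over a set of responses.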

To help researchers and evaluators understand the possibilities for using sentiment analysis, topic detection, and summarisation in their work, I’ve written a brief introductory guide. The guide focusses on what is possible rather than technical details, although I do describe the basic ways that these methods work.

Automated text analysis has become quite sophisticated in recent years, but in my opinion it still cannot (yet?) replace human analysis and synthesis of text data. Rather, I see these methods as a useful complement to manual analysis, one that can help to process sets of text data that are too large to review manually, or open up new possibilities for further analysis.

I hope this guide is useful, and if you have any feedback about how I could improve it, please let me know.