Algorithm to Catch Lies

If you have read some of my previous posts on Technology, you probably noticed that I've been playing and experimenting with some aspects of text processing (see posts on: generated stylized text, Google translator, Babelfish translator). The following falls into the same category and caught my attention.

According to research carried out by Prof. James Pennebaker at the University of Texas at Austin and Prof. David Skillicorn at Queen's University in Ontario, there is a text processing algorithm that can catch lies and potential fraud in any English text.

Pennebaker developed a deception model that can be applied to speeches, emails, or any other text to automatically score its supposed level of deception. Pennebaker identified the symptoms of a deceptive communication as: (1) a decreased frequency of first-person pronouns; (2) a decreased frequency of exception words, such as 'however' and 'unless'; (3) an increased frequency of negative-emotion words; and (4) an increased frequency of action words. This deception model was coded into an algorithm that he called LIWC (Linguistic Inquiry and Word Count). The scientific basis for the algorithm is Pennebaker's research on the psychology of word use.
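To make the four markers concrete, here is a minimal sketch of how a frequency-based scorer of this kind could look. The tiny word lists, the equal weighting, and the `deception_score` function are my own illustrative assumptions; they are not Pennebaker's actual LIWC dictionaries or scoring formula.

```python
import re

# Illustrative word lists only; the real LIWC dictionaries contain
# thousands of categorized words and are not reproduced here.
FIRST_PERSON = {"i", "me", "my", "mine", "we", "us", "our", "ours"}
EXCEPTION_WORDS = {"however", "unless", "but", "except", "although"}
NEGATIVE_EMOTION = {"hate", "angry", "sad", "afraid", "enemy", "worthless"}
ACTION_WORDS = {"go", "run", "take", "make", "move", "act", "get"}

def deception_score(text: str) -> float:
    """Score a text on Pennebaker-style deception markers.

    Higher values mean more markers of deception: fewer first-person
    pronouns and exception words, more negative-emotion and action words.
    """
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    rate = lambda vocab: sum(w in vocab for w in words) / len(words)
    # Markers that increase in deceptive text add to the score;
    # markers that decrease in deceptive text subtract from it.
    return (rate(NEGATIVE_EMOTION) + rate(ACTION_WORDS)
            - rate(FIRST_PERSON) - rate(EXCEPTION_WORDS))
```

A toy like this only makes sense comparatively: scoring two texts against each other, rather than reading one number in isolation.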

Skillicorn applied some of these ideas to the study of organized behavior, fraud detection, and potential terrorist communication. The results seem promising, but the research continues.

Imagine the possibilities and ramifications of the results of this research. Imagine if email clients were able to score the level of potential deception of any email that you receive. Would you be interested in having that kind of tool?

While this technology is interesting and potentially useful, it also suffers from an implicit self-destructive defect that, I believe, will make it hard for it to succeed in the long term. I call discoveries with this defect "Suicidal Models".

After all, if this technology were widespread and every email client or word processor integrated a deception scoring system, then anybody could tweak a text until it looked truthful to that specific algorithm. Would this be a tool in the arsenal of deceiving minds instead of a lie detector? Something that helps them lie even better? You decide.
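As an illustration of the defect, here is how a writer could mechanically game the toy scorer sketched above; again, the texts and numbers are my own assumptions, not results from the actual research.

```python
# A text heavy on action words with no first-person pronouns
# scores relatively high on the toy deception scale.
msg = "Take the money and move fast. The deal must go through tonight."
print(deception_score(msg))    # 0.25: three action words, nothing to offset them

# Sprinkling in first-person pronouns and exception words drives
# the score down without changing the underlying message.
gamed = ("I think, however, that we should take the money and move fast. "
         "Unless I am mistaken, our deal must go through tonight.")
print(deception_score(gamed))  # about -0.14: pronouns and exceptions now dominate
```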